title | content | commands | url
---|---|---|---
Chapter 3. Reference | Chapter 3. Reference 3.1. aggregate-providers attributes You can configure aggregate-providers by setting the providers attributes. Table 3.1. aggregate-providers Attributes Attribute Description providers The list of providers to aggregate. Elytron uses the first suitable provider found on the list. 3.2. credential-store Attributes You can configure credential-store by setting its attributes. Table 3.2. credential-store Attributes Attribute Description create Specifies whether the credential store should create storage when it does not exist. The default value is false . credential-reference The reference to the credential used to create the protection parameter. This can be in clear text or as a reference to a credential stored in a credential-store . implementation-properties Map of credential store implementation-specific properties. modifiable Whether you can modify the credential store. The default value is true . other-providers The name of the providers to search for the one that can create the required Jakarta Connectors objects within the credential store. This is valid only for a keystore-based credential store. If this is not specified, then the global list of providers is used instead. path The file name of the credential store. provider-name The name of the provider to use to instantiate the CredentialStoreSpi . If the provider is not specified, then the first provider found that can create an instance of the specified type is used. providers The name of the providers to search for the one that can create the required credential store type. If this is not specified, then the global list of providers is used instead. relative-to The base path this credential store path is relative to. type The type of the credential store, for example, KeyStoreCredentialStore . 3.3. credential-store implementation properties You can configure the credential-store implementation by setting its attributes. Table 3.3. credential-store implementation properties Attribute Description cryptoAlg The cryptographic algorithm name to be used to encrypt and decrypt entries at external storage. This attribute is only valid if external is enabled. Defaults to AES . external Whether data is stored to external storage and encrypted by the keyAlias . Defaults to false . externalPath Specifies the path to external storage. This attribute is only valid if external is enabled. keyAlias The secret key alias within the credential store that is used to encrypt or decrypt data to the external storage. keyStoreType The keystore type, such as PKCS11 . Defaults to KeyStore.getDefaultType() . 3.4. expression=encryption Attributes You can configure expression=encryption by setting its attributes. Table 3.4. expression=encryption Attributes Attribute Description default-resolver Optional attribute. The resolver to use when an encrypted expression is defined without one. For example, if you set "exampleResolver" as the default-resolver and you create an encrypted expression with the command /subsystem=elytron/expression=encryption:create-expression(clear-text=TestPassword) , Elytron uses "exampleResolver" as the resolver for this encrypted expression. prefix The prefix to use within an encrypted expression. The default is ENC . This attribute is provided for those cases where ENC might already be defined. You should not change this value unless it conflicts with an already defined ENC prefix. resolvers A list of defined resolvers. 
A resolver has the following attributes: name - The name of the individual configuration used to reference it. credential-store - Reference to the credential store instance that contains the secret key this resolver uses. secret-key - The alias of the secret key Elytron should use from within a given credential store. 3.5. provider-loader attributes You can configure provider-loader by setting its attributes. Table 3.5. provider-loader attributes Attribute Description argument An argument to be passed into the constructor as the Provider is instantiated. class-names The list of the fully qualified class names of providers to load. These are loaded after the providers discovered by the service loader, and any duplicates are skipped. configuration The key and value configuration to be passed to the provider to initialize it. module The name of the module to load the provider from. path The path of the file to use to initialize the providers. relative-to The base path of the configuration file. 3.6. secret-key-credential-store Attributes You can configure secret-key-credential-store by setting its attributes. Table 3.6. secret-key-credential-store Attributes Attribute Description create Set the value to false if you do not want Elytron to create the credential store when it does not already exist. Defaults to true . default-alias The alias name for a key generated by default. The default value is key . key-size The size of a generated key. The default size is 256 bits. You can set the value to one of the following: 128, 192, or 256. path The path to the credential store. populate If a credential store does not contain a default-alias , this attribute indicates whether Elytron should create one. The default is true . relative-to A reference to a previously defined path that the attribute path is relative to. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/secure_storage_of_credentials_in_jboss_eap/reference |
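The attribute tables above map directly onto management CLI operations. The following is a minimal CLI sketch, not taken from the chapter, assuming a standalone JBoss EAP server and hypothetical resource names (exampleCS, exampleSKStore, exampleResolver); adjust paths, passwords, and names for your environment:

/subsystem=elytron/credential-store=exampleCS:add(path=exampleCS.store, relative-to=jboss.server.data.dir, create=true, credential-reference={clear-text=StorePassword123})
/subsystem=elytron/secret-key-credential-store=exampleSKStore:add(path=exampleSKStore.cs, relative-to=jboss.server.data.dir, key-size=256, default-alias=key)
/subsystem=elytron/expression=encryption:add(default-resolver=exampleResolver, resolvers=[{name=exampleResolver, credential-store=exampleSKStore, secret-key=key}])
/subsystem=elytron/expression=encryption:create-expression(clear-text=TestPassword)

Because default-resolver is set, the final create-expression call does not need an explicit resolver argument; the returned expression is prefixed with ENC unless a different prefix was configured.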
6.2.3. Creating the Logical Volume | 6.2.3. Creating the Logical Volume The following command creates the striped logical volume striped_logical_volume from the volume group volgroup01 . This example creates a logical volume that is 2 gigabytes in size, with three stripes and a stripe size of 4 kilobytes. | [
"lvcreate -i3 -I4 -L2G -nstriped_logical_volume volgroup01 Rounding size (512 extents) up to stripe boundary size (513 extents) Logical volume \"striped_logical_volume\" created"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lv_create_ex2 |
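As a quick follow-up check, generic LVM reporting commands (not part of this section) can confirm the stripe layout that lvcreate produced; for the example above, the stripes field should report 3 and the stripe size 4.00k:

lvs -o +stripes,stripe_size volgroup01/striped_logical_volume
lvdisplay -m /dev/volgroup01/striped_logical_volume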
Part IV. Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss Web Server | Part IV. Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss Web Server Red Hat Decision Manager is a subset of Red Hat Process Automation Manager. Starting with this release, the distribution files for Red Hat Decision Manager are replaced with Red Hat Process Automation Manager files. There are no Decision Manager artifacts. The Red Hat Decision Manager subscription, support entitlements, and fees remain the same. Red Hat Decision Manager subscribers will continue to receive full support for the decision management and optimization capabilities of Red Hat Decision Manager. The business process management (BPM) capabilities of Red Hat Process Automation Manager are exclusive to Red Hat Process Automation Manager subscribers. They are available for use by Red Hat Decision Manager subscribers but with development support services only. Red Hat Decision Manager subscribers can upgrade to a full Red Hat Process Automation Manager subscription at any time to receive full support for BPM features. This document describes how to install Red Hat Process Automation Manager 7.13 on JBoss Web Server. Note Support for Red Hat Decision Manager on Red Hat JBoss Web Server and Apache Tomcat is now in the maintenance phase. Red Hat will continue to support Red Hat Process Automation Manager on these platforms with the following limitations: Red Hat will not release new certifications or software functionality. Red Hat will release only qualified security patches that have a critical impact and mission-critical bug fix patches. In the future, Red Hat might direct customers to migrate to new platforms and product components that are compatible with the Red Hat hybrid cloud strategy. Prerequisites You have reviewed the information in Planning a Red Hat Decision Manager installation . You have installed Red Hat JBoss Web Server 5.5.1. For information about installing Red Hat JBoss Web Server, see the Red Hat JBoss Web Server Installation Guide . | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/assembly-install-on-jws |
Chapter 10. Managing bare-metal hosts | Chapter 10. Managing bare-metal hosts When you install OpenShift Container Platform on a bare-metal cluster, you can provision and manage bare-metal nodes by using machine and machineset custom resources (CRs) for bare-metal hosts that exist in the cluster. 10.1. About bare metal hosts and nodes To provision a Red Hat Enterprise Linux CoreOS (RHCOS) bare metal host as a node in your cluster, first create a MachineSet custom resource (CR) object that corresponds to the bare metal host hardware. Bare metal host compute machine sets describe infrastructure components specific to your configuration. You apply specific Kubernetes labels to these compute machine sets and then update the infrastructure components to run on only those machines. Machine CRs are created automatically when you scale up the relevant MachineSet containing a metal3.io/autoscale-to-hosts annotation. OpenShift Container Platform uses Machine CRs to provision the bare metal node that corresponds to the host as specified in the MachineSet CR. 10.2. Maintaining bare metal hosts You can maintain the details of the bare metal hosts in your cluster from the OpenShift Container Platform web console. Navigate to Compute Bare Metal Hosts , and select a task from the Actions drop-down menu. Here you can manage items such as BMC details, the boot MAC address for the host, power management, and so on. You can also review the details of the network interfaces and drives for the host. You can move a bare metal host into maintenance mode. When you move a host into maintenance mode, the scheduler moves all managed workloads off the corresponding bare metal node. No new workloads are scheduled while the host is in maintenance mode. You can deprovision a bare metal host in the web console. Deprovisioning a host performs the following actions: Annotates the bare metal host CR with cluster.k8s.io/delete-machine: true Scales down the related compute machine set Note Powering off the host without first moving the daemon set and unmanaged static pods to another node can cause service disruption and loss of data. Additional resources Adding compute machines to bare metal 10.2.1. Adding a bare metal host to the cluster using the web console You can add bare metal hosts to the cluster in the web console. Prerequisites Install an RHCOS cluster on bare metal. Log in as a user with cluster-admin privileges. Procedure In the web console, navigate to Compute Bare Metal Hosts . Select Add Host New with Dialog . Specify a unique name for the new bare metal host. Set the Boot MAC address . Set the Baseboard Management Console (BMC) Address . Enter the user credentials for the host's baseboard management controller (BMC). Select to power on the host after creation, and select Create . Scale up the number of replicas to match the number of available bare metal hosts. Navigate to Compute MachineSets , and increase the number of machine replicas in the cluster by selecting Edit Machine count from the Actions drop-down menu. Note You can also manage the number of bare metal nodes using the oc scale command and the appropriate bare metal compute machine set. 10.2.2. Adding a bare metal host to the cluster using YAML in the web console You can add bare metal hosts to the cluster in the web console using a YAML file that describes the bare metal host. Prerequisites Install an RHCOS compute machine on bare metal infrastructure for use in the cluster. Log in as a user with cluster-admin privileges. 
Create a Secret CR for the bare metal host; a minimal example Secret is sketched after this chapter's command listing. Procedure In the web console, navigate to Compute Bare Metal Hosts . Select Add Host New from YAML . Copy and paste the following YAML, modifying the relevant fields with the details of your host: apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 disableCertificateVerification: True 2 bootMACAddress: <host_boot_mac_address> 1 credentialsName must reference a valid Secret CR. The baremetal-operator cannot manage the bare metal host without a valid Secret referenced in the credentialsName . For more information about secrets and how to create them, see Understanding secrets . 2 Setting disableCertificateVerification to true disables TLS host validation between the cluster and the baseboard management controller (BMC). Select Create to save the YAML and create the new bare metal host. Scale up the number of replicas to match the number of available bare metal hosts. Navigate to Compute MachineSets , and increase the number of machines in the cluster by selecting Edit Machine count from the Actions drop-down menu. Note You can also manage the number of bare metal nodes using the oc scale command and the appropriate bare metal compute machine set. 10.2.3. Automatically scaling machines to the number of available bare metal hosts To automatically create the number of Machine objects that matches the number of available BareMetalHost objects, add a metal3.io/autoscale-to-hosts annotation to the MachineSet object. Prerequisites Install RHCOS bare metal compute machines for use in the cluster, and create corresponding BareMetalHost objects. Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Annotate the compute machine set that you want to configure for automatic scaling by adding the metal3.io/autoscale-to-hosts annotation. Replace <machineset> with the name of the compute machine set. $ oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>' Wait for the new scaled machines to start. Note When you use a BareMetalHost object to create a machine in the cluster and labels or selectors are subsequently changed on the BareMetalHost , the BareMetalHost object continues to be counted against the MachineSet that the Machine object was created from. 10.2.4. Removing bare metal hosts from the provisioner node In certain circumstances, you might want to temporarily remove bare metal hosts from the provisioner node. For example, during provisioning when a bare metal host reboot is triggered by using the OpenShift Container Platform administration console or as a result of a Machine Config Pool update, OpenShift Container Platform logs into the integrated Dell Remote Access Controller (iDRAC) and issues a delete of the job queue. To prevent the management of the Machine objects that correspond to the available BareMetalHost objects, add a baremetalhost.metal3.io/detached annotation to the MachineSet object. Note This annotation has an effect for only BareMetalHost objects that are in either the Provisioned , ExternallyProvisioned , or Ready/Available state. Prerequisites Install RHCOS bare metal compute machines for use in the cluster and create corresponding BareMetalHost objects. Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Annotate the compute machine set that you want to remove from the provisioner node by adding the baremetalhost.metal3.io/detached annotation. $ oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached' Wait for the new machines to start. Note When you use a BareMetalHost object to create a machine in the cluster and labels or selectors are subsequently changed on the BareMetalHost , the BareMetalHost object continues to be counted against the MachineSet that the Machine object was created from. In the provisioning use case, remove the annotation after the reboot is complete by using the following command: $ oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached-' Additional resources Expanding the cluster MachineHealthChecks on bare metal 10.2.5. Powering off bare-metal hosts You can power off bare-metal cluster hosts in the web console or by applying a patch in the cluster by using the OpenShift CLI ( oc ). Before you power off a host, you should mark the node as unschedulable and drain all pods and workloads from the node. Prerequisites You have installed an RHCOS compute machine on bare-metal infrastructure for use in the cluster. You have logged in as a user with cluster-admin privileges. You have configured the host to be managed and have added BMC credentials for the cluster host. You can add BMC credentials by applying a Secret custom resource (CR) in the cluster or by logging in to the web console and configuring the bare-metal host to be managed. Procedure In the web console, mark the node that you want to power off as unschedulable. Perform the following steps: Navigate to Nodes and select the node that you want to power off. Expand the Actions menu and select Mark as unschedulable . Manually delete or relocate running pods on the node by adjusting the pod deployments or scaling down workloads on the node to zero. Wait for the drain process to complete. Navigate to Compute Bare Metal Hosts . Expand the Options menu for the bare-metal host that you want to power off, and select Power Off . Select Immediate power off . Alternatively, you can patch the BareMetalHost resource for the host that you want to power off by using oc . Get the name of the managed bare-metal host. Run the following command: $ oc get baremetalhosts -n openshift-machine-api -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.provisioning.state}{"\n"}{end}' Example output master-0.example.com managed master-1.example.com managed master-2.example.com managed worker-0.example.com managed worker-1.example.com managed worker-2.example.com managed Mark the node as unschedulable: $ oc adm cordon <bare_metal_host> 1 1 <bare_metal_host> is the host that you want to shut down, for example, worker-2.example.com . Drain all pods on the node: $ oc adm drain <bare_metal_host> --force=true Pods that are backed by replication controllers are rescheduled to other available nodes in the cluster. Safely power off the bare-metal host. Run the following command: $ oc patch <bare_metal_host> --type json -p '[{"op": "replace", "path": "/spec/online", "value": false}]' After you power on the host, make the node schedulable for workloads. Run the following command: $ oc adm uncordon <bare_metal_host> | [
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 disableCertificateVerification: True 2 bootMACAddress: <host_boot_mac_address>",
"oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>'",
"oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached'",
"oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached-'",
"oc get baremetalhosts -n openshift-machine-api -o jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.status.provisioning.state}{\"\\n\"}{end}'",
"master-0.example.com managed master-1.example.com managed master-2.example.com managed worker-0.example.com managed worker-1.example.com managed worker-2.example.com managed",
"oc adm cordon <bare_metal_host> 1",
"oc adm drain <bare_metal_host> --force=true",
"oc patch <bare_metal_host> --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/online\", \"value\": false}]'",
"oc adm uncordon <bare_metal_host>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/scalability_and_performance/managing-bare-metal-hosts |
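The credentialsName field in the BareMetalHost example above must reference a Secret that carries the BMC user name and password, which this chapter does not reproduce. A minimal sketch with hypothetical values follows (the name openshift-worker-0-bmc-secret and the base64-encoded admin/password strings are placeholders; substitute real credentials and keep the Secret in the same namespace as the BareMetalHost):

apiVersion: v1
kind: Secret
metadata:
  name: openshift-worker-0-bmc-secret
  namespace: openshift-machine-api
type: Opaque
data:
  username: YWRtaW4=
  password: cGFzc3dvcmQ=

Set credentialsName: openshift-worker-0-bmc-secret in the BareMetalHost spec so that the baremetal-operator can reach the BMC.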
Chapter 10. Troubleshooting Ceph objects | Chapter 10. Troubleshooting Ceph objects As a storage administrator, you can use the ceph-objectstore-tool utility to perform high-level or low-level object operations. The ceph-objectstore-tool utility can help you troubleshoot problems related to objects within a particular OSD or placement group. You can also start OSD containers in rescue/maintenance mode to repair OSDs without installing Ceph packages on the OSD node. Important Manipulating objects can cause unrecoverable data loss. Contact Red Hat support before using the ceph-objectstore-tool utility. 10.1. Prerequisites Verify there are no network-related issues. 10.2. Troubleshooting Ceph objects in a containerized environment The OSD container can be started in rescue/maintenance mode to repair OSDs in Red Hat Ceph Storage 4 without installing Ceph packages on the OSD node. You can use ceph-bluestore-tool to run consistency check with fsck command, or to run consistency check and repair any errors with repair command. Important This procedure is specific to containerized deployments only. Skip this section for bare-metal deployments Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Set noout flag on cluster. Example Login to the node hosting the OSD container. Backup /etc/systemd/system/[email protected] unit file to /root directory. Example Move /run/ceph-osd@OSD_ID.service-cid file to /root . Example Edit /etc/systemd/system/[email protected] unit file and add -it --entrypoint /bin/bash option to podman command. Example Reload systemd manager configuration. Example Restart the OSD service associated with the OSD_ID . Syntax Replace OSD_ID with the ID of the OSD. Example Login to the container associated with the OSD_ID . Syntax Example Get osd fsid and activate the OSD to mount OSD's logical volume (LV). Syntax Example Run fsck and repair commands. Syntax Example After exiting the container, copy /etc/systemd/system/[email protected] unit file from /root directory. Example Reload systemd manager configuration. Example Move /run/ceph-osd@ OSD_ID .service-cid file to /tmp . Example Restart the OSD service associated with the OSD_ID . Syntax Example Additional Resources For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide . 10.3. Troubleshooting high-level object operations As a storage administrator, you can use the ceph-objectstore-tool utility to perform high-level object operations. The ceph-objectstore-tool utility supports the following high-level object operations: List objects List lost objects Fix lost objects Important Manipulating objects can cause unrecoverable data loss. Contact Red Hat support before using the ceph-objectstore-tool utility. 10.3.1. Prerequisites Root-level access to the Ceph OSD nodes. 10.3.2. Listing objects The OSD can contain zero to many placement groups, and zero to many objects within a placement group (PG). The ceph-objectstore-tool utility allows you to list objects stored within an OSD. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Verify the appropriate OSD is down: Example For containerized deployments, to access the bluestore tool, follow the below steps: Set noout flag on cluster. Example Login to the node hosting the OSD container. 
Backup /etc/systemd/system/[email protected] unit file to /root directory. Example Move /run/ceph-osd@OSD_ID.service-cid file to /root . Example Edit /etc/systemd/system/[email protected] unit file and add -it --entrypoint /bin/bash option to podman command. Example Reload systemd manager configuration. Example Restart the OSD service associated with the OSD_ID . Syntax Replace OSD_ID with the ID of the OSD. Example Login to the container associated with the OSD_ID . Syntax Example Get osd fsid and activate the OSD to mount OSD's logical volume (LV). Syntax Example Identify all the objects within an OSD, regardless of their placement group: Example Identify all the objects within a placement group: Example Identify the PG an object belongs to: Example For containerized deployments, to revert the changes, follow the below steps: After exiting the container, copy /etc/systemd/system/[email protected] unit file from /root directory. Example Reload systemd manager configuration. Example Move /run/ceph-osd@ OSD_ID .service-cid file to /tmp . Example Restart the OSD service associated with the OSD_ID . Syntax Example Additional Resources For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide . 10.3.3. Listing lost objects An OSD can mark objects as lost or unfound . You can use the ceph-objectstore-tool to list the lost and unfound objects stored within an OSD. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Verify the appropriate OSD is down: Example For containerized deployments, to access the bluestore tool, follow the below steps: Set noout flag on cluster. Example Login to the node hosting the OSD container. Backup /etc/systemd/system/[email protected] unit file to /root directory. Example Move /run/ceph-osd@OSD_ID.service-cid file to /root . Example Edit /etc/systemd/system/[email protected] unit file and add -it --entrypoint /bin/bash option to podman command. Example Reload systemd manager configuration. Example Restart the OSD service associated with the OSD_ID . Syntax Replace OSD_ID with the ID of the OSD. Example Login to the container associated with the OSD_ID . Syntax Example Get osd fsid and activate the OSD to mount OSD's logical volume (LV). Syntax Example Use the ceph-objectstore-tool utility to list lost and unfound objects. Select the appropriate circumstance: To list all the lost objects: Example To list all the lost objects within a placement group: Example To list a lost object by its identifier: Example For containerized deployments, to revert the changes, follow the below steps: After exiting the container, copy /etc/systemd/system/[email protected] unit file from /root directory. Example Reload systemd manager configuration. Example Move /run/ceph-osd@ OSD_ID .service-cid file to /tmp . Example Restart the OSD service associated with the OSD_ID . Syntax Example Additional Resources For more information on stopping an OSD, see the Starting, Stopping, and Restarting a Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide. 10.3.4. Fixing lost objects You can use the ceph-objectstore-tool utility to list and fix lost and unfound objects stored within a Ceph OSD. This procedure applies only to legacy objects. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. 
Procedure Verify the appropriate OSD is down: Syntax Example For containerized deployments, to access the bluestore tool, follow the below steps: Set noout flag on cluster. Example Login to the node hosting the OSD container. Backup /etc/systemd/system/[email protected] unit file to /root directory. Example Move /run/ceph-osd@OSD_ID.service-cid file to /root . Example Edit /etc/systemd/system/[email protected] unit file and add -it --entrypoint /bin/bash option to podman command. Example Reload systemd manager configuration. Example Restart the OSD service associated with the OSD_ID . Syntax Replace OSD_ID with the ID of the OSD. Example Login to the container associated with the OSD_ID . Syntax Example Get osd fsid and activate the OSD to mount OSD's logical volume (LV). Syntax Example To list all the lost legacy objects: Syntax Example Use the ceph-objectstore-tool utility to fix lost and unfound objects as a ceph user. Select the appropriate circumstance: To fix all lost objects: Syntax Example To fix all the lost objects within a placement group: Example To fix a lost object by its identifier: Syntax Example For containerized deployments, to revert the changes, follow the below steps: After exiting the container, copy /etc/systemd/system/[email protected] unit file from /root directory. Example Reload systemd manager configuration. Example Move /run/ceph-osd@ OSD_ID .service-cid file to /tmp . Example Restart the OSD service associated with the OSD_ID . Syntax Example Additional Resources For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide . 10.4. Troubleshooting low-level object operations As a storage administrator, you can use the ceph-objectstore-tool utility to perform low-level object operations. The ceph-objectstore-tool utility supports the following low-level object operations: Manipulate the object's content Remove an object List the object map (OMAP) Manipulate the OMAP header Manipulate the OMAP key List the object's attributes Manipulate the object's attribute key Important Manipulating objects can cause unrecoverable data loss. Contact Red Hat support before using the ceph-objectstore-tool utility. 10.4.1. Prerequisites Root-level access to the Ceph OSD nodes. 10.4.2. Manipulating the object's content With the ceph-objectstore-tool utility, you can get or set bytes on an object. Important Setting the bytes on an object can cause unrecoverable data loss. To prevent data loss, make a backup copy of the object. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Verify the appropriate OSD is down: Example For containerized deployments, to access the bluestore tool, follow the below steps: Set noout flag on cluster. Example Login to the node hosting the OSD container. Backup /etc/systemd/system/[email protected] unit file to /root directory. Example Move /run/ceph-osd@OSD_ID.service-cid file to /root . Example Edit /etc/systemd/system/[email protected] unit file and add -it --entrypoint /bin/bash option to podman command. Example Reload systemd manager configuration. Example Restart the OSD service associated with the OSD_ID . Syntax Replace OSD_ID with the ID of the OSD. Example Login to the container associated with the OSD_ID . Syntax Example Get osd fsid and activate the OSD to mount OSD's logical volume (LV). Syntax Example Find the object by listing the objects of the OSD or placement group (PG). 
Before setting the bytes on an object, make a backup and a working copy of the object: Example Edit the working copy object file and modify the object contents accordingly. Set the bytes of the object: Example For containerized deployments, to revert the changes, follow the below steps: After exiting the container, copy /etc/systemd/system/[email protected] unit file from /root directory. Example Reload systemd manager configuration. Example Move /run/ceph-osd@ OSD_ID .service-cid file to /tmp . Example Restart the OSD service associated with the OSD_ID . Syntax Example Additional Resources For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide . 10.4.3. Removing an object Use the ceph-objectstore-tool utility to remove an object. By removing an object, its contents and references are removed from the placement group (PG). Important You cannot recreate an object once it is removed. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Verify the appropriate OSD is down: Example For containerized deployments, to access the bluestore tool, follow the below steps: Set noout flag on cluster. Example Login to the node hosting the OSD container. Backup /etc/systemd/system/[email protected] unit file to /root directory. Example Move /run/ceph-osd@OSD_ID.service-cid file to /root . Example Edit /etc/systemd/system/[email protected] unit file and add -it --entrypoint /bin/bash option to podman command. Example Reload systemd manager configuration. Example Restart the OSD service associated with the OSD_ID . Syntax Replace OSD_ID with the ID of the OSD. Example Login to the container associated with the OSD_ID . Syntax Example Get osd fsid and activate the OSD to mount OSD's logical volume (LV). Syntax Example Remove an object: Syntax Example For containerized deployments, to revert the changes, follow the below steps: After exiting the container, copy /etc/systemd/system/[email protected] unit file from /root directory. Example Reload systemd manager configuration. Example Move /run/ceph-osd@ OSD_ID .service-cid file to /tmp . Example Restart the OSD service associated with the OSD_ID . Syntax Example Additional Resources For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide . 10.4.4. Listing the object map Use the ceph-objectstore-tool utility to list the contents of the object map (OMAP). The output provides you a list of keys. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Verify the appropriate OSD is down: Example For containerized deployments, to access the bluestore tool, follow the below steps: Set noout flag on cluster. Example Login to the node hosting the OSD container. Backup /etc/systemd/system/[email protected] unit file to /root directory. Example Move /run/ceph-osd@OSD_ID.service-cid file to /root . Example Edit /etc/systemd/system/[email protected] unit file and add -it --entrypoint /bin/bash option to podman command. Example Reload systemd manager configuration. Example Restart the OSD service associated with the OSD_ID . Syntax Replace OSD_ID with the ID of the OSD. Example Login to the container associated with the OSD_ID . Syntax Example Get osd fsid and activate the OSD to mount OSD's logical volume (LV). 
Syntax Example List the object map: Example For containerized deployments, to revert the changes, follow the below steps: After exiting the container, copy /etc/systemd/system/[email protected] unit file from /root directory. Example Reload systemd manager configuration. Example Move /run/ceph-osd@ OSD_ID .service-cid file to /tmp . Example Restart the OSD service associated with the OSD_ID . Syntax Example Additional Resources For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide . 10.4.5. Manipulating the object map header The ceph-objectstore-tool utility will output the object map (OMAP) header with the values associated with the object's keys. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure For containerized deployments, to access the bluestore tool, follow the below steps: Set noout flag on cluster. Example Login to the node hosting the OSD container. Backup /etc/systemd/system/[email protected] unit file to /root directory. Example Move /run/ceph-osd@OSD_ID.service-cid file to /root . Example Edit /etc/systemd/system/[email protected] unit file and add -it --entrypoint /bin/bash option to podman command. Example Reload systemd manager configuration. Example Restart the OSD service associated with the OSD_ID . Syntax Replace OSD_ID with the ID of the OSD. Example Login to the container associated with the OSD_ID . Syntax Example Get osd fsid and activate the OSD to mount OSD's logical volume (LV). Syntax Example Verify the appropriate OSD is down: Syntax Example Get the object map header: Syntax Example Set the object map header: Syntax Example For containerized deployments, to revert the changes, follow the below steps: After exiting the container, copy /etc/systemd/system/[email protected] unit file from /root directory. Example Reload systemd manager configuration. Example Move /run/ceph-osd@ OSD_ID .service-cid file to /tmp . Example Restart the OSD service associated with the OSD_ID . Syntax Example Additional Resources For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide . 10.4.6. Manipulating the object map key Use the ceph-objectstore-tool utility to change the object map (OMAP) key. You need to provide the data path, the placement group identifier (PG ID), the object, and the key in the OMAP. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure For containerized deployments, to access the bluestore tool, follow the below steps: Set noout flag on cluster. Example Login to the node hosting the OSD container. Backup /etc/systemd/system/[email protected] unit file to /root directory. Example Move /run/ceph-osd@OSD_ID.service-cid file to /root . Example Edit /etc/systemd/system/[email protected] unit file and add -it --entrypoint /bin/bash option to podman command. Example Reload systemd manager configuration. Example Restart the OSD service associated with the OSD_ID . Syntax Replace OSD_ID with the ID of the OSD. Example Login to the container associated with the OSD_ID . Syntax Example Get osd fsid and activate the OSD to mount OSD's logical volume (LV). 
Syntax Example Get the object map key: Syntax Example Set the object map key: Syntax Example Remove the object map key: Syntax Example For containerized deployments, to revert the changes, follow the below steps: After exiting the container, copy /etc/systemd/system/[email protected] unit file from /root directory. Example Reload systemd manager configuration. Example Move /run/ceph-osd@ OSD_ID .service-cid file to /tmp . Example Restart the OSD service associated with the OSD_ID . Syntax Example Additional Resources For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide . 10.4.7. Listing the object's attributes Use the ceph-objectstore-tool utility to list an object's attributes. The output provides you with the object's keys and values. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Verify the appropriate OSD is down: Example For containerized deployments, to access the bluestore tool, follow the below steps: Set noout flag on cluster. Example Login to the node hosting the OSD container. Backup /etc/systemd/system/[email protected] unit file to /root directory. Example Move /run/ceph-osd@OSD_ID.service-cid file to /root . Example Edit /etc/systemd/system/[email protected] unit file and add -it --entrypoint /bin/bash option to podman command. Example Reload systemd manager configuration. Example Restart the OSD service associated with the OSD_ID . Syntax Replace OSD_ID with the ID of the OSD. Example Login to the container associated with the OSD_ID . Syntax Example Get osd fsid and activate the OSD to mount OSD's logical volume (LV). Syntax Example List the object's attributes: Example For containerized deployments, to revert the changes, follow the below steps: After exiting the container, copy /etc/systemd/system/[email protected] unit file from /root directory. Example Reload systemd manager configuration. Example Move /run/ceph-osd@ OSD_ID .service-cid file to /tmp . Example Restart the OSD service associated with the OSD_ID . Syntax Example Additional Resources For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide . 10.4.8. Manipulating the object attribute key Use the ceph-objectstore-tool utility to change an object's attributes. To manipulate the object's attributes you need the data and journal paths, the placement group identifier (PG ID), the object, and the key in the object's attribute. Prerequisites Root-level access to the Ceph OSD node. Stopping the ceph-osd daemon. Procedure Verify the appropriate OSD is down: Example For containerized deployments, to access the bluestore tool, follow the below steps: Set noout flag on cluster. Example Login to the node hosting the OSD container. Backup /etc/systemd/system/[email protected] unit file to /root directory. Example Move /run/ceph-osd@OSD_ID.service-cid file to /root . Example Edit /etc/systemd/system/[email protected] unit file and add -it --entrypoint /bin/bash option to podman command. Example Reload systemd manager configuration. Example Restart the OSD service associated with the OSD_ID . Syntax Replace OSD_ID with the ID of the OSD. Example Login to the container associated with the OSD_ID . Syntax Example Get osd fsid and activate the OSD to mount OSD's logical volume (LV). 
Syntax Example Get the object's attributes: Syntax Example Set an object's attributes: Syntax Example Remove an object's attributes: Syntax Example For containerized deployments, to revert the changes, follow the below steps: After exiting the container, copy /etc/systemd/system/[email protected] unit file from /root directory. Example Reload systemd manager configuration. Example Move /run/ceph-osd@ OSD_ID .service-cid file to /tmp . Example Restart the OSD service associated with the OSD_ID . Syntax Example Additional Resources For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide . 10.5. Additional Resources For Red Hat Ceph Storage support, see the Red Hat Customer Portal . | [
"ceph osd set noout",
"cp /etc/systemd/system/[email protected] /root/[email protected]",
"mv /run/[email protected] /root",
"Please do not change this file directly since it is managed by Ansible and will be overwritten [Unit] Description=Ceph OSD After=network.target [Service] EnvironmentFile=-/etc/environment ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid --rm --net=host --privileged=true --pid=host --ipc=host --cpus=2 -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 -e CLUSTER=ceph -v /run/lvm/:/run/lvm/ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest -e OSD_ID=%i -e DEBUG=stayalive --name=ceph-osd-%i registry.redhat.io/rhceph/rhceph-4-rhel8:latest ExecStop=-/usr/bin/sh -c \"/usr/bin/podman rm -f `cat /%t/%n-cid`\" KillMode=none Restart=always RestartSec=10s TimeoutStartSec=120 TimeoutStopSec=15 Type=forking PIDFile=/%t/%n-pid [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"exec -it ceph-osd- OSD_ID /bin/bash",
"podman exec -it ceph-osd-0 /bin/bash",
"ceph-volume lvm list |grep -A15 \"osd\\. OSD_ID \"|grep \"osd fsid\" ceph-volume lvm activate --bluestore OSD_ID OSD_FSID",
"ceph-volume lvm list |grep -A15 \"osd\\.0\"|grep \"osd fsid\" osd fsid 087eee15-6561-40a3-8fe4-9583ba64a4ff ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff stderr: Created symlink /etc/systemd/system/multi-user.target.wants/[email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl enable --runtime ceph-osd@0 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph [email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl start ceph-osd@0 stderr: Running in chroot, ignoring request: start --> ceph-volume lvm activate successful for osd ID: 0",
"ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph- OSD_ID ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph- OSD_ID",
"ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0 fsck success",
"ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0 repair success",
"cp /etc/systemd/system/[email protected] /root/[email protected] cp /root/[email protected] /etc/systemd/system/[email protected]",
"systemctl daemon-reload",
"mv /run/[email protected] /tmp",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"systemctl status ceph-osd@ OSD_NUMBER",
"systemctl status ceph-osd@1",
"ceph osd set noout",
"cp /etc/systemd/system/[email protected] /root/[email protected]",
"mv /run/[email protected] /root",
"Please do not change this file directly since it is managed by Ansible and will be overwritten [Unit] Description=Ceph OSD After=network.target [Service] EnvironmentFile=-/etc/environment ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid --rm --net=host --privileged=true --pid=host --ipc=host --cpus=2 -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 -e CLUSTER=ceph -v /run/lvm/:/run/lvm/ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest -e OSD_ID=%i -e DEBUG=stayalive --name=ceph-osd-%i registry.redhat.io/rhceph/rhceph-4-rhel8:latest ExecStop=-/usr/bin/sh -c \"/usr/bin/podman rm -f `cat /%t/%n-cid`\" KillMode=none Restart=always RestartSec=10s TimeoutStartSec=120 TimeoutStopSec=15 Type=forking PIDFile=/%t/%n-pid [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"exec -it ceph-osd- OSD_ID /bin/bash",
"podman exec -it ceph-osd-0 /bin/bash",
"ceph-volume lvm list |grep -A15 \"osd\\. OSD_ID \"|grep \"osd fsid\" ceph-volume lvm activate --bluestore OSD_ID OSD_FSID",
"ceph-volume lvm list |grep -A15 \"osd\\.0\"|grep \"osd fsid\" osd fsid 087eee15-6561-40a3-8fe4-9583ba64a4ff ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff stderr: Created symlink /etc/systemd/system/multi-user.target.wants/[email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl enable --runtime ceph-osd@0 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph [email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl start ceph-osd@0 stderr: Running in chroot, ignoring request: start --> ceph-volume lvm activate successful for osd ID: 0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op list",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID --op list",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c --op list",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op list OBJECT_ID",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list default.region",
"cp /etc/systemd/system/[email protected] /root/[email protected] cp /root/[email protected] /etc/systemd/system/[email protected]",
"systemctl daemon-reload",
"mv /run/[email protected] /tmp",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"systemctl status ceph-osd@ OSD_NUMBER",
"systemctl status ceph-osd@1",
"ceph osd set noout",
"cp /etc/systemd/system/[email protected] /root/[email protected]",
"mv /run/[email protected] /root",
"Please do not change this file directly since it is managed by Ansible and will be overwritten [Unit] Description=Ceph OSD After=network.target [Service] EnvironmentFile=-/etc/environment ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid --rm --net=host --privileged=true --pid=host --ipc=host --cpus=2 -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 -e CLUSTER=ceph -v /run/lvm/:/run/lvm/ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest -e OSD_ID=%i -e DEBUG=stayalive --name=ceph-osd-%i registry.redhat.io/rhceph/rhceph-4-rhel8:latest ExecStop=-/usr/bin/sh -c \"/usr/bin/podman rm -f `cat /%t/%n-cid`\" KillMode=none Restart=always RestartSec=10s TimeoutStartSec=120 TimeoutStopSec=15 Type=forking PIDFile=/%t/%n-pid [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"exec -it ceph-osd- OSD_ID /bin/bash",
"podman exec -it ceph-osd-0 /bin/bash",
"ceph-volume lvm list |grep -A15 \"osd\\. OSD_ID \"|grep \"osd fsid\" ceph-volume lvm activate --bluestore OSD_ID OSD_FSID",
"ceph-volume lvm list |grep -A15 \"osd\\.0\"|grep \"osd fsid\" osd fsid 087eee15-6561-40a3-8fe4-9583ba64a4ff ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff stderr: Created symlink /etc/systemd/system/multi-user.target.wants/[email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl enable --runtime ceph-osd@0 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph [email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl start ceph-osd@0 stderr: Running in chroot, ignoring request: start --> ceph-volume lvm activate successful for osd ID: 0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op list-lost",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list-lost",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID --op list-lost",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c --op list-lost",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op list-lost OBJECT_ID",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list-lost default.region",
"cp /etc/systemd/system/[email protected] /root/[email protected] cp /root/[email protected] /etc/systemd/system/[email protected]",
"systemctl daemon-reload",
"mv /run/[email protected] /tmp",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"systemctl status ceph-osd@ OSD_NUMBER",
"systemctl status ceph-osd@1",
"ceph osd set noout",
"cp /etc/systemd/system/[email protected] /root/[email protected]",
"mv /run/[email protected] /root",
"Please do not change this file directly since it is managed by Ansible and will be overwritten [Unit] Description=Ceph OSD After=network.target [Service] EnvironmentFile=-/etc/environment ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid --rm --net=host --privileged=true --pid=host --ipc=host --cpus=2 -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 -e CLUSTER=ceph -v /run/lvm/:/run/lvm/ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest -e OSD_ID=%i -e DEBUG=stayalive --name=ceph-osd-%i registry.redhat.io/rhceph/rhceph-4-rhel8:latest ExecStop=-/usr/bin/sh -c \"/usr/bin/podman rm -f `cat /%t/%n-cid`\" KillMode=none Restart=always RestartSec=10s TimeoutStartSec=120 TimeoutStopSec=15 Type=forking PIDFile=/%t/%n-pid [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"exec -it ceph-osd- OSD_ID /bin/bash",
"podman exec -it ceph-osd-0 /bin/bash",
"ceph-volume lvm list |grep -A15 \"osd\\. OSD_ID \"|grep \"osd fsid\" ceph-volume lvm activate --bluestore OSD_ID OSD_FSID",
"ceph-volume lvm list |grep -A15 \"osd\\.0\"|grep \"osd fsid\" osd fsid 087eee15-6561-40a3-8fe4-9583ba64a4ff ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff stderr: Created symlink /etc/systemd/system/multi-user.target.wants/[email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl enable --runtime ceph-osd@0 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph [email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl start ceph-osd@0 stderr: Running in chroot, ignoring request: start --> ceph-volume lvm activate successful for osd ID: 0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost --dry-run",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost --dry-run",
"su - ceph -c 'ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost'",
"su - ceph -c 'ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost'",
"su - ceph -c 'ceph-objectstore-tool --data-path _PATH_TO_OSD_ --pgid _PG_ID_ --op fix-lost'",
"su - ceph -c 'ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c --op fix-lost'",
"su - ceph -c 'ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost OBJECT_ID '",
"su - ceph -c 'ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost default.region'",
"cp /etc/systemd/system/[email protected] /root/[email protected] cp /root/[email protected] /etc/systemd/system/[email protected]",
"systemctl daemon-reload",
"mv /run/[email protected] /tmp",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"systemctl status ceph-osd@USDOSD_NUMBER",
"systemctl status ceph-osd@1",
"ceph osd set noout",
"cp /etc/systemd/system/[email protected] /root/[email protected]",
"mv /run/[email protected] /root",
"Please do not change this file directly since it is managed by Ansible and will be overwritten [Unit] Description=Ceph OSD After=network.target [Service] EnvironmentFile=-/etc/environment ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid --rm --net=host --privileged=true --pid=host --ipc=host --cpus=2 -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 -e CLUSTER=ceph -v /run/lvm/:/run/lvm/ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest -e OSD_ID=%i -e DEBUG=stayalive --name=ceph-osd-%i registry.redhat.io/rhceph/rhceph-4-rhel8:latest ExecStop=-/usr/bin/sh -c \"/usr/bin/podman rm -f `cat /%t/%n-cid`\" KillMode=none Restart=always RestartSec=10s TimeoutStartSec=120 TimeoutStopSec=15 Type=forking PIDFile=/%t/%n-pid [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"exec -it ceph-osd- OSD_ID /bin/bash",
"podman exec -it ceph-osd-0 /bin/bash",
"ceph-volume lvm list |grep -A15 \"osd\\. OSD_ID \"|grep \"osd fsid\" ceph-volume lvm activate --bluestore OSD_ID OSD_FSID",
"ceph-volume lvm list |grep -A15 \"osd\\.0\"|grep \"osd fsid\" osd fsid 087eee15-6561-40a3-8fe4-9583ba64a4ff ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff stderr: Created symlink /etc/systemd/system/multi-user.target.wants/[email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl enable --runtime ceph-osd@0 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph [email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl start ceph-osd@0 stderr: Running in chroot, ignoring request: start --> ceph-volume lvm activate successful for osd ID: 0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-bytes > OBJECT_FILE_NAME ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-bytes > OBJECT_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-bytes > zone_info.default.backup ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-bytes > zone_info.default.working-copy",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT set-bytes < OBJECT_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-bytes < zone_info.default.working-copy",
"cp /etc/systemd/system/[email protected] /root/[email protected] cp /root/[email protected] /etc/systemd/system/[email protected]",
"systemctl daemon-reload",
"mv /run/[email protected] /tmp",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"systemctl status ceph-osd@USDOSD_NUMBER",
"systemctl status ceph-osd@1",
"ceph osd set noout",
"cp /etc/systemd/system/[email protected] /root/[email protected]",
"mv /run/[email protected] /root",
"Please do not change this file directly since it is managed by Ansible and will be overwritten [Unit] Description=Ceph OSD After=network.target [Service] EnvironmentFile=-/etc/environment ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid --rm --net=host --privileged=true --pid=host --ipc=host --cpus=2 -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 -e CLUSTER=ceph -v /run/lvm/:/run/lvm/ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest -e OSD_ID=%i -e DEBUG=stayalive --name=ceph-osd-%i registry.redhat.io/rhceph/rhceph-4-rhel8:latest ExecStop=-/usr/bin/sh -c \"/usr/bin/podman rm -f `cat /%t/%n-cid`\" KillMode=none Restart=always RestartSec=10s TimeoutStartSec=120 TimeoutStopSec=15 Type=forking PIDFile=/%t/%n-pid [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"exec -it ceph-osd- OSD_ID /bin/bash",
"podman exec -it ceph-osd-0 /bin/bash",
"ceph-volume lvm list |grep -A15 \"osd\\. OSD_ID \"|grep \"osd fsid\" ceph-volume lvm activate --bluestore OSD_ID OSD_FSID",
"ceph-volume lvm list |grep -A15 \"osd\\.0\"|grep \"osd fsid\" osd fsid 087eee15-6561-40a3-8fe4-9583ba64a4ff ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff stderr: Created symlink /etc/systemd/system/multi-user.target.wants/[email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl enable --runtime ceph-osd@0 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph [email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl start ceph-osd@0 stderr: Running in chroot, ignoring request: start --> ceph-volume lvm activate successful for osd ID: 0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT remove",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' remove",
"cp /etc/systemd/system/[email protected] /root/[email protected] cp /root/[email protected] /etc/systemd/system/[email protected]",
"systemctl daemon-reload",
"mv /run/[email protected] /tmp",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"systemctl status ceph-osd@ OSD_NUMBER",
"systemctl status ceph-osd@1",
"ceph osd set noout",
"cp /etc/systemd/system/[email protected] /root/[email protected]",
"mv /run/[email protected] /root",
"Please do not change this file directly since it is managed by Ansible and will be overwritten [Unit] Description=Ceph OSD After=network.target [Service] EnvironmentFile=-/etc/environment ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid --rm --net=host --privileged=true --pid=host --ipc=host --cpus=2 -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 -e CLUSTER=ceph -v /run/lvm/:/run/lvm/ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest -e OSD_ID=%i -e DEBUG=stayalive --name=ceph-osd-%i registry.redhat.io/rhceph/rhceph-4-rhel8:latest ExecStop=-/usr/bin/sh -c \"/usr/bin/podman rm -f `cat /%t/%n-cid`\" KillMode=none Restart=always RestartSec=10s TimeoutStartSec=120 TimeoutStopSec=15 Type=forking PIDFile=/%t/%n-pid [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"exec -it ceph-osd- OSD_ID /bin/bash",
"podman exec -it ceph-osd-0 /bin/bash",
"ceph-volume lvm list |grep -A15 \"osd\\. OSD_ID \"|grep \"osd fsid\" ceph-volume lvm activate --bluestore OSD_ID OSD_FSID",
"ceph-volume lvm list |grep -A15 \"osd\\.0\"|grep \"osd fsid\" osd fsid 087eee15-6561-40a3-8fe4-9583ba64a4ff ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff stderr: Created symlink /etc/systemd/system/multi-user.target.wants/[email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl enable --runtime ceph-osd@0 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph [email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl start ceph-osd@0 stderr: Running in chroot, ignoring request: start --> ceph-volume lvm activate successful for osd ID: 0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT list-omap",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' list-omap",
"cp /etc/systemd/system/[email protected] /root/[email protected] cp /root/[email protected] /etc/systemd/system/[email protected]",
"systemctl daemon-reload",
"mv /run/[email protected] /tmp",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"ceph osd set noout",
"cp /etc/systemd/system/[email protected] /root/[email protected]",
"mv /run/[email protected] /root",
"Please do not change this file directly since it is managed by Ansible and will be overwritten [Unit] Description=Ceph OSD After=network.target [Service] EnvironmentFile=-/etc/environment ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid --rm --net=host --privileged=true --pid=host --ipc=host --cpus=2 -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 -e CLUSTER=ceph -v /run/lvm/:/run/lvm/ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest -e OSD_ID=%i -e DEBUG=stayalive --name=ceph-osd-%i registry.redhat.io/rhceph/rhceph-4-rhel8:latest ExecStop=-/usr/bin/sh -c \"/usr/bin/podman rm -f `cat /%t/%n-cid`\" KillMode=none Restart=always RestartSec=10s TimeoutStartSec=120 TimeoutStopSec=15 Type=forking PIDFile=/%t/%n-pid [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"exec -it ceph-osd- OSD_ID /bin/bash",
"podman exec -it ceph-osd-0 /bin/bash",
"ceph-volume lvm list |grep -A15 \"osd\\. OSD_ID \"|grep \"osd fsid\" ceph-volume lvm activate --bluestore OSD_ID OSD_FSID",
"ceph-volume lvm list |grep -A15 \"osd\\.0\"|grep \"osd fsid\" osd fsid 087eee15-6561-40a3-8fe4-9583ba64a4ff ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff stderr: Created symlink /etc/systemd/system/multi-user.target.wants/[email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl enable --runtime ceph-osd@0 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph [email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl start ceph-osd@0 stderr: Running in chroot, ignoring request: start --> ceph-volume lvm activate successful for osd ID: 0",
"systemctl status ceph-osd@ OSD_NUMBER",
"systemctl status ceph-osd@1",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-omaphdr > OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-omaphdr > zone_info.default.omaphdr.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-omaphdr < OBJECT_MAP_FILE_NAME",
"su - ceph -c 'ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-omaphdr < zone_info.default.omaphdr.txt",
"cp /etc/systemd/system/[email protected] /root/[email protected] cp /root/[email protected] /etc/systemd/system/[email protected]",
"systemctl daemon-reload",
"mv /run/[email protected] /tmp",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"ceph osd set noout",
"cp /etc/systemd/system/[email protected] /root/[email protected]",
"mv /run/[email protected] /root",
"Please do not change this file directly since it is managed by Ansible and will be overwritten [Unit] Description=Ceph OSD After=network.target [Service] EnvironmentFile=-/etc/environment ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid --rm --net=host --privileged=true --pid=host --ipc=host --cpus=2 -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 -e CLUSTER=ceph -v /run/lvm/:/run/lvm/ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest -e OSD_ID=%i -e DEBUG=stayalive --name=ceph-osd-%i registry.redhat.io/rhceph/rhceph-4-rhel8:latest ExecStop=-/usr/bin/sh -c \"/usr/bin/podman rm -f `cat /%t/%n-cid`\" KillMode=none Restart=always RestartSec=10s TimeoutStartSec=120 TimeoutStopSec=15 Type=forking PIDFile=/%t/%n-pid [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"exec -it ceph-osd- OSD_ID /bin/bash",
"podman exec -it ceph-osd-0 /bin/bash",
"ceph-volume lvm list |grep -A15 \"osd\\. OSD_ID \"|grep \"osd fsid\" ceph-volume lvm activate --bluestore OSD_ID OSD_FSID",
"ceph-volume lvm list |grep -A15 \"osd\\.0\"|grep \"osd fsid\" osd fsid 087eee15-6561-40a3-8fe4-9583ba64a4ff ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff stderr: Created symlink /etc/systemd/system/multi-user.target.wants/[email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl enable --runtime ceph-osd@0 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph [email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl start ceph-osd@0 stderr: Running in chroot, ignoring request: start --> ceph-volume lvm activate successful for osd ID: 0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-omap KEY > OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-omap \"\" > zone_info.default.omap.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT set-omap KEY < OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-omap \"\" < zone_info.default.omap.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT rm-omap KEY",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' rm-omap \"\"",
"cp /etc/systemd/system/[email protected] /root/[email protected] cp /root/[email protected] /etc/systemd/system/[email protected]",
"systemctl daemon-reload",
"mv /run/[email protected] /tmp",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"systemctl status ceph-osd@ OSD_NUMBER",
"systemctl status ceph-osd@1",
"ceph osd set noout",
"cp /etc/systemd/system/[email protected] /root/[email protected]",
"mv /run/[email protected] /root",
"Please do not change this file directly since it is managed by Ansible and will be overwritten [Unit] Description=Ceph OSD After=network.target [Service] EnvironmentFile=-/etc/environment ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid --rm --net=host --privileged=true --pid=host --ipc=host --cpus=2 -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 -e CLUSTER=ceph -v /run/lvm/:/run/lvm/ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest -e OSD_ID=%i -e DEBUG=stayalive --name=ceph-osd-%i registry.redhat.io/rhceph/rhceph-4-rhel8:latest ExecStop=-/usr/bin/sh -c \"/usr/bin/podman rm -f `cat /%t/%n-cid`\" KillMode=none Restart=always RestartSec=10s TimeoutStartSec=120 TimeoutStopSec=15 Type=forking PIDFile=/%t/%n-pid [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"exec -it ceph-osd- OSD_ID /bin/bash",
"podman exec -it ceph-osd-0 /bin/bash",
"ceph-volume lvm list |grep -A15 \"osd\\. OSD_ID \"|grep \"osd fsid\" ceph-volume lvm activate --bluestore OSD_ID OSD_FSID",
"ceph-volume lvm list |grep -A15 \"osd\\.0\"|grep \"osd fsid\" osd fsid 087eee15-6561-40a3-8fe4-9583ba64a4ff ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff stderr: Created symlink /etc/systemd/system/multi-user.target.wants/[email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl enable --runtime ceph-osd@0 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph [email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl start ceph-osd@0 stderr: Running in chroot, ignoring request: start --> ceph-volume lvm activate successful for osd ID: 0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT list-attrs",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' list-attrs",
"cp /etc/systemd/system/[email protected] /root/[email protected] cp /root/[email protected] /etc/systemd/system/[email protected]",
"systemctl daemon-reload",
"mv /run/[email protected] /tmp",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"systemctl status ceph-osd@USDOSD_NUMBER",
"systemctl status ceph-osd@1",
"ceph osd set noout",
"cp /etc/systemd/system/[email protected] /root/[email protected]",
"mv /run/[email protected] /root",
"Please do not change this file directly since it is managed by Ansible and will be overwritten [Unit] Description=Ceph OSD After=network.target [Service] EnvironmentFile=-/etc/environment ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid --rm --net=host --privileged=true --pid=host --ipc=host --cpus=2 -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /var/run/udev/:/var/run/udev/ -v /var/log/ceph:/var/log/ceph:z -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 -e CLUSTER=ceph -v /run/lvm/:/run/lvm/ -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest -e OSD_ID=%i -e DEBUG=stayalive --name=ceph-osd-%i registry.redhat.io/rhceph/rhceph-4-rhel8:latest ExecStop=-/usr/bin/sh -c \"/usr/bin/podman rm -f `cat /%t/%n-cid`\" KillMode=none Restart=always RestartSec=10s TimeoutStartSec=120 TimeoutStopSec=15 Type=forking PIDFile=/%t/%n-pid [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]",
"exec -it ceph-osd- OSD_ID /bin/bash",
"podman exec -it ceph-osd-0 /bin/bash",
"ceph-volume lvm list |grep -A15 \"osd\\. OSD_ID \"|grep \"osd fsid\" ceph-volume lvm activate --bluestore OSD_ID OSD_FSID",
"ceph-volume lvm list |grep -A15 \"osd\\.0\"|grep \"osd fsid\" osd fsid 087eee15-6561-40a3-8fe4-9583ba64a4ff ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff stderr: Created symlink /etc/systemd/system/multi-user.target.wants/[email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl enable --runtime ceph-osd@0 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph [email protected] /usr/lib/systemd/system/[email protected]. Running command: /usr/bin/systemctl start ceph-osd@0 stderr: Running in chroot, ignoring request: start --> ceph-volume lvm activate successful for osd ID: 0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-attrs KEY > OBJECT_ATTRS_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-attrs \"oid\" > zone_info.default.attr.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT set-attrs KEY < OBJECT_ATTRS_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-attrs \"oid\" < zone_info.default.attr.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT rm-attrs KEY",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' rm-attrs \"oid\"",
"cp /etc/systemd/system/[email protected] /root/[email protected] cp /root/[email protected] /etc/systemd/system/[email protected]",
"systemctl daemon-reload",
"mv /run/[email protected] /tmp",
"systemctl restart ceph-osd@ OSD_ID .service",
"systemctl restart [email protected]"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/troubleshooting_guide/troubleshooting-ceph-objects |
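Taken together, the command entries above describe one workflow: park the OSD container in a debug mode, activate the OSD's LVM volume inside it, run ceph-objectstore-tool, then restore the original unit file. A condensed sketch of that flow is shown below; OSD ID 0, the backup copy under /root, and the inline unit-file edit are assumptions for illustration, not additional steps from the source.

```bash
# Condensed sketch of the containerized OSD maintenance flow; OSD_ID, the
# /root backup path, and the unit-file edit are illustrative assumptions.
OSD_ID=0

ceph osd set noout                                                  # keep CRUSH from marking the OSD out
cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service    # back up the unit file
# ...edit /etc/systemd/system/ceph-osd@.service so the container starts with
#    --entrypoint /bin/bash and -e DEBUG=stayalive (as in the unit file above)...
systemctl daemon-reload
systemctl restart ceph-osd@${OSD_ID}.service                        # start the idle maintenance container

podman exec -it ceph-osd-${OSD_ID} /bin/bash                        # then, inside the container:
#   OSD_FSID=$(ceph-volume lvm list | grep -A15 "osd\.${OSD_ID}" | awk '/osd fsid/ {print $3}')
#   ceph-volume lvm activate --bluestore ${OSD_ID} ${OSD_FSID}
#   ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-${OSD_ID} --op fix-lost --dry-run

# When finished, put the original unit file back and return the OSD to service.
cp /root/ceph-osd@.service /etc/systemd/system/ceph-osd@.service
systemctl daemon-reload
systemctl restart ceph-osd@${OSD_ID}.service
ceph osd unset noout
```

The ceph-objectstore-tool line is only a placeholder for whichever operation the procedure calls for (fix-lost, get-bytes, set-bytes, list-omap, get-attrs, and so on).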
5.2. Resizing an Online Multipath Device | 5.2. Resizing an Online Multipath Device If you need to resize an online multipath device, use the following procedure. Resize your physical device. Execute the following command to find the paths to the LUN: Resize your paths. For SCSI devices, writing a 1 to the rescan file for the device causes the SCSI driver to rescan, as in the following command: Ensure that you run this command for each of the path devices. For example, if your path devices are sda , sdb , sde , and sdf , you would run the following commands: Resize your multipath device by executing the multipathd resize command: Resize the file system (assuming no LVM or DOS partitions are used): | [
"multipath -l",
"echo 1 > /sys/block/ path_device /device/rescan",
"echo 1 > /sys/block/sda/device/rescan echo 1 > /sys/block/sdb/device/rescan echo 1 > /sys/block/sde/device/rescan echo 1 > /sys/block/sdf/device/rescan",
"multipathd resize map multipath_device",
"resize2fs /dev/mapper/mpatha"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/online_device_resize |
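Put together, the resize procedure amounts to a short sequence. The sketch below assumes the example path devices sda, sdb, sde, and sdf and the mpatha map used in the commands above; substitute your own device names.

```bash
# Sketch of the online multipath resize above; sda/sdb/sde/sdf and mpatha are
# the document's example names, not values discovered at run time.
multipath -l                                    # identify the path devices behind the map

for dev in sda sdb sde sdf; do
    echo 1 > /sys/block/${dev}/device/rescan    # make the SCSI driver re-read each path's size
done

multipathd resize map mpatha                    # propagate the new size to the multipath map
resize2fs /dev/mapper/mpatha                    # grow the file system (no LVM or DOS partitions)
```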
Image APIs | Image APIs OpenShift Container Platform 4.16 Reference guide for image APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/image_apis/index |
Chapter 1. Preparing to deploy OpenShift Data Foundation | Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions. When the Token authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Token authentication using KMS. When the Kubernetes authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS. Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps: Create a KMIP client if one does not exist. From the user interface, select KMIP → Client Profile → Add Profile. Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP → Registration Token → New Registration Token. Copy the token for the next step. To register the client, navigate to KMIP → Registered Clients → Add Client. Specify the Name. Paste the Registration Token from the previous step, then click Save. Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings → Interfaces → Add Interface. Select KMIP (Key Management Interoperability Protocol) and click Next. Select a free Port. Select Network Interface as all. Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional. (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save. To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate. Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys → Add Key. Enter Key Name. Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage. Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. See the Resource requirements section in the Planning guide. 
Disaster recovery requirements [Technology Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_google_cloud/preparing_to_deploy_openshift_data_foundation |
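The Vault prerequisite above calls for signed certificates on the Vault servers. One hedged way to spot-check the certificate a Vault endpoint presents is an openssl probe; the host name, port, and CA bundle path below are placeholders, not values from the source.

```bash
# Placeholders: replace vault.example.com:8200 and the CA bundle path with your own.
echo | openssl s_client -connect vault.example.com:8200 \
    -CAfile /etc/pki/tls/certs/ca-bundle.crt 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```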
22.16. Configure NTP | 22.16. Configure NTP To change the default configuration of the NTP service, use a text editor running as root user to edit the /etc/ntp.conf file. This file is installed together with ntpd and is configured to use time servers from the Red Hat pool by default. The man page ntp.conf(5) describes the command options that can be used in the configuration file apart from the access and rate limiting commands which are explained in the ntp_acc(5) man page. 22.16.1. Configure Access Control to an NTP Service To restrict or control access to the NTP service running on a system, make use of the restrict command in the ntp.conf file. See the commented out example: The restrict command takes the following form: restrict address mask option where address and mask specify the IP addresses to which you want to apply the restriction, and option is one or more of: ignore - All packets will be ignored, including ntpq and ntpdc queries. kod - a " Kiss-o'-death " packet is to be sent to reduce unwanted queries. limited - do not respond to time service requests if the packet violates the rate limit default values or those specified by the discard command. ntpq and ntpdc queries are not affected. For more information on the discard command and the default values, see Section 22.16.2, "Configure Rate Limiting Access to an NTP Service" . lowpriotrap - traps set by matching hosts to be low priority. nomodify - prevents any changes to the configuration. noquery - prevents ntpq and ntpdc queries, but not time queries, from being answered. nopeer - prevents a peer association being formed. noserve - deny all packets except ntpq and ntpdc queries. notrap - prevents ntpdc control message protocol traps. notrust - deny packets that are not cryptographically authenticated. ntpport - modify the match algorithm to only apply the restriction if the source port is the standard NTP UDP port 123 . version - deny packets that do not match the current NTP version. To configure rate limit access to not respond at all to a query, the respective restrict command has to have the limited option. If ntpd should reply with a KoD packet, the restrict command needs to have both limited and kod options. The ntpq and ntpdc queries can be used in amplification attacks (see CVE-2013-5211 for more details), do not remove the noquery option from the restrict default command on publicly accessible systems. | [
"Hosts on local network are less restricted. #restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-Configure_NTP |
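The restrict options described above are normally combined in /etc/ntp.conf. A representative snippet, assuming the default policy should rate-limit with Kiss-o'-death replies (limited plus kod) while keeping ntpq and ntpdc queries blocked, might look like this:

```
# Default policy: answer time requests only, rate-limit with Kiss-o'-death
# replies (limited + kod), and refuse ntpq/ntpdc queries (noquery).
restrict default limited kod nomodify notrap nopeer noquery
restrict -6 default limited kod nomodify notrap nopeer noquery

# The local host may query and manage the daemon.
restrict 127.0.0.1
restrict -6 ::1

# Hosts on the local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
```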
Chapter 62. Deprecated Packages | Chapter 62. Deprecated Packages The following packages are now deprecated. For information regarding replaced packages or availability in an unsupported RHEL 8 repository (if applicable), see Considerations in adopting RHEL 8 . a2ps abrt-addon-upload-watch abrt-devel abrt-gui-devel abrt-retrace-client acpid-sysvinit advancecomp adwaita-icon-theme-devel adwaita-qt-common adwaita-qt4 agg aic94xx-firmware akonadi akonadi-devel akonadi-mysql alacarte alsa-tools anaconda-widgets-devel ant-antunit ant-antunit-javadoc antlr-C++-doc antlr-python antlr-tool apache-commons-configuration apache-commons-configuration-javadoc apache-commons-daemon apache-commons-daemon-javadoc apache-commons-daemon-jsvc apache-commons-dbcp apache-commons-dbcp-javadoc apache-commons-digester apache-commons-digester-javadoc apache-commons-jexl apache-commons-jexl-javadoc apache-commons-pool apache-commons-pool-javadoc apache-commons-validator apache-commons-validator-javadoc apache-commons-vfs apache-commons-vfs-ant apache-commons-vfs-examples apache-commons-vfs-javadoc apache-rat apache-rat-core apache-rat-javadoc apache-rat-plugin apache-rat-tasks apr-util-nss args4j args4j-javadoc ark ark-libs asciidoc-latex at-spi at-spi-devel at-spi-python at-sysvinit atlas-static attica attica-devel audiocd-kio audiocd-kio-devel audiocd-kio-libs audiofile audiofile-devel audit-libs-python audit-libs-static authconfig-gtk authd autogen-libopts-devel automoc autotrace-devel avahi-dnsconfd avahi-glib-devel avahi-gobject-devel avahi-qt3 avahi-qt3-devel avahi-qt4 avahi-qt4-devel avahi-tools avahi-ui avahi-ui-devel avahi-ui-tools avalon-framework avalon-framework-javadoc avalon-logkit avalon-logkit-javadoc bacula-console-bat bacula-devel bacula-traymonitor baekmuk-ttf-batang-fonts baekmuk-ttf-dotum-fonts baekmuk-ttf-fonts-common baekmuk-ttf-fonts-ghostscript baekmuk-ttf-gulim-fonts baekmuk-ttf-hline-fonts base64coder base64coder-javadoc batik batik-demo batik-javadoc batik-rasterizer batik-slideshow batik-squiggle batik-svgpp batik-ttf2svg bcc-devel bcel bison-devel blas-static blas64-devel blas64-static bltk bluedevil bluedevil-autostart bmc-snmp-proxy bogofilter-bogoupgrade bridge-utils bsdcpio bsh-demo bsh-utils btrfs-progs btrfs-progs-devel buildnumber-maven-plugin buildnumber-maven-plugin-javadoc bwidget bzr bzr-doc cairo-tools cal10n caribou caribou-antler caribou-devel caribou-gtk2-module caribou-gtk3-module cdi-api-javadoc cdparanoia-static cdrskin ceph-common check-static cheese-libs-devel cifs-utils-devel cim-schema-docs cim-schema-docs cjkuni-ukai-fonts clutter-gst2-devel clutter-tests cmpi-bindings-pywbem cobertura cobertura-javadoc cockpit-machines-ovirt codehaus-parent codemodel codemodel-javadoc cogl-tests colord-extra-profiles colord-kde compat-cheese314 compat-dapl compat-dapl-devel compat-dapl-static compat-dapl-utils compat-db compat-db-headers compat-db47 compat-exiv2-023 compat-gcc-44 compat-gcc-44-c++ compat-gcc-44-gfortran compat-glade315 compat-glew compat-glibc compat-glibc-headers compat-gnome-desktop314 compat-grilo02 compat-libcap1 compat-libcogl-pango12 compat-libcogl12 compat-libcolord1 compat-libf2c-34 compat-libgdata13 compat-libgfortran-41 compat-libgnome-bluetooth11 compat-libgnome-desktop3-7 compat-libgweather3 compat-libical1 compat-libmediaart0 compat-libmpc compat-libpackagekit-glib2-16 compat-libstdc++-33 compat-libtiff3 compat-libupower-glib1 compat-libxcb compat-locales-sap-common compat-openldap compat-openmpi16 compat-openmpi16-devel compat-opensm-libs 
compat-poppler022 compat-poppler022-cpp compat-poppler022-glib compat-poppler022-qt compat-sap-c++-5 compat-sap-c++-6 compat-sap-c++-7 conman console-setup coolkey coolkey-devel cpptest cpptest-devel cppunit cppunit-devel cppunit-doc cpuid cracklib-python crda-devel crit criu-devel crypto-utils cryptsetup-python cvs cvs-contrib cvs-doc cvs-inetd cvsps cyrus-imapd-devel dapl dapl-devel dapl-static dapl-utils dbus-doc dbus-python-devel dbus-tests dbusmenu-qt dbusmenu-qt-devel dbusmenu-qt-devel-docs debugmode dejagnu dejavu-lgc-sans-fonts dejavu-lgc-sans-mono-fonts dejavu-lgc-serif-fonts deltaiso dhcp-devel dialog-devel dleyna-connector-dbus-devel dleyna-core-devel dlm-devel dmraid dmraid-devel dmraid-events dmraid-events-logwatch docbook-simple docbook-slides docbook-style-dsssl docbook-utils docbook-utils-pdf docbook5-schemas docbook5-style-xsl docbook5-style-xsl-extensions docker-rhel-push-plugin dom4j dom4j-demo dom4j-javadoc dom4j-manual dovecot-pigeonhole dracut-fips dracut-fips-aesni dragon drm-utils drpmsync dtdinst e2fsprogs-static ecj edac-utils-devel efax efivar-devel egl-utils ekiga ElectricFence emacs-a2ps emacs-a2ps-el emacs-auctex emacs-auctex-doc emacs-git emacs-git-el emacs-gnuplot emacs-gnuplot-el emacs-php-mode empathy enchant-aspell enchant-voikko eog-devel epydoc espeak-devel evince-devel evince-dvi evolution-data-server-doc evolution-data-server-perl evolution-data-server-tests evolution-devel evolution-devel-docs evolution-tests expat-static expect-devel expectk farstream farstream-devel farstream-python farstream02-devel fedfs-utils-admin fedfs-utils-client fedfs-utils-common fedfs-utils-devel fedfs-utils-lib fedfs-utils-nsdbparams fedfs-utils-python fedfs-utils-server felix-bundlerepository felix-bundlerepository-javadoc felix-framework felix-framework-javadoc felix-osgi-obr felix-osgi-obr-javadoc felix-shell felix-shell-javadoc fence-sanlock festival festival-devel festival-docs festival-freebsoft-utils festival-lib festival-speechtools-devel festival-speechtools-libs festival-speechtools-utils festvox-awb-arctic-hts festvox-bdl-arctic-hts festvox-clb-arctic-hts festvox-jmk-arctic-hts festvox-kal-diphone festvox-ked-diphone festvox-rms-arctic-hts festvox-slt-arctic-hts file-static filebench filesystem-content finch finch-devel finger finger-server flatpak-devel fltk-fluid fltk-static flute-javadoc folks folks-devel folks-tools fontforge-devel fontpackages-tools fonttools fop fop-javadoc fprintd-devel freeradius-python freetype-demos fros fros-gnome fros-recordmydesktop fwupd-devel fwupdate-devel gamin-python gavl-devel gcab gcc-gnat gcc-go gcc-objc gcc-objc++ gcc-plugin-devel gconf-editor gd-progs gdk-pixbuf2-tests gdm-devel gdm-pam-extensions-devel gedit-devel gedit-plugin-bookmarks gedit-plugin-bracketcompletion gedit-plugin-charmap gedit-plugin-codecomment gedit-plugin-colorpicker gedit-plugin-colorschemer gedit-plugin-commander gedit-plugin-drawspaces gedit-plugin-findinfiles gedit-plugin-joinlines gedit-plugin-multiedit gedit-plugin-smartspaces gedit-plugin-synctex gedit-plugin-terminal gedit-plugin-textsize gedit-plugin-translate gedit-plugin-wordcompletion gedit-plugins gedit-plugins-data gegl-devel geoclue geoclue-devel geoclue-doc geoclue-gsmloc geoclue-gui GeoIP GeoIP-data GeoIP-devel GeoIP-update geronimo-jaspic-spec geronimo-jaspic-spec-javadoc geronimo-jaxrpc geronimo-jaxrpc-javadoc geronimo-jms geronimo-jta geronimo-jta-javadoc geronimo-osgi-support geronimo-osgi-support-javadoc geronimo-saaj geronimo-saaj-javadoc ghostscript-chinese 
ghostscript-chinese-zh_CN ghostscript-chinese-zh_TW ghostscript-cups ghostscript-devel ghostscript-gtk giflib-utils gimp-data-extras gimp-help gimp-help-ca gimp-help-da gimp-help-de gimp-help-el gimp-help-en_GB gimp-help-es gimp-help-fr gimp-help-it gimp-help-ja gimp-help-ko gimp-help-nl gimp-help-nn gimp-help-pt_BR gimp-help-ru gimp-help-sl gimp-help-sv gimp-help-zh_CN git-bzr git-cvs git-gnome-keyring git-hg git-p4 gjs-tests glade glade3 glade3-libgladeui glade3-libgladeui-devel glassfish-dtd-parser glassfish-dtd-parser-javadoc glassfish-jaxb-javadoc glassfish-jsp glassfish-jsp-javadoc glew glib-networking-tests gmp-static gnome-clocks gnome-common gnome-contacts gnome-desktop3-tests gnome-devel-docs gnome-dictionary gnome-doc-utils gnome-doc-utils-stylesheets gnome-documents gnome-documents-libs gnome-icon-theme gnome-icon-theme-devel gnome-icon-theme-extras gnome-icon-theme-legacy gnome-icon-theme-symbolic gnome-packagekit gnome-packagekit-common gnome-packagekit-installer gnome-packagekit-updater gnome-python2 gnome-python2-bonobo gnome-python2-canvas gnome-python2-devel gnome-python2-gconf gnome-python2-gnome gnome-python2-gnomevfs gnome-settings-daemon-devel gnome-software-devel gnome-vfs2 gnome-vfs2-devel gnome-vfs2-smb gnome-weather gnome-weather-tests gnote gnu-efi-utils gnu-getopt gnu-getopt-javadoc gnuplot-latex gnuplot-minimal gob2 gom-devel google-noto-sans-korean-fonts google-noto-sans-simplified-chinese-fonts google-noto-sans-traditional-chinese-fonts gperftools gperftools-devel gperftools-libs gpm-static grantlee grantlee-apidocs grantlee-devel graphviz-graphs graphviz-guile graphviz-java graphviz-lua graphviz-ocaml graphviz-perl graphviz-php graphviz-python graphviz-ruby graphviz-tcl groff-doc groff-perl groff-x11 groovy groovy-javadoc grub2 grub2-ppc-modules grub2-ppc64-modules gsm-tools gsound-devel gssdp-utils gstreamer gstreamer-devel gstreamer-devel-docs gstreamer-plugins-bad-free gstreamer-plugins-bad-free-devel gstreamer-plugins-bad-free-devel-docs gstreamer-plugins-base gstreamer-plugins-base-devel gstreamer-plugins-base-devel-docs gstreamer-plugins-base-tools gstreamer-plugins-good gstreamer-plugins-good-devel-docs gstreamer-python gstreamer-python-devel gstreamer-tools gstreamer1-devel-docs gstreamer1-plugins-base-devel-docs gstreamer1-plugins-base-tools gstreamer1-plugins-ugly-free-devel gtk-vnc gtk-vnc-devel gtk-vnc-python gtk-vnc2-devel gtk3-devel-docs gtk3-immodules gtk3-tests gtkhtml3 gtkhtml3-devel gtksourceview3-tests gucharmap gucharmap-devel gucharmap-libs gupnp-av-devel gupnp-av-docs gupnp-dlna-devel gupnp-dlna-docs gupnp-docs gupnp-igd-python gutenprint-devel gutenprint-extras gutenprint-foomatic gvfs-tests gvnc-devel gvnc-tools gvncpulse gvncpulse-devel gwenview gwenview-libs hamcrest hawkey-devel highcontrast-qt highcontrast-qt4 highcontrast-qt5 highlight-gui hispavoces-pal-diphone hispavoces-sfl-diphone hsakmt hsakmt-devel hspell-devel hsqldb hsqldb-demo hsqldb-javadoc hsqldb-manual htdig html2ps http-parser-devel httpunit httpunit-doc httpunit-javadoc i2c-tools-eepromer i2c-tools-python ibus-pygtk2 ibus-qt ibus-qt-devel ibus-qt-docs ibus-rawcode ibus-table-devel ibutils ibutils-devel ibutils-libs icc-profiles-openicc icon-naming-utils im-chooser im-chooser-common ImageMagick ImageMagick-c++ ImageMagick-c++-devel ImageMagick-devel ImageMagick-doc ImageMagick-perl imsettings imsettings-devel imsettings-gsettings imsettings-libs imsettings-qt imsettings-xim indent infinipath-psm infinipath-psm-devel iniparser iniparser-devel iok ipa-gothic-fonts 
ipa-mincho-fonts ipa-pgothic-fonts ipa-pmincho-fonts iperf3-devel iproute-doc ipset-devel ipsilon ipsilon-authform ipsilon-authgssapi ipsilon-authldap ipsilon-base ipsilon-client ipsilon-filesystem ipsilon-infosssd ipsilon-persona ipsilon-saml2 ipsilon-saml2-base ipsilon-tools-ipa iputils-sysvinit iscsi-initiator-utils-devel isdn4k-utils isdn4k-utils-devel isdn4k-utils-doc isdn4k-utils-static isdn4k-utils-vboxgetty isomd5sum-devel isorelax istack-commons-javadoc ixpdimm_sw ixpdimm_sw-devel ixpdimm-cli ixpdimm-monitor jai-imageio-core jai-imageio-core-javadoc jakarta-oro jakarta-taglibs-standard jakarta-taglibs-standard-javadoc jandex jandex-javadoc jansson-devel-doc jarjar jarjar-javadoc jarjar-maven-plugin jasper jasper-utils java-1.6.0-openjdk java-1.6.0-openjdk-demo java-1.6.0-openjdk-devel java-1.6.0-openjdk-javadoc java-1.6.0-openjdk-src java-1.7.0-openjdk java-1.7.0-openjdk-accessibility java-1.7.0-openjdk-demo java-1.7.0-openjdk-devel java-1.7.0-openjdk-headless java-1.7.0-openjdk-javadoc java-1.7.0-openjdk-src java-1.8.0-openjdk-accessibility-debug java-1.8.0-openjdk-debug java-1.8.0-openjdk-demo-debug java-1.8.0-openjdk-devel-debug java-1.8.0-openjdk-headless-debug java-1.8.0-openjdk-javadoc-debug java-1.8.0-openjdk-javadoc-zip-debug java-1.8.0-openjdk-src-debug java-11-openjdk-debug java-11-openjdk-demo-debug java-11-openjdk-devel-debug java-11-openjdk-headless-debug java-11-openjdk-javadoc-debug java-11-openjdk-javadoc-zip-debug java-11-openjdk-jmods-debug java-11-openjdk-src-debug javamail jaxen jboss-ejb-3.1-api jboss-ejb-3.1-api-javadoc jboss-el-2.2-api jboss-el-2.2-api-javadoc jboss-jaxrpc-1.1-api jboss-jaxrpc-1.1-api-javadoc jboss-servlet-2.5-api jboss-servlet-2.5-api-javadoc jboss-servlet-3.0-api jboss-servlet-3.0-api-javadoc jboss-specs-parent jboss-transaction-1.1-api jboss-transaction-1.1-api-javadoc jdom jettison jettison-javadoc jetty-annotations jetty-ant jetty-artifact-remote-resources jetty-assembly-descriptors jetty-build-support jetty-build-support-javadoc jetty-client jetty-continuation jetty-deploy jetty-distribution-remote-resources jetty-http jetty-io jetty-jaas jetty-jaspi jetty-javadoc jetty-jmx jetty-jndi jetty-jsp jetty-jspc-maven-plugin jetty-maven-plugin jetty-monitor jetty-parent jetty-plus jetty-project jetty-proxy jetty-rewrite jetty-runner jetty-security jetty-server jetty-servlet jetty-servlets jetty-start jetty-test-policy jetty-test-policy-javadoc jetty-toolchain jetty-util jetty-util-ajax jetty-version-maven-plugin jetty-version-maven-plugin-javadoc jetty-webapp jetty-websocket-api jetty-websocket-client jetty-websocket-common jetty-websocket-parent jetty-websocket-server jetty-websocket-servlet jetty-xml jing jing-javadoc jline-demo jna jna-contrib jna-javadoc joda-convert joda-convert-javadoc js js-devel jsch-demo json-glib-tests jsr-311 jsr-311-javadoc juk junit junit-demo jvnet-parent k3b k3b-common k3b-devel k3b-libs kaccessible kaccessible-libs kactivities kactivities-devel kamera kate kate-devel kate-libs kate-part kcalc kcharselect kcm_colors kcm_touchpad kcm-gtk kcolorchooser kcoloredit kde-base-artwork kde-baseapps kde-baseapps-devel kde-baseapps-libs kde-filesystem kde-l10n kde-l10n-Arabic kde-l10n-Basque kde-l10n-Bosnian kde-l10n-British kde-l10n-Bulgarian kde-l10n-Catalan kde-l10n-Catalan-Valencian kde-l10n-Croatian kde-l10n-Czech kde-l10n-Danish kde-l10n-Dutch kde-l10n-Estonian kde-l10n-Farsi kde-l10n-Finnish kde-l10n-Galician kde-l10n-Greek kde-l10n-Hebrew kde-l10n-Hungarian kde-l10n-Icelandic kde-l10n-Interlingua kde-l10n-Irish 
kde-l10n-Kazakh kde-l10n-Khmer kde-l10n-Latvian kde-l10n-Lithuanian kde-l10n-LowSaxon kde-l10n-Norwegian kde-l10n-Norwegian-Nynorsk kde-l10n-Polish kde-l10n-Portuguese kde-l10n-Romanian kde-l10n-Serbian kde-l10n-Slovak kde-l10n-Slovenian kde-l10n-Swedish kde-l10n-Tajik kde-l10n-Thai kde-l10n-Turkish kde-l10n-Ukrainian kde-l10n-Uyghur kde-l10n-Vietnamese kde-l10n-Walloon kde-plasma-networkmanagement kde-plasma-networkmanagement-libreswan kde-plasma-networkmanagement-libs kde-plasma-networkmanagement-mobile kde-print-manager kde-runtime kde-runtime-devel kde-runtime-drkonqi kde-runtime-libs kde-settings kde-settings-ksplash kde-settings-minimal kde-settings-plasma kde-settings-pulseaudio kde-style-oxygen kde-style-phase kde-wallpapers kde-workspace kde-workspace-devel kde-workspace-ksplash-themes kde-workspace-libs kdeaccessibility kdeadmin kdeartwork kdeartwork-screensavers kdeartwork-sounds kdeartwork-wallpapers kdeclassic-cursor-theme kdegraphics kdegraphics-devel kdegraphics-libs kdegraphics-strigi-analyzer kdegraphics-thumbnailers kdelibs kdelibs-apidocs kdelibs-common kdelibs-devel kdelibs-ktexteditor kdemultimedia kdemultimedia-common kdemultimedia-devel kdemultimedia-libs kdenetwork kdenetwork-common kdenetwork-devel kdenetwork-fileshare-samba kdenetwork-kdnssd kdenetwork-kget kdenetwork-kget-libs kdenetwork-kopete kdenetwork-kopete-devel kdenetwork-kopete-libs kdenetwork-krdc kdenetwork-krdc-devel kdenetwork-krdc-libs kdenetwork-krfb kdenetwork-krfb-libs kdepim kdepim-devel kdepim-libs kdepim-runtime kdepim-runtime-libs kdepimlibs kdepimlibs-akonadi kdepimlibs-apidocs kdepimlibs-devel kdepimlibs-kxmlrpcclient kdeplasma-addons kdeplasma-addons-devel kdeplasma-addons-libs kdesdk kdesdk-cervisia kdesdk-common kdesdk-devel kdesdk-dolphin-plugins kdesdk-kapptemplate kdesdk-kapptemplate-template kdesdk-kcachegrind kdesdk-kioslave kdesdk-kmtrace kdesdk-kmtrace-devel kdesdk-kmtrace-libs kdesdk-kompare kdesdk-kompare-devel kdesdk-kompare-libs kdesdk-kpartloader kdesdk-kstartperf kdesdk-kuiviewer kdesdk-lokalize kdesdk-okteta kdesdk-okteta-devel kdesdk-okteta-libs kdesdk-poxml kdesdk-scripts kdesdk-strigi-analyzer kdesdk-thumbnailers kdesdk-umbrello kdeutils kdeutils-common kdeutils-minimal kdf kernel-rt-doc kernel-rt-trace kernel-rt-trace-devel kernel-rt-trace-kvm keytool-maven-plugin keytool-maven-plugin-javadoc kgamma kgpg kgreeter-plugins khotkeys khotkeys-libs kiconedit kinfocenter kio_sysinfo kmag kmenuedit kmix kmod-oracleasm kolourpaint kolourpaint-libs konkretcmpi konkretcmpi-devel konkretcmpi-python konsole konsole-part kross-interpreters kross-python kross-ruby kruler ksaneplugin kscreen ksnapshot ksshaskpass ksysguard ksysguard-libs ksysguardd ktimer kwallet kwin kwin-gles kwin-gles-libs kwin-libs kwrite kxml kxml-javadoc lapack64-devel lapack64-static lasso-devel latrace lcms2-utils ldns-doc ldns-python libabw-devel libabw-doc libabw-tools libappindicator libappindicator-devel libappindicator-docs libappstream-glib-builder libappstream-glib-builder-devel libart_lgpl libart_lgpl-devel libasan-static libavc1394-devel libbase-javadoc libblockdev-btrfs libblockdev-btrfs-devel libblockdev-crypto-devel libblockdev-devel libblockdev-dm-devel libblockdev-fs-devel libblockdev-kbd-devel libblockdev-loop-devel libblockdev-lvm-devel libblockdev-mdraid-devel libblockdev-mpath-devel libblockdev-nvdimm-devel libblockdev-part-devel libblockdev-swap-devel libblockdev-utils-devel libblockdev-vdo-devel libbluedevil libbluedevil-devel libbluray-devel libbonobo libbonobo-devel libbonoboui 
libbonoboui-devel libbytesize-devel libcacard-tools libcap-ng-python libcdr-devel libcdr-doc libcdr-tools libcgroup-devel libchamplain-demos libchewing libchewing-devel libchewing-python libcmis-devel libcmis-tools libcryptui libcryptui-devel libdb-devel-static libdb-java libdb-java-devel libdb-tcl libdb-tcl-devel libdbi libdbi-dbd-mysql libdbi-dbd-pgsql libdbi-dbd-sqlite libdbi-devel libdbi-drivers libdbusmenu-gtk2 libdbusmenu-gtk2-devel libdbusmenu-gtk3-devel libdhash-devel libdmapsharing-devel libdmmp-devel libdmx-devel libdnet-progs libdnet-python libdnf-devel libdv-tools libdvdnav-devel libeasyfc-devel libeasyfc-gobject-devel libee libee-devel libee-utils libesmtp libesmtp-devel libestr-devel libetonyek-doc libetonyek-tools libevdev-utils libexif-doc libexttextcat-devel libexttextcat-tools libfastjson-devel libfdt libfonts-javadoc libformula-javadoc libfprint-devel libfreehand-devel libfreehand-doc libfreehand-tools libgcab1-devel libgccjit libgdither-devel libgee06 libgee06-devel libgepub libgepub-devel libgfortran-static libgfortran4 libgfortran5 libgit2-devel libglade2 libglade2-devel libGLEWmx libgnat libgnat-devel libgnat-static libgnome libgnome-devel libgnome-keyring-devel libgnomecanvas libgnomecanvas-devel libgnomeui libgnomeui-devel libgo libgo-devel libgo-static libgovirt-devel libgudev-devel libgxim libgxim-devel libgxps-tools libhangul-devel libhbaapi-devel libhif-devel libical-glib libical-glib-devel libical-glib-doc libid3tag libid3tag-devel libiec61883-utils libieee1284-python libimobiledevice-python libimobiledevice-utils libindicator libindicator-devel libindicator-gtk3-devel libindicator-tools libinvm-cim libinvm-cim-devel libinvm-cli libinvm-cli-devel libinvm-i18n libinvm-i18n-devel libiodbc libiodbc-devel libipa_hbac-devel libiptcdata-devel libiptcdata-python libitm-static libixpdimm-cim libixpdimm-core libjpeg-turbo-static libkcddb libkcddb-devel libkcompactdisc libkcompactdisc-devel libkdcraw libkdcraw-devel libkexiv2 libkexiv2-devel libkipi libkipi-devel libkkc-devel libkkc-tools libksane libksane-devel libkscreen libkscreen-devel libkworkspace liblayout-javadoc libloader-javadoc liblognorm-devel liblouis-devel liblouis-doc liblouis-utils libmatchbox-devel libmbim-devel libmediaart-devel libmediaart-tests libmnl-static libmodman-devel libmodulemd-devel libmpc-devel libmsn libmsn-devel libmspub-devel libmspub-doc libmspub-tools libmtp-examples libmudflap libmudflap-devel libmudflap-static libmwaw-devel libmwaw-doc libmwaw-tools libmx libmx-devel libmx-docs libndp-devel libnetfilter_cthelper-devel libnetfilter_cttimeout-devel libnftnl-devel libnl libnl-devel libnm-gtk libnm-gtk-devel libntlm libntlm-devel libobjc libodfgen-doc libofa libofa-devel liboil liboil-devel libopenraw-pixbuf-loader liborcus-devel liborcus-doc liborcus-tools libosinfo-devel libosinfo-vala libotf-devel libpagemaker-devel libpagemaker-doc libpagemaker-tools libpinyin-devel libpinyin-tools libpipeline-devel libplist-python libpng-static libpng12-devel libproxy-kde libpst libpst-devel libpst-devel-doc libpst-doc libpst-python libpurple-perl libpurple-tcl libqmi-devel libquadmath-static LibRaw-static librelp-devel libreoffice libreoffice-bsh libreoffice-gdb-debug-support libreoffice-glade libreoffice-librelogo libreoffice-nlpsolver libreoffice-officebean libreoffice-officebean-common libreoffice-postgresql libreoffice-rhino libreofficekit-devel librepo-devel libreport-compat libreport-devel libreport-gtk-devel libreport-web-devel librepository-javadoc librevenge-doc librsvg2-tools 
libseccomp-devel libselinux-static libsemanage-devel libsemanage-static libserializer-javadoc libsexy libsexy-devel libsmbios-devel libsmi-devel libsndfile-utils libsolv-demo libsolv-devel libsolv-tools libspiro-devel libss-devel libsss_certmap-devel libsss_idmap-devel libsss_nss_idmap-devel libsss_simpleifp-devel libstaroffice-devel libstaroffice-doc libstaroffice-tools libstdc++-static libstoragemgmt-devel libstoragemgmt-targetd-plugin libtar-devel libteam-devel libtheora-devel-docs libtiff-static libtimezonemap-devel libtnc libtnc-devel libtranslit libtranslit-devel libtranslit-icu libtranslit-m17n libtsan-static libudisks2-devel libuninameslist-devel libunwind libunwind-devel libusal-devel libusb-static libusbmuxd-utils libuser-devel libvdpau-docs libverto-glib libverto-glib-devel libverto-libevent-devel libverto-tevent libverto-tevent-devel libvirt-cim libvirt-daemon-driver-lxc libvirt-daemon-lxc libvirt-gconfig-devel libvirt-glib-devel libvirt-gobject-devel libvirt-java libvirt-java-devel libvirt-java-javadoc libvirt-login-shell libvirt-snmp libvisio-doc libvisio-tools libvma-devel libvma-utils libvoikko-devel libvpx-utils libwebp-java libwebp-tools libwpd-tools libwpg-tools libwps-tools libwsman-devel libwvstreams libwvstreams-devel libwvstreams-static libxcb-doc libXevie libXevie-devel libXfont libXfont-devel libxml2-static libxslt-python libXvMC-devel libzapojit libzapojit-devel libzmf-devel libzmf-doc libzmf-tools lldpad-devel log4cxx log4cxx-devel log4j-manual lpsolve-devel lua-devel lua-static lvm2-cluster lvm2-python-libs lvm2-sysvinit lz4-static m17n-contrib m17n-contrib-extras m17n-db-devel m17n-db-extras m17n-lib-devel m17n-lib-tools m2crypto malaga-devel man-pages-cs man-pages-es man-pages-es-extra man-pages-fr man-pages-it man-pages-ja man-pages-ko man-pages-pl man-pages-ru man-pages-zh-CN mariadb-bench marisa-devel marisa-perl marisa-python marisa-ruby marisa-tools maven-changes-plugin maven-changes-plugin-javadoc maven-deploy-plugin maven-deploy-plugin-javadoc maven-doxia-module-fo maven-ear-plugin maven-ear-plugin-javadoc maven-ejb-plugin maven-ejb-plugin-javadoc maven-error-diagnostics maven-gpg-plugin maven-gpg-plugin-javadoc maven-istack-commons-plugin maven-jarsigner-plugin maven-jarsigner-plugin-javadoc maven-javadoc-plugin maven-javadoc-plugin-javadoc maven-jxr maven-jxr-javadoc maven-osgi maven-osgi-javadoc maven-plugin-jxr maven-project-info-reports-plugin maven-project-info-reports-plugin-javadoc maven-release maven-release-javadoc maven-release-manager maven-release-plugin maven-reporting-exec maven-repository-builder maven-repository-builder-javadoc maven-scm maven-scm-javadoc maven-scm-test maven-shared-jar maven-shared-jar-javadoc maven-site-plugin maven-site-plugin-javadoc maven-verifier-plugin maven-verifier-plugin-javadoc maven-wagon-provider-test maven-wagon-scm maven-war-plugin maven-war-plugin-javadoc mdds-devel meanwhile-devel meanwhile-doc memcached-devel memstomp mesa-demos mesa-libxatracker-devel mesa-private-llvm mesa-private-llvm-devel metacity-devel mgetty mgetty-sendfax mgetty-viewfax mgetty-voice migrationtools minizip minizip-devel mkbootdisk mobile-broadband-provider-info-devel mod_auth_mellon-diagnostics mod_revocator ModemManager-vala mono-icon-theme mozjs17 mozjs17-devel mozjs24 mozjs24-devel mpich-3.0-autoload mpich-3.0-doc mpich-3.2-autoload mpich-3.2-doc mpitests-compat-openmpi16 msv-demo msv-msv msv-rngconv msv-xmlgen mvapich2-2.0-devel mvapich2-2.0-doc mvapich2-2.0-psm-devel mvapich2-2.2-devel mvapich2-2.2-doc 
mvapich2-2.2-psm-devel mvapich2-2.2-psm2-devel mvapich23-devel mvapich23-doc mvapich23-psm-devel mvapich23-psm2-devel nagios-plugins-bacula nasm nasm-doc nasm-rdoff ncurses-static nekohtml nekohtml-demo nekohtml-javadoc nepomuk-core nepomuk-core-devel nepomuk-core-libs nepomuk-widgets nepomuk-widgets-devel net-snmp-gui net-snmp-perl net-snmp-python net-snmp-sysvinit netsniff-ng NetworkManager-glib NetworkManager-glib-devel newt-static nfsometer nfstest nhn-nanum-brush-fonts nhn-nanum-fonts-common nhn-nanum-myeongjo-fonts nhn-nanum-pen-fonts nmap-frontend nss_compat_ossl nss_compat_ossl-devel nss-pem nss-pkcs11-devel ntp-doc ntp-perl nuvola-icon-theme nuxwdog nuxwdog-client-java nuxwdog-client-perl nuxwdog-devel objectweb-anttask objectweb-anttask-javadoc objectweb-asm ocaml-brlapi ocaml-calendar ocaml-calendar-devel ocaml-csv ocaml-csv-devel ocaml-curses ocaml-curses-devel ocaml-docs ocaml-emacs ocaml-fileutils ocaml-fileutils-devel ocaml-gettext ocaml-gettext-devel ocaml-libvirt ocaml-libvirt-devel ocaml-ocamlbuild-doc ocaml-source ocaml-x11 ocaml-xml-light ocaml-xml-light-devel oci-register-machine okular okular-devel okular-libs okular-part opa-libopamgt-devel opal opal-devel open-vm-tools-devel open-vm-tools-test opencc-tools openchange-client openchange-devel openchange-devel-docs opencv-devel-docs opencv-python OpenEXR openhpi-devel openjade openjpeg-devel openjpeg-libs openldap-servers openldap-servers-sql openlmi openlmi-account openlmi-account-doc openlmi-fan openlmi-fan-doc openlmi-hardware openlmi-hardware-doc openlmi-indicationmanager-libs openlmi-indicationmanager-libs-devel openlmi-journald openlmi-journald-doc openlmi-logicalfile openlmi-logicalfile-doc openlmi-networking openlmi-networking-doc openlmi-pcp openlmi-powermanagement openlmi-powermanagement-doc openlmi-providers openlmi-providers-devel openlmi-python-base openlmi-python-providers openlmi-python-test openlmi-realmd openlmi-realmd-doc openlmi-service openlmi-service-doc openlmi-software openlmi-software-doc openlmi-storage openlmi-storage-doc openlmi-tools openlmi-tools-doc openobex openobex-apps openobex-devel openscap-containers openscap-engine-sce-devel openslp-devel openslp-server opensm-static opensp openssh-server-sysvinit openssl-static openssl098e openwsman-perl openwsman-ruby oprofile-devel oprofile-gui oprofile-jit optipng ORBit2 ORBit2-devel orc-doc ortp ortp-devel oscilloscope oxygen-cursor-themes oxygen-gtk oxygen-gtk2 oxygen-gtk3 oxygen-icon-theme PackageKit-yum-plugin pakchois-devel pam_snapper pango-tests paps-devel passivetex pax pciutils-devel-static pcp-collector pcp-monitor pcre-tools pcre2-static pcre2-tools pentaho-libxml-javadoc pentaho-reporting-flow-engine-javadoc perl-AppConfig perl-Archive-Extract perl-B-Keywords perl-Browser-Open perl-Business-ISBN perl-Business-ISBN-Data perl-CGI-Session perl-Class-Load perl-Class-Load-XS perl-Class-Singleton perl-Config-Simple perl-Config-Tiny perl-Convert-ASN1 perl-CPAN-Changes perl-CPANPLUS perl-CPANPLUS-Dist-Build perl-Crypt-CBC perl-Crypt-DES perl-Crypt-OpenSSL-Bignum perl-Crypt-OpenSSL-Random perl-Crypt-OpenSSL-RSA perl-Crypt-PasswdMD5 perl-Crypt-SSLeay perl-CSS-Tiny perl-Data-Peek perl-DateTime perl-DateTime-Format-DateParse perl-DateTime-Locale perl-DateTime-TimeZone perl-DBD-Pg-tests perl-DBIx-Simple perl-Devel-Cover perl-Devel-Cycle perl-Devel-EnforceEncapsulation perl-Devel-Leak perl-Devel-Symdump perl-Digest-SHA1 perl-Email-Address perl-FCGI perl-File-Find-Rule-Perl perl-File-Inplace perl-Font-AFM perl-Font-TTF perl-FreezeThaw perl-GD 
perl-GD-Barcode perl-Hook-LexWrap perl-HTML-Format perl-HTML-FormatText-WithLinks perl-HTML-FormatText-WithLinks-AndTables perl-HTML-Tree perl-HTTP-Daemon perl-Image-Base perl-Image-Info perl-Image-Xbm perl-Image-Xpm perl-Inline perl-Inline-Files perl-IO-CaptureOutput perl-IO-stringy perl-JSON-tests perl-LDAP perl-libxml-perl perl-List-MoreUtils perl-Locale-Maketext-Gettext perl-Locale-PO perl-Log-Message perl-Log-Message-Simple perl-Mail-DKIM perl-Mixin-Linewise perl-Module-Implementation perl-Module-Manifest perl-Module-Signature perl-Net-Daemon perl-Net-DNS-Nameserver perl-Net-DNS-Resolver-Programmable perl-Net-LibIDN perl-Net-Telnet perl-Newt perl-Object-Accessor perl-Object-Deadly perl-Package-Constants perl-Package-DeprecationManager perl-Package-Stash perl-Package-Stash-XS perl-PAR-Dist perl-Parallel-Iterator perl-Params-Validate perl-Parse-CPAN-Meta perl-Parse-RecDescent perl-Perl-Critic perl-Perl-Critic-More perl-Perl-MinimumVersion perl-Perl4-CoreLibs perl-PlRPC perl-Pod-Coverage perl-Pod-Coverage-TrustPod perl-Pod-Eventual perl-Pod-POM perl-Pod-Spell perl-PPI perl-PPI-HTML perl-PPIx-Regexp perl-PPIx-Utilities perl-Probe-Perl perl-Readonly-XS perl-SGMLSpm perl-Sort-Versions perl-String-Format perl-String-Similarity perl-Syntax-Highlight-Engine-Kate perl-Task-Weaken perl-Template-Toolkit perl-Term-UI perl-Test-ClassAPI perl-Test-CPAN-Meta perl-Test-DistManifest perl-Test-EOL perl-Test-HasVersion perl-Test-Inter perl-Test-Manifest perl-Test-Memory-Cycle perl-Test-MinimumVersion perl-Test-MockObject perl-Test-NoTabs perl-Test-Object perl-Test-Output perl-Test-Perl-Critic perl-Test-Perl-Critic-Policy perl-Test-Pod perl-Test-Pod-Coverage perl-Test-Portability-Files perl-Test-Script perl-Test-Spelling perl-Test-SubCalls perl-Test-Synopsis perl-Test-Tester perl-Test-Vars perl-Test-Without-Module perl-Text-CSV_XS perl-Text-Iconv perl-Tree-DAG_Node perl-Unicode-Map8 perl-Unicode-String perl-UNIVERSAL-can perl-UNIVERSAL-isa perl-Version-Requirements perl-WWW-Curl perl-XML-Dumper perl-XML-Filter-BufferText perl-XML-Grove perl-XML-Handler-YAWriter perl-XML-LibXSLT perl-XML-SAX-Writer perl-XML-TreeBuilder perl-XML-Twig perl-XML-Writer perl-XML-XPathEngine perl-YAML-Tiny perltidy phonon phonon-backend-gstreamer phonon-devel php-pecl-memcache php-pspell pidgin-perl pinentry-qt pinentry-qt4 pki-javadoc plasma-scriptengine-python plasma-scriptengine-ruby plexus-digest plexus-digest-javadoc plexus-mail-sender plexus-mail-sender-javadoc plexus-tools-pom plymouth-devel pm-utils pm-utils-devel pngcrush pngnq polkit-kde polkit-qt polkit-qt-devel polkit-qt-doc poppler-demos poppler-qt poppler-qt-devel popt-static postfix-sysvinit pothana2000-fonts powerpc-utils-python pprof pps-tools pptp-setup procps-ng-devel protobuf-emacs protobuf-emacs-el protobuf-java protobuf-javadoc protobuf-lite-devel protobuf-lite-static protobuf-python protobuf-static protobuf-vim psutils psutils-perl ptlib ptlib-devel publican publican-common-db5-web publican-common-web publican-doc publican-redhat pulseaudio-esound-compat pulseaudio-module-gconf pulseaudio-module-zeroconf pulseaudio-qpaeq pygpgme pygtk2-libglade pykde4 pykde4-akonadi pykde4-devel pyldb-devel pyliblzma PyOpenGL PyOpenGL-Tk pyOpenSSL-doc pyorbit pyorbit-devel PyPAM pyparsing-doc PyQt4 PyQt4-devel pytalloc-devel python-appindicator python-beaker python-cffi-doc python-cherrypy python-criu python-debug python-deltarpm python-dtopt python-fpconst python-gpod python-gudev python-inotify-examples python-ipaddr python-IPy python-isodate python-isomd5sum 
python-kitchen python-kitchen-doc python-libteam python-lxml-docs python-matplotlib python-matplotlib-doc python-matplotlib-qt4 python-matplotlib-tk python-memcached python-mutagen python-paramiko python-paramiko-doc python-paste python-pillow-devel python-pillow-doc python-pillow-qt python-pillow-sane python-pillow-tk python-rados python-rbd python-reportlab-docs python-rtslib-doc python-setproctitle python-slip-gtk python-smbc python-smbc-doc python-smbios python-sphinx-doc python-tempita python-tornado python-tornado-doc python-twisted-core python-twisted-core-doc python-twisted-web python-twisted-words python-urlgrabber python-volume_key python-webob python-webtest python-which python-zope-interface python2-caribou python2-futures python2-gexiv2 python2-smartcols python2-solv python2-subprocess32 qca-ossl qca2 qca2-devel qdox qimageblitz qimageblitz-devel qimageblitz-examples qjson qjson-devel qpdf-devel qt qt-assistant qt-config qt-demos qt-devel qt-devel-private qt-doc qt-examples qt-mysql qt-odbc qt-postgresql qt-qdbusviewer qt-qvfb qt-settings qt-x11 qt3 qt3-config qt3-designer qt3-devel qt3-devel-docs qt3-MySQL qt3-ODBC qt3-PostgreSQL qt5-qt3d-doc qt5-qtbase-doc qt5-qtcanvas3d-doc qt5-qtconnectivity-doc qt5-qtdeclarative-doc qt5-qtenginio qt5-qtenginio-devel qt5-qtenginio-doc qt5-qtenginio-examples qt5-qtgraphicaleffects-doc qt5-qtimageformats-doc qt5-qtlocation-doc qt5-qtmultimedia-doc qt5-qtquickcontrols-doc qt5-qtquickcontrols2-doc qt5-qtscript-doc qt5-qtsensors-doc qt5-qtserialbus-devel qt5-qtserialbus-doc qt5-qtserialport-doc qt5-qtsvg-doc qt5-qttools-doc qt5-qtwayland-doc qt5-qtwebchannel-doc qt5-qtwebsockets-doc qt5-qtx11extras-doc qt5-qtxmlpatterns-doc quagga quagga-contrib quota-devel qv4l2 rarian-devel rcs rdate rdist readline-static realmd-devel-docs Red_Hat_Enterprise_Linux-Release_Notes-7-as-IN Red_Hat_Enterprise_Linux-Release_Notes-7-bn-IN Red_Hat_Enterprise_Linux-Release_Notes-7-de-DE Red_Hat_Enterprise_Linux-Release_Notes-7-en-US Red_Hat_Enterprise_Linux-Release_Notes-7-es-ES Red_Hat_Enterprise_Linux-Release_Notes-7-fr-FR Red_Hat_Enterprise_Linux-Release_Notes-7-gu-IN Red_Hat_Enterprise_Linux-Release_Notes-7-hi-IN Red_Hat_Enterprise_Linux-Release_Notes-7-it-IT Red_Hat_Enterprise_Linux-Release_Notes-7-ja-JP Red_Hat_Enterprise_Linux-Release_Notes-7-kn-IN Red_Hat_Enterprise_Linux-Release_Notes-7-ko-KR Red_Hat_Enterprise_Linux-Release_Notes-7-ml-IN Red_Hat_Enterprise_Linux-Release_Notes-7-mr-IN Red_Hat_Enterprise_Linux-Release_Notes-7-or-IN Red_Hat_Enterprise_Linux-Release_Notes-7-pa-IN Red_Hat_Enterprise_Linux-Release_Notes-7-pt-BR Red_Hat_Enterprise_Linux-Release_Notes-7-ru-RU Red_Hat_Enterprise_Linux-Release_Notes-7-ta-IN Red_Hat_Enterprise_Linux-Release_Notes-7-te-IN Red_Hat_Enterprise_Linux-Release_Notes-7-zh-CN Red_Hat_Enterprise_Linux-Release_Notes-7-zh-TW redhat-access-plugin-ipa redhat-bookmarks redhat-lsb-supplemental redhat-lsb-trialuse redhat-upgrade-dracut redhat-upgrade-dracut-plymouth redhat-upgrade-tool redland-mysql redland-pgsql redland-virtuoso regexp relaxngcc rest-devel resteasy-base-jettison-provider resteasy-base-tjws rhdb-utils rhino rhino-demo rhino-javadoc rhino-manual rhythmbox-devel rngom rngom-javadoc rp-pppoe rrdtool-php rrdtool-python rsh rsh-server rsyslog-libdbi rsyslog-udpspoof rtcheck rtctl ruby-tcltk rubygem-net-http-persistent rubygem-net-http-persistent-doc rubygem-thor rubygem-thor-doc rusers rusers-server rwho sac-javadoc samba-dc samba-devel satyr-devel satyr-python saxon saxon-demo saxon-javadoc saxon-manual saxon-scripts 
sbc-devel sblim-cim-client2 sblim-cim-client2-javadoc sblim-cim-client2-manual sblim-cmpi-base sblim-cmpi-base-devel sblim-cmpi-base-test sblim-cmpi-fsvol sblim-cmpi-fsvol-devel sblim-cmpi-fsvol-test sblim-cmpi-network sblim-cmpi-network-devel sblim-cmpi-network-test sblim-cmpi-nfsv3 sblim-cmpi-nfsv3-test sblim-cmpi-nfsv4 sblim-cmpi-nfsv4-test sblim-cmpi-params sblim-cmpi-params-test sblim-cmpi-sysfs sblim-cmpi-sysfs-test sblim-cmpi-syslog sblim-cmpi-syslog-test sblim-gather sblim-gather-devel sblim-gather-provider sblim-gather-test sblim-indication_helper sblim-indication_helper-devel sblim-smis-hba sblim-testsuite sblim-wbemcli scannotation scannotation-javadoc scpio screen SDL-static seahorse-nautilus seahorse-sharing sendmail-sysvinit setools-devel setools-gui setools-libs-tcl setuptool shared-desktop-ontologies shared-desktop-ontologies-devel shim-unsigned-ia32 shim-unsigned-x64 sisu sisu-parent slang-slsh slang-static smbios-utils smbios-utils-bin smbios-utils-python snakeyaml snakeyaml-javadoc snapper snapper-devel snapper-libs sntp SOAPpy soprano soprano-apidocs soprano-devel source-highlight-devel sox sox-devel speex-tools spice-xpi sqlite-tcl squid-migration-script squid-sysvinit sssd-libwbclient-devel sssd-polkit-rules stax2-api stax2-api-javadoc strigi strigi-devel strigi-libs strongimcv subversion-kde subversion-python subversion-ruby sudo-devel suitesparse-doc suitesparse-static supermin-helper svgpart svrcore svrcore-devel sweeper syslinux-devel syslinux-perl system-config-date system-config-date-docs system-config-firewall system-config-firewall-base system-config-firewall-tui system-config-keyboard system-config-keyboard-base system-config-language system-config-printer system-config-users-docs system-switch-java systemd-sysv t1lib t1lib-apps t1lib-devel t1lib-static t1utils taglib-doc talk talk-server tang-nagios targetd tcl-pgtcl tclx tclx-devel tcp_wrappers tcp_wrappers-devel tcp_wrappers-libs teamd-devel teckit-devel telepathy-farstream telepathy-farstream-devel telepathy-filesystem telepathy-gabble telepathy-glib telepathy-glib-devel telepathy-glib-vala telepathy-haze telepathy-logger telepathy-logger-devel telepathy-mission-control telepathy-mission-control-devel telepathy-salut tex-preview texinfo texlive-collection-documentation-base texlive-mh texlive-mh-doc texlive-misc texlive-thailatex texlive-thailatex-doc tix-doc tncfhh tncfhh-devel tncfhh-examples tncfhh-libs tncfhh-utils tog-pegasus-test tokyocabinet-devel-doc tomcat tomcat-admin-webapps tomcat-docs-webapp tomcat-el-2.2-api tomcat-javadoc tomcat-jsp-2.2-api tomcat-jsvc tomcat-lib tomcat-servlet-3.0-api tomcat-webapps totem-devel totem-pl-parser-devel tracker-devel tracker-docs tracker-needle tracker-preferences trang trousers-static txw2 txw2-javadoc unique3 unique3-devel unique3-docs uriparser uriparser-devel usbguard-devel usbredir-server ustr ustr-debug ustr-debug-static ustr-devel ustr-static uuid-c++ uuid-c++-devel uuid-dce uuid-dce-devel uuid-perl uuid-php v4l-utils v4l-utils-devel-tools vala-doc valadoc valadoc-devel valgrind-openmpi vemana2000-fonts vigra vigra-devel virtuoso-opensource virtuoso-opensource-utils vlgothic-p-fonts vsftpd-sysvinit vte3 vte3-devel wayland-doc webkitgtk3 webkitgtk3-devel webkitgtk3-doc webkitgtk4-doc webrtc-audio-processing-devel weld-parent whois woodstox-core woodstox-core-javadoc wordnet wordnet-browser wordnet-devel wordnet-doc ws-commons-util ws-commons-util-javadoc ws-jaxme ws-jaxme-javadoc ws-jaxme-manual wsdl4j wsdl4j-javadoc wvdial x86info xchat-tcl 
xdg-desktop-portal-devel xerces-c xerces-c-devel xerces-c-doc xferstats xguest xhtml2fo-style-xsl xhtml2ps xisdnload xml-commons-apis12 xml-commons-apis12-javadoc xml-commons-apis12-manual xmlgraphics-commons xmlgraphics-commons-javadoc xmlrpc-c-apps xmlrpc-client xmlrpc-common xmlrpc-javadoc xmlrpc-server xmlsec1-gcrypt-devel xmlsec1-nss-devel xmlto-tex xmlto-xhtml xmltoman xorg-x11-apps xorg-x11-drv-intel-devel xorg-x11-drv-keyboard xorg-x11-drv-mouse xorg-x11-drv-mouse-devel xorg-x11-drv-openchrome xorg-x11-drv-openchrome-devel xorg-x11-drv-synaptics xorg-x11-drv-synaptics-devel xorg-x11-drv-vmmouse xorg-x11-drv-void xorg-x11-server-source xorg-x11-xkb-extras xpp3 xpp3-javadoc xpp3-minimal xsettings-kde xstream xstream-javadoc xulrunner xulrunner-devel xz-compat-libs yelp-xsl-devel yum-langpacks yum-NetworkManager-dispatcher yum-plugin-filter-data yum-plugin-fs-snapshot yum-plugin-keys yum-plugin-list-data yum-plugin-local yum-plugin-merge-conf yum-plugin-ovl yum-plugin-post-transaction-actions yum-plugin-pre-transaction-actions yum-plugin-protectbase yum-plugin-ps yum-plugin-rpm-warm-cache yum-plugin-show-leaves yum-plugin-upgrade-helper yum-plugin-verify yum-updateonboot | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/chap-red_hat_enterprise_linux-7.6_release_notes-deprecated_packages |
Chapter 5. OAuthClient [oauth.openshift.io/v1] | Chapter 5. OAuthClient [oauth.openshift.io/v1] Description OAuthClient describes an OAuth client Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 5.1. Specification Property Type Description accessTokenInactivityTimeoutSeconds integer AccessTokenInactivityTimeoutSeconds overrides the default token inactivity timeout for tokens granted to this client. The value represents the maximum amount of time that can occur between consecutive uses of the token. Tokens become invalid if they are not used within this temporal window. The user will need to acquire a new token to regain access once a token times out. This value needs to be set only if the default set in configuration is not appropriate for this client. Valid values are: - 0: Tokens for this client never time out - X: Tokens time out if there is no activity for X seconds The current minimum allowed value for X is 300 (5 minutes) WARNING: existing tokens' timeout will not be affected (lowered) by changing this value accessTokenMaxAgeSeconds integer AccessTokenMaxAgeSeconds overrides the default access token max age for tokens granted to this client. 0 means no expiration. additionalSecrets array (string) AdditionalSecrets holds other secrets that may be used to identify the client. This is useful for rotation and for service account token validation apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources grantMethod string GrantMethod is a required field which determines how to handle grants for this client. Valid grant handling methods are: - auto: always approves grant requests, useful for trusted clients - prompt: prompts the end user for approval of grant requests, useful for third-party clients kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata redirectURIs array (string) RedirectURIs is the valid redirection URIs associated with a client respondWithChallenges boolean RespondWithChallenges indicates whether the client wants authentication needed responses made in the form of challenges instead of redirects scopeRestrictions array ScopeRestrictions describes which scopes this client can request. Each requested scope is checked against each restriction. If any restriction matches, then the scope is allowed. If no restriction matches, then the scope is denied. scopeRestrictions[] object ScopeRestriction describe one restriction on scopes. Exactly one option must be non-nil. secret string Secret is the unique secret associated with a client 5.1.1. .scopeRestrictions Description ScopeRestrictions describes which scopes this client can request. Each requested scope is checked against each restriction. If any restriction matches, then the scope is allowed. If no restriction matches, then the scope is denied. Type array 5.1.2. 
.scopeRestrictions[] Description ScopeRestriction describe one restriction on scopes. Exactly one option must be non-nil. Type object Property Type Description clusterRole object ClusterRoleScopeRestriction describes restrictions on cluster role scopes literals array (string) ExactValues means the scope has to match a particular set of strings exactly 5.1.3. .scopeRestrictions[].clusterRole Description ClusterRoleScopeRestriction describes restrictions on cluster role scopes Type object Required roleNames namespaces allowEscalation Property Type Description allowEscalation boolean AllowEscalation indicates whether you can request roles and their escalating resources namespaces array (string) Namespaces is the list of namespaces that can be referenced. * means any of them (including *) roleNames array (string) RoleNames is the list of cluster roles that can referenced. * means anything 5.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/oauthclients DELETE : delete collection of OAuthClient GET : list or watch objects of kind OAuthClient POST : create an OAuthClient /apis/oauth.openshift.io/v1/watch/oauthclients GET : watch individual changes to a list of OAuthClient. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/oauthclients/{name} DELETE : delete an OAuthClient GET : read the specified OAuthClient PATCH : partially update the specified OAuthClient PUT : replace the specified OAuthClient /apis/oauth.openshift.io/v1/watch/oauthclients/{name} GET : watch changes to an object of kind OAuthClient. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /apis/oauth.openshift.io/v1/oauthclients HTTP method DELETE Description delete collection of OAuthClient Table 5.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthClient Table 5.3. HTTP responses HTTP code Reponse body 200 - OK OAuthClientList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthClient Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.5. Body parameters Parameter Type Description body OAuthClient schema Table 5.6. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 202 - Accepted OAuthClient schema 401 - Unauthorized Empty 5.2.2. /apis/oauth.openshift.io/v1/watch/oauthclients HTTP method GET Description watch individual changes to a list of OAuthClient. deprecated: use the 'watch' parameter with a list operation instead. Table 5.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /apis/oauth.openshift.io/v1/oauthclients/{name} Table 5.8. Global path parameters Parameter Type Description name string name of the OAuthClient HTTP method DELETE Description delete an OAuthClient Table 5.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthClient Table 5.11. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthClient Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthClient Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.15. Body parameters Parameter Type Description body OAuthClient schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 401 - Unauthorized Empty 5.2.4. /apis/oauth.openshift.io/v1/watch/oauthclients/{name} Table 5.17. Global path parameters Parameter Type Description name string name of the OAuthClient HTTP method GET Description watch changes to an object of kind OAuthClient. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/oauth_apis/oauthclient-oauth-openshift-io-v1 |
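These endpoints are usually exercised through the oc client rather than with raw HTTP calls. The sketch below is illustrative only: the client name, secret, and redirect URI are hypothetical placeholders, and the fields used are limited to those listed in the specification above (grantMethod, redirectURIs, secret). Because the paths above contain no {namespace}, OAuthClient is treated here as a cluster-scoped resource.

# Create a minimal OAuthClient (values are placeholders).
oc apply -f - <<'EOF'
apiVersion: oauth.openshift.io/v1
kind: OAuthClient
metadata:
  name: demo-cli                          # hypothetical client name
secret: "replace-with-a-long-random-secret"
grantMethod: auto                         # or "prompt" for third-party clients
redirectURIs:
  - "https://app.example.com/oauth/callback"
EOF

# Read the specified OAuthClient, then delete it again.
oc get oauthclient demo-cli -o yaml
oc delete oauthclient demo-cli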
Chapter 7. Creating an OAuth application in GitHub | Chapter 7. Creating an OAuth application in GitHub The following sections describe how to authorize Red Hat Quay to integrate with GitHub by creating an OAuth application. This allows Red Hat Quay to access GitHub repositories on behalf of a user. OAuth integration with GitHub is primarily used to allow features like automated builds, where Red Hat Quay can be enabled to monitor specific GitHub repositories for changes like commits or pull requests, and trigger container image builds when those changes are made. 7.1. Create new GitHub application Use the following procedure to create an OAuth application in GitHub. Procedure Log into GitHub Enterprise . In the navigation pane, select your username Your organizations . In the navigation pane, select Applications Developer Settings . In the navigation pane, click OAuth Apps New OAuth App . You are navigated to the following page: Enter a name for the application in the Application name textbox. In the Homepage URL textbox, enter your Red Hat Quay URL. Note If you are using public GitHub, the Homepage URL entered must be accessible by your users. It can still be an internal URL. In the Authorization callback URL , enter https://<RED_HAT_QUAY_URL>/oauth2/github/callback . Click Register application to save your settings. When the new application's summary is shown, record the Client ID and the Client Secret shown for the new application. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/builders_and_image_automation/github-app |
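After recording the Client ID and Client Secret, they are typically added to the Red Hat Quay configuration so that GitHub build triggers can authenticate. The fragment below is a sketch only: the field names are assumptions based on the Red Hat Quay configuration schema rather than values taken from this procedure, and the endpoint URL is hypothetical, so verify both against the configuration guide for your Quay release.

# Hypothetical excerpt from Red Hat Quay's config.yaml (field names are assumptions).
FEATURE_BUILD_SUPPORT: true
GITHUB_TRIGGER_CONFIG:
    GITHUB_ENDPOINT: https://github.example.com/    # or https://github.com/ for public GitHub
    CLIENT_ID: <client_id_recorded_above>
    CLIENT_SECRET: <client_secret_recorded_above>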
Chapter 6. ImageStreamLayers [image.openshift.io/v1] | Chapter 6. ImageStreamLayers [image.openshift.io/v1] Description ImageStreamLayers describes information about the layers referenced by images in this image stream. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required blobs images 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources blobs object blobs is a map of blob name to metadata about the blob. blobs{} object ImageLayerData contains metadata about an image layer. images object images is a map between an image name and the names of the blobs and config that comprise the image. images{} object ImageBlobReferences describes the blob references within an image. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 6.1.1. .blobs Description blobs is a map of blob name to metadata about the blob. Type object 6.1.2. .blobs{} Description ImageLayerData contains metadata about an image layer. Type object Required size mediaType Property Type Description mediaType string MediaType of the referenced object. size integer Size of the layer in bytes as defined by the underlying store. This field is optional if the necessary information about size is not available. 6.1.3. .images Description images is a map between an image name and the names of the blobs and config that comprise the image. Type object 6.1.4. .images{} Description ImageBlobReferences describes the blob references within an image. Type object Property Type Description config string config, if set, is the blob that contains the image config. Some images do not have separate config blobs and this field will be set to nil if so. imageMissing boolean imageMissing is true if the image is referenced by the image stream but the image object has been deleted from the API by an administrator. When this field is set, layers and config fields may be empty and callers that depend on the image metadata should consider the image to be unavailable for download or viewing. layers array (string) layers is the list of blobs that compose this image, from base layer to top layer. All layers referenced by this array will be defined in the blobs map. Some images may have zero layers. manifests array (string) manifests is the list of other image names that this image points to. For a single architecture image, it is empty. For a multi-arch image, it consists of the digests of single architecture images, such images shouldn't have layers nor config. 6.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/layers GET : read layers of the specified ImageStream 6.2.1. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/layers Table 6.1. 
Global path parameters Parameter Type Description name string name of the ImageStreamLayers namespace string object name and auth scope, such as for teams and projects Table 6.2. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read layers of the specified ImageStream Table 6.3. HTTP responses HTTP code Response body 200 - OK ImageStreamLayers schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/image_apis/imagestreamlayers-image-openshift-io-v1 |
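Because the layers subresource is read-only, a convenient way to call the endpoint above is oc get --raw. The image stream name (ruby) and namespace (myapp) below are hypothetical; substitute your own.

# Read the blob and config metadata for every image in the stream;
# the response is an ImageStreamLayers object in JSON.
oc get --raw /apis/image.openshift.io/v1/namespaces/myapp/imagestreams/ruby/layers | python3 -m json.tool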
4.3. Grouping Directory Entries | 4.3. Grouping Directory Entries After creating the required entries, group them for ease of administration. The Directory Server supports several methods for grouping entries: Using groups Using roles 4.3.1. About Groups Groups, as the name implies, are simply collections of users. There are several different types of groups in Directory Server which reflects the type of memberships allowed, like certificate groups, URL groups, and unique groups (where every member must be unique). Each type of group is defined by an object class (such as groupOfUniqueNames ) and a corresponding member attribute (such as uniqueMember ). The type of group identifies the type of members. The configuration of the group depends on how those members are added to the group. Directory Server has two kinds of groups: Static groups have a finite and defined list of members which are added manually to the group entry. Dynamic groups use filters to recognize which entries are members of the group, so the group membership is constantly changed as the entries which match the group filter change. Groups are the simplest form of organizing entries in Directory Server. They are largely manually configured and there is no functionality or behavior for them beyond being an organization method. (Really, groups do not "do" anything to directory entries, though groups can be manipulated by LDAP clients to perform operations.) 4.3.1.1. Listing Group Membership in User Entries Groups are essentially lists of user DNs. By default, group membership is only reflected in the group entry itself, not on the user entries. The MemberOf Plug-in, however, uses the group member entries to update user entries dynamically, to reflect on the user entry what groups the user belongs to. The MemberOf Plug-in automatically scans group entries with a specified member attribute, traces back all of the user DNs, and creates a corresponding memberOf attribute on the user entry, with the name of the group. Group membership is determined by the member attribute on the group entry, but group membership for all groups for a user is reflected in the user's entry in the memberOf attribute. The name of every group to which a user belongs is listed as a memberOf attribute. The values of those memberOf attributes are managed by the Directory Server. Note It is possible, as outlined in Section 6.2.1, "About Using Multiple Databases" , to store different suffixes in different databases. By default, the MemberOf Plug-in only looks for potential members for users who are in the same database as the group. If users are stored in a different database than the group, then the user entries will not be updated with memberOf attributes because the plug-in cannot ascertain the relationship between them. The MemberOf Plug-in can be configured to search through all configured databases by enabling the memberOfAllBackends attributes. A single instance of the MemberOf Plug-in can be configured to identify multiple member attributes by setting the multi-valued memberofgroupattr in the plug-in entry, so the MemberOf Plug-in can manage multiple types of groups. 4.3.1.2. Automatically Adding New Entries to Groups Group management can be a critical factor for managing directory data, especially for clients which use Directory Server data and organization or which use groups to apply functionality to entries. Groups make it easier to apply policies consistently and reliably across the directory. 
Password policies, access control lists, and other rules can all be based on group membership. Being able to assign new entries to groups, automatically, at the time that an account is created ensures that the appropriate policies and functionality are immediately applied to those entries - without requiring administrator intervention. The Automembership Plug-in essentially allows a static group to act like a dynamic group. It uses a set of rules (based on entry attributes, directory location, and regular expressions) to assign a user automatically to a specified group. There can be instances where entries that match the LDAP search filter should be added to different groups, depending on the value of some other attribute. For example, machines may need to be added to different groups depending on their IP address or physical location; users may need to be in different groups depending on their employee ID number. Automember definitions are a set of nested entries, with the Auto Membership Plug-in container, then the automember definition, and then any regular expression conditions for that definition. Figure 4.13. Regular Expression Conditions Note Automembership assignments are only made automatically when an entry is added to the Directory Server. For existing entries or entries which are edited to meet an automember rule, there is a fix-up task which can be run to assign the proper group membership. 4.3.2. About Roles Roles are a sort of hybrid group, behaving as both a static and a dynamic group. With a group, entries are added to a group entry as members. With a role, the role attribute is added to an entry and then that attribute is used to identify members in the role entry automatically. Roles effectively and automatically organize users in a number of different ways: Explicitly listing role members. Viewing the role will display the complete list of members for that role. The role itself can be queried to check membership (which is not possible with a dynamic group). Showing to what roles an entry belongs. Because role membership is determined by an attribute on an entry, simply viewing an entry will show all of the roles to which it belongs. This is similar to the memberOf attributes for groups, only it is not necessary to enable or configure a plug-in instance for this functionality to work. It is automatic. Assigning the appropriate roles. Role membership is assigned through the entry , not through the role, so the roles to which a user belongs can be easily assigned and removed by editing the entry, in a single step. Managed roles can do everything that can normally be done with static groups. The role members can be filtered using filtered roles, similarly to the filtering with dynamic groups. Roles are easier to use than groups, more flexible in their implementation, and reduce client complexity. Role members are entries that possess the role. Members can be specified either explicitly or dynamically. How role membership is specified depends upon the type of role. Directory Server supports three types of roles: Managed roles have an explicit enumerated list of members. Filtered roles are assigned entries to the role depending upon the attribute contained by each entry, specified in an LDAP filter. Entries that match the filter possess the role. Nested roles are roles that contain other roles. Roles The concept of activating/inactivating roles allows entire groups of entries to be activated or inactivated in just one operation. 
That is, the members of a role can be temporarily disabled by inactivating the role to which they belong. When a role is inactivated, it does not mean that the user cannot bind to the server using that role entry. The meaning of an inactivated role is that the user cannot bind to the server using any of the entries that belong to that role; the entries that belong to an inactivated role will have the nsAccountLock attribute set to true . When a nested role is inactivated, a user cannot bind to the server if it is a member of any role within the nested role. All the entries that belong to a role that directly or indirectly are members of the nested role have nsAccountLock set to true . There can be several layers of nested roles, and inactivating a nested role at any point in the nesting will inactivate all roles and users beneath it. 4.3.3. Deciding Between Roles and Groups Roles and groups can accomplish the same goals. Managed roles can do everything that static groups can do, while filtered roles can filter and identify members as dynamic groups do. Both roles and groups have advantages and disadvantages. Deciding whether to use roles or groups (or a mix) depends on balancing client needs and server resources. Roles reduce client-side complexity, which is their key benefit. With roles, the client application can check role membership by searching the nsRole operational attribute on entries; this multi-valued attribute identifies every role to which the entry belongs. From the client application point of view, the method for checking membership is uniform and is performed on the server side. However, this ease of use for clients comes at the cost of increased server complexity. Evaluating roles is more resource-intensive for the Directory Server than evaluating groups because the server does the work for the client application. While groups are easier for the server, they require smarter and more complex clients to use them effectively. For example, dynamic groups, from an application point of view, offer no support from the server to provide a list of group members. Instead, the application retrieves the group definitions and then runs the filter. Group membership is only reflected on user entries if the appropriate plug-ins are configured. Ultimately, the methods for determining group membership are not uniform or predictable. Note One thing that can balance managing group membership is the MemberOf Plug-in. Using the memberOf strikes a nice balance between being simple for the client to use and being efficient for the server to calculate. The MemberOf Plug-in dynamically creates memberOf attribute on a user entry whenever a user is added to a group. A client can run a single search on a group entry to get a list of all of its members, or a single search on a user entry to get a complete list of all the groups it belongs to. The server only has maintenance overhead when the membership is modified. Since both the specified member (group) and memberOf (user) attributes are stored in the database, there is no extra processing required for searches, which makes the searches from the clients very efficient. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_the_Directory_Tree-Grouping_Directory_Entries |
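To make the comparison concrete, the sketch below creates a static group and then reads the membership attributes discussed above with standard LDAP client tools. The host name, suffix, and DNs are hypothetical; the groupOfUniqueNames object class, the uniqueMember attribute, and the memberOf and nsRole attributes are the ones described in this section.

# Create a static group with an explicit member list (groupOfUniqueNames/uniqueMember).
ldapmodify -a -D "cn=Directory Manager" -W -H ldap://server.example.com <<'EOF'
dn: cn=example-admins,ou=Groups,dc=example,dc=com
objectClass: top
objectClass: groupOfUniqueNames
cn: example-admins
uniqueMember: uid=jsmith,ou=People,dc=example,dc=com
EOF

# With the MemberOf Plug-in enabled for uniqueMember, the user's entry now carries a
# memberOf value; any roles the user holds appear in the operational nsRole attribute.
ldapsearch -D "cn=Directory Manager" -W -H ldap://server.example.com \
  -b "uid=jsmith,ou=People,dc=example,dc=com" -s base memberOf nsRole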
function::task_nice | function::task_nice Name function::task_nice - The nice value of the task Synopsis Arguments task task_struct pointer Description This function returns the nice value of the given task. | [
"task_nice:long(task:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-task-nice |
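A minimal sketch of calling this function from SystemTap; it prints the nice value of the task in which the begin probe fires and then exits. The task_current() helper used to obtain the task_struct pointer is part of the standard task tapset. Run it as root.

# Print the nice value of the current task, then exit.
stap -e 'probe begin { printf("nice value: %d\n", task_nice(task_current())); exit() }'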
4.2. RHEA-2012:0797 - new packages: crash-gcore-command | 4.2. RHEA-2012:0797 - new packages: crash-gcore-command New crash-gcore-command packages are now available for Red Hat Enterprise Linux 6. The crash-gcore-command extension module is used to dynamically add a gcore command to a running crash utility session on a kernel dumpfile. The command will create a core dump file for a specified user task program that was running when a kernel crashed. The resultant core dump file may then be used with gdb. This enhancement update adds the crash-gcore-command packages to Red Hat Enterprise Linux 6. (BZ# 692799 ) All users who require the crash-gcore-command should install these new packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/rhea-2012-0797 |
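A sketch of how such an extension module is typically used inside a crash session. The vmlinux and vmcore paths, the PID, and the resulting core file name are hypothetical, and the exact command syntax should be confirmed with help extend and help gcore in your own crash session.

# Open the dump with the matching kernel debuginfo (paths are examples only).
crash /usr/lib/debug/lib/modules/<kernel-version>/vmlinux /var/crash/<dump-directory>/vmcore

# Inside the crash session, load the extension and write a core file for a user task:
crash> extend gcore.so
crash> gcore 1234
crash> exit

# Examine the core file that gcore wrote with gdb:
gdb /path/to/program <core-file-written-by-gcore>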
Chapter 8. Removing Windows nodes | Chapter 8. Removing Windows nodes You can remove a Windows node by deleting its host Windows machine. 8.1. Deleting a specific machine You can delete a specific machine. Note You cannot delete a control plane machine. Prerequisites Install an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure View the machines that are in the cluster and identify the one to delete: $ oc get machine -n openshift-machine-api The command output contains a list of machines in the <clusterid>-worker-<cloud_region> format. Delete the machine: $ oc delete machine <machine> -n openshift-machine-api Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed in preventing the machine from being deleted. You can skip draining the node by adding the "machine.openshift.io/exclude-node-draining" annotation to a specific machine. If the machine being deleted belongs to a machine set, a new machine is immediately created to satisfy the specified number of replicas. | [
"oc get machine -n openshift-machine-api",
"oc delete machine <machine> -n openshift-machine-api"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/windows_container_support_for_openshift/removing-windows-nodes |
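The note above names the machine.openshift.io/exclude-node-draining annotation without showing it in use. The sketch below applies it before deletion; the empty annotation value is an assumption (the note does not state what value, if any, is expected), and the label selector used to list Windows machines is likewise an assumption, so adjust both for your cluster version.

# List only the Windows machines (label is an assumption; adjust if your machines use a different label).
$ oc get machine -n openshift-machine-api -l machine.openshift.io/os-id=Windows

# Skip draining a machine whose drain cannot succeed, then delete it.
$ oc annotate machine <machine> -n openshift-machine-api machine.openshift.io/exclude-node-draining=""
$ oc delete machine <machine> -n openshift-machine-api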
Chapter 72. Running a test scenario locally | Chapter 72. Running a test scenario locally In Red Hat Process Automation Manager, you can either run the test scenarios directly in Business Central or locally using the command line. Procedure In Business Central, go to Menu Design Projects and click the project name. On the Project's home page, select the Settings tab. Select git URL and click the Clipboard to copy the git url. Open a command terminal and navigate to the directory where you want to clone the git project. Run the following command: Replace your_git_project_url with relevant data like git://localhost:9418/MySpace/ProjectTestScenarios . Once the project is successfully cloned, navigate to the git project directory and execute the following command: Your project's build information and the test results (such as, the number of tests run and whether the test run was a success or not) are displayed in the command terminal. In case of failures, make the necessary changes in Business Central, pull the changes and run the command again. | [
"git clone your_git_project_url",
"mvn clean test"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/test-scenarios-running-locally-proc |
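Putting the steps together with the sample URL from the procedure, a complete local run looks like the following; the cloned directory name is assumed to match the project name.

git clone git://localhost:9418/MySpace/ProjectTestScenarios
cd ProjectTestScenarios
mvn clean test
# Maven prints the build information and the test results (tests run, failures, errors, skipped).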
1.3. Hardware Compatibility | 1.3. Hardware Compatibility Hardware specifications change almost daily, so it is recommended that all systems be checked for compatibility. The most recent list of supported hardware can be found in the Red Hat Gluster Storage Server Compatible Physical, Virtual Server and Client OS Platforms List , available online at https://access.redhat.com/knowledge/articles/66206 . You must ensure that your environment meets the hardware compatibility outlined in this article. Hardware specifications change rapidly and full compatibility is not guaranteed. Hardware compatibility is a particularly important concern if you have an older or custom-built system. | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/installation_guide/hardware_compatibility |
2.2.2. Network Time Protocol Setup | 2.2.2. Network Time Protocol Setup As opposed to the manual setup described above, you can also synchronize the system clock with a remote server over the Network Time Protocol ( NTP ). For the one-time synchronization only, use the ntpdate command: Firstly, check whether the selected NTP server is accessible: For example: When you find a satisfactory server, run the ntpdate command followed by one or more server addresses: For instance: Unless an error message is displayed, the system time should now be set. You can check the current setting by typing date without any additional arguments as shown in Section 2.2.1, "Date and Time Setup" . In most cases, these steps are sufficient. Only if you really need one or more system services to always use the correct time, enable running ntpdate at boot time: For more information about system services and their setup, see Chapter 12, Services and Daemons . Note If the synchronization with the time server at boot time keeps failing, i.e., you find a relevant error message in the /var/log/boot.log system log, try to add the following line to /etc/sysconfig/network : However, the more convenient way is to set the ntpd daemon to synchronize the time at boot time automatically: Open the NTP configuration file /etc/ntp.conf in a text editor such as vi or nano , or create a new one if it does not already exist: Now add or edit the list of public NTP servers. If you are using Red Hat Enterprise Linux 6, the file should already contain the following lines, but feel free to change or expand these according to your needs: The iburst directive at the end of each line is to speed up the initial synchronization. As of Red Hat Enterprise Linux 6.5 it is added by default. If upgrading from a minor release, and your /etc/ntp.conf file has been modified, then the upgrade to Red Hat Enterprise Linux 6.5 will create a new file /etc/ntp.conf.rpmnew and will not alter the existing /etc/ntp.conf file. Once the list of servers is complete, set the proper permissions in the same file, giving unrestricted access to localhost only: Save all changes, exit the editor, and restart the NTP daemon: Make sure that ntpd is started at boot time: | [
"~]# ntpdate -q server_address",
"~]# ntpdate -q 0.rhel.pool.ntp.org",
"~]# ntpdate server_address",
"~]# ntpdate 0.rhel.pool.ntp.org 1.rhel.pool.ntp.org",
"~]# chkconfig ntpdate on",
"NETWORKWAIT=1",
"~]# nano /etc/ntp.conf",
"server 0.rhel.pool.ntp.org iburst server 1.rhel.pool.ntp.org iburst server 2.rhel.pool.ntp.org iburst server 3.rhel.pool.ntp.org iburst",
"restrict default kod nomodify notrap nopeer noquery restrict -6 default kod nomodify notrap nopeer noquery restrict 127.0.0.1 restrict -6 ::1",
"~]# service ntpd restart",
"~]# chkconfig ntpd on"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-Date_and_Time_Configuration-Command_Line_Configuration-Network_Time_Protocol |
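To confirm that the restarted daemon is actually synchronizing, you can also query its peers; in the ntpq output, an asterisk in the first column marks the server currently selected as the synchronization source. Both commands are standard parts of the ntp packages, although this check is not shown in the procedure above.

~]# ntpq -p                           # list peers with reachability, offset and jitter
~]# ntpdate -q 0.rhel.pool.ntp.org    # query-only offset check; does not set the clock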
31.5. Blacklisting a Module | 31.5. Blacklisting a Module Sometimes, for various performance or security reasons, it is necessary to prevent the system from using a certain kernel module. This can be achieved by module blacklisting , which is a mechanism used by the modprobe utility to ensure that the kernel cannot automatically load certain modules, or that the modules cannot be loaded at all. This is useful in certain situations, such as when using a certain module poses a security risk to your system, or when the module controls the same hardware or service as another module, and loading both modules would cause the system, or its component, to become unstable or non-operational. To blacklist a module, you have to add the following line to the specified configuration file in the /etc/modprobe.d/ directory as root: blacklist <module_name> where <module_name> is the name of the module being blacklisted. You can modify the /etc/modprobe.d/blacklist.conf file that already exists on the system by default. However, the preferred method is to create a separate configuration file, /etc/modprobe.d/ <module_name> .conf , that will contain settings specific only to the given kernel module. Example 31.4. An example of /etc/modprobe.d/blacklist.conf # # Listing a module here prevents the hotplug scripts from loading it. # Usually that'd be so that some other driver will bind it instead, # no matter which driver happens to get probed first. Sometimes user # mode tools can also control driver binding. # # Syntax: see modprobe.conf(5). # # watchdog drivers blacklist i8xx_tco # framebuffer drivers blacklist aty128fb blacklist atyfb blacklist radeonfb blacklist i810fb blacklist cirrusfb blacklist intelfb blacklist kyrofb blacklist i2c-matroxfb blacklist hgafb blacklist nvidiafb blacklist rivafb blacklist savagefb blacklist sstfb blacklist neofb blacklist tridentfb blacklist tdfxfb blacklist virgefb blacklist vga16fb blacklist viafb # ISDN - see bugs 154799, 159068 blacklist hisax blacklist hisax_fcpcipnp # sound drivers blacklist snd-pcsp # I/O dynamic configuration support for s390x (bz #563228) blacklist chsc_sch The blacklist <module_name> command, however, does not prevent the module from being loaded manually, or from being loaded as a dependency for another kernel module that is not blacklisted. To ensure that a module cannot be loaded on the system at all, modify the specified configuration file in the /etc/modprobe.d/ directory as root with the following line: install <module_name> /bin/true where <module_name> is the name of the blacklisted module. Example 31.5. Using module blacklisting as a temporary problem solution Let's say that a flaw in the Linux kernel's PPP over L2TP module ( pppol2tp ) has been found, and this flaw could be misused to compromise your system. If your system does not require the pppol2tp module to function, you can follow this procedure to blacklist pppol2tp completely until this problem is fixed: Verify whether pppol2tp is currently loaded in the kernel by running the following command: If the module is loaded, you need to unload it and all its dependencies to prevent its possible misuse. See Section 31.4, "Unloading a Module" for instructions on how to safely unload it. Run the following command to ensure that pppol2tp cannot be loaded to the kernel: Note that this command overwrites the content of the /etc/modprobe.d/pppol2tp.conf file if it already exists on your system. Check and back up your existing pppol2tp.conf before running this command.
Also, if you were unable to unload the module, you have to reboot the system for this command to take effect. After the problem with the pppol2tp module has been properly fixed, you can delete the /etc/modprobe.d/pppol2tp.conf file or restore its content, which will allow your system to load the pppol2tp module with its original configuration. Important Before blacklisting a kernel module, always ensure that the module is not vital for your current system configuration to function properly. Improper blacklisting of a key kernel module can result in an unstable or non-operational system. | [
"# Listing a module here prevents the hotplug scripts from loading it. Usually that'd be so that some other driver will bind it instead, no matter which driver happens to get probed first. Sometimes user mode tools can also control driver binding. # Syntax: see modprobe.conf(5). # watchdog drivers blacklist i8xx_tco # framebuffer drivers blacklist aty128fb blacklist atyfb blacklist radeonfb blacklist i810fb blacklist cirrusfb blacklist intelfb blacklist kyrofb blacklist i2c-matroxfb blacklist hgafb blacklist nvidiafb blacklist rivafb blacklist savagefb blacklist sstfb blacklist neofb blacklist tridentfb blacklist tdfxfb blacklist virgefb blacklist vga16fb blacklist viafb # ISDN - see bugs 154799, 159068 blacklist hisax blacklist hisax_fcpcipnp # sound drivers blacklist snd-pcsp # I/O dynamic configuration support for s390x (bz #563228) blacklist chsc_sch",
"install <module_name> /bin/true",
"~]# lsmod | grep ^pppol2tp && echo \"The module is loaded\" || echo \"The module is not loaded\"",
"~]# echo \"install pppol2tp /bin/true\" > /etc/modprobe.d/pppol2tp.conf"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/blacklisting_a_module |
probe::signal.sys_tkill | probe::signal.sys_tkill Name probe::signal.sys_tkill - Sending a kill signal to a thread Synopsis Values name Name of the probe point sig_name A string representation of the signal sig The specific signal sent to the process pid_name The name of the signal recipient sig_pid The PID of the process receiving the kill signal Description The tkill call is analogous to kill(2), except that it also allows a process within a specific thread group to be targeted. Such processes are targeted through their unique thread IDs (TID). | [
"signal.sys_tkill"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-signal-sys-tkill |
Chapter 2. Initializing InstructLab | Chapter 2. Initializing InstructLab You must initialize the InstructLab environments to begin working with the Red Hat Enterprise Linux AI models. 2.1. Creating your RHEL AI environment You can start interacting with LLMs and the RHEL AI tooling by initializing the InstructLab environment. Important System profiles for AMD and Intel machines are a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You installed RHEL AI with the bootable container image. You have root user access on your machine. Procedure Optional: You can view your machine's information by running the following command: $ ilab system info Initialize InstructLab by running the following command: $ ilab config init The RHEL AI CLI starts setting up your environment and config.yaml file. The CLI automatically detects your machine's hardware and selects a system profile based on the GPU types. System profiles populate the config.yaml file with the proper parameter values based on your detected hardware. Example output of profile auto-detection Generating config file and profiles: /home/user/.config/instructlab/config.yaml /home/user/.local/share/instructlab/internal/system_profiles/ We have detected the NVIDIA H100 X4 profile as an exact match for your system. -------------------------------------------- Initialization completed successfully! You're ready to start using `ilab`. Enjoy! -------------------------------------------- If the CLI does not detect an exact match for your system, you can manually select a system profile when prompted. Select your hardware vendor and the configuration that matches your system. Example output of selecting system profiles Please choose a system profile to use. System profiles apply to all parts of the config file and set hardware specific defaults for each command. First, please select the hardware vendor your system falls into [0] NO SYSTEM PROFILE [1] NVIDIA Enter the number of your choice [0]: 4 You selected: NVIDIA Next, please select the specific hardware configuration that most closely matches your system. [0] No system profile [1] NVIDIA H100 X2 [2] NVIDIA H100 X8 [3] NVIDIA H100 X4 [4] NVIDIA L4 X8 [5] NVIDIA A100 X2 [6] NVIDIA A100 X8 [7] NVIDIA A100 X4 [8] NVIDIA L40S X4 [9] NVIDIA L40S X8 Enter the number of your choice [hit enter for hardware defaults] [0]: 3 Example output of a completed ilab config init run You selected: /Users/<user>/.local/share/instructlab/internal/system_profiles/nvidia/H100/h100_x4.yaml -------------------------------------------- Initialization completed successfully! You're ready to start using `ilab`. Enjoy!
-------------------------------------------- If you want to use the skeleton taxonomy tree, which includes two skills and one knowledge qna.yaml file, you can clone the skeleton repository and place it in the taxonomy directory by running the following command: rm -rf ~/.local/share/instructlab/taxonomy/ ; git clone https://github.com/RedHatOfficial/rhelai-sample-taxonomy.git ~/.local/share/instructlab/taxonomy/ If the incorrect system profile is auto-detected, you can run the following command: $ ilab config init --profile <path-to-system-profile> where <path-to-system-profile> specifies the path to the correct system profile. You can find the system profiles in the ~/.local/share/instructlab/internal/system_profiles path. Example profile selection command $ ilab config init --profile ~/.local/share/instructlab/internal/system_profiles/amd/mi300x/mi300x_x8.yaml Directory structure of the InstructLab environment 1 ~/.config/instructlab/config.yaml : Contains the config.yaml file. 2 ~/.cache/instructlab/models/ : Contains all downloaded large language models, including the saved output of ones you generate with RHEL AI. 3 ~/.local/share/instructlab/datasets/ : Contains data output from the SDG phase, built on modifications to the taxonomy repository. 4 ~/.local/share/instructlab/taxonomy/ : Contains the skill and knowledge data. 5 ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/ : Contains the output of the multi-phase training process. Verification You can view the full config.yaml file by running the following command: $ ilab config show You can also manually edit the config.yaml file by running the following command: $ ilab config edit | [
"ilab system info",
"ilab config init",
"Generating config file and profiles: /home/user/.config/instructlab/config.yaml /home/user/.local/share/instructlab/internal/system_profiles/ We have detected the NVIDIA H100 X4 profile as an exact match for your system. -------------------------------------------- Initialization completed successfully! You're ready to start using `ilab`. Enjoy! --------------------------------------------",
"Please choose a system profile to use. System profiles apply to all parts of the config file and set hardware specific defaults for each command. First, please select the hardware vendor your system falls into [0] NO SYSTEM PROFILE [1] NVIDIA Enter the number of your choice [0]: 4 You selected: NVIDIA Next, please select the specific hardware configuration that most closely matches your system. [0] No system profile [1] NVIDIA H100 X2 [2] NVIDIA H100 X8 [3] NVIDIA H100 X4 [4] NVIDIA L4 X8 [5] NVIDIA A100 X2 [6] NVIDIA A100 X8 [7] NVIDIA A100 X4 [8] NVIDIA L40S X4 [9] NVIDIA L40S X8 Enter the number of your choice [hit enter for hardware defaults] [0]: 3",
"You selected: /Users/<user>/.local/share/instructlab/internal/system_profiles/nvidia/H100/h100_x4.yaml -------------------------------------------- Initialization completed successfully! You're ready to start using `ilab`. Enjoy! --------------------------------------------",
"rm -rf ~/.local/share/instructlab/taxonomy/ ; git clone https://github.com/RedHatOfficial/rhelai-sample-taxonomy.git ~/.local/share/instructlab/taxonomy/",
"ilab config init --profile <path-to-system-profile>",
"ilab config init --profile ~/.local/share/instructlab/internal/system_profiles/amd/mi300x/mi300x_x8.yaml",
"ββ ~/.config/instructlab/config.yaml 1 ββ ~/.cache/instructlab/models/ 2 ββ ~/.local/share/instructlab/datasets 3 ββ ~/.local/share/instructlab/taxonomy 4 ββ ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/ 5",
"ilab config show",
"ilab config edit"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/building_your_rhel_ai_environment/initializing_instructlab |
Chapter 8. Triggering updates on image stream changes | Chapter 8. Triggering updates on image stream changes When an image stream tag is updated to point to a new image, OpenShift Container Platform can automatically take action to roll the new image out to resources that were using the old image. You configure this behavior in different ways depending on the type of resource that references the image stream tag. 8.1. OpenShift Container Platform resources OpenShift Container Platform deployment configurations and build configurations can be automatically triggered by changes to image stream tags. The triggered action can be run using the new value of the image referenced by the updated image stream tag. 8.2. Triggering Kubernetes resources Kubernetes resources do not have fields for triggering, unlike deployment and build configurations, which include as part of their API definition a set of fields for controlling triggers. Instead, you can use annotations in OpenShift Container Platform to request triggering. The annotation is defined as follows: apiVersion: v1 kind: Pod metadata: annotations: image.openshift.io/triggers: [ { "from": { "kind": "ImageStreamTag", 1 "name": "example:latest", 2 "namespace": "myapp" 3 }, "fieldPath": "spec.template.spec.containers[?(@.name==\"web\")].image", 4 "paused": false 5 }, # ... ] # ... 1 Required: kind is the kind of resource to trigger from and must be ImageStreamTag . 2 Required: name must be the name of an image stream tag. 3 Optional: namespace defaults to the namespace of the object. 4 Required: fieldPath is the JSON path to change. This field is limited and accepts only a JSON path expression that precisely matches a container by ID or index. For pods, the JSON path is spec.containers[?(@.name='web')].image . 5 Optional: paused is whether or not the trigger is paused, and the default value is false . Set paused to true to temporarily disable this trigger. When one of the core Kubernetes resources contains both a pod template and this annotation, OpenShift Container Platform attempts to update the object by using the image currently associated with the image stream tag that is referenced by the trigger. The update is performed against the fieldPath specified. Examples of core Kubernetes resources that can contain both a pod template and annotation include: CronJobs Deployments StatefulSets DaemonSets Jobs ReplicationControllers Pods 8.3. Setting the image trigger on Kubernetes resources When adding an image trigger to deployments, you can use the oc set triggers command. For example, the sample command in this procedure adds an image change trigger to the deployment named example so that when the example:latest image stream tag is updated, the web container inside the deployment updates with the new image value. This command sets the correct image.openshift.io/triggers annotation on the deployment resource. Procedure Trigger Kubernetes resources by entering the oc set triggers command: $ oc set triggers deploy/example --from-image=example:latest -c web Example deployment with trigger annotation apiVersion: apps/v1 kind: Deployment metadata: annotations: image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"example:latest"},"fieldPath":"spec.template.spec.containers[?(@.name==\"container\")].image"}]' # ... Unless the deployment is paused, this pod template update automatically causes a deployment to occur with the new image value. | [
"apiVersion: v1 kind: Pod metadata: annotations: image.openshift.io/triggers: [ { \"from\": { \"kind\": \"ImageStreamTag\", 1 \"name\": \"example:latest\", 2 \"namespace\": \"myapp\" 3 }, \"fieldPath\": \"spec.template.spec.containers[?(@.name==\\\"web\\\")].image\", 4 \"paused\": false 5 }, # ]",
"oc set triggers deploy/example --from-image=example:latest -c web",
"apiVersion: apps/v1 kind: Deployment metadata: annotations: image.openshift.io/triggers: '[{\"from\":{\"kind\":\"ImageStreamTag\",\"name\":\"example:latest\"},\"fieldPath\":\"spec.template.spec.containers[?(@.name==\\\"container\\\")].image\"}]'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/images/triggering-updates-on-imagestream-changes |
Authorization APIs | Authorization APIs OpenShift Container Platform 4.16 Reference guide for authorization APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/authorization_apis/index |
Chapter 1. Installing and configuring CUPS | Chapter 1. Installing and configuring CUPS You can use CUPS to print from a local host. You can also use this host to share printers in the network and act as a print server. Procedure Install the cups package: If you configure CUPS as a print server, edit the /etc/cups/cupsd.conf file, and make the following changes: If you want to remotely configure CUPS or use this host as a print server, configure on which IP addresses and ports the service listens: By default, CUPS listens only on localhost interfaces ( 127.0.0.1 and ::1 ). Specify IPv6 addresses in square brackets. Important Do not configure CUPS to listen on interfaces that allow access from untrustworthy networks, such as the internet. Configure which IP ranges can access the service by allowing the respective IP ranges in the <Location /> directive: In the <Location /admin> directive, configure which IP addresses and ranges can access the CUPS administration services: With these settings, only the hosts with the IP addresses 192.0.2.15 and 2001:db8:1::22 can access the administration services. Optional: Configure IP addresses and ranges that are allowed to access the configuration and log files in the web interface: If you run the firewalld service and want to configure remote access to CUPS, open the CUPS port in firewalld : If you run CUPS on a host with multiple interfaces, consider limiting the access to the required networks. Enable and start the cups service: Verification Use a browser, and access http:// <hostname> :631 . If you can connect to the web interface, CUPS works. Note that certain features, such as the Administration tab, require authentication and an HTTPS connection. By default, CUPS uses a self-signed certificate for HTTPS access and, consequently, the connection is not secure when you authenticate. Next steps Configuring TLS encryption on a CUPS server Optional: Granting administration permissions to manage a CUPS server in the web interface Adding a printer to CUPS by using the web interface Using and configuring firewalld | [
"dnf install cups",
"Listen 192.0.2.1:631 Listen [2001:db8:1::1]:631",
"<Location /> Allow from 192.0.2.0/24 Allow from [2001:db8:1::1]/32 Order allow,deny </Location>",
"<Location /admin> Allow from 192.0.2.15/32 Allow from [2001:db8:1::22]/128 Order allow,deny </Location>",
"<Location /admin/conf> Allow from 192.0.2.15/32 Allow from [2001:db8:1::22]/128 </Location> <Location /admin/log> Allow from 192.0.2.15/32 Allow from [2001:db8:1::22]/128 </Location>",
"firewall-cmd --permanent --add-port=631/tcp firewall-cmd --reload",
"systemctl enable --now cups"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_using_a_cups_printing_server/installing-and-configuring-cups_configuring-printing |
Chapter 6. Additional Resources | Chapter 6. Additional Resources This chapter provides references to other relevant sources of information about Red Hat Software Collections 3.5 and Red Hat Enterprise Linux. 6.1. Red Hat Product Documentation The following documents are directly or indirectly relevant to this book: Red Hat Software Collections 3.5 Packaging Guide - The Packaging Guide for Red Hat Software Collections explains the concept of Software Collections, documents the scl utility, and provides a detailed explanation of how to create a custom Software Collection or extend an existing one. Red Hat Developer Toolset 9.1 Release Notes - The Release Notes for Red Hat Developer Toolset document known problems, possible issues, changes, and other important information about this Software Collection. Red Hat Developer Toolset 9.1 User Guide - The User Guide for Red Hat Developer Toolset contains more information about installing and using this Software Collection. Using Red Hat Software Collections Container Images - This book provides information on how to use container images based on Red Hat Software Collections. The available container images include applications, daemons, databases, as well as the Red Hat Developer Toolset container images. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. Getting Started with Containers - This guide contains a comprehensive overview of information about building and using container images on Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host. Using and Configuring Red Hat Subscription Manager - The Using and Configuring Red Hat Subscription Manager book provides detailed information on how to register Red Hat Enterprise Linux systems, manage subscriptions, and view notifications for the registered systems. Red Hat Enterprise Linux 6 Deployment Guide - The Deployment Guide for Red Hat Enterprise Linux 6 provides relevant information regarding the deployment, configuration, and administration of this system. Red Hat Enterprise Linux 7 System Administrator's Guide - The System Administrator's Guide for Red Hat Enterprise Linux 7 provides information on deployment, configuration, and administration of this system. 6.2. Red Hat Developers Red Hat Developer Program - The Red Hat Developers community portal. Overview of Red Hat Software Collections on Red Hat Developers - The Red Hat Developers portal provides a number of tutorials to get you started with developing code using different development technologies. This includes the Node.js, Perl, PHP, Python, and Ruby Software Collections. Red Hat Developer Blog - The Red Hat Developer Blog contains up-to-date information, best practices, opinion, product and program announcements as well as pointers to sample code and other resources for those who are designing and developing applications based on Red Hat technologies. | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.5_release_notes/chap-additional_resources |
Preface | Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in four versions: 8u, 11u, 17u, and 21u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.12/pr01 |
Observability | Observability Red Hat OpenShift GitOps 1.12 Using observability features to view Argo CD logs and monitor the performance and health of Argo CD and application resources Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/observability/index |
Chapter 1. About OpenShift Lightspeed | Chapter 1. About OpenShift Lightspeed The following topics provide an overview of Red Hat OpenShift Lightspeed and discuss functional requirements. 1.1. OpenShift Lightspeed overview Red Hat OpenShift Lightspeed is a generative AI-powered virtual assistant for OpenShift Container Platform. Lightspeed functionality uses a natural-language interface in the OpenShift web console to provide answers to questions that you ask about the product. This early access program exists so that customers can provide feedback on the user experience, features and capabilities, issues encountered, and any other aspects of the product so that Lightspeed can become more aligned with your needs when it is released and made generally available. 1.1.1. About product coverage Red Hat OpenShift Lightspeed generates answers to questions based on the content in the official OpenShift Container Platform product documentation. The documentation for the following products is not part of the OpenShift product documentation; therefore, Lightspeed has limited context for generating answers about these products: Builds for Red Hat OpenShift Red Hat Advanced Cluster Security for Kubernetes Red Hat Advanced Cluster Management for Kubernetes Red Hat CodeReady Workspaces Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Red Hat OpenShift Serverless Red Hat OpenShift Service Mesh 3.x Red Hat Quay 1.2. OpenShift Requirements OpenShift Lightspeed requires OpenShift Container Platform 4.15 or later running on x86 hardware. Any installation type or deployment architecture is supported so long as the cluster is 4.15+ and x86-based. For the OpenShift Lightspeed Technology Preview release, the cluster you use must be connected to the Internet and it must have telemetry enabled. Telemetry is enabled by default. If you are using a standard installation process for OpenShift, confirm that it does not disable telemetry. 1.3. Large Language Model (LLM) requirements As part of the Technology Preview release, OpenShift Lightspeed can rely on the following Software as a Service (SaaS) Large Language Model (LLM) providers: OpenAI Microsoft Azure OpenAI IBM watsonx Note Many self-hosted or self-managed model servers claim API compatibility with OpenAI. It is possible to configure the OpenShift Lightspeed OpenAI provider to point to an API-compatible model server. If the model server is truly API-compatible, especially with respect to authentication, then it may work. These configurations have not been tested by Red Hat, and issues related to their use are outside the scope of Technology Preview support. For OpenShift Lightspeed configurations with Red Hat OpenShift AI, you must host your own LLM provider. 1.3.1. About OpenAI To use OpenAI with Red Hat OpenShift Lightspeed, you will need access to the OpenAI API platform . 1.3.2. About Azure OpenAI To use Microsoft Azure with Red Hat OpenShift Lightspeed, you must have access to Microsoft Azure OpenAI. 1.3.3. About WatsonX To use IBM watsonx with Red Hat OpenShift Lightspeed, you will need an account with IBM Cloud's WatsonX . 1.3.4. About Red Hat Enterprise Linux AI Red Hat Enterprise Linux AI is OpenAI API-compatible, and is configured in a similar manner as the OpenAI provider. You can configure Red Hat Enterprise Linux AI as the Large Language Model (LLM) provider. Because Red Hat Enterprise Linux AI is in a different environment than the OpenShift Lightspeed deployment, the model deployment must allow access using a secure connection.
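As a quick, informal check that such an endpoint is reachable and answers the OpenAI-compatible API, you can request its model listing; the URL and token variable below are placeholder assumptions, not values taken from this documentation:
$ curl -sS -H "Authorization: Bearer ${MODEL_API_TOKEN}" https://model.example.com/v1/models # a JSON list of served model IDs indicates the endpoint speaks the OpenAI-compatible API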
For more information, see Optional: Allowing access to a model from a secure endpoint . 1.3.5. About Red Hat OpenShift AI Red Hat OpenShift AI is OpenAI API-compatible, and is configured largely the same as the OpenAI provider. You need a Large Language Model (LLM) deployed on the single model-serving platform of Red Hat OpenShift AI using the Virtual Large Language Model (vLLM) runtime. If the model deployment is in a different OpenShift environment than the OpenShift Lightspeed deployment, the model deployment must include a route to expose it outside the cluster. For more information, see About the single-model serving platform . 1.4. About data use Red Hat OpenShift Lightspeed is a virtual assistant you interact with using natural language. Using the OpenShift Lightspeed interface, you send chat messages that OpenShift Lightspeed transforms and sends to the Large Language Model (LLM) provider you have configured for your environment. These messages can contain information about your cluster, cluster resources, or other aspects of your environment. The OpenShift Lightspeed Technology Preview release has limited capabilities to filter or redact the information you provide to the LLM. Do not enter information into the OpenShift Lightspeed interface that you do not want to send to the LLM provider. By using the OpenShift Lightspeed as part of the Technology Preview release, you agree that Red Hat may use all of the messages that you exchange with the LLM provider for any purpose. The transcript recording data uses the Red Hat Insights system's back-end, and is subject to the same access restrictions and other security policies. You may email Red Hat and request that your data be deleted at the end of the Technology Preview release period. 1.5. About data, telemetry, transcript, and feedback collection OpenShift Lightspeed is a virtual assistant that you interact with using natural language. Communicating with OpenShift Lightspeed involves sending chat messages, which may include information about your cluster, your cluster resources, or other aspects of your environment. These messages are sent to OpenShift Lightspeed, potentially with some content filtered or redacted, and then sent to the LLM provider that you have configured. Do not enter any information into the OpenShift Lightspeed user interface that you do not want sent to the LLM provider. The transcript recording data uses the Red Hat Insights system back-end and is subject to the same access restrictions and other security policies described in Red Hat Insights data and application security . 1.6. About remote health monitoring Red Hat records basic information using the Telemeter Client and the Insights Operator, which is generally referred to as Remote Health Monitoring in OpenShift clusters. The OpenShift documentation for remote health monitoring explains data collection and includes instructions for opting out. If you wish to disable transcript or feedback collection, you must follow the procedure for opting out of remote health monitoring. For more information, see "About remote health monitoring" in the OpenShift Container Platform documentation. 1.6.1. Transcript collection overview Transcripts are sent to Red Hat every two hours, by default. If you are using the filtering and redaction functionality, the filtered or redacted content is sent to Red Hat. Red Hat does not see the original non-redacted content, and the redaction takes place before any content is captured in logs. 
OpenShift Lightspeed temporarily logs and stores complete transcripts of conversations that users have with the virtual assistant. This includes the following information: Queries from the user. The complete message sent to the configured Large Language Model (LLM) provider, which includes system instructions, referenced documentation, and the user question. The complete response from the LLM provider. Transcripts originate from the cluster and are associated with the cluster. Red Hat can assign specific clusters to specific customer accounts. Transcripts do not contain any information about users. 1.6.2. Feedback collection overview OpenShift Lightspeed collects feedback from users who engage with the feedback feature in the virtual assistant interface. If a user submits feedback, the feedback score (thumbs up or down), text feedback (if entered), the user query, and the LLM provider response are stored and sent to Red Hat on the same schedule as transcript collection. If you are using the filtering and redaction functionality, the filtered or redacted content is sent to Red Hat. Red Hat will not see the original non-redacted content, and the redaction takes place before any content is captured in logs. Feedback is associated with the cluster from which it originated, and Red Hat can attribute specific clusters to specific customer accounts. Feedback does not contain any information about which user submitted the feedback, and feedback cannot be tied to any individual user. 1.7. Additional resources Creating the credential secret using the web console Creating the credential secret using the CLI Creating the Lightspeed custom resource file using the web console Creating the Lightspeed custom resource file using the CLI | null | https://docs.redhat.com/en/documentation/red_hat_openshift_lightspeed/1.0tp1/html/about/ols-about-openshift-lightspeed |
Chapter 33. Using Ansible to integrate IdM with NIS domains and netgroups | Chapter 33. Using Ansible to integrate IdM with NIS domains and netgroups 33.1. NIS and its benefits In UNIX environments, the network information service (NIS) is a common way to centrally manage identities and authentication. NIS, which was originally named Yellow Pages (YP), centrally manages authentication and identity information such as: Users and passwords Host names and IP addresses POSIX groups For modern network infrastructures, NIS is considered too insecure because, for example, it neither provides host authentication, nor is data sent encrypted over the network. To work around the problems, NIS is often integrated with other protocols to enhance security. If you use Identity Management (IdM), you can use the NIS server plug-in to connect clients that cannot be fully migrated to IdM. IdM integrates netgroups and other NIS data into the IdM domain. Additionally, you can easily migrate user and host identities from a NIS domain to IdM. Netgroups can be used everywhere that NIS groups are expected. Additional resources NIS in IdM NIS netgroups in IdM Migrating from NIS to Identity Management 33.2. NIS in IdM NIS objects in IdM NIS objects are integrated and stored in the Directory Server back end in compliance with RFC 2307 . IdM creates NIS objects in the LDAP directory and clients retrieve them through, for example, System Security Services Daemon (SSSD) or nss_ldap using an encrypted LDAP connection. IdM manages netgroups, accounts, groups, hosts, and other data. IdM uses a NIS listener to map passwords, groups, and netgroups to IdM entries. NIS Plug-ins in IdM For NIS support, IdM uses the following plug-ins provided in the slapi-nis package: NIS Server Plug-in The NIS Server plug-in enables the IdM-integrated LDAP server to act as a NIS server for clients. In this role, Directory Server dynamically generates and updates NIS maps according to the configuration. Using the plug-in, IdM serves clients using the NIS protocol as an NIS server. Schema Compatibility Plug-in The Schema Compatibility plug-in enables the Directory Server back end to provide an alternate view of entries stored in part of the directory information tree (DIT). This includes adding, dropping, or renaming attribute values, and optionally retrieving values for attributes from multiple entries in the tree. For further details, see the /usr/share/doc/slapi-nis- version /sch-getting-started.txt file. 33.3. NIS netgroups in IdM NIS entities can be stored in netgroups. Compared to UNIX groups, netgroups provide support for: Nested groups (groups as members of other groups). Grouping hosts. A netgroup defines a set of the following information: host, user, and domain. This set is called a triple . These three fields can contain: A value. A dash ( - ), which specifies "no valid value" No value. An empty field specifies a wildcard. When a client requests a NIS netgroup, IdM translates the LDAP entry : To a traditional NIS map and sends it to the client over the NIS protocol by using the NIS plug-in. To an LDAP format that is compliant with RFC 2307 or RFC 2307bis. 33.4. Using Ansible to ensure that a netgroup is present You can use an Ansible playbook to ensure that an IdM netgroup is present. The example describes how to ensure that the TestNetgroup1 group is present. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. 
You have installed the ansible-freeipa package on the Ansible controller. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. Procedure Create your Ansible playbook file netgroup-present.yml with the following content: Run the playbook: Additional resources NIS in IdM /usr/share/doc/ansible-freeipa/README-netgroup.md /usr/share/doc/ansible-freeipa/playbooks/netgroup 33.5. Using Ansible to ensure that members are present in a netgroup You can use an Ansible playbook to ensure that IdM users, groups, and netgroups are members of a netgroup. The example describes how to ensure that the TestNetgroup1 group has the following members: The user1 and user2 IdM users The group1 IdM group The admins netgroup An idmclient1 host that is an IdM client Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package on the Ansible controller. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. The TestNetgroup1 IdM netgroup exists. The user1 and user2 IdM users exist. The group1 IdM group exists. The admins IdM netgroup exists. Procedure Create your Ansible playbook file IdM-members-present-in-a-netgroup.yml with the following content: Run the playbook: Additional resources NIS in IdM /usr/share/doc/ansible-freeipa/README-netgroup.md /usr/share/doc/ansible-freeipa/playbooks/netgroup 33.6. Using Ansible to ensure that a member is absent from a netgroup You can use an Ansible playbook to ensure that IdM users are absent from a netgroup. The example describes how to ensure that the TestNetgroup1 group does not have the user1 IdM user among its members. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package on the Ansible controller. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. The TestNetgroup1 netgroup exists. Procedure Create your Ansible playbook file IdM-member-absent-from-a-netgroup.yml with the following content: Run the playbook: Additional resources NIS in IdM /usr/share/doc/ansible-freeipa/README-netgroup.md /usr/share/doc/ansible-freeipa/playbooks/netgroup 33.7. Using Ansible to ensure that a netgroup is absent You can use an Ansible playbook to ensure that a netgroup does not exist in Identity Management (IdM). The example describes how to ensure that the TestNetgroup1 group does not exist in your IdM domain. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package on the Ansible controller. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault.
Procedure Create your Ansible playbook file netgroup-absent.yml with the following content: Run the playbook: Additional resources NIS in IdM /usr/share/doc/ansible-freeipa/README-netgroup.md /usr/share/doc/ansible-freeipa/playbooks/netgroup | [
"( host.example.com ,, nisdomain.example.com ) (-, user , nisdomain.example.com )",
"--- - name: Playbook to manage IPA netgroup. hosts: ipaserver become: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure netgroup members are present ipanetgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: TestNetgroup1",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory_/netgroup-present.yml",
"--- - name: Playbook to manage IPA netgroup. hosts: ipaserver become: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure netgroup members are present ipanetgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: TestNetgroup1 user: user1,user2 group: group1 host: idmclient1 netgroup: admins action: member",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory_/IdM-members-present-in-a-netgroup.yml",
"--- - name: Playbook to manage IPA netgroup. hosts: ipaserver become: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure netgroup user, \"user1\", is absent ipanetgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: TestNetgroup1 user: \"user1\" action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory_/IdM-member-absent-from-a-netgroup.yml",
"--- - name: Playbook to manage IPA netgroup. hosts: ipaserver become: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure netgroup my_netgroup1 is absent ipanetgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: my_netgroup1 state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory_/netgroup-absent.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_ansible_to_install_and_manage_identity_management/using-ansible-to-integrate-idm-with-nis-domains-and-netgroups_using-ansible-to-install-and-manage-identity-management |
Chapter 1. Security APIs | Chapter 1. Security APIs 1.1. CertificateSigningRequest [certificates.k8s.io/v1] Description CertificateSigningRequest objects provide a mechanism to obtain x509 certificates by submitting a certificate signing request, and having it asynchronously approved and issued. Kubelets use this API to obtain: 1. client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client-kubelet" signerName). 2. serving certificates for TLS endpoints kube-apiserver can connect to securely (with the "kubernetes.io/kubelet-serving" signerName). This API can be used to request client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client" signerName), or to obtain certificates from custom non-Kubernetes signers. Type object 1.2. CredentialsRequest [cloudcredential.openshift.io/v1] Description CredentialsRequest is the Schema for the credentialsrequests API Type object 1.3. PodSecurityPolicyReview [security.openshift.io/v1] Description PodSecurityPolicyReview checks which service accounts (not users, since that would be cluster-wide) can create the PodTemplateSpec in question. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.4. PodSecurityPolicySelfSubjectReview [security.openshift.io/v1] Description PodSecurityPolicySelfSubjectReview checks whether this user/SA tuple can create the PodTemplateSpec Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.5. PodSecurityPolicySubjectReview [security.openshift.io/v1] Description PodSecurityPolicySubjectReview checks whether a particular user/SA tuple can create the PodTemplateSpec. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.6. RangeAllocation [security.openshift.io/v1] Description RangeAllocation is used so we can easily expose a RangeAllocation typed for security group Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object 1.7. Secret [v1] Description Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes. Type object 1.8. SecurityContextConstraints [security.openshift.io/v1] Description SecurityContextConstraints governs the ability to make requests that affect the SecurityContext that will be applied to a container. For historical reasons SCC was exposed under the core Kubernetes API group. That exposure is deprecated and will be removed in a future release - users should instead use the security.openshift.io group to manage SecurityContextConstraints. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.9. ServiceAccount [v1] Description ServiceAccount binds together: * a name, understood by users, and perhaps by peripheral systems, for an identity * a principal that can be authenticated and authorized * a set of secrets Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/security_apis/security-apis |
Release notes for Red Hat build of OpenJDK 8.0.422 | Release notes for Red Hat build of OpenJDK 8.0.422 Red Hat build of OpenJDK 8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.422/index |
Chapter 2. Installation | Chapter 2. Installation This chapter guides you through the steps to install AMQ Ruby in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To install packages on Red Hat Enterprise Linux, you must register your system . To use AMQ Ruby, you must install Ruby in your environment. 2.2. Installing on Red Hat Enterprise Linux Procedure Use the subscription-manager command to subscribe to the required package repositories. If necessary, replace <variant> with the value for your variant of Red Hat Enterprise Linux (for example, server or workstation ). Red Hat Enterprise Linux 7 $ sudo subscription-manager repos --enable=amq-clients-2-for-rhel-7- <variant> -rpms Red Hat Enterprise Linux 8 $ sudo subscription-manager repos --enable=amq-clients-2-for-rhel-8-x86_64-rpms Use the yum command to install the rubygem-qpid_proton and rubygem-qpid_proton-doc packages. $ sudo yum install rubygem-qpid_proton rubygem-qpid_proton-doc For more information about using packages, see Appendix B, Using Red Hat Enterprise Linux packages . | [
"sudo subscription-manager repos --enable=amq-clients-2-for-rhel-7- <variant> -rpms",
"sudo subscription-manager repos --enable=amq-clients-2-for-rhel-8-x86_64-rpms",
"sudo yum install rubygem-qpid_proton rubygem-qpid_proton-doc"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_ruby_client/installation |
probe::ioscheduler_trace.unplug_timer | probe::ioscheduler_trace.unplug_timer Name probe::ioscheduler_trace.unplug_timer - Fires when the unplug timer associated with a request queue expires Synopsis ioscheduler_trace.unplug_timer Values rq_queue request queue name Name of the probe point Description Fires when the unplug timer associated with a request queue expires. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ioscheduler-trace-unplug-timer
Chapter 19. Squid Caching Proxy | Chapter 19. Squid Caching Proxy Squid is a high-performance proxy caching server for web clients, supporting FTP, Gopher, and HTTP data objects. It reduces bandwidth and improves response times by caching and reusing frequently-requested web pages. [17] In Red Hat Enterprise Linux, the squid package provides the Squid Caching Proxy. Enter the following command to see if the squid package is installed: If it is not installed and you want to use squid, use the yum utility as root to install it: 19.1. Squid Caching Proxy and SELinux When SELinux is enabled, Squid runs confined by default. Confined processes run in their own domains, and are separated from other confined processes. If a confined process is compromised by an attacker, depending on SELinux policy configuration, an attacker's access to resources and the possible damage they can do is limited. The following example demonstrates the Squid processes running in their own domain. This example assumes the squid package is installed: Run the getenforce command to confirm SELinux is running in enforcing mode: The command returns Enforcing when SELinux is running in enforcing mode. Enter the following command as the root user to start the squid daemon: Confirm that the service is running. The output should include the information below (only the time stamp will differ): Enter the following command to view the squid processes: The SELinux context associated with the squid processes is system_u:system_r:squid_t:s0 . The second last part of the context, squid_t , is the type. A type defines a domain for processes and a type for files. In this case, the Squid processes are running in the squid_t domain. SELinux policy defines how processes running in confined domains, such as squid_t , interact with files, other processes, and the system in general. Files must be labeled correctly to allow squid access to them. When the /etc/squid/squid.conf file is configured so squid listens on a port other than the default TCP ports 3128, 3401 or 4827, the semanage port command must be used to add the required port number to the SELinux policy configuration. The following example demonstrates configuring squid to listen on a port that is not initially defined in SELinux policy configuration for it, and, as a consequence, the server failing to start. This example also demonstrates how to then configure the SELinux system to allow the daemon to successfully listen on a non-standard port that is not already defined in the policy. This example assumes the squid package is installed. Run each command in the example as the root user: Confirm the squid daemon is not running: If the output differs, stop the process: Enter the following command to view the ports SELinux allows squid to listen on: Edit /etc/squid/squid.conf as root. Configure the http_port option so it lists a port that is not configured in SELinux policy configuration for squid . In this example, the daemon is configured to listen on port 10000: Run the setsebool command to make sure the squid_connect_any Boolean is set to off. 
This ensures squid is only permitted to operate on specific ports: Start the squid daemon: An SELinux denial message similar to the following is logged: For SELinux to allow squid to listen on port 10000, as used in this example, the following command is required: Start squid again and have it listen on the new port: Now that SELinux has been configured to allow Squid to listen on a non-standard port (TCP 10000 in this example), it starts successfully on this port. [17] See the Squid Caching Proxy project page for more information. | [
"~]USD rpm -q squid package squid is not installed",
"~]# yum install squid",
"~]USD getenforce Enforcing",
"~]# systemctl start squid.service",
"~]# systemctl status squid.service squid.service - Squid caching proxy Loaded: loaded (/usr/lib/systemd/system/squid.service; disabled) Active: active (running) since Mon 2013-08-05 14:45:53 CEST; 2s ago",
"~]USD ps -eZ | grep squid system_u:system_r:squid_t:s0 27018 ? 00:00:00 squid system_u:system_r:squid_t:s0 27020 ? 00:00:00 log_file_daemon",
"~]# systemctl status squid.service squid.service - Squid caching proxy Loaded: loaded (/usr/lib/systemd/system/squid.service; disabled) Active: inactive (dead)",
"~]# systemctl stop squid.service",
"~]# semanage port -l | grep -w -i squid_port_t squid_port_t tcp 3401, 4827 squid_port_t udp 3401, 4827",
"Squid normally listens to port 3128 http_port 10000",
"~]# setsebool -P squid_connect_any 0",
"~]# systemctl start squid.service Job for squid.service failed. See 'systemctl status squid.service' and 'journalctl -xn' for details.",
"localhost setroubleshoot: SELinux is preventing the squid (squid_t) from binding to port 10000. For complete SELinux messages. run sealert -l 97136444-4497-4fff-a7a7-c4d8442db982",
"~]# semanage port -a -t squid_port_t -p tcp 10000",
"~]# systemctl start squid.service"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-squid_caching_proxy |
Appendix B. The cephadm commands | Appendix B. The cephadm commands The cephadm utility is a command-line tool to manage the local host for the Cephadm Orchestrator. It provides commands to investigate and modify the state of the current host. Some of the commands are generally used for debugging. Note cephadm is not required on all hosts; however, it is useful when investigating a particular daemon. The cephadm-ansible-preflight playbook installs cephadm on all hosts and the cephadm-ansible purge playbook requires cephadm be installed on all hosts to work properly. adopt Description Convert an upgraded storage cluster daemon to run cephadm . Syntax Example ceph-volume Description This command is used to list all the devices on the particular host. It runs the ceph-volume command inside a container. ceph-volume deploys OSDs with different device technologies, such as lvm or physical disks, using pluggable tools, and follows a predictable and robust way of preparing, activating, and starting OSDs. Syntax Example check-host Description Check whether the host configuration is suitable for a Ceph cluster. Syntax Example deploy Description Deploys a daemon on the local host. Syntax Example enter Description Run an interactive shell inside a running daemon container. Syntax Example help Description View all the commands supported by cephadm . Syntax Example install Description Install the packages. Syntax Example inspect-image Description Inspect the local Ceph container image. Syntax Example list-networks Description List the IP networks. Syntax Example ls Description List daemon instances known to cephadm on the hosts. You can use --no-detail for the command to run faster, which gives details of the daemon name, fsid, style, and systemd unit per daemon. You can use the --legacy-dir option to specify a legacy base directory to search for daemons. Syntax Example logs Description Print journald logs for a daemon container. This is similar to the journalctl command. Syntax Example prepare-host Description Prepare a host for cephadm . Syntax Example pull Description Pull the Ceph image. Syntax Example registry-login Description Give cephadm login information for an authenticated registry. Cephadm attempts to log the calling host into that registry. Syntax Example You can also use a JSON registry file containing the login info formatted as: Syntax Example rm-daemon Description Remove a specific daemon instance. If you run the cephadm rm-daemon command on the host directly, although the command removes the daemon, the cephadm mgr module notices that the daemon is missing and redeploys it. This command is problematic and should be used only for experimental purposes and debugging. Syntax Example rm-cluster Description Remove all the daemons from a storage cluster on that specific host where it is run. Similar to rm-daemon , if you remove a few daemons this way and the Ceph Orchestrator is not paused and some of those daemons belong to services that are not unmanaged, the cephadm orchestrator just redeploys them there. Syntax Example Important To better clean up the node as part of performing the cluster removal, cluster logs under the /var/log/ceph directory are deleted when the cephadm rm-cluster command is run. The cluster logs are removed as long as --keep-logs is not passed to the rm-cluster command.
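For example, a minimal sketch of a full node purge that preserves those logs, built only from the commands and flags shown in this appendix; the <cluster_fsid> placeholder stands for your cluster's actual fsid:
ceph mgr module disable cephadm # prevent the orchestrator from redeploying daemons while the node is purged
cephadm rm-cluster --fsid <cluster_fsid> --force --keep-logs # remove the daemons but keep the logs under /var/log/ceph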
Note If the cephadm rm-cluster command is run on a host that is part of an existing cluster where the host is managed by Cephadm and the Cephadm Manager module is still enabled and running, then Cephadm might immediately start deploying new daemons, and more logs could appear. To avoid this, disable the cephadm mgr module before purging the cluster. rm-repo Description Remove a package repository configuration. This is mainly used for the disconnected installation of Red Hat Ceph Storage. Syntax Example run Description Run a Ceph daemon, in a container, in the foreground. Syntax Example shell Description Run an interactive shell with access to Ceph commands over the inferred or specified Ceph cluster. You can enter the shell using the cephadm shell command and run all the orchestrator commands within the shell. Syntax Example unit Description Start, stop, restart, enable, and disable the daemons with this operation. This operates on the daemon's systemd unit. Syntax Example version Description Provides the version of the storage cluster. Syntax Example | [
"cephadm adopt [-h] --name DAEMON_NAME --style STYLE [--cluster CLUSTER ] --legacy-dir [ LEGACY_DIR ] --config-json CONFIG_JSON ] [--skip-firewalld] [--skip-pull]",
"cephadm adopt --style=legacy --name prometheus.host02",
"cephadm ceph-volume inventory/simple/raw/lvm [-h] [--fsid FSID ] [--config-json CONFIG_JSON ] [--config CONFIG , -c CONFIG ] [--keyring KEYRING , -k KEYRING ]",
"cephadm ceph-volume inventory --fsid f64f341c-655d-11eb-8778-fa163e914bcc",
"cephadm check-host [--expect-hostname HOSTNAME ]",
"cephadm check-host --expect-hostname host02",
"cephadm shell deploy DAEMON_TYPE [-h] [--name DAEMON_NAME ] [--fsid FSID ] [--config CONFIG , -c CONFIG ] [--config-json CONFIG_JSON ] [--keyring KEYRING ] [--key KEY ] [--osd-fsid OSD_FSID ] [--skip-firewalld] [--tcp-ports TCP_PORTS ] [--reconfig] [--allow-ptrace] [--memory-request MEMORY_REQUEST ] [--memory-limit MEMORY_LIMIT ] [--meta-json META_JSON ]",
"cephadm shell deploy mon --fsid f64f341c-655d-11eb-8778-fa163e914bcc",
"cephadm enter [-h] [--fsid FSID ] --name NAME [command [command ...]]",
"cephadm enter --name 52c611f2b1d9",
"cephadm help",
"cephadm help",
"cephadm install PACKAGES",
"cephadm install ceph-common ceph-osd",
"cephadm --image IMAGE_ID inspect-image",
"cephadm --image 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a inspect-image",
"cephadm list-networks",
"cephadm list-networks",
"cephadm ls [--no-detail] [--legacy-dir LEGACY_DIR ]",
"cephadm ls --no-detail",
"cephadm logs [--fsid FSID ] --name DAEMON_NAME cephadm logs [--fsid FSID ] --name DAEMON_NAME -- -n NUMBER # Last N lines cephadm logs [--fsid FSID ] --name DAEMON_NAME -- -f # Follow the logs",
"cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 -- -n 20 cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 -- -f",
"cephadm prepare-host [--expect-hostname HOSTNAME ]",
"cephadm prepare-host cephadm prepare-host --expect-hostname host01",
"cephadm [-h] [--image IMAGE_ID ] pull",
"cephadm --image 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a pull",
"cephadm registry-login --registry-url [ REGISTRY_URL ] --registry-username [ USERNAME ] --registry-password [ PASSWORD ] [--fsid FSID ] [--registry-json JSON_FILE ]",
"cephadm registry-login --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1",
"cat REGISTRY_FILE { \"url\":\" REGISTRY_URL \", \"username\":\" REGISTRY_USERNAME \", \"password\":\" REGISTRY_PASSWORD \" }",
"cat registry_file { \"url\":\"registry.redhat.io\", \"username\":\"myuser\", \"password\":\"mypass\" } cephadm registry-login -i registry_file",
"cephadm rm-daemon [--fsid FSID ] [--name DAEMON_NAME ] [--force ] [--force-delete-data]",
"cephadm rm-daemon --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8",
"cephadm rm-cluster [--fsid FSID ] [--force]",
"cephadm rm-cluster --fsid f64f341c-655d-11eb-8778-fa163e914bcc",
"ceph mgr module disable cephadm",
"cephadm rm-repo [-h]",
"cephadm rm-repo",
"cephadm run [--fsid FSID ] --name DAEMON_NAME",
"cephadm run --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8",
"cephadm shell [--fsid FSID ] [--name DAEMON_NAME , -n DAEMON_NAME ] [--config CONFIG , -c CONFIG ] [--mount MOUNT , -m MOUNT ] [--keyring KEYRING , -k KEYRING ] [--env ENV , -e ENV ]",
"cephadm shell -- ceph orch ls cephadm shell",
"cephadm unit [--fsid FSID ] --name DAEMON_NAME start/stop/restart/enable/disable",
"cephadm unit --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8 start",
"cephadm version",
"cephadm version"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/installation_guide/the-cephadm-commands_install |
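As an orientation only, the subcommands documented above are often combined during routine maintenance. The following shell sketch strings together several of the documented commands; the FSID and daemon name are illustrative placeholders, not values taken from this reference.

FSID=f64f341c-655d-11eb-8778-fa163e914bcc   # example cluster FSID (placeholder)
DAEMON=osd.8                                # example daemon name (placeholder)

# Report the cephadm version
cephadm version

# Restart a single daemon through its systemd unit
cephadm unit --fsid "$FSID" --name "$DAEMON" restart

# Follow that daemon's logs
cephadm logs --fsid "$FSID" --name "$DAEMON" -- -f

# Open an interactive shell with access to the orchestrator commands
cephadm shell -- ceph orch ls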
Chapter 4. AlertRelabelConfig [monitoring.openshift.io/v1] | Chapter 4. AlertRelabelConfig [monitoring.openshift.io/v1] Description AlertRelabelConfig defines a set of relabel configs for alerts. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec describes the desired state of this AlertRelabelConfig object. status object status describes the current state of this AlertRelabelConfig object. 4.1.1. .spec Description spec describes the desired state of this AlertRelabelConfig object. Type object Required configs Property Type Description configs array configs is a list of sequentially evaluated alert relabel configs. configs[] object RelabelConfig allows dynamic rewriting of label sets for alerts. See Prometheus documentation: - https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs - https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config 4.1.2. .spec.configs Description configs is a list of sequentially evaluated alert relabel configs. Type array 4.1.3. .spec.configs[] Description RelabelConfig allows dynamic rewriting of label sets for alerts. See Prometheus documentation: - https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs - https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string action to perform based on regex matching. Must be one of: 'Replace', 'Keep', 'Drop', 'HashMod', 'LabelMap', 'LabelDrop', or 'LabelKeep'. Default is: 'Replace' modulus integer modulus to take of the hash of the source label values. This can be combined with the 'HashMod' action to set 'target_label' to the 'modulus' of a hash of the concatenated 'source_labels'. This is only valid if sourceLabels is not empty and action is not 'LabelKeep' or 'LabelDrop'. regex string regex against which the extracted value is matched. Default is: '(.*)' regex is required for all actions except 'HashMod' replacement string replacement value against which a regex replace is performed if the regular expression matches. This is required if the action is 'Replace' or 'LabelMap' and forbidden for actions 'LabelKeep' and 'LabelDrop'. Regex capture groups are available. Default is: 'USD1' separator string separator placed between concatenated source label values. When omitted, Prometheus will use its default value of ';'. sourceLabels array (string) sourceLabels select values from existing labels. 
Their content is concatenated using the configured separator and matched against the configured regular expression for the 'Replace', 'Keep', and 'Drop' actions. Not allowed for actions 'LabelKeep' and 'LabelDrop'. targetLabel string targetLabel to which the resulting value is written in a 'Replace' action. It is required for 'Replace' and 'HashMod' actions and forbidden for actions 'LabelKeep' and 'LabelDrop'. Regex capture groups are available. 4.1.4. .status Description status describes the current state of this AlertRelabelConfig object. Type object Property Type Description conditions array conditions contains details on the state of the AlertRelabelConfig, may be empty. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 4.1.5. .status.conditions Description conditions contains details on the state of the AlertRelabelConfig, may be empty. Type array 4.1.6. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. 
The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 4.2. API endpoints The following API endpoints are available: /apis/monitoring.openshift.io/v1/alertrelabelconfigs GET : list objects of kind AlertRelabelConfig /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertrelabelconfigs DELETE : delete collection of AlertRelabelConfig GET : list objects of kind AlertRelabelConfig POST : create an AlertRelabelConfig /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertrelabelconfigs/{name} DELETE : delete an AlertRelabelConfig GET : read the specified AlertRelabelConfig PATCH : partially update the specified AlertRelabelConfig PUT : replace the specified AlertRelabelConfig /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertrelabelconfigs/{name}/status GET : read status of the specified AlertRelabelConfig PATCH : partially update status of the specified AlertRelabelConfig PUT : replace status of the specified AlertRelabelConfig 4.2.1. /apis/monitoring.openshift.io/v1/alertrelabelconfigs HTTP method GET Description list objects of kind AlertRelabelConfig Table 4.1. HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfigList schema 401 - Unauthorized Empty 4.2.2. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertrelabelconfigs HTTP method DELETE Description delete collection of AlertRelabelConfig Table 4.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind AlertRelabelConfig Table 4.3. HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfigList schema 401 - Unauthorized Empty HTTP method POST Description create an AlertRelabelConfig Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body AlertRelabelConfig schema Table 4.6. HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfig schema 201 - Created AlertRelabelConfig schema 202 - Accepted AlertRelabelConfig schema 401 - Unauthorized Empty 4.2.3. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertrelabelconfigs/{name} Table 4.7. Global path parameters Parameter Type Description name string name of the AlertRelabelConfig HTTP method DELETE Description delete an AlertRelabelConfig Table 4.8. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified AlertRelabelConfig Table 4.10. HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified AlertRelabelConfig Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.12. HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified AlertRelabelConfig Table 4.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.14. Body parameters Parameter Type Description body AlertRelabelConfig schema Table 4.15. 
HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfig schema 201 - Created AlertRelabelConfig schema 401 - Unauthorized Empty 4.2.4. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertrelabelconfigs/{name}/status Table 4.16. Global path parameters Parameter Type Description name string name of the AlertRelabelConfig HTTP method GET Description read status of the specified AlertRelabelConfig Table 4.17. HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified AlertRelabelConfig Table 4.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.19. HTTP responses HTTP code Reponse body 200 - OK AlertRelabelConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified AlertRelabelConfig Table 4.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.21. Body parameters Parameter Type Description body AlertRelabelConfig schema Table 4.22. 
HTTP responses HTTP code Response body 200 - OK AlertRelabelConfig schema 201 - Created AlertRelabelConfig schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/monitoring_apis/alertrelabelconfig-monitoring-openshift-io-v1
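To make the schema above concrete, the following is a minimal sketch of creating an AlertRelabelConfig with a single relabel config, using only fields described in this chapter. The resource name, the target namespace, and the label selected by sourceLabels are hypothetical placeholders; Drop is one of the documented action values.

oc apply -f - <<'EOF'
apiVersion: monitoring.openshift.io/v1
kind: AlertRelabelConfig
metadata:
  name: drop-example-alerts        # hypothetical name
  namespace: <target_namespace>    # the resource is namespaced; substitute the namespace your platform expects
spec:
  configs:                         # configs are evaluated sequentially
    - sourceLabels: [alertname]    # hypothetical label selection; values are concatenated with the separator
      regex: "ExampleAlwaysFiring" # matched against the concatenated source label values
      action: Drop                 # one of: Replace, Keep, Drop, HashMod, LabelMap, LabelDrop, LabelKeep
EOF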
Chapter 5. Network considerations for NFV | Chapter 5. Network considerations for NFV The undercloud host requires at least the following networks: Provisioning network - Provides DHCP and PXE-boot functions to help discover bare-metal systems for use in the overcloud. External network - A separate network for remote connectivity to all nodes. The interface connecting to this network requires a routable IP address, either defined statically, or generated dynamically from an external DHCP service. The minimal overcloud network configuration includes the following NIC configurations: Single NIC configuration - One NIC for the provisioning network on the native VLAN and tagged VLANs that use subnets for the different overcloud network types. Dual NIC configuration - One NIC for the provisioning network and the other NIC for the external network. Dual NIC configuration - One NIC for the provisioning network on the native VLAN, and the other NIC for tagged VLANs that use subnets for different overcloud network types. Multiple NIC configuration - Each NIC uses a subnet for a different overcloud network type. For more information on the networking requirements, see Preparing your undercloud networking in the Director Installation and Usage guide. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/network-consid-nfv_rhosp-nfv |
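Purely as an illustration of the second dual NIC layout described above (provisioning on the native VLAN of one NIC, tagged VLANs on the other), the sketch below writes an os-net-config style network_config file. The interface names, VLAN IDs, and addresses are assumptions for illustration only; the NIC configuration templates shipped with director remain the authoritative starting point.

cat > dual-nic-vlans-sketch.yaml <<'EOF'
# Illustrative os-net-config style layout (assumed names and addresses)
network_config:
  - type: interface
    name: nic1                    # provisioning network on the native VLAN
    use_dhcp: false
    addresses:
      - ip_netmask: 192.0.2.10/24
  - type: interface
    name: nic2                    # trunk carrying the tagged overcloud VLANs
    use_dhcp: false
  - type: vlan
    device: nic2
    vlan_id: 20                   # example: one overcloud network type
    addresses:
      - ip_netmask: 172.16.20.10/24
  - type: vlan
    device: nic2
    vlan_id: 30                   # example: another overcloud network type
    addresses:
      - ip_netmask: 172.16.30.10/24
EOF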
Chapter 10. Log storage | Chapter 10. Log storage 10.1. About log storage You can use an internal Loki or Elasticsearch log store on your cluster for storing logs, or you can use a ClusterLogForwarder custom resource (CR) to forward logs to an external store. 10.1.1. Log storage types Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Elasticsearch indexes incoming log records completely during ingestion. Loki indexes only a few fixed labels during ingestion and defers more complex parsing until after the logs have been stored. This means Loki can collect logs more quickly. 10.1.1.1. About the Elasticsearch log store The logging Elasticsearch instance is optimized and tested for short term storage, approximately seven days. If you want to retain your logs over a longer term, it is recommended you move the data to a third-party storage system. Elasticsearch organizes the log data from Fluentd into datastores, or indices , then subdivides each index into multiple pieces called shards , which it spreads across a set of Elasticsearch nodes in an Elasticsearch cluster. You can configure Elasticsearch to make copies of the shards, called replicas , which Elasticsearch also spreads across the Elasticsearch nodes. The ClusterLogging custom resource (CR) allows you to specify how the shards are replicated to provide data redundancy and resilience to failure. You can also specify how long the different types of logs are retained using a retention policy in the ClusterLogging CR. Note The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. The Red Hat OpenShift Logging Operator and companion OpenShift Elasticsearch Operator ensure that each Elasticsearch node is deployed using a unique deployment that includes its own storage volume. You can use a ClusterLogging custom resource (CR) to increase the number of Elasticsearch nodes, as needed. See the Elasticsearch documentation for considerations involved in configuring storage. Note A highly-available Elasticsearch environment requires at least three Elasticsearch nodes, each on a different host. Role-based access control (RBAC) applied on the Elasticsearch indices enables the controlled access of the logs to the developers. Administrators can access all logs and developers can access only the logs in their projects. 10.1.2. Querying log stores You can query Loki by using the LogQL log query language . 10.1.3. Additional resources Loki components documentation Loki Object Storage documentation 10.2. Installing log storage You can use the OpenShift CLI ( oc ) or the Red Hat OpenShift Service on AWS web console to deploy a log store on your Red Hat OpenShift Service on AWS cluster. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. 
As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . 10.2.1. Deploying a Loki log store You can use the Loki Operator to deploy an internal Loki log store on your Red Hat OpenShift Service on AWS cluster. After install the Loki Operator, you must configure Loki object storage by creating a secret, and create a LokiStack custom resource (CR). 10.2.1.1. Loki deployment sizing Sizing for Loki follows the format of 1x.<size> where the value 1x is number of instances and <size> specifies performance capabilities. Important It is not possible to change the number 1x for the deployment size. Table 10.1. Loki sizing 1x.demo 1x.extra-small 1x.small 1x.medium Data transfer Demo use only 100GB/day 500GB/day 2TB/day Queries per second (QPS) Demo use only 1-25 QPS at 200ms 25-50 QPS at 200ms 25-75 QPS at 200ms Replication factor None 2 2 2 Total CPU requests None 14 vCPUs 34 vCPUs 54 vCPUs Total CPU requests if using the ruler None 16 vCPUs 42 vCPUs 70 vCPUs Total memory requests None 31Gi 67Gi 139Gi Total memory requests if using the ruler None 35Gi 83Gi 171Gi Total disk requests 40Gi 430Gi 430Gi 590Gi Total disk requests if using the ruler 80Gi 750Gi 750Gi 910Gi 10.2.1.2. Installing Logging and the Loki Operator using the web console To install and configure logging on your Red Hat OpenShift Service on AWS cluster, an Operator such as Loki Operator for log storage must be installed first. This can be done from the OperatorHub within the web console. Prerequisites You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). You have administrator permissions. You have access to the Red Hat OpenShift Service on AWS web console. Procedure In the Red Hat OpenShift Service on AWS web console Administrator perspective, go to Operators OperatorHub . Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install . Important The Community Loki Operator is not supported by Red Hat. Select stable or stable-x.y as the Update channel . Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . The Loki Operator must be deployed to the global operator group namespace openshift-operators-redhat , so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it is created for you. Select Enable Operator-recommended cluster monitoring on this namespace. This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. For Update approval select Automatic , then click Install . If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Install the Red Hat OpenShift Logging Operator: In the Red Hat OpenShift Service on AWS web console, click Operators OperatorHub . 
Choose Red Hat OpenShift Logging from the list of available Operators, and click Install . Ensure that the A specific namespace on the cluster is selected under Installation Mode . Ensure that Operator recommended namespace is openshift-logging under Installed Namespace . Select Enable Operator recommended cluster monitoring on this namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace. Select stable-5.y as the Update Channel . Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Go to the Operators Installed Operators page. Click the All instances tab. From the Create new drop-down list, select LokiStack . Select YAML view , and then use the following template to create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: "<yyyy>-<mm>-<dd>" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8 1 Use the name logging-loki . 2 You must specify the openshift-logging namespace. 3 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 4 Specify the name of your log store secret. 5 Specify the corresponding storage type. 6 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 7 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 8 LokiStack defaults to running in multi-tenant mode, which cannot be modified. One tenant is provided for each log type: audit, infrastructure, and application logs. This enables access control for individual users and user groups to different log streams. Important It is not possible to change the number 1x for the deployment size. Click Create . Create an OpenShift Logging instance: Switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition details page, select View Instances from the Actions menu. On the ClusterLoggings page, click Create ClusterLogging . You might have to refresh the page to load the data. 
In the YAML field, replace the code with the following: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed 1 Name must be instance . 2 Namespace must be openshift-logging . Verification Go to Operators Installed Operators . Make sure the openshift-logging project is selected. In the Status column, verify that you see green checkmarks with InstallSucceeded and the text Up to date . Note An Operator might display a Failed status before the installation finishes. If the Operator install completes with an InstallSucceeded message, refresh the page. 10.2.1.3. Creating a secret for Loki object storage by using the web console To configure Loki object storage, you must create a secret. You can create a secret by using the Red Hat OpenShift Service on AWS web console. Prerequisites You have administrator permissions. You have access to the Red Hat OpenShift Service on AWS web console. You installed the Loki Operator. Procedure Go to Workloads Secrets in the Administrator perspective of the Red Hat OpenShift Service on AWS web console. From the Create drop-down list, select From YAML . Create a secret that uses the access_key_id and access_key_secret fields to specify your credentials and the bucketnames , endpoint , and region fields to define the object storage location. AWS is used in the following example: Example Secret object apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1 Additional resources Loki object storage 10.2.1.4. Workload identity federation Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. Prerequisites Red Hat OpenShift Service on AWS 4.14 and later Logging 5.9 and later Procedure If you use the Red Hat OpenShift Service on AWS web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. If you use the OpenShift CLI ( oc ) to install the Loki Operator, you must manually create a subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated. 
Azure sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-5.9" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region> AWS sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-5.9" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN> 10.2.1.5. Creating a LokiStack custom resource by using the web console You can create a LokiStack custom resource (CR) by using the Red Hat OpenShift Service on AWS web console. Prerequisites You have administrator permissions. You have access to the Red Hat OpenShift Service on AWS web console. You installed the Loki Operator. Procedure Go to the Operators Installed Operators page. Click the All instances tab. From the Create new drop-down list, select LokiStack . Select YAML view , and then use the following template to create a LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging 1 Use the name logging-loki . 2 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 3 Specify the secret used for your log storage. 4 Specify the corresponding storage type. 5 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 6 Enter the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 10.2.1.6. Installing Logging and the Loki Operator using the CLI To install and configure logging on your Red Hat OpenShift Service on AWS cluster, an Operator such as Loki Operator for log storage must be installed first. This can be done from the Red Hat OpenShift Service on AWS CLI. Prerequisites You have administrator permissions. You installed the OpenShift CLI ( oc ). You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. Note The stable channel only provides updates to the most recent release of logging. 
To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . Create a Namespace object for Loki Operator: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify the openshift-operators-redhat namespace. To prevent possible conflicts with metrics, you should configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community Operators, which are untrusted and could publish a metric with the same name as an Red Hat OpenShift Service on AWS metric, which would cause conflicts. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object for Loki Operator: Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-operators-redhat namespace. 2 Specify stable , or stable-5.<y> as the channel. 3 Specify redhat-operators . If your Red Hat OpenShift Service on AWS cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a namespace object for the Red Hat OpenShift Logging Operator: Example namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-logging: "true" openshift.io/cluster-monitoring: "true" 2 1 The Red Hat OpenShift Logging Operator is only deployable to the openshift-logging namespace. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 1 You must specify the openshift-logging namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-logging namespace. 2 Specify stable , or stable-5.<y> as the channel. 3 Specify redhat-operators . 
If your Red Hat OpenShift Service on AWS cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: "<yyyy>-<mm>-<dd>" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8 1 Use the name logging-loki . 2 You must specify the openshift-logging namespace. 3 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 4 Specify the name of your log store secret. 5 Specify the corresponding storage type. 6 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 7 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 8 LokiStack defaults to running in multi-tenant mode, which cannot be modified. One tenant is provided for each log type: audit, infrastructure, and application logs. This enables access control for individual users and user groups to different log streams. Apply the LokiStack CR object by running the following command: USD oc apply -f <filename>.yaml Create a ClusterLogging CR object: Example ClusterLogging CR object apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed 1 Name must be instance . 2 Namespace must be openshift-logging . 
Apply the ClusterLogging CR object by running the following command: USD oc apply -f <filename>.yaml Verify the installation by running the following command: USD oc get pods -n openshift-logging Example output USD oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m 10.2.1.7. Creating a secret for Loki object storage by using the CLI To configure Loki object storage, you must create a secret. You can do this by using the OpenShift CLI ( oc ). Prerequisites You have administrator permissions. You installed the Loki Operator. You installed the OpenShift CLI ( oc ). Procedure Create a secret in the directory that contains your certificate and key files by running the following command: USD oc create secret generic -n openshift-logging <your_secret_name> \ --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password> Note Use generic or opaque secrets for best results. Verification Verify that a secret was created by running the following command: USD oc get secrets Additional resources Loki object storage 10.2.1.8. Creating a LokiStack custom resource by using the CLI You can create a LokiStack custom resource (CR) by using the OpenShift CLI ( oc ). Prerequisites You have administrator permissions. You installed the Loki Operator. You installed the OpenShift CLI ( oc ). Procedure Create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging 1 Use the name logging-loki . 2 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 3 Specify the secret used for your log storage. 4 Specify the corresponding storage type. 5 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. 
token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 6 Enter the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. Apply the LokiStack CR by running the following command: Verification Verify the installation by listing the pods in the openshift-logging project by running the following command and observing the output: USD oc get pods -n openshift-logging Confirm that you see several pods for components of the logging, similar to the following list: Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m collector-6cglq 2/2 Running 0 45s collector-8r664 2/2 Running 0 45s collector-8z7px 2/2 Running 0 45s collector-pdxl9 2/2 Running 0 45s collector-tc9dx 2/2 Running 0 45s collector-xkd76 2/2 Running 0 45s logging-loki-compactor-0 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s logging-loki-index-gateway-0 1/1 Running 0 8m2s logging-loki-index-gateway-1 1/1 Running 0 7m29s logging-loki-ingester-0 1/1 Running 0 8m2s logging-loki-ingester-1 1/1 Running 0 6m46s logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s 10.2.2. Loki object storage The Loki Operator supports AWS S3 , as well as other S3 compatible object stores such as Minio and OpenShift Data Foundation . Azure , GCS , and Swift are also supported. The recommended nomenclature for Loki storage is logging-loki- <your_storage_provider> . The following table shows the type values within the LokiStack custom resource (CR) for each storage provider. For more information, see the section on your storage provider. Table 10.2. Secret type quick reference Storage provider Secret type value AWS s3 Azure azure Google Cloud gcs Minio s3 OpenShift Data Foundation s3 Swift swift 10.2.2.1. AWS storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on AWS. You created an AWS IAM Policy and IAM User . Procedure Create an object storage secret with the name logging-loki-aws by running the following command: USD oc create secret generic logging-loki-aws \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="<aws_bucket_endpoint>" \ --from-literal=access_key_id="<aws_access_key_id>" \ --from-literal=access_key_secret="<aws_access_key_secret>" \ --from-literal=region="<aws_region_of_your_bucket>" 10.2.2.1.1. AWS storage for STS enabled clusters If your cluster has STS enabled, the Cloud Credential Operator (CCO) supports short-term authentication using AWS tokens. You can create the Loki object storage secret manually by running the following command: USD oc -n openshift-logging create secret generic "logging-loki-aws" \ --from-literal=bucketnames="<s3_bucket_name>" \ --from-literal=region="<bucket_region>" \ --from-literal=audience="<oidc_audience>" 1 1 Optional annotation, default value is openshift . 10.2.2.2. 
Azure storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on Azure. Procedure Create an object storage secret with the name logging-loki-azure by running the following command: USD oc create secret generic logging-loki-azure \ --from-literal=container="<azure_container_name>" \ --from-literal=environment="<azure_environment>" \ 1 --from-literal=account_name="<azure_account_name>" \ --from-literal=account_key="<azure_account_key>" 1 Supported environment values are AzureGlobal , AzureChinaCloud , AzureGermanCloud , or AzureUSGovernment . 10.2.2.2.1. Azure storage for Microsoft Entra Workload ID enabled clusters If your cluster has Microsoft Entra Workload ID enabled, the Cloud Credential Operator (CCO) supports short-term authentication using Workload ID. You can create the Loki object storage secret manually by running the following command: USD oc -n openshift-logging create secret generic logging-loki-azure \ --from-literal=environment="<azure_environment>" \ --from-literal=account_name="<storage_account_name>" \ --from-literal=container="<container_name>" 10.2.2.3. Google Cloud Platform storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a project on Google Cloud Platform (GCP). You created a bucket in the same project. You created a service account in the same project for GCP authentication. Procedure Copy the service account credentials received from GCP into a file called key.json . Create an object storage secret with the name logging-loki-gcs by running the following command: USD oc create secret generic logging-loki-gcs \ --from-literal=bucketname="<bucket_name>" \ --from-file=key.json="<path/to/key.json>" 10.2.2.4. Minio storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You have Minio deployed on your cluster. You created a bucket on Minio. Procedure Create an object storage secret with the name logging-loki-minio by running the following command: USD oc create secret generic logging-loki-minio \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="<minio_bucket_endpoint>" \ --from-literal=access_key_id="<minio_access_key_id>" \ --from-literal=access_key_secret="<minio_access_key_secret>" 10.2.2.5. OpenShift Data Foundation storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You deployed OpenShift Data Foundation . You configured your OpenShift Data Foundation cluster for object storage . 
Procedure Create an ObjectBucketClaim custom resource in the openshift-logging namespace: apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: loki-bucket-odf namespace: openshift-logging spec: generateBucketName: loki-bucket-odf storageClassName: openshift-storage.noobaa.io Get bucket properties from the associated ConfigMap object by running the following command: BUCKET_HOST=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}') BUCKET_NAME=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}') BUCKET_PORT=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}') Get bucket access key from the associated secret by running the following command: ACCESS_KEY_ID=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d) SECRET_ACCESS_KEY=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d) Create an object storage secret with the name logging-loki-odf by running the following command: USD oc create -n openshift-logging secret generic logging-loki-odf \ --from-literal=access_key_id="<access_key_id>" \ --from-literal=access_key_secret="<secret_access_key>" \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="https://<bucket_host>:<bucket_port>" 10.2.2.6. Swift storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on Swift. Procedure Create an object storage secret with the name logging-loki-swift by running the following command: USD oc create secret generic logging-loki-swift \ --from-literal=auth_url="<swift_auth_url>" \ --from-literal=username="<swift_usernameclaim>" \ --from-literal=user_domain_name="<swift_user_domain_name>" \ --from-literal=user_domain_id="<swift_user_domain_id>" \ --from-literal=user_id="<swift_user_id>" \ --from-literal=password="<swift_password>" \ --from-literal=domain_id="<swift_domain_id>" \ --from-literal=domain_name="<swift_domain_name>" \ --from-literal=container_name="<swift_container_name>" You can optionally provide project-specific data, region, or both by running the following command: USD oc create secret generic logging-loki-swift \ --from-literal=auth_url="<swift_auth_url>" \ --from-literal=username="<swift_usernameclaim>" \ --from-literal=user_domain_name="<swift_user_domain_name>" \ --from-literal=user_domain_id="<swift_user_domain_id>" \ --from-literal=user_id="<swift_user_id>" \ --from-literal=password="<swift_password>" \ --from-literal=domain_id="<swift_domain_id>" \ --from-literal=domain_name="<swift_domain_name>" \ --from-literal=container_name="<swift_container_name>" \ --from-literal=project_id="<swift_project_id>" \ --from-literal=project_name="<swift_project_name>" \ --from-literal=project_domain_id="<swift_project_domain_id>" \ --from-literal=project_domain_name="<swift_project_domain_name>" \ --from-literal=region="<swift_region>" 10.2.3. Deploying an Elasticsearch log store You can use the OpenShift Elasticsearch Operator to deploy an internal Elasticsearch log store on your Red Hat OpenShift Service on AWS cluster. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. 
As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . 10.2.3.1. Storage considerations for Elasticsearch A persistent volume is required for each Elasticsearch deployment configuration. On Red Hat OpenShift Service on AWS this is achieved using persistent volume claims (PVCs). Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name. Fluentd ships any logs from systemd journal and /var/log/containers/*.log to Elasticsearch. Elasticsearch requires sufficient memory to perform large merge operations. If it does not have enough memory, it becomes unresponsive. To avoid this problem, evaluate how much application log data you need, and allocate approximately double that amount of free storage capacity. By default, when storage capacity is 85% full, Elasticsearch stops allocating new data to the node. At 90%, Elasticsearch attempts to relocate existing shards from that node to other nodes if possible. But if no nodes have a free capacity below 85%, Elasticsearch effectively rejects creating new indices and becomes RED. Note These low and high watermark values are Elasticsearch defaults in the current release. You can modify these default values. Although the alerts use the same default values, you cannot change these values in the alerts. 10.2.3.2. Installing the OpenShift Elasticsearch Operator by using the web console The OpenShift Elasticsearch Operator creates and manages the Elasticsearch cluster used by OpenShift Logging. Prerequisites Elasticsearch is a memory-intensive application. Each Elasticsearch node needs at least 16GB of memory for both memory requests and limits, unless you specify otherwise in the ClusterLogging custom resource. The initial set of Red Hat OpenShift Service on AWS nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the Red Hat OpenShift Service on AWS cluster to run with the recommended or higher memory, up to a maximum of 64GB for each Elasticsearch node. Elasticsearch nodes can operate with a lower memory setting, though this is not recommended for production environments. Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Procedure In the Red Hat OpenShift Service on AWS web console, click Operators OperatorHub . Click OpenShift Elasticsearch Operator from the list of available Operators, and click Install . Ensure that the All namespaces on the cluster is selected under Installation mode . Ensure that openshift-operators-redhat is selected under Installed Namespace . You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as Red Hat OpenShift Service on AWS metric, which would cause conflicts. Select Enable operator recommended cluster monitoring on this namespace . 
This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Select stable-5.x as the Update channel . Select an Update approval strategy: The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verification Verify that the OpenShift Elasticsearch Operator installed by switching to the Operators Installed Operators page. Ensure that OpenShift Elasticsearch Operator is listed in all projects with a Status of Succeeded . 10.2.3.3. Installing the OpenShift Elasticsearch Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the OpenShift Elasticsearch Operator. Prerequisites Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Elasticsearch is a memory-intensive application. By default, Red Hat OpenShift Service on AWS installs three Elasticsearch nodes with memory requests and limits of 16 GB. This initial set of three Red Hat OpenShift Service on AWS nodes might not have enough memory to run Elasticsearch within your cluster. If you experience memory issues that are related to Elasticsearch, add more Elasticsearch nodes to your cluster rather than increasing the memory on existing nodes. You have administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Create a Namespace object as a YAML file: apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify the openshift-operators-redhat namespace. To prevent possible conflicts with metrics, configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community Operators, which are untrusted and could publish a metric with the same name as a ROSA metric, which would cause conflicts. 2 String. You must specify this label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object as a YAML file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {} 1 You must specify the openshift-operators-redhat namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object to subscribe the namespace to the OpenShift Elasticsearch Operator: Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: stable-x.y 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator 1 You must specify the openshift-operators-redhat namespace. 
2 Specify stable or stable-x.y as the channel. See the following note. 3 Automatic allows the Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. Manual requires a user with appropriate credentials to approve the Operator update. 4 Specify redhat-operators . If your Red Hat OpenShift Service on AWS cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM). Note Specifying stable installs the current version of the latest stable release. Using stable with installPlanApproval: "Automatic" automatically upgrades your Operators to the latest stable major and minor release. Specifying stable-x.y installs the current minor version of a specific major release. Using stable-x.y with installPlanApproval: "Automatic" automatically upgrades your Operators to the latest stable minor release within the major release. Apply the subscription by running the following command: USD oc apply -f <filename>.yaml The OpenShift Elasticsearch Operator is installed to the openshift-operators-redhat namespace and copied to each project in the cluster. Verification Run the following command: USD oc get csv --all-namespaces Observe the output and confirm that the OpenShift Elasticsearch Operator is listed in each namespace. Example output NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-node-lease elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-public elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-system elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded non-destructive-test elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded ... 10.2.4. Configuring log storage You can configure which log storage type your logging uses by modifying the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch. You have created a ClusterLogging CR. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Procedure Modify the ClusterLogging CR logStore spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ...
logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {} # ... 1 Specify the log store type. This can be either lokistack or elasticsearch . 2 Optional configuration options for the Elasticsearch log store. 3 Specify the redundancy type. This value can be ZeroRedundancy , SingleRedundancy , MultipleRedundancy , or FullRedundancy . 4 Optional configuration options for LokiStack. Example ClusterLogging CR to specify LokiStack as the log store apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki # ... Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 10.3. Configuring the LokiStack log store In logging documentation, LokiStack refers to the logging-supported combination of Loki and a web proxy with Red Hat OpenShift Service on AWS authentication integration. LokiStack's proxy uses Red Hat OpenShift Service on AWS authentication to enforce multi-tenancy. Loki refers to the log store as either the individual component or an external store. 10.3.1. Creating a new group for the cluster-admin user role Important Querying application logs for multiple namespaces as a cluster-admin user, where the sum total of characters of all of the namespaces in the cluster is greater than 5120, results in the error Parse error: input size too long (XXXX > 5120) . For better control over access to logs in LokiStack, make the cluster-admin user a member of the cluster-admin group. If the cluster-admin group does not exist, create it and add the desired users to it. Use the following procedure to create a new group for users with cluster-admin permissions. Procedure Enter the following command to create a new group: USD oc adm groups new cluster-admin Enter the following command to add the desired user to the cluster-admin group: USD oc adm groups add-users cluster-admin <username> Enter the following command to add the cluster-admin user role to the group: USD oc adm policy add-cluster-role-to-group cluster-admin cluster-admin 10.3.2. LokiStack behavior during cluster restarts In logging version 5.8 and newer versions, when a Red Hat OpenShift Service on AWS cluster is restarted, LokiStack ingestion and the query path continue to operate within the CPU and memory resources available to the node. This means that there is no downtime for the LokiStack during Red Hat OpenShift Service on AWS cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. Additional resources Pod disruption budgets Kubernetes documentation 10.3.3. Configuring Loki to tolerate node failure In the logging 5.8 and later versions, the Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node.
In Red Hat OpenShift Service on AWS, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor , distributor , gateway , indexGateway , ingester , querier , queryFrontend , and ruler components. You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: Example user settings for the ingester component apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: ingester: podAntiAffinity: # ... requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname # ... 1 The stanza to define a required rule. 2 The key-value pair (label) that must be matched to apply the rule. Additional resources PodAntiAffinity v1 core Kubernetes documentation Assigning Pods to Nodes Kubernetes documentation Placing pods relative to other pods using affinity and anti-affinity rules 10.3.4. Zone aware data replication In the logging 5.8 and later versions, the Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra.small , 1x.small , or 1x.medium, the replication.factor field is automatically set to 2. To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. Example LokiStack CR with zone replication enabled apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4 1 Deprecated field, values entered are overwritten by replication.factor . 2 This value is automatically set when deployment size is selected at setup. 3 The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. 4 Defines zones in the form of a topology key that corresponds to a node label. 10.3.4.1. Recovering Loki pods from failed zones In Red Hat OpenShift Service on AWS a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider's data center, aimed at enhancing redundancy and fault tolerance. If your Red Hat OpenShift Service on AWS cluster is not configured to handle this, a zone failure can lead to service or data loss. Loki pods are part of a StatefulSet , and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. Warning The following procedure will delete the PVCs in the failed zone, and all data contained therein. 
To avoid complete data loss, the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. Prerequisites Logging version 5.8 or later. Verify your LokiStack CR has a replication factor greater than 1. Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. Procedure List the pods in Pending status by running the following command: oc get pods --field-selector status.phase==Pending -n openshift-logging Example oc get pods output NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m 1 These pods are in Pending status because their corresponding PVCs are in the failed zone. List the PVCs in Pending status by running the following command: oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r Example oc get pvc output storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1 Delete the PVC(s) for a pod by running the following command: oc delete pvc <pvc_name> -n openshift-logging Then delete the pod(s) by running the following command: oc delete pod <pod_name> -n openshift-logging Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. 10.3.4.1.1. Troubleshooting PVC in a terminating state The PVCs might hang in the terminating state without being deleted if the PVC metadata finalizers are set to kubernetes.io/pvc-protection . Removing the finalizers should allow the PVCs to delete successfully. Remove the finalizer for each PVC by running the command below, then retry deletion. oc patch pvc <pvc_name> -p '{"metadata":{"finalizers":null}}' -n openshift-logging Additional resources Topology spread constraints Kubernetes documentation Kubernetes storage documentation . 10.3.5. Fine-grained access for Loki logs In logging 5.8 and later, the Red Hat OpenShift Logging Operator does not grant all users access to logs by default. As an administrator, you must configure your users' access unless the Operator was upgraded and prior configurations are in place. Depending on your configuration and need, you can configure fine-grained access to logs using the following: Cluster wide policies Namespace scoped policies Creation of custom admin groups As an administrator, you need to create the role bindings and cluster role bindings appropriate for your deployment. The Red Hat OpenShift Logging Operator provides the following cluster roles: cluster-logging-application-view grants permission to read application logs. cluster-logging-infrastructure-view grants permission to read infrastructure logs. cluster-logging-audit-view grants permission to read audit logs.
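The following is a minimal CLI sketch for assigning these cluster roles with oc adm policy ; the group system:authenticated , the user testuser-0 , and the namespace log-test-0 are illustrative values that mirror the YAML examples in the sections that follow, not names from your cluster:
USD oc adm policy add-cluster-role-to-group cluster-logging-application-view system:authenticated
USD oc adm policy add-role-to-user cluster-logging-application-view testuser-0 -n log-test-0
The first command creates a cluster-wide binding broadly equivalent to the ClusterRoleBinding example below; the second creates a namespace-scoped RoleBinding broadly equivalent to the RoleBinding example below.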
If you have upgraded from a prior version, an additional cluster role logging-application-logs-reader and associated cluster role binding logging-all-authenticated-application-logs-reader provide backward compatibility, allowing any authenticated user read access in their namespaces. Note Users with access by namespace must provide a namespace when querying application logs. 10.3.5.1. Cluster wide access Cluster role binding resources reference cluster roles, and set permissions cluster wide. Example ClusterRoleBinding kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: logging-all-application-logs-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view 1 subjects: 2 - kind: Group name: system:authenticated apiGroup: rbac.authorization.k8s.io 1 Additional ClusterRoles are cluster-logging-infrastructure-view , and cluster-logging-audit-view . 2 Specifies the users or groups this object applies to. 10.3.5.2. Namespaced access RoleBinding resources can be used with ClusterRole objects to define the namespace a user or group has access to logs for. Example RoleBinding kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: allow-read-logs namespace: log-test-0 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view subjects: - kind: User apiGroup: rbac.authorization.k8s.io name: testuser-0 1 Specifies the namespace this RoleBinding applies to. 10.3.5.3. Custom admin group access If you have a large deployment with several users who require broader permissions, you can create a custom group using the adminGroups field. Users who are members of any group specified in the adminGroups field of the LokiStack CR are considered administrators. Administrator users have access to all application logs in all namespaces, if they also get assigned the cluster-logging-application-view role. Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: tenants: mode: openshift-logging 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3 1 Custom admin groups are only available in this mode. 2 Entering an empty list [] value for this field disables admin groups. 3 Overrides the default groups ( system:cluster-admins , cluster-admin , dedicated-admin ). 10.3.6. Enabling stream-based retention with Loki With Logging version 5.6 and higher, you can configure retention policies based on log streams. Rules for these may be set globally, per tenant, or both. If you configure both, tenant rules apply before global rules. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. Note Although logging version 5.9 and higher supports schema v12, v13 is recommended.
To enable stream-based retention, create a LokiStack CR: Example global stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~"test.+"}' 3 - days: 1 priority: 1 selector: '{log_type="infrastructure"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. 2 Retention is enabled in the cluster when this block is added to the CR. 3 Contains the LogQL query used to define the log stream. Example per-tenant stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~"test.+"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy by tenant. Valid tenant types are application , audit , and infrastructure . 2 Contains the LogQL query used to define the log stream. Apply the LokiStack CR: USD oc apply -f <filename>.yaml Note This setting does not manage the retention for stored logs. Global retention periods for stored logs, up to a supported maximum of 30 days, are configured with your object storage. 10.3.7. Troubleshooting Loki rate limit errors If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit ( 429 ) errors. These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). Important The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The Log Forwarder API is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki. For example: "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ......
\"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} After you enter oc logs -n openshift-logging -l component=collector , the collector logs in your cluster show a line containing one of the following error messages: 429 Too Many Requests Ingestion rate limit exceeded Example Vector error message 2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true Example Fluentd error message 2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n" The error is also visible on the receiving end. For example, in the LokiStack ingester pod: Example Loki ingester error message level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream Procedure Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2 # ... 1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. 2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. 10.3.8. Configuring Loki to tolerate memberlist creation failure In an OpenShift cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack CR to use the podIP in the hashRing spec. To configure the LokiStack CR, use the following command: USD oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP","type": "memberlist"}}}}' Example LokiStack to include podIP apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... 
hashRing: type: memberlist memberlist: instanceAddrType: podIP # ... 10.3.9. Additional resources Loki components documentation Loki Query Language (LogQL) documentation Grafana Dashboard documentation Loki Object Storage documentation Loki Operator IngestionLimitSpec documentation Loki Storage Schema documentation 10.4. Configuring the Elasticsearch log store You can use Elasticsearch 6 to store and organize log data. You can make modifications to your log store, including: Storage for your Elasticsearch cluster Shard replication across data nodes in the cluster, from full replication to no replication External access to Elasticsearch data 10.4.1. Configuring log storage You can configure which log storage type your logging uses by modifying the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch. You have created a ClusterLogging CR. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Procedure Modify the ClusterLogging CR logStore spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {} # ... 1 Specify the log store type. This can be either lokistack or elasticsearch . 2 Optional configuration options for the Elasticsearch log store. 3 Specify the redundancy type. This value can be ZeroRedundancy , SingleRedundancy , MultipleRedundancy , or FullRedundancy . 4 Optional configuration options for LokiStack. Example ClusterLogging CR to specify LokiStack as the log store apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki # ... Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 10.4.2. Forwarding audit logs to the log store In a logging deployment, container and infrastructure logs are forwarded to the internal log store defined in the ClusterLogging custom resource (CR) by default. Audit logs are not forwarded to the internal log store by default because this does not provide secure storage. You are responsible for ensuring that the system to which you forward audit logs is compliant with your organizational and governmental regulations, and is properly secured. If this default configuration meets your needs, you do not need to configure a ClusterLogForwarder CR. If a ClusterLogForwarder CR exists, logs are not forwarded to the internal log store unless a pipeline is defined that contains the default output. Procedure To use the Log Forward API to forward audit logs to the internal Elasticsearch instance: Create or edit a YAML file that defines the ClusterLogForwarder CR object: Create a CR to send all log types to the internal Elasticsearch instance. 
You can use the following example without making any changes: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default 1 A pipeline defines the type of logs to forward using the specified output. The default output forwards logs to the internal Elasticsearch instance. Note You must specify all three types of logs in the pipeline: application, infrastructure, and audit. If you do not specify a log type, those logs are not stored and will be lost. If you have an existing ClusterLogForwarder CR, add a pipeline to the default output for the audit logs. You do not need to define the default output. For example: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: "elasticsearch" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: "elasticsearch" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: "fluentdForward" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1 1 This pipeline sends the audit logs to the internal Elasticsearch instance in addition to an external instance. Additional resources About log collection and forwarding 10.4.3. Configuring log retention time You can configure a retention policy that specifies how long the default Elasticsearch log store keeps indices for each of the three log sources: infrastructure logs, application logs, and audit logs. To configure the retention policy, you set a maxAge parameter for each log source in the ClusterLogging custom resource (CR). The CR applies these values to the Elasticsearch rollover schedule, which determines when Elasticsearch deletes the rolled-over indices. Elasticsearch rolls over an index, moving the current index and creating a new index, when an index matches any of the following conditions: The index is older than the rollover.maxAge value in the Elasticsearch CR. The index size is greater than 40 GB x the number of primary shards. The index doc count is greater than 40960 KB x the number of primary shards. Elasticsearch deletes the rolled-over indices based on the retention policy you configure. If you do not create a retention policy for any log sources, logs are deleted after seven days by default. Prerequisites The Red Hat OpenShift Logging Operator and the OpenShift Elasticsearch Operator must be installed. Procedure To configure the log retention time: Edit the ClusterLogging CR to add or modify the retentionPolicy parameter: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" ... spec: managementState: "Managed" logStore: type: "elasticsearch" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 ... 1 Specify the time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, 1d for one day. Logs older than the maxAge are deleted. 
By default, logs are retained for seven days. You can verify the settings in the Elasticsearch custom resource (CR). For example, the Red Hat OpenShift Logging Operator updated the following Elasticsearch CR to configure a retention policy that includes settings to roll over active indices for the infrastructure logs every eight hours and the rolled-over indices are deleted seven days after rollover. Red Hat OpenShift Service on AWS checks every 15 minutes to determine if the indices need to be rolled over. apiVersion: "logging.openshift.io/v1" kind: "Elasticsearch" metadata: name: "elasticsearch" spec: ... indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4 ... 1 For each log source, the retention policy indicates when to delete and roll over logs for that source. 2 When Red Hat OpenShift Service on AWS deletes the rolled-over indices. This setting is the maxAge you set in the ClusterLogging CR. 3 The index age for Red Hat OpenShift Service on AWS to consider when rolling over the indices. This value is determined from the maxAge you set in the ClusterLogging CR. 4 When Red Hat OpenShift Service on AWS checks if the indices should be rolled over. This setting is the default and cannot be changed. Note Modifying the Elasticsearch CR is not supported. All changes to the retention policies must be made in the ClusterLogging CR. The OpenShift Elasticsearch Operator deploys a cron job to roll over indices for each mapping using the defined policy, scheduled using the pollInterval . USD oc get cronjob Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s 10.4.4. Configuring CPU and memory requests for the log store Each component specification allows for adjustments to both the CPU and memory requests. You should not have to manually adjust these values as the OpenShift Elasticsearch Operator sets values sufficient for your environment. Note In large-scale clusters, the default memory limit for the Elasticsearch proxy container might not be sufficient, causing the proxy container to be OOMKilled. If you experience this issue, increase the memory requests and limits for the Elasticsearch proxy. Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than the default 16Gi allocated to each pod. Preferably you should allocate as much as possible, up to 64Gi per pod. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: 1 resources: limits: 2 memory: "32Gi" requests: 3 cpu: "1" memory: "16Gi" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi 1 Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request. 2 The maximum amount of resources a pod can use. 3 The minimum resources required to schedule a pod. 
4 Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that are sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request. When adjusting the amount of Elasticsearch memory, the same value should be used for both requests and limits . For example: resources: limits: 1 memory: "32Gi" requests: 2 cpu: "8" memory: "32Gi" 1 The maximum amount of the resource. 2 The minimum amount required. Kubernetes generally adheres to the node configuration and does not allow Elasticsearch to use the specified limits. Setting the same value for the requests and limits ensures that Elasticsearch can use the memory you want, assuming the node has the memory available. 10.4.5. Configuring replication policy for the log store You can define how Elasticsearch shards are replicated across data nodes in the cluster. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: redundancyPolicy: "SingleRedundancy" 1 1 Specify a redundancy policy for the shards. The change is applied upon saving the changes. FullRedundancy . Elasticsearch fully replicates the primary shards for each index to every data node. This provides the highest safety, but at the cost of the highest amount of disk required and the poorest performance. MultipleRedundancy . Elasticsearch fully replicates the primary shards for each index to half of the data nodes. This provides a good tradeoff between safety and performance. SingleRedundancy . Elasticsearch makes one copy of the primary shards for each index. Logs are always available and recoverable as long as at least two data nodes exist. This provides better performance than MultipleRedundancy when using 5 or more nodes. You cannot apply this policy on deployments with a single Elasticsearch node. ZeroRedundancy . Elasticsearch does not make copies of the primary shards. Logs might be unavailable or lost in the event a node is down or fails. Use this mode when you are more concerned with performance than safety, or have implemented your own disk/PVC backup/restore strategy. Note The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. 10.4.6. Scaling down Elasticsearch pods Reducing the number of Elasticsearch pods in your cluster can result in data loss or Elasticsearch performance degradation. If you scale down, you should scale down by one pod at a time and allow the cluster to re-balance the shards and replicas. After the Elasticsearch health status returns to green , you can scale down by another pod. Note If your Elasticsearch cluster is set to ZeroRedundancy , you should not scale down your Elasticsearch pods. 10.4.7. Configuring persistent storage for the log store Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance. Warning Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur.
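Before you bind the Elasticsearch data nodes to persistent storage in the next procedure, it can help to confirm which storage classes the cluster offers and to review any PVCs that already exist in the logging project; a minimal check, assuming you are logged in to the cluster with oc :
USD oc get storageclass
USD oc -n openshift-logging get pvc
The storageClassName value that you set in the ClusterLogging CR should match one of the storage classes listed by the first command.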
Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim. apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" # ... spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: storageClassName: "gp2" size: "200G" This example specifies each data node in the cluster is bound to a Persistent Volume Claim that requests "200G" of AWS General Purpose SSD (gp2) storage. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. 10.4.8. Configuring the log store for emptyDir storage You can use emptyDir with your log store, which creates an ephemeral deployment in which all of a pod's data is lost upon restart. Note When using emptyDir, if log storage is restarted or redeployed, you will lose data. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging CR to specify emptyDir: spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: {} 10.4.9. Performing an Elasticsearch rolling cluster restart Perform a rolling restart when you change the elasticsearch config map or any of the elasticsearch-* deployment configurations. Also, a rolling restart is recommended if the node on which an Elasticsearch pod runs requires a reboot. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure To perform a rolling cluster restart: Change to the openshift-logging project: Get the names of the Elasticsearch pods: Scale down the collector pods so they stop sending new logs to Elasticsearch: USD oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "false"}}}}}' Perform a shard synced flush using the Red Hat OpenShift Service on AWS es_util tool to ensure there are no pending operations waiting to be written to disk prior to shutting down: USD oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_flush/synced" -XPOST For example: Example output Prevent shard balancing when purposely bringing down nodes using the Red Hat OpenShift Service on AWS es_util tool: For example: Example output {"acknowledged":true,"persistent":{"cluster":{"routing":{"allocation":{"enable":"primaries"}}}},"transient": After the command is complete, for each deployment you have for an ES cluster: By default, the Red Hat OpenShift Service on AWS Elasticsearch cluster blocks rollouts to its nodes. Use the following command to allow rollouts and allow the pod to pick up the changes: For example: Example output A new pod is deployed. After the pod has a ready container, you can move on to the next deployment. Example output NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h After the deployments are complete, reset the pod to disallow rollouts: For example: Example output Check that the Elasticsearch cluster is in a green or yellow state: Note If you performed a rollout on the Elasticsearch pod you used in the commands, the pod no longer exists and you need a new pod name here.
For example: 1 Make sure this parameter value is green or yellow before proceeding. If you changed the Elasticsearch configuration map, repeat these steps for each Elasticsearch pod. After all the deployments for the cluster have been rolled out, re-enable shard balancing: For example: Example output { "acknowledged" : true, "persistent" : { }, "transient" : { "cluster" : { "routing" : { "allocation" : { "enable" : "all" } } } } } Scale up the collector pods so they send new logs to Elasticsearch. USD oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "true"}}}}}' 10.4.10. Exposing the log store service as a route By default, the log store that is deployed with logging is not accessible from outside the logging cluster. You can enable a route with re-encryption termination for external access to the log store service for those tools that access its data. Externally, you can access the log store by creating a reencrypt route and using your Red Hat OpenShift Service on AWS token and the installed log store CA certificate. Then, access a node that hosts the log store service with a cURL request that contains: The Authorization: Bearer USD{token} The Elasticsearch reencrypt route and an Elasticsearch API request . Internally, you can access the log store service using the log store cluster IP, which you can get by using either of the following commands: USD oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging Example output 172.30.183.229 USD oc get service elasticsearch -n openshift-logging Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h You can check the cluster IP address with a command similar to the following: USD oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://172.30.183.229:9200/_cat/health" Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108 Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. You must have access to the project to be able to access the logs. Procedure To expose the log store externally: Change to the openshift-logging project: USD oc project openshift-logging Extract the CA certificate from the log store and write to the admin-ca file: USD oc extract secret/elasticsearch --to=. --keys=admin-ca Example output admin-ca Create the route for the log store service as a YAML file: Create a YAML file with the following: apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1 1 Add the log store CA certificate or use the command in the following step. You do not have to set the spec.tls.key , spec.tls.certificate , and spec.tls.caCertificate parameters required by some reencrypt routes.
Run the following command to add the log store CA certificate to the route YAML you created in the step: USD cat ./admin-ca | sed -e "s/^/ /" >> <file-name>.yaml Create the route: USD oc create -f <file-name>.yaml Example output route.route.openshift.io/elasticsearch created Check that the Elasticsearch service is exposed: Get the token of this service account to be used in the request: USD token=USD(oc whoami -t) Set the elasticsearch route you created as an environment variable. USD routeES=`oc get route elasticsearch -o jsonpath={.spec.host}` To verify the route was successfully created, run the following command that accesses Elasticsearch through the exposed route: curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://USD{routeES}" The response appears similar to the following: Example output { "name" : "elasticsearch-cdm-i40ktba0-1", "cluster_name" : "elasticsearch", "cluster_uuid" : "0eY-tJzcR3KOdpgeMJo-MQ", "version" : { "number" : "6.8.1", "build_flavor" : "oss", "build_type" : "zip", "build_hash" : "Unknown", "build_date" : "Unknown", "build_snapshot" : true, "lucene_version" : "7.7.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "<tagline>" : "<for search>" } 10.4.11. Removing unused components if you do not use the default Elasticsearch log store As an administrator, in the rare case that you forward logs to a third-party log store and do not use the default Elasticsearch log store, you can remove several unused components from your logging cluster. In other words, if you do not use the default Elasticsearch log store, you can remove the internal Elasticsearch logStore and Kibana visualization components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources. Prerequisites Verify that your log forwarder does not send log data to the default internal Elasticsearch cluster. Inspect the ClusterLogForwarder CR YAML file that you used to configure log forwarding. Verify that it does not have an outputRefs element that specifies default . For example: outputRefs: - default Warning Suppose the ClusterLogForwarder CR forwards log data to the internal Elasticsearch cluster, and you remove the logStore component from the ClusterLogging CR. In that case, the internal Elasticsearch cluster will not be present to store the log data. This absence can cause data loss. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance If they are present, remove the logStore and visualization stanzas from the ClusterLogging CR. Preserve the collection stanza of the ClusterLogging CR. The result should look similar to the following example: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: "openshift-logging" spec: managementState: "Managed" collection: type: "fluentd" fluentd: {} Verify that the collector pods are redeployed: USD oc get pods -l component=collector -n openshift-logging | [
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed",
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-5.9\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-5.9\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-logging: \"true\" openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"oc create secret generic -n openshift-logging <your_secret_name> --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>",
"oc get secrets",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging",
"oc get pods -n openshift-logging",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m collector-6cglq 2/2 Running 0 45s collector-8r664 2/2 Running 0 45s collector-8z7px 2/2 Running 0 45s collector-pdxl9 2/2 Running 0 45s collector-tc9dx 2/2 Running 0 45s collector-xkd76 2/2 Running 0 45s logging-loki-compactor-0 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s logging-loki-index-gateway-0 1/1 Running 0 8m2s logging-loki-index-gateway-1 1/1 Running 0 7m29s logging-loki-ingester-0 1/1 Running 0 8m2s logging-loki-ingester-1 1/1 Running 0 6m46s logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s",
"oc create secret generic logging-loki-aws --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<aws_bucket_endpoint>\" --from-literal=access_key_id=\"<aws_access_key_id>\" --from-literal=access_key_secret=\"<aws_access_key_secret>\" --from-literal=region=\"<aws_region_of_your_bucket>\"",
"oc -n openshift-logging create secret generic \"logging-loki-aws\" --from-literal=bucketnames=\"<s3_bucket_name>\" --from-literal=region=\"<bucket_region>\" --from-literal=audience=\"<oidc_audience>\" 1",
"oc create secret generic logging-loki-azure --from-literal=container=\"<azure_container_name>\" --from-literal=environment=\"<azure_environment>\" \\ 1 --from-literal=account_name=\"<azure_account_name>\" --from-literal=account_key=\"<azure_account_key>\"",
"oc -n openshift-logging create secret generic logging-loki-azure --from-literal=environment=\"<azure_environment>\" --from-literal=account_name=\"<storage_account_name>\" --from-literal=container=\"<container_name>\"",
"oc create secret generic logging-loki-gcs --from-literal=bucketname=\"<bucket_name>\" --from-file=key.json=\"<path/to/key.json>\"",
"oc create secret generic logging-loki-minio --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<minio_bucket_endpoint>\" --from-literal=access_key_id=\"<minio_access_key_id>\" --from-literal=access_key_secret=\"<minio_access_key_secret>\"",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: loki-bucket-odf namespace: openshift-logging spec: generateBucketName: loki-bucket-odf storageClassName: openshift-storage.noobaa.io",
"BUCKET_HOST=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}') BUCKET_NAME=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}') BUCKET_PORT=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}')",
"ACCESS_KEY_ID=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d) SECRET_ACCESS_KEY=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)",
"oc create -n openshift-logging secret generic logging-loki-odf --from-literal=access_key_id=\"<access_key_id>\" --from-literal=access_key_secret=\"<secret_access_key>\" --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"https://<bucket_host>:<bucket_port>\"",
"oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\"",
"oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\" --from-literal=project_id=\"<swift_project_id>\" --from-literal=project_name=\"<swift_project_name>\" --from-literal=project_domain_id=\"<swift_project_domain_id>\" --from-literal=project_domain_name=\"<swift_project_domain_name>\" --from-literal=region=\"<swift_region>\"",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: stable-x.y 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator",
"oc apply -f <filename>.yaml",
"oc get csv -n --all-namespaces",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-node-lease elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-public elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-system elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded non-destructive-test elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"oc adm groups new cluster-admin",
"oc adm groups add-users cluster-admin <username>",
"oc adm policy add-cluster-role-to-group cluster-admin cluster-admin",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"delete pvc __<pvc_name>__ -n openshift-logging",
"delete pod __<pod_name>__ -n openshift-logging",
"patch pvc __<pvc_name>__ -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: logging-all-application-logs-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view 1 subjects: 2 - kind: Group name: system:authenticated apiGroup: rbac.authorization.k8s.io",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: allow-read-logs namespace: log-test-0 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view subjects: - kind: User apiGroup: rbac.authorization.k8s.io name: testuser-0",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: tenants: mode: openshift-logging 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\",\"type\": \"memberlist\"}}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3",
"apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi",
"resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"oc project openshift-logging",
"oc get pods -l component=elasticsearch",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"false\"}}}}}'",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":",
"oc rollout resume deployment/<deployment-name>",
"oc rollout resume deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 resumed",
"oc get pods -l component=elasticsearch-",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h",
"oc rollout pause deployment/<deployment-name>",
"oc rollout pause deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 paused",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"true\"}}}}}'",
"oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging",
"172.30.183.229",
"oc get service elasticsearch -n openshift-logging",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h",
"oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108",
"oc project openshift-logging",
"oc extract secret/elasticsearch --to=. --keys=admin-ca",
"admin-ca",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1",
"cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml",
"oc create -f <file-name>.yaml",
"route.route.openshift.io/elasticsearch created",
"token=USD(oc whoami -t)",
"routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`",
"curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"",
"{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }",
"outputRefs: - default",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: type: \"fluentd\" fluentd: {}",
"oc get pods -l component=collector -n openshift-logging"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/logging/log-storage |
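The listing above for this logging deployment ends once the LokiStack, ClusterLogging, and secret objects have been created and the pods listed. As an optional follow-up that is not part of the original command set, the rollout can be watched with plain oc commands; the openshift-logging namespace is the only detail taken from the listing above:

```shell
# Watch the collector and LokiStack pods until they all report Running or Completed.
oc get pods -n openshift-logging --watch

# Confirm that the LokiStack custom resource was created.
oc get lokistack -n openshift-logging
```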
Chapter 4. Configuring Capsule Servers for Load Balancing | Chapter 4. Configuring Capsule Servers for Load Balancing This chapter outlines how to configure Capsule Servers for load balancing. Proceed to one of the following sections depending on your Satellite Server configuration: Section 4.1, "Configuring Capsule Server with Default SSL Certificates for Load Balancing without Puppet" Section 4.2, "Configuring Capsule Server with Default SSL Certificates for Load Balancing with Puppet" Section 4.3, "Configuring Capsule Server with Custom SSL Certificates for Load Balancing without Puppet" Section 4.4, "Configuring Capsule Server with Custom SSL Certificates for Load Balancing with Puppet" Use different file names for the Katello certificates you create for each Capsule Server. For example, name the certificate archive file with Capsule Server FQDN. 4.1. Configuring Capsule Server with Default SSL Certificates for Load Balancing without Puppet The following section describes how to configure Capsule Servers that use default SSL certificates for load balancing without Puppet. Complete this procedure on each Capsule Server that you want to configure for load balancing. Procedure On Satellite Server, generate Katello certificates for Capsule Server, for example: Retain a copy of the example satellite-installer command that is output by the capsule-certs-generate command for installing Capsule Server certificate. Copy the certificate archive file from Satellite Server to Capsule Server. Append the following options to the satellite-installer command that you obtain from the output of the capsule-certs-generate command. On Capsule Server, enter the satellite-installer command, for example: 4.2. Configuring Capsule Server with Default SSL Certificates for Load Balancing with Puppet The following section describes how to configure Capsule Servers that use default SSL certificates for load balancing with Puppet. If you use Puppet in your Satellite configuration, you must complete the following procedures: Configuring Capsule Server to Generate and Sign Puppet Certificates Configuring Remaining Capsule Servers for Load Balancing Configuring Capsule Server to Generate and Sign Puppet Certificates Complete this procedure only for the system where you want to configure Capsule Server to generate and sign Puppet certificates for all other Capsule Servers that you configure for load balancing. In the examples in this procedure, the FQDN of this Capsule Server is capsule-ca.example.com . On Satellite Server, generate Katello certificates for the system where you configure Capsule Server to generate and sign Puppet certificates: Retain a copy of the example satellite-installer command that is output by the capsule-certs-generate command for installing Capsule Server certificate. 
Copy the certificate archive file from Satellite Server to Capsule Server: Append the following options to the satellite-installer command that you obtain from the output of the capsule-certs-generate command: On Capsule Server, enter the satellite-installer command, for example: On Capsule Server, stop the Puppet server: Generate Puppet certificates for all other Capsule Servers that you configure for load balancing, except the first system where you configure Puppet certificates signing: This command creates the following files on the system where you configure Capsule Server to sign Puppet certificates: /etc/puppetlabs/puppet/ssl/certs/ca.pem /etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem Resume the Puppet server: Configuring Remaining Capsule Servers for Load Balancing Complete this procedure on each Capsule Server excluding the system where you configure Capsule Server to sign Puppet certificates. On Satellite Server, generate Katello certificates for Capsule Server: Retain a copy of the example satellite-installer command that is output by the capsule-certs-generate command for installing Capsule Server certificate. Copy the certificate archive file from Satellite Server to Capsule Server: On Capsule Server, install the puppetserver package: On Capsule Server, create directories for puppet certificates: On Capsule Server, copy the Puppet certificates for this Capsule Server from the system where you configure Capsule Server to sign Puppet certificates: On Capsule Server, change the directory ownership to user puppet , group puppet and set the SELinux contexts: Append the following options to the satellite-installer command that you obtain from the output of the capsule-certs-generate command: On Capsule Server, enter the satellite-installer command, for example: 4.3. Configuring Capsule Server with Custom SSL Certificates for Load Balancing without Puppet The following section describes how to configure Capsule Servers that use custom SSL certificates for load balancing without Puppet. 4.3.1. Creating Custom SSL Certificates for Capsule Server This procedure outlines how to create a configuration file for the Certificate Signing Request and include the load balancer and Capsule Server as Subject Alternative Names (SAN). Complete this procedure on each Capsule Server that you want to configure for load balancing. Procedure On Capsule Server, create a directory to contain all the source certificate files, accessible to only the root user: Create a private key with which to sign the Certificate Signing Request (CSR). Note that the private key must be unencrypted. If you use a password-protected private key, remove the private key password. If you already have a private key for this Capsule Server, skip this step. Create the certificate request configuration file with the following content: 1 The certificate's common name must match the FQDN of Capsule Server. Ensure to change this when running the command on each Capsule Server that you configure for load balancing. You can also set a wildcard value * . If you set a wildcard value, you must add the -t capsule option when you use the katello-certs-check command. 2 Under [alt_names] , include the FQDN of the load balancer as DNS.1 and the FQDN of Capsule Server as DNS.2 . Create a Certificate Signing Request (CSR) for the SAN certificate. 
1 Capsule Server's private key, used to sign the certificate 2 The certificate request configuration file 3 Certificate Signing Request file Send the certificate request to the Certificate Authority: When you submit the request, specify the lifespan of the certificate. The method for sending the certificate request varies, so consult the Certificate Authority for the preferred method. In response to the request, you can expect to receive a Certificate Authority bundle and a signed certificate, in separate files. Copy the Certificate Authority bundle and Capsule Server certificate file that you receive from the Certificate Authority, and Capsule Server private key to your Satellite Server. On Satellite Server, validate Capsule Server certificate input files: 1 Capsule Server certificate file, provided by your Certificate Authority 2 Capsule Server's private key that you used to sign the certificate 3 Certificate Authority bundle, provided by your Certificate Authority If you set the commonName= to a wildcard value * , you must add the -t capsule option to the katello-certs-check command. Retain a copy of the example capsule-certs-generate command that is output by the katello-certs-check command for creating the Certificate Archive File for this Capsule Server. 4.3.2. Configuring Capsule Server with Custom SSL Certificates for Load Balancing without Puppet Complete this procedure on each Capsule Server that you want to configure for load balancing. Procedure Append the following option to the capsule-certs-generate command that you obtain from the output of the katello-certs-check command: On Satellite Server, enter the capsule-certs-generate command to generate Capsule certificates. For example: Retain a copy of the example satellite-installer command from the output for installing Capsule Server certificates. Copy the certificate archive file from Satellite Server to Capsule Server: Append the following options to the satellite-installer command that you obtain from the output of the capsule-certs-generate command. On Capsule Server, enter the satellite-installer command, for example: 4.4. Configuring Capsule Server with Custom SSL Certificates for Load Balancing with Puppet The following section describes how to configure Capsule Servers that use custom SSL certificates for load balancing with Puppet. 4.4.1. Creating Custom SSL Certificates for Capsule Server This procedure outlines how to create a configuration file for the Certificate Signing Request and include the load balancer and Capsule Server as Subject Alternative Names (SAN). Complete this procedure on each Capsule Server that you want to configure for load balancing. Procedure On Capsule Server, create a directory to contain all the source certificate files, accessible to only the root user: Create a private key with which to sign the Certificate Signing Request (CSR). Note that the private key must be unencrypted. If you use a password-protected private key, remove the private key password. If you already have a private key for this Capsule Server, skip this step. Create the certificate request configuration file with the following content: 1 The certificate's common name must match the FQDN of Capsule Server. Ensure to change this when running the command on each Capsule Server. You can also set a wildcard value * . If you set a wildcard value, you must add the -t capsule option when you use the katello-certs-check command. 
2 Under [alt_names] , include the FQDN of the load balancer as DNS.1 and the FQDN of Capsule Server as DNS.2 . Create a Certificate Signing Request (CSR) for the SAN certificate: 1 Capsule Server's private key, used to sign the certificate 2 The certificate request configuration file 3 Certificate Signing Request file Send the certificate request to the Certificate Authority: When you submit the request, specify the lifespan of the certificate. The method for sending the certificate request varies, so consult the Certificate Authority for the preferred method. In response to the request, you can expect to receive a Certificate Authority bundle and a signed certificate, in separate files. Copy the Certificate Authority bundle and Capsule Server certificate file that you receive from the Certificate Authority, and Capsule Server private key to your Satellite Server to validate them. On Satellite Server, validate Capsule Server certificate input files: 1 Capsule Server certificate file, provided by your Certificate Authority 2 Capsule Server's private key that you used to sign the certificate 3 Certificate Authority bundle, provided by your Certificate Authority If you set the commonName= to a wildcard value * , you must add the -t capsule option to the katello-certs-check command. Retain a copy of the example capsule-certs-generate command that is output by the katello-certs-check command for creating the Certificate Archive File for this Capsule Server. 4.4.2. Configuring Capsule Server with Custom SSL Certificates for Load Balancing with Puppet If you use Puppet in your Satellite configuration, then you must complete the following procedures: Configuring Capsule Server to Generate and Sign Puppet Certificates Configuring Remaining Capsule Servers for Load Balancing Configuring Capsule Server to Generate and Sign Puppet Certificates Complete this procedure only for the system where you want to configure Capsule Server to generate Puppet certificates for all other Capsule Servers that you configure for load balancing. In the examples in this procedure, the FQDN of this Capsule Server is capsule-ca.example.com . Append the following option to the capsule-certs-generate command that you obtain from the output of the katello-certs-check command: On Satellite Server, enter the capsule-certs-generate command to generate Capsule certificates. For example: Retain a copy of the example satellite-installer command from the output for installing Capsule Server certificates. Copy the certificate archive file from Satellite Server to Capsule Server. Append the following options to the satellite-installer command that you obtain from the output of the capsule-certs-generate command: On Capsule Server, enter the satellite-installer command, for example: On Capsule Server, generate Puppet certificates for all other Capsules that you configure for load balancing, except this first system where you configure Puppet certificates signing: This command creates the following files on the Puppet certificate signing Capsule Server instance: /etc/puppetlabs/puppet/ssl/certs/ca.pem /etc/puppetlabs/puppet/ssl/certs/capsule.example.com.pem /etc/puppetlabs/puppet/ssl/private_keys/capsule.example.com.pem /etc/puppetlabs/puppet/ssl/public_keys/capsule.example.com.pem Configuring Remaining Capsule Servers for Load Balancing Complete this procedure for each Capsule Server excluding the system where you configure Capsule Server to sign Puppet certificates. 
Append the following option to the capsule-certs-generate command that you obtain from the output of the katello-certs-check command: On Satellite Server, enter the capsule-certs-generate command to generate Capsule certificates. For example: Retain a copy of the example satellite-installer command from the output for installing Capsule Server certificates. Copy the certificate archive file from Satellite Server to Capsule Server. On Capsule Server, install the puppetserver package: On Capsule Server, create directories for puppet certificates: On Capsule Server, copy the Puppet certificates for this Capsule Server from the system where you configure Capsule Server to sign Puppet certificates: On Capsule Server, change the directory ownership to user puppet , group puppet and set the SELinux contexts: Append the following options to the satellite-installer command that you obtain from the output of the capsule-certs-generate command: On Capsule Server, enter the satellite-installer command, for example: | [
"capsule-certs-generate --foreman-proxy-fqdn capsule.example.com --certs-tar \"/root/ capsule.example.com -certs.tar\" --foreman-proxy-cname loadbalancer.example.com",
"scp /root/ capsule.example.com -certs.tar root@ capsule.example.com :/root/ capsule.example.com -certs.tar",
"--certs-cname \" loadbalancer.example.com \" --enable-foreman-proxy-plugin-remote-execution-ssh",
"satellite-installer --scenario capsule --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \" --foreman-proxy-oauth-consumer-key \" oauth key \" --foreman-proxy-oauth-consumer-secret \" oauth secret \" --certs-tar-file \" capsule.example.com-certs.tar \" --certs-cname \" loadbalancer.example.com \" --enable-foreman-proxy-plugin-remote-execution-ssh",
"capsule-certs-generate --foreman-proxy-fqdn capsule-ca.example.com --certs-tar \"/root/ capsule-ca.example.com -certs.tar\" --foreman-proxy-cname loadbalancer.example.com",
"scp /root/ capsule-ca.example.com -certs.tar root@ capsule-ca.example.com : capsule-ca.example.com -certs.tar",
"--certs-cname \" loadbalancer.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --foreman-proxy-puppetca \"true\" --puppet-server-ca \"true\" --enable-foreman-proxy-plugin-remote-execution-ssh",
"satellite-installer --scenario capsule --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule-ca.example.com \" --foreman-proxy-oauth-consumer-key \" oauth key \" --foreman-proxy-oauth-consumer-secret \" oauth secret \" --certs-tar-file \" capsule-ca.example.com-certs.tar \" --puppet-server-foreman-url \" https://satellite.example.com \" --certs-cname \" loadbalancer.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --foreman-proxy-puppetca \"true\" --puppet-server-ca \"true\" --enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-content-puppet true --enable-puppet --puppet-server true --puppet-server-foreman-ssl-ca /etc/pki/katello/puppet/puppet_client_ca.crt --puppet-server-foreman-ssl-cert /etc/pki/katello/puppet/puppet_client.crt --puppet-server-foreman-ssl-key /etc/pki/katello/puppet/puppet_client.key",
"puppet resource service puppetserver ensure=stopped",
"puppetserver ca generate --certname capsule.example.com --subject-alt-names loadbalancer.example.com --ca-client",
"puppet resource service puppetserver ensure=running",
"capsule-certs-generate --foreman-proxy-fqdn capsule.example.com --certs-tar \"/root/ capsule.example.com -certs.tar\" --foreman-proxy-cname loadbalancer.example.com",
"scp /root/ capsule.example.com -certs.tar root@ capsule.example.com :/root/ capsule.example.com -certs.tar",
"satellite-maintain packages install puppetserver",
"mkdir -p /etc/puppetlabs/puppet/ssl/certs/ /etc/puppetlabs/puppet/ssl/private_keys/ /etc/puppetlabs/puppet/ssl/public_keys/",
"scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/certs/ca.pem /etc/puppetlabs/puppet/ssl/certs/ca.pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem",
"chown -R puppet:puppet /etc/puppetlabs/puppet/ssl/ restorecon -Rv /etc/puppetlabs/puppet/ssl/",
"--certs-cname \" loadbalancer.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --foreman-proxy-puppetca \"false\" --puppet-server-ca \"false\" --enable-foreman-proxy-plugin-remote-execution-ssh",
"satellite-installer --scenario capsule --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \" --foreman-proxy-oauth-consumer-key \" oauth key \" --foreman-proxy-oauth-consumer-secret \" oauth secret \" --certs-tar-file \" capsule.example.com-certs.tar \" --puppet-server-foreman-url \" https://satellite.example.com \" --certs-cname \" loadbalancer.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --foreman-proxy-puppetca \"false\" --puppet-server-ca \"false\" --enable-foreman-proxy-plugin-remote-execution-ssh",
"mkdir /root/capsule_cert cd /root/capsule_cert",
"openssl genrsa -out /root/capsule_cert/capsule_cert_key.pem 4096",
"[ req ] default_bits = 4096 distinguished_name = req_distinguished_name req_extensions = req_ext prompt = no [ req_distinguished_name ] countryName= 2 Letter Country Code stateOrProvinceName= State or Province Full Name localityName= Locality Name 0.organizationName= Organization Name organizationalUnitName= Capsule Organization Unit Name commonName= capsule.example.com 1 emailAddress= Email Address authorityKeyIdentifier=keyid,issuer #basicConstraints=CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment subjectAltName = @alt_names [alt_names] 2 DNS.1 = loadbalancer.example.com DNS.2 = capsule.example.com",
"openssl req -new -key /root/capsule_cert/capsule_cert_key.pem \\ 1 -config SAN_config.cfg \\ 2 -out /root/capsule_cert/capsule_cert_csr.pem 3",
"katello-certs-check -c /root/capsule_cert/capsule_cert.pem \\ 1 -k /root/capsule_cert/capsule_cert_key.pem \\ 2 -b /root/capsule_cert/ca_cert_bundle.pem 3",
"--foreman-proxy-cname loadbalancer.example.com",
"capsule-certs-generate --foreman-proxy-fqdn capsule.example.com --certs-tar /root/capsule_cert/capsule.tar --server-cert /root/capsule_cert/capsule.pem --server-key /root/capsule_cert/capsule.pem --server-ca-cert /root/capsule_cert/ca_cert_bundle.pem --foreman-proxy-cname loadbalancer.example.com",
"scp /root/ capsule.example.com -certs.tar root@ capsule.example.com : capsule.example.com -certs.tar",
"--certs-cname \" loadbalancer.example.com \" --enable-foreman-proxy-plugin-remote-execution-ssh",
"satellite-installer --scenario capsule --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \" --foreman-proxy-oauth-consumer-key \" oauth key \" --foreman-proxy-oauth-consumer-secret \" oauth secret \" --certs-tar-file \" capsule.example.com-certs.tar \" --certs-cname \" loadbalancer.example.com \" --enable-foreman-proxy-plugin-remote-execution-ssh",
"mkdir /root/capsule_cert cd /root/capsule_cert",
"openssl genrsa -out /root/capsule_cert/capsule.pem 4096",
"[ req ] default_bits = 4096 distinguished_name = req_distinguished_name req_extensions = req_ext prompt = no [ req_distinguished_name ] countryName= 2 Letter Country Code stateOrProvinceName= State or Province Full Name localityName= Locality Name 0.organizationName= Organization Name organizationalUnitName= Capsule Organization Unit Name commonName= capsule.example.com 1 emailAddress= Email Address authorityKeyIdentifier=keyid,issuer #basicConstraints=CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment subjectAltName = @alt_names [alt_names] 2 DNS.1 = loadbalancer.example.com DNS.2 = capsule.example.com",
"openssl req -new -key /root/capsule_cert/capsule.pem \\ 1 -config SAN_config.cfg \\ 2 -out /root/capsule_cert/capsule.pem 3",
"katello-certs-check -c /root/capsule_cert/capsule.pem \\ 1 -k /root/capsule_cert/capsule.pem \\ 2 -b /root/capsule_cert/ca_cert_bundle.pem 3",
"--foreman-proxy-cname loadbalancer.example.com",
"capsule-certs-generate --foreman-proxy-fqdn capsule-ca.example.com --certs-tar /root/capsule_cert/capsule-ca.tar --server-cert /root/capsule_cert/capsule-ca.pem --server-key /root/capsule_cert/capsule-ca.pem --server-ca-cert /root/capsule_cert/ca_cert_bundle.pem --foreman-proxy-cname loadbalancer.example.com",
"--puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --foreman-proxy-puppetca \"true\" --puppet-server-ca \"true\" --enable-foreman-proxy-plugin-remote-execution-ssh",
"satellite-installer --scenario capsule --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule-ca.example.com \" --foreman-proxy-oauth-consumer-key \"oauth key\" --foreman-proxy-oauth-consumer-secret \"oauth secret\" --certs-tar-file \" certs.tgz \" --puppet-server-foreman-url \" https://satellite.example.com \" --certs-cname \" loadbalancer.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --foreman-proxy-puppetca \"true\" --puppet-server-ca \"true\" --enable-foreman-proxy-plugin-remote-execution-script --foreman-proxy-content-puppet true --enable-puppet --puppet-server true --puppet-server-foreman-ssl-ca /etc/pki/katello/puppet/puppet_client_ca.crt --puppet-server-foreman-ssl-cert /etc/pki/katello/puppet/puppet_client.crt --puppet-server-foreman-ssl-key /etc/pki/katello/puppet/puppet_client.key",
"puppet cert generate capsule.example.com --dns_alt_names= loadbalancer.example.com",
"--foreman-proxy-cname loadbalancer.example.com",
"capsule-certs-generate --foreman-proxy-fqdn capsule.example.com --certs-tar /root/capsule_cert/capsule.tar --server-cert /root/capsule_cert/capsule.pem --server-key /root/capsule_cert/capsule.pem --server-ca-cert /root/capsule_cert/ca_cert_bundle.pem --foreman-proxy-cname loadbalancer.example.com",
"scp /root/ capsule.example.com -certs.tar root@ capsule.example.com : capsule.example.com -certs.tar",
"satellite-maintain packages install puppetserver",
"mkdir -p /etc/puppetlabs/puppet/ssl/certs/ /etc/puppetlabs/puppet/ssl/private_keys/ /etc/puppetlabs/puppet/ssl/public_keys/",
"scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/certs/ca.pem /etc/puppetlabs/puppet/ssl/certs/ca.pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/certs/ capsule.example.com .pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/private_keys/ capsule.example.com .pem scp root@ capsule-ca.example.com :/etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem /etc/puppetlabs/puppet/ssl/public_keys/ capsule.example.com .pem",
"chown -R puppet:puppet /etc/puppetlabs/puppet/ssl/ restorecon -Rv /etc/puppetlabs/puppet/ssl/",
"--certs-cname \" loadbalancer.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --foreman-proxy-puppetca \"false\" --puppet-server-ca \"false\" --enable-foreman-proxy-plugin-remote-execution-ssh",
"satellite-installer --scenario capsule --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-foreman-base-url \" https://satellite.example.com \" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \" --foreman-proxy-oauth-consumer-key \" oauth key \" --foreman-proxy-oauth-consumer-secret \" oauth secret \" --certs-tar-file \" capsule.example.com-certs.tar \" --puppet-server-foreman-url \" https://satellite.example.com \" --certs-cname \" loadbalancer.example.com \" --puppet-dns-alt-names \" loadbalancer.example.com \" --puppet-ca-server \" capsule-ca.example.com \" --foreman-proxy-puppetca \"false\" --puppet-server-ca \"false\" --enable-foreman-proxy-plugin-remote-execution-ssh"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/configuring_capsules_with_a_load_balancer/configuring-capsule-servers-for-load-balancing |
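The certificate procedures above require the signed Capsule certificate to carry both the load balancer FQDN and the Capsule FQDN as Subject Alternative Names. The documented steps do not include a verification command, so the following openssl check is an optional sketch; the file path follows the /root/capsule_cert layout used in the commands above and may differ in your environment:

```shell
# Print the SAN extension of the CA-signed certificate and confirm that both
# loadbalancer.example.com and capsule.example.com appear in the output.
openssl x509 -noout -text -in /root/capsule_cert/capsule_cert.pem \
  | grep -A1 "Subject Alternative Name"
```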
Chapter 1. About the Migration Toolkit for Containers | Chapter 1. About the Migration Toolkit for Containers The Migration Toolkit for Containers (MTC) enables you to migrate stateful application workloads between OpenShift Container Platform 4 clusters at the granularity of a namespace. Note If you are migrating from OpenShift Container Platform 3, see About migrating from OpenShift Container Platform 3 to 4 and Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . You can migrate applications within the same cluster or between clusters by using state migration. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. The MTC console is installed on the target cluster by default. You can configure the Migration Toolkit for Containers Operator to install the console on a remote cluster . See Advanced migration options for information about the following topics: Automating your migration with migration hooks and the MTC API. Configuring your migration plan to exclude resources, support large-scale migrations, and enable automatic PV resizing for direct volume migration. 1.1. Terminology Table 1.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 1.2. MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.10 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan.
Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. 1.3. About data copy methods The Migration Toolkit for Containers (MTC) supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 1.3.1. 
File system copy method MTC copies data files from the source cluster to the replication repository, and from there to the target cluster. The file system copy method uses Restic for indirect migration or Rsync for direct volume migration. Table 1.2. File system copy method summary Benefits Limitations Clusters can have different storage classes. Supported for all S3 storage providers. Optional data verification with checksum. Supports direct volume migration, which significantly increases performance. Slower than the snapshot copy method. Optional data verification significantly reduces performance. Note The Restic and Rsync PV migration assumes that the PVs supported are only volumeMode=filesystem . Using volumeMode=Block for file system migration is not supported. 1.3.2. Snapshot copy method MTC copies a snapshot of the source cluster data to the replication repository of a cloud provider. The data is restored on the target cluster. The snapshot copy method can be used with Amazon Web Services, Google Cloud Provider, and Microsoft Azure. Table 1.3. Snapshot copy method summary Benefits Limitations Faster than the file system copy method. Cloud provider must support snapshots. Clusters must be on the same cloud provider. Clusters must be in the same location or region. Clusters must have the same storage class. Storage class must be compatible with snapshots. Does not support direct volume migration. 1.4. Direct volume migration and direct image migration You can use direct image migration (DIM) and direct volume migration (DVM) to migrate images and data directly from the source cluster to the target cluster. If you run DVM with nodes that are in different availability zones, the migration might fail because the migrated pods cannot access the persistent volume claim. DIM and DVM have significant performance benefits because the intermediate steps of backing up files from the source cluster to the replication repository and restoring files from the replication repository to the target cluster are skipped. The data is transferred with Rsync . DIM and DVM have additional prerequisites. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/migration_toolkit_for_containers/about-mtc |
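The workflow above ends with creating and running a migration plan, either in the MTC web console or through the Kubernetes custom resources mentioned in the introduction. The chapter itself does not show a custom resource, so the following MigPlan manifest is an illustrative sketch only; the field names reflect the MTC API as commonly documented, and the cluster, storage, and namespace names are placeholders:

```yaml
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: demo-migplan                 # placeholder plan name
  namespace: openshift-migration
spec:
  srcMigClusterRef:                  # source cluster registered in MTC
    name: source-cluster
    namespace: openshift-migration
  destMigClusterRef:                 # host (target) cluster
    name: host
    namespace: openshift-migration
  migStorageRef:                     # replication repository
    name: replication-repo
    namespace: openshift-migration
  namespaces:
    - my-app-namespace               # namespace selected for migration
```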
function::ucallers | function::ucallers Name function::ucallers - Return first n elements of user stack backtrace Synopsis Arguments n number of levels to descend in the stack (not counting the top level). If n is -1, print the entire stack. Description This function returns a string of the first n hex addresses from the backtrace of the user stack. Output may be truncated as per maximum string length (MAXSTRINGLEN). Note To get (full) backtraces for user space applications and shared libraries not mentioned in the current script, run stap with -d /path/to/exe-or-so and/or add --ldd to load all needed unwind data. | [
"ucallers:string(n:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ucallers |
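The ucallers entry above gives only the synopsis, so the following SystemTap script is a small hypothetical usage sketch; the probed binary and the depth of 3 are arbitrary choices, and the -d and --ldd options come from the Note in the description:

```systemtap
# ucallers_demo.stp: print three levels of user-space return addresses each
# time the probed program enters main().
# Hypothetical invocation:
#   stap -d /bin/ls --ldd ucallers_demo.stp -c /bin/ls
probe process("/bin/ls").function("main") {
    printf("callers: %s\n", ucallers(3))
}
```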
Chapter 15. Managing packages | Chapter 15. Managing packages You can use Satellite to install, upgrade, and remove packages and to enable or disable repositories on hosts. Packages actions use remote execution. For more information about running remote execution jobs, see Configuring and setting up remote jobs in Managing hosts . 15.1. Enabling and disabling repositories on hosts Use this procedure to enable and disable repositories on hosts. Procedure In the Satellite web UI, navigate to Hosts > All Hosts , Select a host. On the Content tab, click the Repository sets tab. Click the vertical ellipsis to choose Override to disabled or Override to enabled to disable or enable repositories on hosts. 15.2. Installing packages on a host Use this procedure to review and install packages on a host using the Satellite web UI. The list of packages available for installation depends on the content view and lifecycle environment assigned to the host. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select a host. On the Content tab, click the Packages tab. On the vertical ellipsis icon to the upgrade button, click Install packages . In the Install packages window, select the package or packages that you want to install on the host. Click Install . The Satellite web UI shows a notification for the remote execution job. Create a body of the API request in the JSON format by following the instructions below. API procedure Create the "job_invocation" object and place rest of the body inside this object. Create the "inputs" object with the "package" field of the string type specifying the packages you want to install. If you are specifying multiple packages, separate them with a whitespace. Create a "feature" field of the string type with value "katello_package_install" . Create a "search_query" field of the string type and input a search query matching the hosts on which you want to install the packages. Optional: If you want to install packages as a specific user, create an ssh object with the following fields of the string type: "effective_user" with the name of the ssh user "effective_user_password" with the password of the ssh user if this password is required Optional: If you want to install packages at a later time, create the "scheduling" object. The object can contain one or both of the following fields of the string type with date, time, and a timezone in the ISO 8601 format: "start_at" - sets the time to install the packages "start_before" - sets the latest time to install the packages. If it is not possible to install the packages by this time, then this action is cancelled. If you omit time, it defaults to 00:00:00. If you omit timezone, it defaults to UTC. Optional: If you want to limit the number of hosts on which the job is run concurrently, create the "concurrency_control" object with the "concurrency_level" field of the integer type. Assign the number of hosts as the field value. Optional: If you want to install packages at a later time and you want the host search query to be evaluated at a time of running the job, create a "targeting_type" field of the string type with the "dynamic_query" value. This is useful if you expect the result of the search query to be different at the time of running the job due to changed status of the hosts. If you omit this field, it defaults to "static_query" . Send a POST request with the created body to the /api/job_invocations endpoint of your Satellite Server and use a tool like Python to see a formatted response. 
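Assembled from the fields named in the API procedure above, a request body for installing packages might look like the following sketch; every value (package names, host search query, user, schedule times, and concurrency level) is a placeholder rather than a value taken from the original document:

```json
{
  "job_invocation": {
    "inputs": {
      "package": "nano vim"
    },
    "feature": "katello_package_install",
    "search_query": "name ^ (host1.example.com, host2.example.com)",
    "ssh": {
      "effective_user": "my_user",
      "effective_user_password": "my_password"
    },
    "scheduling": {
      "start_at": "2024-07-01T19:00:00+00:00",
      "start_before": "2024-07-02T00:00:00+00:00"
    },
    "concurrency_control": {
      "concurrency_level": 100
    },
    "targeting_type": "dynamic_query"
  }
}
```

A body like this can then be submitted as described above, for example with curl --request POST --header "Content-Type: application/json" --data @body.json --user <username>:<password> https://satellite.example.com/api/job_invocations | python3 -m json.tool, where the hostname and credentials are placeholders.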
Example API request: Verification In the Satellite web UI, navigate to Monitor > Jobs and see the report of the scheduled or completed remote execution job to install the packages on the selected hosts. Example API request body 15.3. Upgrading packages on a host You can upgrade packages on a host in bulk in the Satellite web UI. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select a host. On the Content tab, click the Packages tab. Select Upgradable from the Status list. In the Upgradable to column, select the package version that you want to upgrade to. Select the packages you want to upgrade. Click Upgrade . The Satellite web UI shows a notification for the remote execution job. Create a body of the API request in the JSON format by following the instructions below. API procedure Create the "job_invocation" object and place the rest of the body inside this object. Create the "inputs" object with the "package" field of the string type specifying the packages you want to update. If you are specifying multiple packages, separate them with whitespace. Create a "feature" field of the string type with value "katello_package_update" . Create a "search_query" field of the string type and input a search query matching the hosts on which you want to update the packages. Optional: If you want to update packages as a specific user, create an ssh object with the following fields of the string type: "effective_user" with the name of the ssh user "effective_user_password" with the password of the ssh user if this password is required Optional: If you want to update packages at a later time, create the "scheduling" object. The object can contain one or both of the following fields of the string type with date, time, and a timezone in the ISO 8601 format: "start_at" - sets the time to update the packages "start_before" - sets the latest time to update the packages. If it is not possible to update the packages by this time, then this action is cancelled. If you omit time, it defaults to 00:00:00. If you omit timezone, it defaults to UTC. Optional: If you want to limit the number of hosts on which the job is run concurrently, create the "concurrency_control" object with the "concurrency_level" field of the integer type. Assign the number of hosts as the field value. Optional: If you want to update packages at a later time and you want the host search query to be evaluated at the time of running the job, create a "targeting_type" field of the string type with the "dynamic_query" value. This is useful if you expect the result of the search query to be different at the time of running the job due to changed status of the hosts. If you omit this field, it defaults to "static_query" . Send a POST request with the created body to the /api/job_invocations endpoint of your Satellite Server and use a tool like Python to see a formatted response. Example API request: Verification In the Satellite web UI, navigate to Monitor > Jobs and see the report of the scheduled or completed remote execution job to update the packages on the selected hosts. Example API request body 15.4. Removing packages from a host You can remove packages from a host in the Satellite web UI. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Select a host. On the Content tab, click the Packages tab. Click the vertical ellipsis for the package you want to remove. Select Remove . The Satellite web UI shows a notification for the remote execution job.
Create a body of the API request in the JSON format by following the instructions below. API procedure Create the "job_invocation" object and place the rest of the body inside this object. Create the "inputs" object with the "package" field of the string type specifying the packages you want to remove. If you are specifying multiple packages, separate them with whitespace. Create a "feature" field of the string type with value "katello_package_remove" . Create a "search_query" field of the string type and input a search query matching the hosts on which you want to remove the packages. Optional: If you want to remove packages as a specific user, create an ssh object with the following fields of the string type: "effective_user" with the name of the ssh user "effective_user_password" with the password of the ssh user if this password is required Optional: If you want to remove packages at a later time, create the "scheduling" object. The object can contain one or both of the following fields of the string type with date, time, and a timezone in the ISO 8601 format: "start_at" - sets the time to remove the packages "start_before" - sets the latest time to remove the packages. If it is not possible to remove the packages by this time, then this action is cancelled. If you omit time, it defaults to 00:00:00. If you omit timezone, it defaults to UTC. Optional: If you want to limit the number of hosts on which the job is run concurrently, create the "concurrency_control" object with the "concurrency_level" field of the integer type. Assign the number of hosts as the field value. Optional: If you want to remove packages at a later time and you want the host search query to be evaluated at the time of running the job, create a "targeting_type" field of the string type with the "dynamic_query" value. This is useful if you expect the result of the search query to be different at the time of running the job due to changed status of the hosts. If you omit this field, it defaults to "static_query" . Send a POST request with the created body to the /api/job_invocations endpoint of your Satellite Server and use a tool like Python to see a formatted response. Example API request: Verification In the Satellite web UI, navigate to Monitor > Jobs and see the report of the scheduled or completed remote execution job to remove the packages on the selected hosts. Example API request body | [
"curl https:// satellite.example.com /api/job_invocations -H \"content-type: application/json\" -X POST -d @ Path_To_My_API_Request_Body -u My_Username : My_Password | python3 -m json.tool",
"{ \"job_invocation\" : { \"concurrency_control\" : { \"concurrency_level\" : 100 }, \"feature\" : \"katello_package_install\", \"inputs\" : { \"package\" : \"nano vim\" }, \"scheduling\" : { \"start_at\" : \"2023-09-21T19:00:00+00:00\", \"start_before\" : \"2023-09-23T00:00:00+00:00\" }, \"search_query\" : \"*\", \"ssh\" : { \"effective_user\" : \"My_Username\", \"effective_user_password\" : \"My_Password\" }, \"targeting_type\" : \"dynamic_query\" } }",
"curl https:// satellite.example.com /api/job_invocations -H \"content-type: application/json\" -X POST -d @ Path_To_My_API_Request_Body -u My_Username : My_Password | python3 -m json.tool",
"{ \"job_invocation\" : { \"concurrency_control\" : { \"concurrency_level\" : 100 }, \"feature\" : \"katello_package_update\", \"inputs\" : { \"package\" : \"nano vim\" }, \"scheduling\" : { \"start_at\" : \"2023-09-21T19:00:00+00:00\", \"start_before\" : \"2023-09-23T00:00:00+00:00\" }, \"search_query\" : \"*\", \"ssh\" : { \"effective_user\" : \"My_Username\", \"effective_user_password\" : \"My_Password\" }, \"targeting_type\" : \"dynamic_query\" } }",
"curl https:// satellite.example.com /api/job_invocations -H \"content-type: application/json\" -X POST -d @ Path_To_My_API_Request_Body -u My_Username : My_Password | python3 -m json.tool",
"{ \"job_invocation\" : { \"concurrency_control\" : { \"concurrency_level\" : 100 }, \"feature\" : \"katello_package_remove\", \"inputs\" : { \"package\" : \"nano vim\" }, \"scheduling\" : { \"start_at\" : \"2023-09-21T19:00:00+00:00\", \"start_before\" : \"2023-09-23T00:00:00+00:00\" }, \"search_query\" : \"*\", \"ssh\" : { \"effective_user\" : \"My_Username\", \"effective_user_password\" : \"My_Password\" }, \"targeting_type\" : \"dynamic_query\" } }"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/managing-packages_managing-hosts |
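As a closing illustration of the package-removal API described in Section 15.4, the request body can also be passed inline with -d instead of from a file. This is a hedged sketch: the hostname, credentials, and package name are placeholders, and the "*" search query matches every host, so narrow it in real use.

# Remove the "nano" package from all hosts matched by the search query
curl https://satellite.example.com/api/job_invocations \
  -H "content-type: application/json" \
  -X POST \
  -d '{ "job_invocation": { "feature": "katello_package_remove", "inputs": { "package": "nano" }, "search_query": "*" } }' \
  -u My_Username:My_Password | python3 -m json.tool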
probe::stapio.receive_control_message | probe::stapio.receive_control_message Name probe::stapio.receive_control_message - Received a control message Synopsis stapio.receive_control_message Values len the length (in bytes) of the data blob data a ptr to a binary blob of data sent as the control message type type of message being sent; defined in runtime/transport/transport_msgs.h Description Fires just after a message was received and before it's processed. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-stapio-receive-control-message
Chapter 2. Performing operations with the Block Storage service (cinder) | Chapter 2. Performing operations with the Block Storage service (cinder) You can use the Block Storage volume service to perform the following volume operations: creating volumes, editing volumes, resizing volumes, changing the volume owner, changing the volume type, setting the access rights of a volume, and deleting volumes. You can also create volume snapshots to preserve the state of a volume at a specific point in time, which you can either revert to the latest state later or clone new volumes from, or delete. Note All of these operations use the CLI which is faster, requires less setup, and provides more options than the Dashboard. Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. To use Block Storage service cinder CLI commands, source the cloudrc file with the command USD source ./cloudrc before using them. If the cloudrc file does not exist, then you need to create it. For more information, see Creating the cloudrc file . 2.1. Creating Block Storage volumes Create volumes to provide persistent storage for instances that you launch with the Compute service (nova). To create an encrypted volume, you must select a volume type that is specifically configured for volume encryption. Then you must configure both the Compute and Block Storage services to use the same static key. Important By default, the maximum number of volumes you can create for a project is 10. However, the project administrator can change this limit for your project. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Create a volume. Replace <size> with the size of this volume, in gigabytes. Note The <size> argument is not required when you use --snapshot <snapshot> or --source <volume> to provide the volume source. Optional: Replace <volume_type> with the required volume type for this volume that potentially specifies the required back end and performance level. You can list the available volume types by running this command openstack volume type list . If this is not specified, then a default volume type is used. You can always change the volume type later. For more information, see Retyping volumes . If you do not specify a back end, the Block Storage scheduler will try to select a suitable back end for you. For more information, see Volume allocation on multiple back ends . Note If no suitable back end is found then the volume will not be created. Optional: Choose one of the following options to specify the volume source: No source, creates an empty volume, which does not contain a file system or partition table. Replace <image> with the image that you want to use. You can list the available images by using the openstack image list command. Note If you want to create an encrypted volume from an unencrypted image, you must ensure that the volume size is larger than the image size so that the encryption data does not truncate the volume data. Replace <snapshot> with the name of the volume snapshot that you want to clone. You can list the available snapshots by using the openstack volume snapshot list command. If you want to revert a volume to the state of its latest snapshot, then you can do this on the volume itself without having to create a new volume. For more information, see Reverting a volume to the latest snapshot . 
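To make the source options above concrete, a few illustrative invocations follow; the volume names, sizes, image, and snapshot are placeholders only, and the default volume type is used unless --type is given.

# Empty 10 GB volume with no source
openstack volume create --size 10 MyDataVol
# Bootable 20 GB volume populated from an image (the volume must be at least as large as the image)
openstack volume create --size 20 --image MyImage --bootable MyBootVol
# Clone of an existing snapshot; --size may be omitted because the snapshot supplies it
openstack volume create --snapshot MySnap MyCloneVol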
Important If you want to create a new volume from a snapshot of an encrypted volume, ensure that the new volume is at least 1GB larger than the old volume. Replace <volume> with the name of the existing volume that you want to clone. You can list the available volumes by using the openstack volume list command. Optional: Replace <availability_zone> with the availability zone of this volume. Availability zones or logical server groups, along with host aggregates, are a common method for segregating resources within OpenStack. For more information about availability zones and host aggregates, see Creating and managing host aggregates in Configuring the Compute service for instance creation . Optional: Replace <description> with a concise description of this volume, enclosed in double quotes (") when this contains spaces. Optional: Determine whether this volume is bootable or not. The default state of a volume is --non-bootable . A --bootable volume should contain the files needed to initiate the operating system. This setting is usually used when the volume is created from a bootable image. Optional: Determine the access permissions of this volume. The default state of a volume is --read-write to allow data to be written to and read from it. You can mark a volume as --read-only to protect its data from being accidentally overwritten or deleted. Volumes that have been set as read-only can be changed to read-write. Replace <volume_name> with the name of this volume. For example: Exit the openstackclient pod: USD exit 2.1.1. Volume allocation on multiple back ends When you create a volume, you can choose the required volume type, which specifies the volume settings that in some cases can specify the required back end. For large deployments with large numbers of back ends it is usually not necessary for volume types to specify the back ends. In this case it is better that the Block Storage scheduler determines the best back end for a volume, based on the configured scheduler filters. If you do not specify a back end when creating the volume, for example, when you select a volume type that does not specify a specific back end, then the Block Storage scheduler will try to select a suitable back end for you. The scheduler uses the following default filters to select suitable back ends: AvailabilityZoneFilter Filters out all back ends that do not meet the availability zone requirements of the requested volume. CapacityFilter Selects only back ends with enough space to accommodate the volume. CapabilitiesFilter Selects only back ends that can support any specified settings in the volume. If there is more than one suitable back end, then the scheduler uses a weighting method to pick the best back end. By default, the CapacityWeigher method is used, so that the filtered back end with the most available free space is selected. Note If there is no suitable back end then the volume will not be created. 2.2. Editing a volume You can change several properties of a volume after you have created it, such as the volume name, volume description, volume type, whether the volume is bootable, and the access permissions of this volume. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Edit a volume. Optional: Replace <name> with the new name for this volume. Optional: Replace <description> with a revised concise description of this volume, enclosed in double quotes (") when this contains spaces. Optional: Replace <new_volume_type_name> with the name of the new volume type. 
You can list the available volume types by using the openstack volume type list command. Optional: Determine whether this volume should be bootable or not. The default state of a volume is --non-bootable . A --bootable volume should contain the files needed to initiate the operating system. Optional: Determine the access permissions of this volume. The default state of a volume is --read-write to allow data to be written to and read from it. You can mark a volume as --read-only to protect its data from being accidentally overwritten or deleted. Volumes that have been set as read-only can be set back to read-write. Replace <volume_name> with the existing volume name or ID. You can list the available volumes by using the openstack volume list command. You can determine which volumes use which volume types by using the `openstack volume show`command for each volume. Optional: This command does not provide any confirmation that the changes to the volume were successful. You can run the following command to review the changes made to this volume. If you have changed the volume name then replace <latest_volume_name> with this new name, otherwise use the existing volume name: Exit the openstackclient pod: USD exit 2.3. Resizing (extending) a volume Resize volumes to increase the storage capacity of the volumes, you cannot reduce the size of the volume. Note The ability to resize a volume in use is supported but it is driver dependent. Red Hat Ceph Storage is supported. You cannot extend in-use multi-attach volumes. For more information about support for this feature, contact Red Hat Support . Procedure Access the remote shell for the OpenStackClient pod from your workstation: Increase the size of the volume: Replace <size> with the required size of this volume, in gigabytes. Replace <volume> with the name or ID of the volume you want to extend. You can list the available volumes by using the openstack volume list command. Optional: This command does not provide any confirmation that volume has been successfully resized. You can run the following command to review the changes made to this volume: Exit the openstackclient pod: USD exit 2.4. Changing the volume owner You must complete the following two steps to change the user who owns a volume: A volume transfer is initiated by the volume's owner, who clears the ownership of the volume and generates the id , name , and auth_key values. Important You must save the auth_key and the id or name values of this transfer request and send them to the new user, because this is the ONLY time that the auth_key is generated for security purposes. The new user can claim ownership of the volume, by logging in and using the values that he has received to accept the transfer. Initiate the transfer of volume ownership Access the remote shell for the OpenStackClient pod from your workstation: The current owner of the volume must log in from the command line: This user creates a volume transfer request. Optional: Replace <name> with a literal name to identify this transfer request in addition to the ID. If you do not do this then the name is None . Replace <volume> with the name or ID of the volume that you want to transfer the ownership of. You can list the available volumes by using the openstack volume list command. This command clears the ownership of the volume and creates a table of the id , name , and auth_key values for the transfer request. 
These values can be given to, and used by, another user to accept the transfer request and become the new owner of the volume. For example: Important Ensure that you save the value of the auth_key and the id or name , if specified, and send them to the user who will become the new volume owner because NONE of the other openstack volume transfer request commands provide the auth_key for security purposes. Exit the openstackclient pod: USD exit Complete the transfer of volume ownership Access the remote shell for the OpenStackClient pod from your workstation: The new user must log in from the command line. This user accepts the volume transfer request. Replace <transfer_request> with the id or name that you have received from the original volume owner who created the volume transfer request. Replace <auth_key> with the auth_key value that you have received from the original volume owner who created the volume transfer request. For example: Exit the openstackclient pod: USD exit 2.5. Retyping a volume You can retype or change the volume type of a volume to adjust the settings applied to this volume, for a variety of reasons, such as: To upgrade or downgrade the performance of this volume. To facilitate additional functionality, such as the ability to back up this volume. To move a volume from one backend to another. In this case, you need at least two volume types that each target different backends and may have specifications that are backend specific. Therefore, by retyping a volume, the volume is moved to a different back end. For more information, see Restrictions and performance constraints when moving volumes in Customizing persistent storage . Your available options when retyping a volume depend upon the volume types that your administrator has created. Important You cannot retype an unencrypted volume to an encrypted volume of the same size. Encrypted volumes require additional space to store the encryption data. Prerequisites Only volume owners and administrators can retype volumes. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Change the volume type of a volume. Replace <new_volume_type_name> with the name of the new volume type. You can list the available volume types by using the openstack volume type list command. Replace <volume> with the name or ID of the volume. You can list the available volumes by using the openstack volume list command. You can determine which volumes use which volume types by using the `openstack volume show`command for each volume. Exit the openstackclient pod: USD exit 2.6. Configuring the access rights to a volume The default state of a volume is read-write to allow data to be written to and read from it. You can mark a volume as read-only to protect its data from being accidentally overwritten or deleted. Volumes that have been set as read-only can be set back to read-write. Prerequisites If the volume is already attached to an instance it must be detached from the instance. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Set the required access rights of a volume: Choose --read-only or --read-write to set the required volume access. Replace <volume> with the name or ID of the required volume. You can list the available volumes by using the openstack volume list command. If you detached this volume from an instance to change the access rights, then re-attach the volume. Exit the openstackclient pod: USD exit 2.7. 
Deleting a volume You should delete the volumes that you no longer need, to ensure that you will not exceed the volume quota limit for your project, which will prevent you from creating more volumes. Note You normally cannot delete a volume if it has existing snapshots, unless you use the --purge option. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Delete a volume. Optional: Either choose the --force option when deleting a volume, to override any normal volume checks such as whether the volume is in use or not. Or choose the --purge option when deleting a volume, to also delete any snapshots that have been created for this volume. Replace <volume> with the name or ID of the volume that you want to remove. You can list the available volumes by using the openstack volume list command. Warning There is not undo action that is possible after using this command, so be sure that you are deleting the required volume. Optional: This command does not provide any confirmation that the volume has been deleted successfully. You can run the following command to list the available volumes and you should not see the volume that you have deleted: Exit the openstackclient pod: USD exit 2.8. Creating volume snapshots You can preserve the state of a volume at a specific point in time by creating a volume snapshot. When you want to create snapshots of in use volumes, which are crash consistent but not application consistent, use the --force option. When you use a Block Storage (cinder) REST API microversion of 3.66 or later, this is the default action. You can then use the snapshot to clone new volumes or to revert the state of the volume to. For more information about creating a new volume from a snapshot, see Creating Block Storage volumes . Important By default, the maximum number of snapshots you can create for a project is 10. However, the project administrator can change this limit for your project. Volume backups are different from snapshots. Backups preserve the data contained in the volume, whereas snapshots preserve the state of a volume at a specific point in time. You cannot delete a volume if it has existing snapshots. Volume backups prevent data loss, whereas snapshots facilitate cloning. For this reason, snapshot back ends are typically colocated with volume back ends to minimize latency during cloning. By contrast, a backup repository is usually located in a different location, for example, on a different node, physical storage, or even geographical location in a typical enterprise deployment. This is to protect the backup repository from any damage that might occur to the volume back end. Prerequisites A volume that you want to snapshot. For more information about creating volumes, see Creating Block Storage volumes . Procedure Access the remote shell for the OpenStackClient pod from your workstation: Create a volume snapshot. Replace <volume> with the name or the ID of the volume that you want to create a snapshot of. You can list the available volumes by using the openstack volume list command. Optional: Replace <description> with a concise description of this snapshot, enclosed in double quotes (") when this contains spaces. Optional: If you receive a warning that a volume is in use when creating a snapshot, you can use this --force option to ignore the warning and create the snapshot regardless. Replace <snapshot> with the name of this volume snapshot. This command provides a table containing the details of this snapshot. 
For example: Exit the openstackclient pod: USD exit 2.9. Reverting a volume to the latest snapshot To revert a volume to the state of its most recent snapshot you can change the state of your volume, instead of creating a new volume from this snapshot. For more information about creating a new volume from a snapshot, see Creating Block Storage volumes Block storage drivers that support the revert-to-snapshot feature, perform this task faster and more efficiently than the drivers that do not. For example, copy-on-write space optimizations are unaffected. For information about which features your drivers support, contact your driver vendors. Limitations You cannot use the revert-to-snapshot feature on an attached or in-use volume. You cannot use the revert-to-snapshot feature on a bootable root volume because it is not in the available state. To use this feature, the instance must have been booted with the delete_on_termination=false (default) property to preserve the boot volume if the instance is terminated. When you want to revert to a snapshot, you must first delete the initial instance so that the volume is available. You can then revert it and create a new instance from the volume. You cannot revert a volume that you resize (extend) after taking the snapshot. There might be limitations to using the revert-to-snapshot feature with multi-attach volumes. Check whether such limitations apply before you use this feature. Prerequisites Block Storage service (cinder) REST API microversion 3.40 or later. You must have created at least one snapshot for the volume. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Detach your volume: Replace <instance_id> with the instance ID. Replace <vol_id> with the volume ID that you want to revert. Locate the ID or name of the latest snapshot. Revert the snapshot: Replace <snapshot_id_name> with the ID or the name of the snapshot. Optional: You can check that the volume you are reverting is in a reverting state: Reattach the volume: Optional: You can use the following command to verify that the volume you reverted is now in an available state. Exit the openstackclient pod: USD exit 2.10. Deleting volume snapshots You should delete the volume snapshots that you no longer need, to ensure that you will not exceed the snapshot quota limit for your project, which will prevent you from creating more snapshots. Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 uses the RBD CloneV2 API, which allows you to delete volume snapshots even if they have dependencies, when using the Ceph as a back end. But if you use an external Ceph back end, you must also configure the minimum client on the Ceph cluster for this to work. For more information, see Enabling deferred deletion for volumes or images with dependencies Prerequisites A volume snapshot that you want to delete. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Delete the volume snapshot. Optional: Choose the --force option when deleting a snapshot, to override any normal snapshot state checks. Replace <snapshot> with the name of this volume snapshot that you want to delete. You can list the available snapshots by using the openstack volume snapshot list command. Warning There is not undo action that is possible after using this command, so be sure that you are deleting the required snapshot. Optional: This command does not provide any confirmation that the snapshot has been deleted successfully. 
You can run the following command to list the available volumes and you should not see the snapshot that you have deleted: Exit the openstackclient pod: USD exit | [
"oc rsh -n openstack openstackclient",
"openstack volume create --size <size> [--type <volume_type>] [--image <image> | --snapshot <snapshot> | --source <volume>] [--availability-zone <availability_zone>] [--description <description>] [--bootable | --non-bootable] [--read-only | --read-write] <volume_name>",
"openstack volume create --size 10 --type MyEncryptedVolType --availability-zone nova --description \"A blank encrypted volume\" MyEncVol +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | consistencygroup_id | None | | created_at | 2024-11-12T06:40:21.619913 | | description | A blank encrypted volume | | encrypted | True | | id | 90fe8005-89e7-4629-858d-815b2f8a5001 | | migration_status | None | | multiattach | False | | name | MyEncVol | | properties | | | replication_status | None | | size | 10 | | snapshot_id | None | | source_volid | None | | status | creating | | type | MyEncryptedVolType | | updated_at | None | | user_id | a20f9aed5757443da0e5fb7bab067f9b | +---------------------+--------------------------------------+",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume set [--name <name>] [--description <description>] [--type <new_volume_type_name>] [--bootable | --non-bootable] [--read-only | --read-write] <volume_name>",
"openstack volume show <latest_volume_name>",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume set -size <size> <volume>",
"openstack volume show <volume>",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume transfer request create [--name <name>] <volume>",
"+------------+--------------------------------------+ | Field | Value | +------------+--------------------------------------+ | auth_key | bb39cacfe626da60 | | created_at | 2024-11-12T08:30:01.826548 | | id | 6163f5d4-e4f0-4d4f-a7e9-c8e0668f934b | | name | None | | volume_id | fc13ff34-ac2a-4d26-9744-860b2b19b6ca | +------------+--------------------------------------+----",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume transfer request accept --auth-key <auth_key> <transfer_request>",
"openstack volume transfer request accept --auth-key bb39cacfe626da60 6163f5d4-e4f0-4d4f-a7e9-c8e0668f934b +-----------+--------------------------------------+ | Field | Value | +-----------+--------------------------------------+ | id | 6163f5d4-e4f0-4d4f-a7e9-c8e0668f934b | | name | None | | volume_id | fc13ff34-ac2a-4d26-9744-860b2b19b6ca | +-----------+--------------------------------------+",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume set --type <new_volume_type_name> <volume>",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume set [--read-only|--read-write] <volume>",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume delete [--force | --purge] <volume>",
"openstack volume list",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume snapshot create --volume <volume> [--description <description>] [--force] <snapshot>",
"+-------------+--------------------------------------+ | Field | Value | +-------------+--------------------------------------+ | created_at | 2024-11-12T11:22:40.533557 | | description | None | | id | 2ee2cb3d-a59a-473e-a17d-482d7abcd5d5 | | name | MyVol1snap | | properties | | | size | 12 | | status | creating | | updated_at | None | | volume_id | 99510e51-ad60-4d60-b62a-7321e6442f06 | +-------------+--------------------------------------+",
"exit",
"oc rsh -n openstack openstackclient",
"nova volume-detach <instance_id> <vol_id>",
"openstack volume snapshot-list",
"openstack volume --os-volume-api-version=3.40 revert <snapshot_id_name>",
"openstack volume snapshot-list",
"nova volume-attach <instance_id> <vol_id>",
"openstack volume list",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume snapshot delete [--force] <snapshot>",
"openstack volume snapshot list",
"exit"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/performing_storage_operations/assembly_cinder-performing-operations_osp |
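Tying together the ownership-transfer steps from Section 2.4, the two sides of the exchange look roughly like the following sketch; the volume name is a placeholder, and the transfer ID and auth key shown are the illustrative values from the example output above.

# Current owner: create the transfer request and note the id and auth_key from the output
openstack volume transfer request create --name mytransfer MyDataVol
# New owner: accept the transfer with the values received from the current owner
openstack volume transfer request accept --auth-key bb39cacfe626da60 6163f5d4-e4f0-4d4f-a7e9-c8e0668f934b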
Chapter 1. Planning custom undercloud features | Chapter 1. Planning custom undercloud features Before you configure and install director on the undercloud, you can plan to include custom features in your undercloud. 1.1. Character encoding configuration Red Hat OpenStack Platform has special character encoding requirements as part of the locale settings: Use UTF-8 encoding on all nodes. Ensure the LANG environment variable is set to en_US.UTF-8 on all nodes. Avoid using non-ASCII characters if you use Red Hat Ansible Tower to automate the creation of Red Hat OpenStack Platform resources. 1.2. Considerations when running the undercloud with a proxy Compared to using Red Hat Satellite for registry and package management, there are limitations when you run the undercloud with a proxy. If your environment uses a proxy, review these methods for integrating parts of Red Hat OpenStack Platform (RHOSP) with a proxy and the limitations of each method. System-wide proxy configuration Use this method to configure proxy communication for all network traffic on the undercloud. To configure the proxy settings, edit the /etc/environment file and set the following environment variables: http_proxy The proxy that you want to use for standard HTTP requests. https_proxy The proxy that you want to use for HTTPs requests. no_proxy A comma-separated list of domains that you want to exclude from proxy communications. The system-wide proxy method has the following limitations: The maximum length of no_proxy is 1,024 characters due to a fixed size buffer in the pam_env pluggable authentication module (PAM). Some existing containers bind and parse the environment variables in /etc/environment files incorrectly, which causes issues when running these services. For information about updating the proxy settings in /etc/environment files to work correctly for existing containers, see the Red Hat Knowledgebase solutions at https://access.redhat.com/solutions/7109135 and https://access.redhat.com/solutions/7007114 . dnf proxy configuration Use this method to configure dnf to run all traffic through a proxy. To configure the proxy settings, edit the /etc/dnf/dnf.conf file and set the following parameters: proxy The URL of the proxy server. proxy_username The username that you want to use to connect to the proxy server. proxy_password The password that you want to use to connect to the proxy server. proxy_auth_method The authentication method used by the proxy server. For more information about these options, run man dnf.conf . The dnf proxy method has the following limitations: This method provides proxy support only for dnf . The dnf proxy method does not include an option to exclude certain hosts from proxy communication. Red Hat Subscription Manager proxy Use this method to configure Red Hat Subscription Manager to run all traffic through a proxy. To configure the proxy settings, edit the /etc/rhsm/rhsm.conf file and set the following parameters: proxy_hostname Host for the proxy. proxy_scheme The scheme for the proxy when writing out the proxy to repo definitions. proxy_port The port for the proxy. proxy_username The username that you want to use to connect to the proxy server. proxy_password The password to use for connecting to the proxy server. no_proxy A comma-separated list of hostname suffixes for specific hosts that you want to exclude from proxy communication. For more information about these options, run man rhsm.conf . 
The Red Hat Subscription Manager proxy method has the following limitations: This method provides proxy support only for Red Hat Subscription Manager. The values for the Red Hat Subscription Manager proxy configuration override any values set for the system-wide environment variables. Transparent proxy If your network uses a transparent proxy to manage application layer traffic, you do not need to configure the undercloud itself to interact with the proxy because proxy management occurs automatically. A transparent proxy can help overcome limitations associated with client-based proxy configuration in RHOSP. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/customizing_your_red_hat_openstack_platform_deployment/assembly_planning-custom-undercloud-features |
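As an illustration of the system-wide proxy method described above, the /etc/environment entries typically look like the following; the proxy host, port, and excluded domains are placeholders for your environment.

# /etc/environment (illustrative values only)
http_proxy=http://proxy.example.com:8080
https_proxy=http://proxy.example.com:8080
no_proxy=localhost,127.0.0.1,.example.com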
Chapter 4. Securing the Fuse Console | Chapter 4. Securing the Fuse Console To secure the Fuse Console on EAP: Disable the Fuse Console's proxy servlet when deploying to AWS If you want to deploy a standalone Fuse application to Amazon Web Services (AWS), you should disable the Fuse Console's proxy servlet by setting the hawtio.disableProxy system property to true . Note When you disable the Fuse Console proxy servlet, the Fuse Console's Connect tab is disabled and you cannot connect to other JVMs from the Fuse Console. If you want to deploy more than one Fuse application on AWS, you must deploy the Fuse Console for each application. Set HTTPS as the required protocol You can use the hawtio.http.strictTransportSecurity property to require web browsers to use the secure HTTPS protocol to access the Fuse Console. This property specifies that web browsers that try to use HTTP to access the Fuse Console must automatically convert the request to use HTTPS. Use public keys to secure responses You can use the hawtio.http.publicKeyPins property to secure the HTTPS protocol by telling the web browser to associate a specific cryptographic public key with the Fuse Console to decrease the risk of "man-in-the-middle" attacks with forged certificates. Procedure Set the hawtio.http.strictTransportSecurity and the hawtio.http.publicKeyPins properties in the system-properties section of the USDEAP_HOME/standalone/configuration/standalone*.xml file as shown in the following example: (For deploying on AWS only) To disable the Fuse Console's proxy servlet, set the hawtio.disableProxy property in the system-properties section of the USDEAP_HOME/standalone/configuration/standalone*.xml file as shown in the following example: Additional resources For a description of the hawtio.http.strictTransportSecurity property's syntax, see the description page for the HTTP Strict Transport Security (HSTS) response header. For a description of the hawtio.http.publicKeyPins property's syntax, including instructions on how to extract the Base64 encoded public key, see the description page for the HTTP Public Key Pinning response header. | [
"<property name=\"hawtio.http.strictTransportSecurity\" value=\"max-age=31536000; includeSubDomains; preload\"/> <property name=\"hawtio.http.publicKeyPins\" value=\"pin-sha256=cUPcTAZWKaASuYWhhneDttWpY3oBAkE3h2+soZS7sWs\"; max-age=5184000; includeSubDomains\"/>",
"<property name=\"hawtio.disableProxy\" value=\"true\"/>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_jboss_eap_standalone/fuse-console-security-eap |
3.5. Storage | 3.5. Storage Storage for virtual machines is abstracted from the physical storage allocated to the virtual machine. It is attached to the virtual machine using the paravirtualized or emulated block device drivers. 3.5.1. Storage Pools A storage pool is a file, directory, or storage device managed by libvirt for the purpose of providing storage to virtual machines. Storage pools are divided into storage volumes that store virtual machine images or are attached to virtual machines as additional storage. Multiple guests can share the same storage pool, allowing for better allocation of storage resources. For more information, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . Local storage pools Local storage pools are attached directly to the host server. They include local directories, directly attached disks, physical partitions, and Logical Volume Management (LVM) volume groups on local devices. Local storage pools are useful for development, testing and small deployments that do not require migration or large numbers of virtual machines. Local storage pools may not be suitable for many production environment, because they do not support live migration. Networked (shared) storage pools Networked storage pools include storage devices shared over a network using standard protocols. Networked storage is required when migrating virtual machines between hosts with virt-manager , but is optional when migrating with virsh . Networked storage pools are managed by libvirt . 3.5.2. Storage Volumes Storage pools are divided into storage volumes . Storage volumes are an abstraction of physical partitions, LVM logical volumes, file-based disk images and other storage types handled by libvirt . Storage volumes are presented to virtual machines as local storage devices regardless of the underlying hardware. 3.5.3. Emulated Storage Devices Virtual machines can be presented with a range of storage devices that are emulated by the host. Each type of storage device is appropriate for specific use cases, allowing for maximum flexibility and compatibility with guest operating systems. virtio-scsi virtio-scsi is the recommended paravirtualized device for guests using large numbers of disks or advanced storage features such as TRIM. Guest driver installation may be necessary on guests using operating systems other than Red Hat Enterprise Linux 7. virtio-blk virtio-blk is a paravirtualized storage device suitable for exposing image files to guests. virtio-blk can provide the best disk I/O performance for virtual machines, but has fewer features than virtio-scsi. IDE IDE is recommended for legacy guests that do not support virtio drivers. IDE performance is lower than virtio-scsi or virtio-blk, but it is widely compatible with different systems. CD-ROM ATAPI CD-ROMs and virtio-scsi CD-ROMs are available and make it possible for guests to use ISO files or the host's CD-ROM drive. virtio-scsi CD-ROMs can be used with guests that have the virtio-scsi driver installed. ATAPI CD-ROMs offer wider compatibility but lower performance. USB mass storage devices and floppy disks Emulated USB mass storage devices and floppy disks are available when removable media are required. USB mass storage devices are preferable to floppy disks due to their larger capacity. 3.5.4. Host Storage Disk images can be stored on a range of local and remote storage technologies connected to the host. Image files Image files can only be stored on a host file system. 
The image files can be stored on a local file system, such as ext4 or xfs, or a network file system, such as NFS. Tools such as libguestfs can manage, back up, and monitor files. Disk image formats on KVM include: raw Raw image files contain the contents of the disk with no additional metadata. Raw files can either be pre-allocated or sparse, if the host file system allows it. Sparse files allocate host disk space on demand, and are therefore a form of thin provisioning. Pre-allocated files are fully provisioned but have higher performance than sparse files. Raw files are desirable when disk I/O performance is critical and transferring the image file over a network is rarely necessary. qcow2 qcow2 image files offer a number of advanced disk image features, including backing files, snapshots, compression, and encryption. They can be used to instantiate virtual machines from template images. qcow2 files are typically more efficient to transfer over a network, because only sectors written by the virtual machine are allocated in the image. Red Hat Enterprise Linux 7 supports the qcow2 version 3 image file format. LVM volumes Logical volumes (LVs) can be used for disk images and managed using the system's LVM tools. LVM offers higher performance than file systems because of its simpler block storage model. LVM thin provisioning offers snapshots and efficient space usage for LVM volumes, and can be used as an alternative to migrating to qcow2. Host devices Host devices such as physical CD-ROMs, raw disks, and logical unit numbers (LUNs) can be presented to the guest. This enables SAN or iSCSI LUNs as well as local CD-ROM media to be used by the guest with good performance. Host devices can be used when storage management is done on a SAN instead of on hosts. Distributed storage systems Gluster volumes can be used as disk images. This enables high-performance clustered storage over the network. Red Hat Enterprise Linux 7 includes native support for disk images on GlusterFS. This enables a KVM host to boot virtual machine images from GlusterFS volumes, and to use images from a GlusterFS volume as data disks for virtual machines. When compared to GlusterFS FUSE, the native support in KVM delivers higher performance. For more information on storage and virtualization, see the Managing Storage for Virtual Machines . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/sec-virtualization_getting_started-products-storage |
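As a small illustration of the raw and qcow2 formats discussed above, both can be created on the host with qemu-img; the paths and size are arbitrary examples.

# Raw image; sparse by default on file systems that support it
qemu-img create -f raw /var/lib/libvirt/images/guest1.img 20G
# qcow2 image, which adds snapshots, backing files, compression, and encryption
qemu-img create -f qcow2 /var/lib/libvirt/images/guest2.qcow2 20G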
Chapter 11. Creating a customized RHEL bare metal image by using Insights image builder | Chapter 11. Creating a customized RHEL bare metal image by using Insights image builder You can create customized RHEL ISO system images by using the Insights image builder. You can then download these images and install them on a bare metal system according to your requirements. 11.1. Creating a customized RHEL ISO system image by using Insights image builder Complete the following steps to create customized RHEL ISO images by using the Insights image builder. Procedure Access Insights image builder on the browser. The Insights image builder dashboard opens. Click Create image . The Create image dialog wizard opens. On the Image output page, complete the following steps: From the Release list, select the Release that you want to use: for example, choose Red Hat Enterprise Linux (RHEL). From the Select target environments option, select Bare metal - Installer. Click . On the Registration page, select the type of registration that you want to use. You can select from these options: Register images with Red Hat : Register and connect image instances, subscriptions and insights with Red Hat. For details on how to embed an activation key and register systems on first boot, see Creating a customized system image with an embed subscription by using Insights image builder . Register image instances only : Register and connect only image instances and subscriptions with Red Hat. Register later :- Register the system after the image creation. Click . Optional: On the Packages page, add packages to your image. See Adding packages during image creation by using Insights image builder . On the Name image page, enter a name for your image and click . If you do not enter a name, you can find the image you created by its UUID. On the Review page, review the details about the image creation and click Create image . Your image is created as a .iso image. When the new image displays a Ready status in the Status column, click Download .iso image. The .iso image is saved to your system and is ready for deployment. Note The .iso images are available for 6 hours and expire after that. Ensure that you download the image to avoid losing it. 11.2. Installing the customized RHEL ISO system image to a bare metal system You can create a virtual machine (VM) from the ISO image that you created using the Insights image builder. Prerequisites You created and downloaded an ISO image by using Insights image builder. A 8 GB USB flash drive. Procedure Access the directory where you downloaded your ISO image. Place the bootable ISO image file on a USB flash drive. Connect the USB flash drive to the port of the computer you want to boot. Boot the ISO image from the USB flash drive. Perform the steps to install the customized bootable ISO image. The boot screen shows you the following options: Install Red Hat Enterprise Linux 8 Test this media & install Red Hat Enterprise Linux 8 Additional resources Booting the installation | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/creating_customized_images_by_using_insights_image_builder/assembly_creating-a-customized-rhel-bare-metal-image-using-red-hat-image-builder |
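One common way to place the downloaded ISO on the USB flash drive is dd on a Linux workstation. This is a sketch only: the ISO file name is a placeholder, /dev/sdX must be replaced with the device node of your USB drive (check with lsblk first), and the command destroys all existing data on that device.

# Identify the USB device, then write the image to it
lsblk
sudo dd if=rhel-custom.iso of=/dev/sdX bs=4M status=progress conv=fsync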
Chapter 12. Using a service account as an OAuth client | Chapter 12. Using a service account as an OAuth client 12.1. Service accounts as OAuth clients You can use a service account as a constrained form of OAuth client. Service accounts can request only a subset of scopes that allow access to some basic user information and role-based power inside of the service account's own namespace: user:info user:check-access role:<any_role>:<service_account_namespace> role:<any_role>:<service_account_namespace>:! When using a service account as an OAuth client: client_id is system:serviceaccount:<service_account_namespace>:<service_account_name> . client_secret can be any of the API tokens for that service account. For example: USD oc sa get-token <service_account_name> To get WWW-Authenticate challenges, set an serviceaccounts.openshift.io/oauth-want-challenges annotation on the service account to true . redirect_uri must match an annotation on the service account. 12.1.1. Redirect URIs for service accounts as OAuth clients Annotation keys must have the prefix serviceaccounts.openshift.io/oauth-redirecturi. or serviceaccounts.openshift.io/oauth-redirectreference. such as: In its simplest form, the annotation can be used to directly specify valid redirect URIs. For example: The first and second postfixes in the above example are used to separate the two valid redirect URIs. In more complex configurations, static redirect URIs may not be enough. For example, perhaps you want all Ingresses for a route to be considered valid. This is where dynamic redirect URIs via the serviceaccounts.openshift.io/oauth-redirectreference. prefix come into play. For example: Since the value for this annotation contains serialized JSON data, it is easier to see in an expanded format: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": "Route", "name": "jenkins" } } Now you can see that an OAuthRedirectReference allows us to reference the route named jenkins . Thus, all Ingresses for that route will now be considered valid. The full specification for an OAuthRedirectReference is: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": ..., 1 "name": ..., 2 "group": ... 3 } } 1 kind refers to the type of the object being referenced. Currently, only route is supported. 2 name refers to the name of the object. The object must be in the same namespace as the service account. 3 group refers to the group of the object. Leave this blank, as the group for a route is the empty string. Both annotation prefixes can be combined to override the data provided by the reference object. For example: The first postfix is used to tie the annotations together. Assuming that the jenkins route had an Ingress of https://example.com , now https://example.com/custompath is considered valid, but https://example.com is not. The format for partially supplying override data is as follows: Type Syntax Scheme "https://" Hostname "//website.com" Port "//:8000" Path "examplepath" Note Specifying a hostname override will replace the hostname data from the referenced object, which is not likely to be desired behavior. Any combination of the above syntax can be combined using the following format: <scheme:>//<hostname><:port>/<path> The same object can be referenced more than once for more flexibility: Assuming that the route named jenkins has an Ingress of https://example.com , then both https://example.com:8000 and https://example.com/custompath are considered valid. 
Static and dynamic annotations can be used at the same time to achieve the desired behavior: | [
"oc sa get-token <service_account_name>",
"serviceaccounts.openshift.io/oauth-redirecturi.<name>",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"https://example.com\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"",
"\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": \"Route\", \"name\": \"jenkins\" } }",
"{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": ..., 1 \"name\": ..., 2 \"group\": ... 3 } }",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"//:8000\" \"serviceaccounts.openshift.io/oauth-redirectreference.second\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"",
"\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authentication_and_authorization/using-service-accounts-as-oauth-client |
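As a rough sketch of the annotations described above, the following commands create a service account and mark it as an OAuth client; the service account name and redirect URI are placeholders.

# Create the service account that acts as the OAuth client
oc create sa oauth-client
# Request WWW-Authenticate challenges and register a static redirect URI
oc annotate serviceaccount oauth-client serviceaccounts.openshift.io/oauth-want-challenges=true
oc annotate serviceaccount oauth-client serviceaccounts.openshift.io/oauth-redirecturi.first=https://myapp.example.com/callback
# Any API token of the service account can serve as the client_secret
oc sa get-token oauth-client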
5.4.17. Controlling Logical Volume Activation | 5.4.17. Controlling Logical Volume Activation You can flag a logical volume to be skipped during normal activation commands with the -k or --setactivationskip {y|n} option of the lvcreate or lvchange command. This flag is not applied during deactivation. You can determine whether this flag is set for a logical volume with the lvs command, which displays the k attribute as in the following example. By default, thin snapshot volumes are flagged for activation skip. You can activate a logical volume with the k attribute set by using the -K or --ignoreactivationskip option in addition to the standard -ay or --activate y option. The following command activates a thin snapshot logical volume. The persistent "activation skip" flag can be turned off when the logical volume is created by specifying the -kn or --setactivationskip n option of the lvcreate command. You can turn the flag off for an existing logical volume by specifying the -kn or --setactivationskip n option of the lvchange command. You can turn the flag on again with the -ky or --setactivationskip y option. The following command creates a snapshot logical volume without the activation skip flag. The following command removes the activation skip flag from a snapshot logical volume. You can control the default activation skip setting with the auto_set_activation_skip setting in the /etc/lvm/lvm.conf file. | [
"lvs vg/thin1s1 LV VG Attr LSize Pool Origin thin1s1 vg Vwi---tz-k 1.00t pool0 thin1",
"lvchange -ay -K VG/SnapLV",
"lvcreate --type thin -n SnapLV -kn -s ThinLV --thinpool VG/ThinPoolLV",
"lvchange -kn VG/SnapLV"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lv_activate |
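As a hedged walk-through of the flags described above, the commands below combine the documented options into one sequence; the VG, ThinLV, ThinPoolLV, and SnapLV names are the same placeholders used in the examples above.

# Create a thin snapshot; thin snapshots carry the activation skip flag by default.
lvcreate --type thin -n SnapLV -s ThinLV --thinpool VG/ThinPoolLV

# Confirm the flag: the k attribute appears in the tenth position of the attr field.
lvs VG/SnapLV

# Normal activation is skipped, so add -K (--ignoreactivationskip) to activate it.
lvchange -ay -K VG/SnapLV

# Turn the persistent flag back on later if it was cleared with -kn.
lvchange -ky VG/SnapLV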
Chapter 1. Architectures | Chapter 1. Architectures Red Hat Enterprise Linux 7.1 is available as a single kit on the following architectures: [1] 64-bit AMD 64-bit Intel IBM POWER7+ and POWER8 (big endian) IBM POWER8 (little endian) [2] IBM System z [3] In this release, Red Hat brings together improvements for servers and systems, as well as for the overall Red Hat open source experience. 1.1. Red Hat Enterprise Linux for POWER, little endian Red Hat Enterprise Linux 7.1 introduces little endian support on IBM Power Systems servers using IBM POWER8 processors. Previously in Red Hat Enterprise Linux 7, only the big endian variant was offered for IBM Power Systems. Support for little endian on POWER8-based servers aims to improve portability of applications between 64-bit Intel compatible systems ( x86_64 ) and IBM Power Systems. Separate installation media are offered for installing Red Hat Enterprise Linux on IBM Power Systems servers in little endian mode. These media are available from the Downloads section of the Red Hat Customer Portal . Only IBM POWER8 processor-based servers are supported with Red Hat Enterprise Linux for POWER, little endian. Currently, Red Hat Enterprise Linux for POWER, little endian is supported only as a KVM guest under Red Hat Enterprise Virtualization for Power . Installation on bare metal hardware is currently not supported. The GRUB2 boot loader is used on the installation media and for network boot. The Installation Guide has been updated with instructions for setting up a network boot server for IBM Power Systems clients using GRUB2 . All software packages for IBM Power Systems are available for both the little endian and the big endian variant of Red Hat Enterprise Linux for POWER. Packages built for Red Hat Enterprise Linux for POWER, little endian use the ppc64le architecture code - for example, gcc-4.8.3-9.ael7b.ppc64le.rpm . [1] Note that the Red Hat Enterprise Linux 7.1 installation is supported only on 64-bit hardware. Red Hat Enterprise Linux 7.1 is able to run 32-bit operating systems, including versions of Red Hat Enterprise Linux, as virtual machines. [2] Red Hat Enterprise Linux 7.1 (little endian) is currently only supported as a KVM guest under Red Hat Enterprise Virtualization for Power and PowerVM hypervisors. [3] Note that Red Hat Enterprise Linux 7.1 supports IBM zEnterprise 196 hardware or later; IBM System z10 mainframe systems are no longer supported and will not boot Red Hat Enterprise Linux 7.1. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-architectures |
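As a brief, hedged aside on the endianness distinction and the ppc64le package suffix mentioned above, the commands below show how an administrator might confirm which variant a system is running; the printed values are illustrative assumptions, not output taken from the source.

# Reports ppc64le on the little endian variant and ppc64 on the big endian variant.
uname -m

# Installed packages carry the matching architecture suffix, for example gcc.
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' gcc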
Planning your deployment | Planning your deployment Red Hat OpenShift Data Foundation 4.18 Important considerations when deploying Red Hat OpenShift Data Foundation 4.18 Red Hat Storage Documentation Team Abstract Read this document for important considerations when planning your Red Hat OpenShift Data Foundation deployment. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Chapter 1. Introduction to OpenShift Data Foundation Red Hat OpenShift Data Foundation is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform. It is available as part of the Red Hat OpenShift Container Platform Service Catalog, packaged as an operator to facilitate simple deployment and management. Red Hat OpenShift Data Foundation services are primarily made available to applications by way of storage classes that represent the following components: Block storage devices, catering primarily to database workloads. Prime examples include Red Hat OpenShift Container Platform logging and monitoring, and PostgreSQL. Important Block storage should be used for any worklaod only when it does not require sharing the data across multiple containers. Shared and distributed file system, catering primarily to software development, messaging, and data aggregation workloads. Examples include Jenkins build sources and artifacts, Wordpress uploaded content, Red Hat OpenShift Container Platform registry, and messaging using JBoss AMQ. Multicloud object storage, featuring a lightweight S3 API endpoint that can abstract the storage and retrieval of data from multiple cloud object stores. On premises object storage, featuring a robust S3 API endpoint that scales to tens of petabytes and billions of objects, primarily targeting data intensive applications. Examples include the storage and access of row, columnar, and semi-structured data with applications like Spark, Presto, Red Hat AMQ Streams (Kafka), and even machine learning frameworks like TensorFlow and Pytorch. Note Running PostgresSQL workload on CephFS persistent volume is not supported and it is recommended to use RADOS Block Device (RBD) volume. For more information, see the knowledgebase solution ODF Database Workloads Must Not Use CephFS PVs/PVCs . Red Hat OpenShift Data Foundation version 4.x integrates a collection of software projects, including: Ceph, providing block storage, a shared and distributed file system, and on-premises object storage Ceph CSI, to manage provisioning and lifecycle of persistent volumes and claims NooBaa, providing a Multicloud Object Gateway OpenShift Data Foundation, Rook-Ceph, and NooBaa operators to initialize and manage OpenShift Data Foundation services. Chapter 2. 
Architecture of OpenShift Data Foundation Red Hat OpenShift Data Foundation provides services for, and can run internally from the Red Hat OpenShift Container Platform. Figure 2.1. Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation supports deployment into Red Hat OpenShift Container Platform clusters deployed on installer-provisioned or user-provisioned infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process . To know more about interoperability of components for Red Hat OpenShift Data Foundation and Red Hat OpenShift Container Platform, see Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture . Tip For IBM Power, see Installing on IBM Power . 2.1. About operators Red Hat OpenShift Data Foundation comprises of three main operators, which codify administrative tasks and custom resources so that you can easily automate the task and resource characteristics. Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state, or approaching that state, with minimal administrator intervention. OpenShift Data Foundation operator A meta-operator that draws on other operators in specific tested ways to codify and enforce the recommendations and requirements of a supported Red Hat OpenShift Data Foundation deployment. The rook-ceph and noobaa operators provide the storage cluster resource that wraps these resources. Rook-ceph operator This operator automates the packaging, deployment, management, upgrading, and scaling of persistent storage and file, block, and object services. It creates block and file storage classes for all environments, and creates an object storage class and services Object Bucket Claims (OBCs) made against it in on-premises environments. Additionally, for internal mode clusters, it provides the ceph cluster resource, which manages the deployments and services representing the following: Object Storage Daemons (OSDs) Monitors (MONs) Manager (MGR) Metadata servers (MDS) RADOS Object Gateways (RGWs) on-premises only Multicloud Object Gateway operator This operator automates the packaging, deployment, management, upgrading, and scaling of the Multicloud Object Gateway (MCG) object service. It creates an object storage class and services the OBCs made against it. Additionally, it provides the NooBaa cluster resource, which manages the deployments and services for NooBaa core, database, and endpoint. Note OpenShift Data Foundation's default configuration for MCG is optimized for low resource consumption and not performance. If you plan to use MCG often, see information about increasing resource limits in the knowledebase article Performance tuning guide for Multicloud Object Gateway . 2.2. Storage cluster deployment approaches The growing list of operating modalities is an evidence that flexibility is a core tenet of Red Hat OpenShift Data Foundation. This section provides you with information that will help you to select the most appropriate approach for your environments. You can deploy Red Hat OpenShift Data Foundation either entirely within OpenShift Container Platform (Internal approach) or to make available the services from a cluster running outside of OpenShift Container Platform (External approach). 2.2.1. 
Internal approach Deployment of Red Hat OpenShift Data Foundation entirely within Red Hat OpenShift Container Platform has all the benefits of operator based deployment and management. You can use the internal-attached device approach in the graphical user interface (GUI) to deploy Red Hat OpenShift Data Foundation in internal mode using the local storage operator and local storage devices. Ease of deployment and management are the highlights of running OpenShift Data Foundation services internally on OpenShift Container Platform. There are two different deployment modalities available when Red Hat OpenShift Data Foundation is running entirely within Red Hat OpenShift Container Platform: Simple Optimized Simple deployment Red Hat OpenShift Data Foundation services run co-resident with applications. The operators in Red Hat OpenShift Container Platform manages these applications. A simple deployment is best for situations where, Storage requirements are not clear. Red Hat OpenShift Data Foundation services runs co-resident with the applications. Creating a node instance of a specific size is difficult, for example, on bare metal. For Red Hat OpenShift Data Foundation to run co-resident with the applications, the nodes must have local storage devices, or portable storage devices attached to them dynamically, like EBS volumes on EC2, or vSphere Virtual Volumes on VMware, or SAN volumes. Note PowerVC dynamically provisions the SAN volumes. Optimized deployment Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Red Hat OpenShift Container Platform manages these infrastructure nodes. An optimized approach is best for situations when, Storage requirements are clear. Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Creating a node instance of a specific size is easy, for example, on cloud, virtualized environment, and so on. 2.2.2. External approach Red Hat OpenShift Data Foundation exposes the Red Hat Ceph Storage services running outside of the OpenShift Container Platform cluster as storage classes. The external approach is best used when, Storage requirements are significant (600+ storage devices). Multiple OpenShift Container Platform clusters need to consume storage services from a common external cluster. Another team, Site Reliability Engineering (SRE), storage, and so on, needs to manage the external cluster providing storage services. Possibly a pre-existing one. 2.3. Node types Nodes run the container runtime, as well as services, to ensure that the containers are running, and maintain network communication and separation between the pods. In OpenShift Data Foundation, there are three types of nodes. Table 2.1. Types of nodes Node Type Description Master These nodes run processes that expose the Kubernetes API, watch and schedule newly created pods, maintain node health and quantity, and control interaction with underlying cloud providers. Infrastructure (Infra) Infra nodes run cluster level infrastructure services such as logging, metrics, registry, and routing. These are optional in OpenShift Container Platform clusters. In order to separate OpenShift Data Foundation layer workload from applications, ensure that you use infra nodes for OpenShift Data Foundation in virtualized and cloud environments. To create Infra nodes, you can provision new nodes labeled as infra . 
For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Worker Worker nodes are also known as application nodes since they run applications. When OpenShift Data Foundation is deployed in internal mode, you require a minimal cluster of 3 worker nodes. Make sure that the nodes are spread across 3 different racks, or availability zones, to ensure availability. In order for OpenShift Data Foundation to run on worker nodes, you need to attach the local storage devices, or portable storage devices to the worker nodes dynamically. When OpenShift Data Foundation is deployed in external mode, it runs on multiple nodes. This allows Kubernetes to reschedule on the available nodes in case of a failure. Note OpenShift Data Foundation requires the same number of subsciptions as OpenShift Container Platform. However, if OpenShift Data Foundation is running on infra nodes, OpenShift does not require OpenShift Container Platform subscription for these nodes. Therefore, the OpenShift Data Foundation control plane does not require additional OpenShift Container Platform and OpenShift Data Foundation subscriptions. For more information, see Chapter 6, Subscriptions . Chapter 3. Internal storage services Red Hat OpenShift Data Foundation service is available for consumption internally to the Red Hat OpenShift Container Platform that runs on the following infrastructure: Amazon Web Services (AWS) Bare metal VMware vSphere Microsoft Azure Google Cloud Red Hat OpenStack 13 or higher (installer-provisioned infrastructure) [Technology Preview] IBM Power IBM Z and IBM(R) LinuxONE ROSA with hosted control planes (HCP) Creation of an internal cluster resource results in the internal provisioning of the OpenShift Data Foundation base services, and makes additional storage classes available to the applications. Chapter 4. External storage services Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on the following platforms: VMware vSphere Bare metal Red Hat OpenStack platform (Technology Preview) IBM Power IBM Z The OpenShift Data Foundation operators create and manage services to satisfy Persistent Volume (PV) and Object Bucket Claims (OBCs) against the external services. External cluster can serve block, file and object storage classes for applications that run on OpenShift Container Platform. The operators do not deploy or manage the external clusters. Chapter 5. Security considerations 5.1. FIPS-140-2 The Federal Information Processing Standard Publication 140-2 (FIPS-140-2) is a standard that defines a set of security requirements for the use of cryptographic modules. Law mandates this standard for the US government agencies and contractors and is also referenced in other international and industry specific standards. Red Hat OpenShift Data Foundation now uses the FIPS validated cryptographic modules. Red Hat Enterprise Linux OS/CoreOS (RHCOS) delivers these modules. Currently, the Cryptographic Module Validation Program (CMVP) processes the cryptography modules. You can see the state of these modules at Modules in Process List . For more up-to-date information, see the Red Hat Knowledgebase solution RHEL core crypto components . Note Enable the FIPS mode on the OpenShift Container Platform, before you install OpenShift Data Foundation. 
OpenShift Container Platform must run on the RHCOS nodes, as the feature does not support OpenShift Data Foundation deployment on Red Hat Enterprise Linux 7 (RHEL 7). For more information, see Installing a cluster in FIPS mode and Support for FIPS cryptography of the Installing guide in OpenShift Container Platform documentation. 5.2. Proxy environment A proxy environment is a production environment that denies direct access to the internet and provides an available HTTP or HTTPS proxy instead. Red Hat Openshift Container Platform is configured to use a proxy by modifying the proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. Red Hat supports deployment of OpenShift Data Foundation in proxy environments when OpenShift Container Platform has been configured according to configuring the cluster-wide proxy . 5.3. Data encryption options Encryption lets you encode your data to make it impossible to read without the required encryption keys. This mechanism protects the confidentiality of your data in the event of a physical security breach that results in a physical media to escape your custody. The per-PV encryption also provides access protection from other namespaces inside the same OpenShift Container Platform cluster. Data is encrypted when it is written to the disk, and decrypted when it is read from the disk. Working with encrypted data might incur a small penalty to performance. Encryption is only supported for new clusters deployed using Red Hat OpenShift Data Foundation 4.6 or higher. An existing encrypted cluster that is not using an external Key Management System (KMS) cannot be migrated to use an external KMS. Previously, HashiCorp Vault was the only supported KMS for Cluster-wide and Persistent Volume encryptions. With OpenShift Data Foundation 4.7.0 and 4.7.1, only HashiCorp Vault Key/Value (KV) secret engine API, version 1 is supported. Starting with OpenShift Data Foundation 4.7.2, HashiCorp Vault KV secret engine API, versions 1 and 2 are supported. As of OpenShift Data Foundation 4.12, Thales CipherTrust Manager has been introduced as an additional supported KMS. Important KMS is required for StorageClass encryption, and is optional for cluster-wide encryption. To start with, Storage class encryption requires a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the Hashicorp product. For technical assistance with this product, contact Hashicorp . 5.3.1. Cluster-wide encryption Red Hat OpenShift Data Foundation supports cluster-wide encryption (encryption-at-rest) for all the disks and Multicloud Object Gateway operations in the storage cluster. OpenShift Data Foundation uses Linux Unified Key System (LUKS) version 2 based encryption with a key size of 512 bits and the aes-xts-plain64 cipher where each device has a different encryption key. The keys are stored using a Kubernetes secret or an external KMS. Both methods are mutually exclusive and you can not migrate between methods. Encryption is disabled by default for block and file storage. You can enable encryption for the cluster at the time of deployment. The MultiCloud Object Gateway supports encryption by default. See the deployment guides for more information. 
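To make the cluster-wide encryption option described above more concrete, the fragment below is a minimal sketch of how it might be expressed in a StorageCluster resource at deployment time; the spec.encryption.clusterWide and kms fields are assumptions about the OpenShift Data Foundation operator's API rather than a validated manifest, so always follow the deployment guides for the authoritative procedure.

# Hedged sketch only: field names are assumed; verify against the deployment guide.
cat <<'EOF' > storagecluster-encryption-fragment.yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  encryption:
    clusterWide: true        # LUKS2-based encryption of all storage devices
    kms:
      enable: true           # optional: keep keys in an external KMS such as Vault
EOF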
OpenShift Data Foundation supports cluster wide encryption with and without Key Management System (KMS). Cluster wide encryption with KMS is supported using the following service providers: HashiCorp Vault Thales Cipher Trust Manager Security common practices require periodic encryption key rotation. OpenShift Data Foundation automatically rotates encryption keys stored in kubernetes secret (non-KMS) and Vault on a weekly basis. However, key rotation for Vault KMS must be enabled after the storage cluster creation and does not happen by default. For more information refer to the deployment guides. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Cluster wide encryption with HashiCorp Vault KMS provides two authentication methods: Token : This method allows authentication using vault tokens. A kubernetes secret containing the vault token is created in the openshift-storage namespace and is used for authentication. If this authentication method is selected then the administrator has to provide the vault token that provides access to the backend path in Vault, where the encryption keys are stored. Kubernetes : This method allows authentication with vault using serviceaccounts. If this authentication method is selected then the administrator has to provide the name of the role configured in Vault that provides access to the backend path, where the encryption keys are stored. The value of this role is then added to the ocs-kms-connection-details config map. Note OpenShift Data Foundation on IBM Cloud platform supports Hyper Protect Crypto Services (HPCS) Key Management Services (KMS) as the encryption solution in addition to HashiCorp Vault KMS. Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the Hashicorp product. For technical assistance with this product, contact Hashicorp . 5.3.2. Storage class encryption You can encrypt persistent volumes (block only) with storage class encryption using an external Key Management System (KMS) to store device encryption keys. Persistent volume encryption is only available for RADOS Block Device (RBD) persistent volumes. See how to create a storage class with persistent volume encryption . Storage class encryption is supported in OpenShift Data Foundation 4.7 or higher with HashiCorp Vault KMS. Storage class encryption is supported in OpenShift Data Foundation 4.12 or higher with both HashiCorp Vault KMS and Thales CipherTrust Manager KMS. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 5.3.3. CipherTrust manager Red Hat OpenShift Data Foundation version 4.12 introduced Thales CipherTrust Manager as an additional Key Management System (KMS) provider for your deployment. Thales CipherTrust Manager provides centralized key lifecycle management. CipherTrust Manager supports Key Management Interoperability Protocol (KMIP), which enables communication between key management systems. CipherTrust Manager is enabled during deployment. 5.3.4. 
Data encryption in-transit via Red Hat Ceph Storage's messenger version 2 protocol (msgr2) Starting with OpenShift Data Foundation version 4.14, Red Hat Ceph Storage's messenger version 2 protocol can be used to encrypt data in-transit. This provides an important security requirement for your infrastructure. In-transit encryption can be enabled during deployment while the cluster is being created. See the deployment guide for your environment for instructions on enabling data encryption in-transit during cluster creation. The msgr2 protocol supports two connection modes: crc Provides strong initial authentication when a connection is established with cephx. Provides a crc32c integrity check to protect against bit flips. Does not provide protection against a malicious man-in-the-middle attack. Does not prevent an eavesdropper from seeing all post-authentication traffic. secure Provides strong initial authentication when a connection is established with cephx. Provides full encryption of all post-authentication traffic. Provides a cryptographic integrity check. The default mode is crc . 5.4. Encryption in Transit You need to enable IPsec so that all the network traffic between the nodes on the OVN-Kubernetes Container Network Interface (CNI) cluster network travels through an encrypted tunnel. By default, IPsec is disabled. You can enable it either during or after installing the cluster. If you need to enable IPsec after cluster installation, you must first resize your cluster MTU to account for the overhead of the IPsec ESP IP header. For more information on how to configure the IPsec encryption, see Configuring IPsec encryption of the Networking guide in OpenShift Container Platform documentation. Chapter 6. Subscriptions 6.1. Subscription offerings Red Hat OpenShift Data Foundation subscription is based on "core-pairs," similar to Red Hat OpenShift Container Platform. The Red Hat OpenShift Data Foundation 2-core subscription is based on the number of logical cores on the CPUs in the system where OpenShift Container Platform runs. As with OpenShift Container Platform: OpenShift Data Foundation subscriptions are stackable to cover larger hosts. Cores can be distributed across as many virtual machines (VMs) as needed. For example, ten 2-core subscriptions will provide 20 cores and in case of IBM Power a 2-core subscription at SMT level of 8 will provide 2 cores or 16 vCPUs that can be used across any number of VMs. OpenShift Data Foundation subscriptions are available with Premium or Standard support. 6.2. Disaster recovery subscription requirement Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation Cluster containing PVs participating in active replication either as a source or destination requires OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 6.3. Cores versus vCPUs and hyperthreading Making a determination about whether or not a particular system consumes one or more cores is currently dependent on whether or not that system has hyperthreading available. 
Hyperthreading is only a feature of Intel CPUs. Visit the Red Hat Customer Portal to determine whether a particular system supports hyperthreading. Virtualized OpenShift nodes using logical CPU threads, also known as simultaneous multithreading (SMT) for AMD EPYC CPUs or hyperthreading with Intel CPUs, calculate their core utilization for OpenShift subscriptions based on the number of cores/CPUs assigned to the node, however each subscription covers 4 vCPUs/cores when logical CPU threads are used. Red Hat's subscription management tools assume logical CPU threads are enabled by default on all systems. For systems where hyperthreading is enabled and where one hyperthread equates to one visible system core, the calculation of cores is a ratio of 2 cores to 4 vCPUs. Therefore, a 2-core subscription covers 4 vCPUs in a hyperthreaded system. A large virtual machine (VM) might have 8 vCPUs, equating to 4 subscription cores. As subscriptions come in 2-core units, you will need two 2-core subscriptions to cover these 4 cores or 8 vCPUs. Where hyperthreading is not enabled, and where each visible system core correlates directly to an underlying physical core, the calculation of cores is a ratio of 2 cores to 2 vCPUs. 6.3.1. Cores versus vCPUs and simultaneous multithreading (SMT) for IBM Power Making a determination about whether or not a particular system consumes one or more cores is currently dependent on the level of simultaneous multithreading configured (SMT). IBM Power provides simultaneous multithreading levels of 1, 2, 4 or 8 for each core which correspond to the number of vCPUs as in the table below. Table 6.1. Different SMT levels and their corresponding vCPUs SMT level SMT=1 SMT=2 SMT=4 SMT=8 1 Core # vCPUs=1 # vCPUs=2 # vCPUs=4 # vCPUs=8 2 Cores # vCPUs=2 # vCPUs=4 # vCPUs=8 # vCPUs=16 4 Cores # vCPUs=4 # vCPUs=8 # vCPUs=16 # vCPUs=32 For systems where SMT is configured the calculation for the number of cores required for subscription purposes depends on the SMT level. Therefore, a 2-core subscription corresponds to 2 vCPUs on SMT level of 1, and to 4 vCPUs on SMT level of 2, and to 8 vCPUs on SMT level of 4 and to 16 vCPUs on SMT level of 8 as seen in the table above. A large virtual machine (VM) might have 16 vCPUs, which at a SMT level 8 will require a 2 core subscription based on dividing the # of vCPUs by the SMT level (16 vCPUs / 8 for SMT-8 = 2). As subscriptions come in 2-core units, you will need one 2-core subscription to cover these 2 cores or 16 vCPUs. 6.4. Splitting cores Systems that require an odd number of cores need to consume a full 2-core subscription. For example, a system that is calculated to require only 1 core will end up consuming a full 2-core subscription once it is registered and subscribed. When a single virtual machine (VM) with 2 vCPUs uses hyperthreading resulting in 1 calculated vCPU, a full 2-core subscription is required; a single 2-core subscription may not be split across two VMs with 2 vCPUs using hyperthreading. See section Cores versus vCPUs and hyperthreading for more information. It is recommended that virtual instances be sized so that they require an even number of cores. 6.4.1. Shared Processor Pools for IBM Power IBM Power have a notion of shared processor pools. The processors in a shared processor pool can be shared across the nodes in the cluster. The aggregate compute capacity required for a Red Hat OpenShift Data Foundation should be a multiple of core-pairs. 6.5. 
Subscription requirements Red Hat OpenShift Data Foundation components can run on either OpenShift Container Platform worker or infrastructure nodes, for which you can use either Red Hat CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.4 as the host operating system. RHEL 7 is now deprecated. OpenShift Data Foundation subscriptions are required for every OpenShift Container Platform subscribed core with a ratio of 1:1. When using infrastructure nodes, the rule to subscribe all OpenShift worker node cores for OpenShift Data Foundation applies even though they don't need any OpenShift Container Platform or any OpenShift Data Foundation subscriptions. You can use labels to state whether a node is a worker or an infrastructure node. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation in the Managing and Allocating Storage Resources guide. Chapter 7. Infrastructure requirements 7.1. Platform requirements Red Hat OpenShift Data Foundation 4.17 is supported only on OpenShift Container Platform version 4.17 and its minor versions. Bug fixes for version of Red Hat OpenShift Data Foundation will be released as bug fix versions. For more details, see the Red Hat OpenShift Container Platform Life Cycle Policy . For external cluster subscription requirements, see the Red Hat Knowledgebase article OpenShift Data Foundation Subscription Guide . For a complete list of supported platform versions, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . 7.1.1. Amazon EC2 Supports internal Red Hat OpenShift Data Foundation clusters only. An Internal cluster must meet both, storage device requirements and have a storage class that provides, EBS storage via the aws-ebs provisioner. OpenShift Data Foundation supports gp2-csi and gp3-csi drivers that were introduced by Amazon Web Services (AWS). These drivers offer better storage expansion capabilities and a reduced monthly price point ( gp3-csi ). You can now select the new drivers when selecting your storage class. In case a high throughput is required, gp3-csi is recommended to be used when deploying OpenShift Data Foundation. If you need a high input/output operation per second (IOPS), the recommended EC2 instance types are D2 or D3 . 7.1.2. Bare Metal Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provide local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.3. VMware vSphere Supports internal clusters and consuming external clusters. Recommended versions: vSphere 7.0 or later vSphere 8.0 or later For more details, see the VMware vSphere infrastructure requirements . Note If VMware ESXi does not recognize its devices as flash, mark them as flash devices. Before Red Hat OpenShift Data Foundation deployment, refer to Mark Storage Devices as Flash . Additionally, an Internal cluster must meet both the, storage device requirements and have a storage class providing either, vSAN or VMFS datastore via the vsphere-volume provisioner VMDK, RDM, or DirectPath storage devices via the Local Storage Operator. 7.1.4. Microsoft Azure Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both, storage device requirements and have a storage class that provides, an azure disk via the azure-disk provisioner. 7.1.5. Google Cloud Supports internal Red Hat OpenShift Data Foundation clusters only. 
An internal cluster must meet both, storage device requirements and have a storage class that provides, a GCE Persistent Disk via the gce-pd provisioner. 7.1.6. Red Hat OpenStack Platform [Technology Preview] Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An internal cluster must meet both, storage device requirements and have a storage class that provides a standard disk via the Cinder provisioner. 7.1.7. IBM Power Supports internal Red Hat OpenShift Data Foundation clusters and consuming external clusters. An Internal cluster must meet both, storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.8. IBM Z and IBM(R) LinuxONE Supports internal Red Hat OpenShift Data Foundation clusters. Also, supports external mode where Red Hat Ceph Storage is running on x86. An Internal cluster must meet both, storage device requirements and have a storage class providing local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.1.9. ROSA with hosted control planes (HCP) Supports internal Red Hat OpenShift Data Foundation clusters only. An internal cluster must meet both, storage device requirements and have a storage class that provides AWS EBS volumes via gp3-csi provisioner. 7.1.10. Any platform Supports internal clusters and consuming external clusters. An internal cluster must meet both the storage device requirements and have a storage class that provide local SSD (NVMe/SATA/SAS, SAN) via the Local Storage Operator. 7.2. External mode requirement 7.2.1. Red Hat Ceph Storage To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the lab Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Select Service Type as ODF as Self-Managed Service . Select appropriate Version from the drop down. On the Versions tab, click the Supported RHCS Compatibility tab. For instructions regarding how to install a RHCS cluster, see the installation guide . 7.3. Resource requirements Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by kubernetes on OpenShift Container Platform nodes. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.1. Aggregate avaliable resource requirements for Red Hat OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 30 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices External 4 CPU (logical) 16 GiB memory Not applicable Example: For a 3 node cluster in an internal mode deployment with a single device set, a minimum of 3 x 10 = 30 units of CPU are required. For more information, see Chapter 6, Subscriptions and CPU units . For additional guidance with designing your Red Hat OpenShift Data Foundation cluster, see the ODF Sizing Tool . CPU units In this section, 1 CPU Unit maps to the Kubernetes concept of 1 CPU unit. 1 unit of CPU is equivalent to 1 core for non-hyperthreaded CPUs. 2 units of CPU are equivalent to 1 core for hyperthreaded CPUs. 
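As a small, hedged arithmetic illustration of the CPU-unit definition above, the snippet below converts the 30 logical CPU units required for internal-mode base services (Table 7.1) into 2-core subscriptions on hyperthreaded hosts; it is a sketch of the calculation only, not an official sizing or entitlement statement.

# 30 CPU units on hyperthreaded hosts: 2 units per core -> 15 cores,
# and subscriptions come in 2-core pairs -> round 15/2 up to 8.
echo $(( (30 / 2 + 1) / 2 ))   # prints 8 (two-core subscriptions, example only)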
Red Hat OpenShift Data Foundation core-based subscriptions always come in pairs (2 cores). Table 7.2. Aggregate minimum resource requirements for IBM Power Deployment Mode Base services Internal 48 CPU (logical) 192 GiB memory 3 storage devices, each with additional 500GB of disk External 24 CPU (logical) 48 GiB memory Example: For a 3 node cluster in an internal-attached devices mode deployment, a minimum of 3 x 16 = 48 units of CPU and 3 x 64 = 192 GB of memory is required. 7.3.1. Resource requirements for IBM Z and IBM LinuxONE infrastructure Red Hat OpenShift Data Foundation services consist of an initial set of base services, and can be extended with additional device sets. All of these Red Hat OpenShift Data Foundation services pods are scheduled by kubernetes on OpenShift Container Platform nodes . Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy the pod placement rules . Table 7.3. Aggregate available resource requirements for Red Hat OpenShift Data Foundation only (IBM Z and IBM(R) LinuxONE) Deployment Mode Base services Additional device Set IBM Z and IBM(R) LinuxONE minimum hardware requirements Internal 30 CPU (logical) 3 nodes with 10 CPUs (logical) each 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices 1 IFL External 4 CPU (logical) 16 GiB memory Not applicable Not applicable CPU Is the number of virtual cores defined in the hypervisor, IBM Z/VM, Kernel Virtual Machine (KVM), or both. IFL (Integrated Facility for Linux) Is the physical core for IBM Z and IBM(R) LinuxONE. Minimum system environment In order to operate a minimal cluster with 1 logical partition (LPAR), one additional IFL is required on top of the 6 IFLs. OpenShift Container Platform consumes these IFLs . 7.3.2. Minimum deployment resource requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.4. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Internal 24 CPU (logical) 72 GiB memory 3 storage devices If you want to add additional device sets, we recommend converting your minimum deployment to standard deployment. 7.3.3. Compact deployment resource requirements Red Hat OpenShift Data Foundation can be installed on a three-node OpenShift compact bare metal cluster, where all the workloads run on three strong master nodes. There are no worker or storage nodes. Important These requirements relate to OpenShift Data Foundation services only, and not to any other services, operators or workloads that are running on these nodes. Table 7.5. Aggregate resource requirements for OpenShift Data Foundation only Deployment Mode Base services Additional device Set Internal 24 CPU (logical) 72 GiB memory 3 storage devices 6 CPU (logical) 15 GiB memory 3 storage devices To configure OpenShift Container Platform on a compact bare metal cluster, see Configuring a three-node cluster and Delivering a Three-node Architecture for Edge Deployments . 7.3.4. Resource requirements for MCG only deployment An OpenShift Data Foundation cluster deployed only with the Multicloud Object Gateway (MCG) component provides the flexibility in deployment and helps to reduce the resource consumption. Table 7.6. 
Aggregate resource requirements for MCG only deployment Deployment Mode Core Database (DB) Endpoint Internal 1 CPU 4 GiB memory 0.5 CPU 4 GiB memory 1 CPU 2 GiB memory Note The defaut auto scale is between 1 - 2. 7.3.5. Resource requirements for using Network File system You can create exports using Network File System (NFS) that can then be accessed externally from the OpenShift cluster. If you plan to use this feature, the NFS service consumes 3 CPUs and 8Gi of Ram. NFS is optional and is disabled by default. The NFS volume can be accessed two ways: In-cluster: by an application pod inside of the Openshift cluster. Out of cluster: from outside of the Openshift cluster. For more information about the NFS feature, see Creating exports using NFS 7.3.6. Resource requirements for performance profiles OpenShift Data Foundation provides three performance profiles to enhance the performance of the clusters. You can choose one of these profiles based on your available resources and desired performance level during deployment or post deployment. Table 7.7. Recommended resource requirement for different performance profiles Performance profile CPU Memory Lean 24 72 GiB Balanced 30 72 GiB Performance 45 96 GiB Important Make sure to select the profiles based on the available free resources as you might already be running other workloads. 7.4. Pod placement rules Kubernetes is responsible for pod placement based on declarative placement rules. The Red Hat OpenShift Data Foundation base service placement rules for Internal cluster can be summarized as follows: Nodes are labeled with the cluster.ocs.openshift.io/openshift-storage key Nodes are sorted into pseudo failure domains if none exist Components requiring high availability are spread across failure domains A storage device must be accessible in each failure domain This leads to the requirement that there be at least three nodes, and that nodes be in three distinct rack or zone failure domains in the case of pre-existing topology labels . For additional device sets, there must be a storage device, and sufficient resources for the pod consuming it, in each of the three failure domains. Manual placement rules can be used to override default placement rules, but generally this approach is only suitable for bare metal deployments. 7.5. Storage device requirements Use this section to understand the different storage capacity requirements that you can consider when planning internal mode deployments and upgrades. We generally recommend 12 devices or less per node. This recommendation ensures both that nodes stay below cloud provider dynamic storage device attachment limits, and to limit the recovery time after node failures with local storage devices. Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Storage nodes should have at least two disks, one for the operating system and the remaining disks for OpenShift Data Foundation components. Note You can expand the storage capacity only in the increment of the capacity selected at the time of installation. 7.5.1. Dynamic storage devices Red Hat OpenShift Data Foundation permits the selection of either 0.5 TiB, 2 TiB or 4 TiB capacities as the request size for dynamic storage device sizes. The number of dynamic storage devices that can run per node is a function of the node size, underlying provisioner limits and resource requirements . 7.5.2. 
Local storage devices For local storage deployment, any disk size of 16 TiB or less can be used, and all disks should be of the same size and type. The number of local storage devices that can run per node is a function of the node size and resource requirements . Expanding the cluster in multiples of three, one node in each failure domain, is an easy way to satisfy pod placement rules . Note Disk partitioning is not supported. 7.5.3. Capacity planning Always ensure that available storage capacity stays ahead of consumption. Recovery is difficult if available storage capacity is completely exhausted, and requires more intervention than simply adding capacity or deleting or migrating content. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. When you get to 75% (near-full), either free up space or expand the cluster. When you get the 85% (full) alert, it indicates that you have run out of storage space completely and cannot free up space using standard commands. At this point, contact Red Hat Customer Support . The following tables show example node configurations for Red Hat OpenShift Data Foundation with dynamic storage devices. Table 7.8. Example initial configurations with 3 nodes Storage Device size Storage Devices per node Total capacity Usable storage capacity 0.5 TiB 1 1.5 TiB 0.5 TiB 2 TiB 1 6 TiB 2 TiB 4 TiB 1 12 TiB 4 TiB Table 7.9. Example of expanded configurations with 30 nodes (N) Storage Device size (D) Storage Devices per node (M) Total capacity (D * M * N) Usable storage capacity (D*M*N/3) 0.5 TiB 3 45 TiB 15 TiB 2 TiB 6 360 TiB 120 TiB 4 TiB 9 1080 TiB 360 TiB Chapter 8. Network requirements OpenShift Data Foundation requires that at least one network interface that is used for the cluster network to be capable of at least 10 gigabit network speeds. This section further covers different network considerations for planning deployments. 8.1. IPv6 support Red Hat OpenShift Data Foundation version 4.12 introduced the support of IPv6. IPv6 is supported in single stack only, and cannot be used simultaneously with IPv4. IPv6 is the default behavior in OpenShift Data Foundation when IPv6 is turned on in Openshift Container Platform. Red Hat OpenShift Data Foundation version 4.14 introduces IPv6 auto detection and configuration. Clusters using IPv6 will automatically be configured accordingly. OpenShift Container Platform dual stack with Red Hat OpenShift Data Foundation IPv4 is supported from version 4.13 and later. Dual stack on Red Hat OpenShift Data Foundation IPv6 is not supported. 8.2. Multi network plug-in (Multus) support OpenShift Data Foundation supports the ability to use multi-network plug-in Multus on bare metal infrastructures to improve security and performance by isolating the different types of network traffic. By using Multus, one or more network interfaces on hosts can be reserved for exclusive use of OpenShift Data Foundation. To use Multus, first run the Multus prerequisite validation tool. For instructions to use the tool, see OpenShift Data Foundation - Multus prerequisite validation tool . For more information about Multus networks, see Multiple networks . You can configure your Multus networks to use IPv4 or IPv6 as a technology preview. This works only for Multus networks that are pure IPv4 or pure IPv6. Networks cannot be mixed mode. 
Important Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. However, these features are not fully supported under Red Hat Service Level Agreements, may not be functionally complete, and are not intended for production use. As Red Hat considers making future iterations of Technology Preview features generally available, we will attempt to resolve any issues that customers experience when using these features. See Technology Preview Features Support Scope for more information. 8.2.1. Multus prerequisites In order for Ceph-CSI to communicate with a Multus-enabled CephCluster, some setup is required for Kubernetes hosts. These prerequisites require an understanding of how Multus networks are configured and how Rook uses them. This section will help clarify questions that could arise. Two basic requirements must be met: OpenShift hosts must be able to route successfully to the Multus public network. Pods on the Multus public network must be able to route successfully to OpenShift hosts. These two requirements can be broken down further as follows: For routing Kubernetes hosts to the Multus public network, each host must ensure the following: The host must have an interface connected to the Multus public network (the "public-network-interface"). The "public-network-interface" must have an IP address. A route must exist to direct traffic destined for pods on the Multus public network through the "public-network-interface". For routing pods on the Multus public network to Kubernetes hosts, the public NetworkAttachmentDefinition must be configured to ensure the following: The definition must have its IP Address Management (IPAM) configured to route traffic destined for nodes through the network. To ensure routing between the two networks works properly, no IP address assigned to a node can overlap with any IP address assigned to a pod on the Multus public network. Generally, both the NetworkAttachmentDefinition, and node configurations must use the same network technology (Macvlan) to connect to the Multus public network. Node configurations and pod configurations are interrelated and tightly coupled. Both must be planned at the same time, and OpenShift Data Foundation cannot support Multus public networks without both. The "public-network-interface" must be the same for both. Generally, the connection technology (Macvlan) should also be the same for both. IP range(s) in the NetworkAttachmentDefinition must be encoded as routes on nodes, and, in mirror, IP ranges for nodes must be encoded as routes in the NetworkAttachmentDefinition. Some installations might not want to use the same public network IP address range for both pods and nodes. In the case where there are different ranges for pods and nodes, additional steps must be taken to ensure each range routes to the other so that they act as a single, contiguous network.These requirements require careful planning. See Multus examples to help understand and implement these requirements. Tip There are often ten or more OpenShift Data Foundation pods per storage node. The pod address space usually needs to be several times larger (or more) than the host address space. OpenShift Container Platform recommends using the NMState operator's NodeNetworkConfigurationPolicies as a good method of configuring hosts to meet host requirements. Other methods can be used as well if needed. 8.2.1.1. 
Multus network address space sizing Networks must have enough addresses to account for the number of storage pods that will attach to the network, plus some additional space to account for failover events. It is highly recommended to also plan ahead for future storage cluster expansion and estimate how large the OpenShift Container Platform and OpenShift Data Foundation clusters may grow in the future. Reserving addresses for future expansion means that there is lower risk of depleting the IP address pool unexpectedly during expansion. It is safest to allocate 25% more addresses (or more) than the total maximum number of addresses that are expected to be needed at one time in the storage cluster's lifetime. This helps lower the risk of depleting the IP address pool during failover and maintenance. For ease of writing corresponding network CIDR configurations, rounding totals up to the nearest power of 2 is also recommended. Three ranges must be planned: If used, the public Network Attachment Definition address space must include enough IPs for the total number of ODF pods running in the openshift-storage namespace If used, the cluster Network Attachment Definition address space must include enough IPs for the total number of OSD pods running in the openshift-storage namespace If the Multus public network is used, the node public network address space must include enough IPs for the total number of OpenShift nodes connected to the Multus public network. Note If the cluster uses a unified address space for the public Network Attachment Definition and node public network attachments, add these two requirements together. This is relevant, for example, if DHCP is used to manage IPs for the public network. Important For users with environments with piecewise CIDRs, that is one network with two or more different CIDRs, auto-detection is likely to find only a single CIDR, meaning Ceph daemons may fail to start or fail to connect to the network. See this knowledgebase article for information to mitigate this issue. 8.2.1.1.1. Recommendation The following recommendation suffices for most organizations. The recommendation uses the last 6.25% (1/16) of the reserved private address space (192.168.0.0/16), assuming the beginning of the range is in use or otherwise desirable. Approximate maximums (accounting for 25% overhead) are given. Table 8.1. Multus recommendations Network Network range CIDR Approximate maximums Public Network Attachment Definition 192.168.240.0/21 1,600 total ODF pods Cluster Network Attachment Definition 192.168.248.0/22 800 OSDs Node public network attachments 192.168.252.0/23 400 total nodes 8.2.1.1.2. Calculation More detailed address space sizes can be determined as follows: Determine the maximum number of OSDs that are likely to be needed in the future. Add 25%, then add 5. Round the result up to the nearest power of 2. This is the cluster address space size. Begin with the un-rounded number calculated in step 1. Add 64, then add 25%. Round the result up to the nearest power of 2. This is the public address space size for pods. Determine the maximum number of total OpenShift nodes (including storage nodes) that are likely to be needed in the future. Add 25%. Round the result up to the nearest power of 2. This is the public address space size for nodes. 8.2.1.2. 
Verifying requirements have been met After configuring nodes and creating the Multus public NetworkAttachmentDefinition (see Creating network attachment definitions ) check that the node configurations and NetworkAttachmentDefinition configurations are compatible. To do so, verify that each node can ping pods via the public network. Start a daemonset similar to the following example: List the Multus public network IPs assigned to test pods using a command like the following example. This example command lists all IPs assigned to all test pods (each will have 2 IPs). From the output, it is easy to manually extract the IPs associated with the Multus public network. In the example, test pod IPs on the Multus public network are: 192.168.20.22 192.168.20.29 192.168.20.23 Check that each node (NODE) can reach all test pod IPs over the public network: If any node does not get a successful ping to a running pod, it is not safe to proceed. Diagnose and fix the issue, then repeat this testing. Some reasons you may encounter a problem include: The host may not be properly attached to the Multus public network (via Macvlan) The host may not be properly configured to route to the pod IP range The public NetworkAttachmentDefinition may not be properly configured to route back to the host IP range The host may have a firewall rule blocking the connection in either direction The network switch may have a firewall or security rule blocking the connection Suggested debugging steps: Ensure nodes can ping each other over using public network "shim" IPs Ensure the output of ip address 8.2.2. Multus examples The relevant network plan for this cluster is as follows: A dedicated NIC provides eth0 for the Multus public network Macvlan will be used to attach OpenShift pods to eth0 The IP range 192.168.0.0/16 is free in the example cluster - pods and nodes will share this IP range on the Multus public network Nodes will get the IP range 192.168.252.0/22 (this allows up to 1024 Kubernetes hosts, more than the example organization will ever need) Pods will get the remainder of the ranges (192.168.0.1 to 192.168.251.255) The example organization does not want to use DHCP unless necessary; therefore, nodes will have IPs on the Multus network (via eth0) assigned statically using the NMState operator 's NodeNetworkConfigurationPolicy resources With DHCP unavailable, Whereabouts will be used to assign IPs to the Multus public network because it is easy to use out of the box There are 3 compute nodes in the OpenShift cluster on which OpenShift Data Foundation also runs: compute-0, compute-1, and compute-2 Nodes' network policies must be configured to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Generally speaking, the host must connect to the Multus public network using the same technology that pods do. Pod connections are configured in the Network Attachment Definition. Because the host IP range is a subset of the whole range, hosts are not able to route to pods simply by IP assignment. A route must be added to hosts to allow them to route to the whole 192.168.0.0/16 range. NodeNetworkConfigurationPolicy desiredState specs will look like the following: For static IP management, each node must have a different NodeNetworkConfigurationPolicy. Select separate nodes for each policy to configure static networks. 
A "shim" interface is used to connect hosts to the Multus public network using the same technology as the Network Attachment Definition will use. The host's "shim" must be of the same type as planned for pods, macvlan in this example. The interface must match the Multus public network interface selected in planning, eth0 in this example. The ipv4 (or ipv6` ) section configures node IP addresses on the Multus public network. IPs assigned to this node's shim must match the plan. This example uses 192.168.252.0/22 for node IPs on the Multus public network. For static IP management, don't forget to change the IP for each node. The routes section instructs nodes how to reach pods on the Multus public network. The route destination(s) must match the CIDR range planned for pods. In this case, it is safe to use the entire 192.168.0.0/16 range because it won't affect nodes' ability to reach other nodes over their "shim" interfaces. In general, this must match the CIDR used in the Multus public NetworkAttachmentDefinition. The NetworkAttachmentDefinition for the public network would look like the following, using Whereabouts' exclude option to simplify the range request. The Whereabouts routes[].dst option ensures pods route to hosts via the Multus public network. This must match the plan for how to attach pods to the Multus public network. Nodes must attach using the same technology, Macvlan. The interface must match the Multus public network interface selected in planning, eth0 in this example. The plan for this example uses whereabouts instead of DHCP for assigning IPs to pods. For this example, it was decided that pods could be assigned any IP in the range 192.168.0.0/16 with the exception of a portion of the range allocated to nodes (see 5). whereabouts provides an exclude directive that allows easily excluding the range allocated for nodes from its pool. This allows keeping the range directive (see 4 ) simple. The routes section instructs pods how to reach nodes on the Multus public network. The route destination ( dst ) must match the CIDR range planned for nodes. 8.2.3. Holder pod deprecation Due to the recurring maintenance impact of holder pods during upgrade (holder pods are present when Multus is enabled), holder pods are deprecated in the ODF v4.18 release and targeted for removal in the ODF v4.18 release. This deprecation requires completing additional network configuration actions before removing the holder pods. In ODF v4.16, clusters with Multus enabled are upgraded to v4.17 following standard upgrade procedures. After the ODF cluster (with Multus enabled) is successfully upgraded to v4.17, administrators must then complete the procedure documented in the article Disabling Multus holder pods to disable and remove holder pods. Be aware that this disabling procedure is time consuming; however, it is not critical to complete the entire process immediately after upgrading to v4.17. It is critical to complete the process before ODF is upgraded to v4.18. 8.2.4. Segregating storage traffic using Multus By default, Red Hat OpenShift Data Foundation is configured to use the Red Hat OpenShift Software Defined Network (SDN). 
The default SDN carries the following types of traffic: Pod-to-pod traffic Pod-to-storage traffic, known as public network traffic when the storage is OpenShift Data Foundation OpenShift Data Foundation internal replication and rebalancing traffic, known as cluster network traffic There are three ways to segregate OpenShift Data Foundation from OpenShift default network: Reserve a network interface on the host for the public network of OpenShift Data Foundation Pod-to-storage and internal storage replication traffic coexist on a network that is isolated from pod-to-pod network traffic. Application pods have access to the maximum public network storage bandwidth when the OpenShift Data Foundation cluster is healthy. When the OpenShift Data Foundation cluster is recovering from failure, the application pods will have reduced bandwidth due to ongoing replication and rebalancing traffic. Reserve a network interface on the host for OpenShift Data Foundation's cluster network Pod-to-pod and pod-to-storage traffic both continue to use OpenShift's default network. Pod-to-storage bandwidth is less affected by the health of the OpenShift Data Foundation cluster. Pod-to-pod and pod-to-storage OpenShift Data Foundation traffic might contend for network bandwidth in busy OpenShift clusters. The storage internal network often has an overabundance of bandwidth that is unused, reserved for use during failures. Reserve two network interfaces on the host for OpenShift Data Foundation: one for the public network and one for the cluster network Pod-to-pod, pod-to-storage, and storage internal traffic are all isolated, and none of the traffic types will contend for resources. Service level agreements for all traffic types are more able to be ensured. During healthy runtime, more network bandwidth is reserved but unused across all three networks. Dual network interface segregated configuration schematic example: Triple network interface full segregated configuration schematic example: 8.2.5. When to use Multus Use Multus for OpenShift Data Foundation when you need the following: Improved latency - Multus with ODF always improves latency. Use host interfaces at near-host network speeds and bypass OpenShift's software-defined Pod network. You can also perform Linux per interface level tuning for each interface. Improved bandwidth - Dedicated interfaces for OpenShift Data Foundation client data traffic and internal data traffic. These dedicated interfaces reserve full bandwidth. Improved security - Multus isolates storage network traffic from application network traffic for added security. Bandwidth or performance might not be isolated when networks share an interface, however, you can use QoS or traffic shaping to prioritize bandwidth on shared interfaces. 8.2.6. Multus configuration To use Multus, you must create network attachment definitions (NADs) before deploying the OpenShift Data Foundation cluster, which is later attached to the cluster. For more information, see Creating network attachment definitions . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A Container Network Interface (CNI) configuration inside each of these CRs defines how that interface is created. 
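For orientation, the sketch below shows how the network attachment definitions are typically referenced when the storage cluster itself is created. Treat it as an assumption to verify rather than a prescription: the spec.network field names and the cluster-net name are not taken from this guide, so check them against the StorageCluster CRD and the deployment documentation for your ODF version (public-net matches the example used elsewhere in this chapter).

# Sketch only: client-side dry run, nothing is created on the cluster.
cat <<'EOF' | oc apply --dry-run=client -f -
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  network:
    provider: multus
    selectors:
      public: openshift-storage/public-net      # public Network Attachment Definition
      cluster: openshift-storage/cluster-net    # cluster Network Attachment Definition, if used
EOF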
OpenShift Data Foundation supports the macvlan driver, which includes the following features: Each connection gets a sub-interface of the parent interface with its own MAC address and is isolated from the host network. Uses less CPU and provides better throughput than Linux bridge or ipvlan . Bridge mode is almost always the best choice. Near-host performance when network interface card (NIC) supports virtual ports/virtual local area networks (VLANs) in hardware. OpenShift Data Foundation supports the following two types IP address management: whereabouts DHCP Uses OpenShift/Kubernetes leases to select unique IP addresses per Pod. Does not require range field. Does not require a DHCP server to provide IPs for Pods. Network DHCP server can give out the same range to Multus Pods as well as any other hosts on the same network. Caution If there is a DHCP server, ensure Multus configured IPAM does not give out the same range so that multiple MAC addresses on the network cannot have the same IP. 8.2.7. Requirements for Multus configuration Prerequisites The interface used for the public network must have the same interface name on each OpenShift storage and worker node, and the interfaces must all be connected to the same underlying network. The interface used for the cluster network must have the same interface name on each OpenShift storage node, and the interfaces must all be connected to the same underlying network. Cluster network interfaces do not have to be present on the OpenShift worker nodes. Each network interface used for the public or cluster network must be capable of at least 10 gigabit network speeds. Each network requires a separate virtual local area network (VLAN) or subnet. See Creating Multus networks for the necessary steps to configure a Multus based configuration on bare metal. Chapter 9. Disaster Recovery Disaster Recovery (DR) helps an organization to recover and resume business critical functions or normal operations when there are disruptions or disasters. OpenShift Data Foundation provides High Availability (HA) & DR solutions for stateful apps which are broadly categorized into two broad categories: Metro-DR : Single Region and cross data center protection with no data loss. Regional-DR : Cross Region protection with minimal potential data loss. Disaster Recovery with stretch cluster : Single OpenShift Data Foundation cluster is stretched between two different locations to provide the storage infrastructure with disaster recovery capabilities. 9.1. Metro-DR Metropolitan disaster recovery (Metro-DR) is composed of Red Hat Advanced Cluster Management for Kubernetes (RHACM), Red Hat Ceph Storage and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. This release of Metro-DR solution provides volume persistent data and metadata replication across sites that are geographically dispersed. In the public cloud these would be similar to protecting from an Availability Zone failure. Metro-DR ensures business continuity during the unavailability of a data center with no data loss. This solution is entitled with Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation Advanced SKUs and related bundles. Important You can now easily set up Metropolitan disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article . 
Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure that the primary managed cluster (Site-1) is co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. Note Hub recovery for Metro-DR is a Technology Preview feature and is subject to Technology Preview support limitations. For detailed solution requirements, see Metro-DR requirements , deployment requirements for Red Hat Ceph Storage stretch cluster with arbiter and RHACM requirements . 9.2. Regional-DR Regional disaster recovery (Regional-DR) is composed of Red Hat Advanced Cluster Management for Kubernetes (RHACM) and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. It is built on Asynchronous data replication and hence could have a potential data loss but provides the protection against a broad set of failures. Red Hat OpenShift Data Foundation is backed by Ceph as the storage provider, whose lifecycle is managed by Rook and it's enhanced with the ability to: Enable pools for mirroring. Automatically mirror images across RBD pools. Provides csi-addons to manage per Persistent Volume Claim mirroring. This release of Regional-DR supports Multi-Cluster configuration that is deployed across different regions and data centers. For example, a 2-way replication across two managed clusters located in two different regions or data centers. This solution is entitled with Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation Advanced SKUs and related bundles. Important You can now easily set up Regional disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article . Prerequisites Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure that the primary managed cluster (Site-1) is co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. 
For detailed solution requirements, see Regional-DR requirements and RHACM requirements . 9.3. Disaster Recovery with stretch cluster In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This feature is currently intended for deployment in the OpenShift Container Platform on-premises and in the same location. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for no data loss DR solution deployed over multiple data centers with low latency networks. Note The stretch cluster solution is designed for deployments where latencies do not exceed 10 ms maximum round-trip time (RTT) between the zones containing data volumes. For Arbiter nodes follow the latency requirements specified for etcd, see Guidance for Red Hat OpenShift Container Platform Clusters - Deployments Spanning Multiple Sites(Data Centers/Regions) . Contact Red Hat Customer Support if you are planning to deploy with higher latencies. To use the stretch cluster, You must have a minimum of five nodes across three zones, where: Two nodes per zone are used for each data-center zone, and one additional zone with one node is used for arbiter zone (the arbiter can be on a master node). All the nodes must be manually labeled with the zone labels prior to cluster creation. For example, the zones can be labeled as: topology.kubernetes.io/zone=arbiter (master or worker node) topology.kubernetes.io/zone=datacenter1 (minimum two worker nodes) topology.kubernetes.io/zone=datacenter2 (minimum two worker nodes) For more information, see Configuring OpenShift Data Foundation for stretch cluster . To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Important You can now easily set up disaster recovery with stretch cluster for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see OpenShift Virtualization in OpenShift Container Platform guide. Chapter 10. Disconnected environment Disconnected environment is a network restricted environment where the Operator Lifecycle Manager (OLM) cannot access the default Operator Hub and image registries, which require internet connectivity. Red Hat supports deployment of OpenShift Data Foundation in disconnected environments where you have installed OpenShift Container Platform in restricted networks. To install OpenShift Data Foundation in a disconnected environment, see Using Operator Lifecycle Manager on restricted networks of the Operators guide in OpenShift Container Platform documentation. Note When you install OpenShift Data Foundation in a restricted network environment, apply a custom Network Time Protocol (NTP) configuration to the nodes, because by default, internet connectivity is assumed in OpenShift Container Platform and chronyd is configured to use the *.rhel.pool.ntp.org servers. For more information, see the Red Hat Knowledgebase solution A newly deployed OCS 4 cluster status shows as "Degraded", Why? and Configuring chrony time service of the Installing guide in OpenShift Container Platform documentation. Red Hat OpenShift Data Foundation version 4.12 introduced the Agent-based Installer for disconnected environment deployment. The Agent-based Installer allows you to use a mirror registry for disconnected installations. For more information, see Preparing to install with Agent-based Installer . 
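As the note above mentions, nodes in a restricted network usually need chrony pointed at an internal time source. The following is only a sketch of one way to do that with a MachineConfig; the NTP server name is a placeholder, and the exact Ignition version and full procedure should be taken from the "Configuring chrony time service" documentation referenced above.

# Placeholder NTP server; replace with a reachable internal time source.
cat <<'EOF' > chrony.conf
server ntp.example.internal iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF

# Wrap the file in a MachineConfig for the worker pool (repeat for master if needed).
# Note: applying a MachineConfig rolls out a reboot of the affected machine pool.
cat <<EOF | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-chrony
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/chrony.conf
        mode: 420
        overwrite: true
        contents:
          source: data:text/plain;charset=utf-8;base64,$(base64 -w0 chrony.conf)
EOF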
Packages to include for OpenShift Data Foundation When you prune the redhat-operator index image, include the following list of packages for the OpenShift Data Foundation deployment: ocs-operator odf-operator mcg-operator odf-csi-addons-operator odr-cluster-operator odr-hub-operator Optional: local-storage-operator Only for local storage deployments. Optional: odf-multicluster-orchestrator Only for Regional Disaster Recovery (Regional-DR) configuration. Important Name the CatalogSource as redhat-operators . Chapter 11. Supported and Unsupported features for IBM Power and IBM Z Table 11.1. List of supported and unsupported features on IBM Power and IBM Z Features IBM Power IBM Z Compact deployment Unsupported Unsupported Dynamic storage devices Unsupported Supported Stretched Cluster - Arbiter Supported Unsupported Federal Information Processing Standard Publication (FIPS) Unsupported Unsupported Ability to view pool compression metrics Supported Unsupported Automated scaling of Multicloud Object Gateway (MCG) endpoint pods Supported Unsupported Alerts to control overprovision Supported Unsupported Alerts when Ceph Monitor runs out of space Supported Unsupported Extended OpenShift Data Foundation control plane which allows pluggable external storage such as IBM Flashsystem Unsupported Unsupported IPV6 support Unsupported Unsupported Multus Unsupported Unsupported Multicloud Object Gateway (MCG) bucket replication Supported Unsupported Quota support for object data Supported Unsupported Minimum deployment Unsupported Unsupported Regional-Disaster Recovery (Regional-DR) with Red Hat Advanced Cluster Management (RHACM) Supported Unsupported Metro-Disaster Recovery (Metro-DR) multiple clusters with RHACM Supported Supported Single Node solution for Radio Access Network (RAN) Unsupported Unsupported Support for network file system (NFS) services Supported Unsupported Ability to change Multicloud Object Gateway (MCG) account credentials Supported Unsupported Multicluster monitoring in Red Hat Advanced Cluster Management console Supported Unsupported Deletion of expired objects in Multicloud Object Gateway lifecycle Supported Unsupported Agnostic deployment of OpenShift Data Foundation on any Openshift supported platform Unsupported Unsupported Installer provisioned deployment of OpenShift Data Foundation using bare metal infrastructure Unsupported Unsupported Openshift dual stack with OpenShift Data Foundation using IPv4 Unsupported Unsupported Ability to disable Multicloud Object Gateway external service during deployment Unsupported Unsupported Ability to allow overriding of default NooBaa backing store Supported Unsupported Allowing ocs-operator to deploy two MGR pods, one active and one standby Supported Unsupported Disaster Recovery for brownfield deployments Unsupported Supported Automatic scaling of RGW Unsupported Unsupported Chapter 12. steps To start deploying your OpenShift Data Foundation, you can use the internal mode within OpenShift Container Platform or use external mode to make available services from a cluster running outside of OpenShift Container Platform. Depending on your requirement, go to the respective deployment guides. 
Internal mode Deploying OpenShift Data Foundation using Amazon Web Services Deploying OpenShift Data Foundation using Bare Metal Deploying OpenShift Data Foundation using VMware vSphere Deploying OpenShift Data Foundation using Microsoft Azure Deploying OpenShift Data Foundation using Google Cloud Deploying OpenShift Data Foundation using Red Hat OpenStack Platform [Technology Preview] Deploying OpenShift Data Foundation on IBM Power Deploying OpenShift Data Foundation on IBM Z Deploying OpenShift Data Foundation on any platform External mode Deploying OpenShift Data Foundation in external mode Internal or external For deploying multiple clusters, see Deploying multiple OpenShift Data Foundation clusters . | [
"apiVersion: apps/v1 kind: DaemonSet metadata: name: multus-public-test namespace: openshift-storage labels: app: multus-public-test spec: selector: matchLabels: app: multus-public-test template: metadata: labels: app: multus-public-test annotations: k8s.v1.cni.cncf.io/networks: openshift-storage/public-net # spec: containers: - name: test image: quay.io/ceph/ceph:v18 # image known to have 'ping' installed command: - sleep - infinity resources: {}",
"oc -n openshift-storage describe pod -l app=multus-public-test | grep -o -E 'Add .* from .*' Add eth0 [10.128.2.86/23] from ovn-kubernetes Add net1 [192.168.20.22/24] from default/public-net Add eth0 [10.129.2.173/23] from ovn-kubernetes Add net1 [192.168.20.29/24] from default/public-net Add eth0 [10.131.0.108/23] from ovn-kubernetes Add net1 [192.168.20.23/24] from default/public-net",
"oc debug node/NODE Starting pod/NODE-debug To use host binaries, run `chroot /host` Pod IP: **** If you don't see a command prompt, try pressing enter. sh-5.1# chroot /host sh-5.1# ping 192.168.20.22 PING 192.168.20.22 (192.168.20.22) 56(84) bytes of data. 64 bytes from 192.168.20.22: icmp_seq=1 ttl=64 time=0.093 ms 64 bytes from 192.168.20.22: icmp_seq=2 ttl=64 time=0.056 ms ^C --- 192.168.20.22 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1046ms rtt min/avg/max/mdev = 0.056/0.074/0.093/0.018 ms sh-5.1# ping 192.168.20.29 PING 192.168.20.29 (192.168.20.29) 56(84) bytes of data. 64 bytes from 192.168.20.29: icmp_seq=1 ttl=64 time=0.403 ms 64 bytes from 192.168.20.29: icmp_seq=2 ttl=64 time=0.181 ms ^C --- 192.168.20.29 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1007ms rtt min/avg/max/mdev = 0.181/0.292/0.403/0.111 ms sh-5.1# ping 192.168.20.23 PING 192.168.20.23 (192.168.20.23) 56(84) bytes of data. 64 bytes from 192.168.20.23: icmp_seq=1 ttl=64 time=0.329 ms 64 bytes from 192.168.20.23: icmp_seq=2 ttl=64 time=0.227 ms ^C --- 192.168.20.23 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1047ms rtt min/avg/max/mdev = 0.227/0.278/0.329/0.051 ms",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-0 namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-0 desiredState: interfaces: - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan state: up mac-vlan: base-iface: eth0 mode: bridge promiscuous: true ipv4: enabled: true dhcp: false address: - ip: 192.168.252.1 # STATIC IP FOR compute-0 prefix-length: 22 routes: config: - destination: 192.168.0.0/16 next-hop-interface: odf-pub-shim --- apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-1 namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-1 desiredState: interfaces: - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan state: up mac-vlan: base-iface: eth0 mode: bridge promiscuous: true ipv4: enabled: true dhcp: false address: - ip: 192.168.252.1 # STATIC IP FOR compute-1 prefix-length: 22 routes: config: - destination: 192.168.0.0/16 next-hop-interface: odf-pub-shim --- apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ceph-public-net-shim-compute-2 # [1] namespace: openshift-storage spec: nodeSelector: node-role.kubernetes.io/worker: \"\" kubernetes.io/hostname: compute-2 # [2] desiredState: Interfaces: [3] - name: odf-pub-shim description: Shim interface used to connect host to OpenShift Data Foundation public Multus network type: mac-vlan # [4] state: up mac-vlan: base-iface: eth0 # [5] mode: bridge promiscuous: true ipv4: # [6] enabled: true dhcp: false address: - ip: 192.168.252.2 # STATIC IP FOR compute-2 # [7] prefix-length: 22 routes: # [8] config: - destination: 192.168.0.0/16 # [9] next-hop-interface: odf-pub-shim",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: public-net namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", # [1] \"master\": \"eth0\", # [2] \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", # [3] \"range\": \"192.168.0.0/16\", # [4] \"exclude\": [ \"192.168.252.0/22\" # [5] ], \"routes\": [ # [6] {\"dst\": \"192.168.252.0/22\"} # [7] ] } }'"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/planning_your_deployment/index |
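As a practical note on the Chapter 10 package list above, pruning the redhat-operator index image is typically done with opm. The sketch below is an assumption about the invocation for an ODF 4.18 / OpenShift 4.18 environment; the index tag and the target mirror registry are placeholders to replace with your own values, and local-storage-operator or odf-multicluster-orchestrator can be appended when those optional components are needed.

opm index prune \
    -f registry.redhat.io/redhat/redhat-operator-index:v4.18 \
    -p ocs-operator,odf-operator,mcg-operator,odf-csi-addons-operator,odr-cluster-operator,odr-hub-operator \
    -t mirror.example.com:5000/olm/redhat-operator-index:v4.18

The pruned image is then pushed to the mirror registry and referenced by a CatalogSource named redhat-operators, as required above.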
5.7. Manually Unconfiguring Client Machines | 5.7. Manually Unconfiguring Client Machines A machine may need to be removed from one IdM domain and moved to another domain or a virtual machine may be copied. There are a number of different situations where an IdM client needs to be reconfigured. The easiest solution is to uninstall the client and then configure it afresh. Use the --updatedns option, as when installing a client, to update the domain DNS configuration automatically. If it is not possible to uninstall the client directly, then the IdM configuration can be manually removed from the client system. Warning When a machine is unenrolled, the procedure cannot be undone. The machine can only be enrolled again. On the client, remove the old hostname from the main keytab. This can be done by removing every principal in the realm or by removing specific principals. For example, to remove all principals: To remove specific principals: On the client system, disable tracking in certmonger for every certificate. Each certificate must be removed from tracking individually. First, list every certificate being tracked, and extract the database and nickname for each certificate. The number of certificates depends on the configured services for the host. Then, disable tracking for each. For example: On the IdM server, remove the old host from the IdM DNS domain. While this is optional, it cleans up the old IdM entries associated with the system and allows it to be re-enrolled cleanly at a later time. If the system should be re-added to a new IdM domain - such as a virtual machine which was moved from one location to another - then the system can be rejoined to IdM using the ipa-join command on the client system. | [
"ipa-client-install --uninstall --updatedns",
"[jsmith@client ~]USD ipa-rmkeytab -k /etc/krb5.keytab -r EXAMPLE.COM",
"[jsmith@client ~]USD ipa-rmkeytab -k /etc/krb5.keytab -p host/[email protected]",
"[jsmith@client ~]USD ipa-getcert list",
"[jsmith@client ~]USD ipa-getcert stop-tracking -n \"Server-Cert\" -d /etc/httpd/alias",
"[jsmith@server ~]USD kinit admin [jsmith@server ~]USD ipa host-del server.example.com",
"[jsmith@client ~]USD ipa-join"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/manually-unconfig-machines |
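The per-certificate stop-tracking step above can be scripted when a host tracks many certificates. The loop below is a sketch only; it assumes the certmonger request IDs printed by ipa-getcert list can be passed to stop-tracking with -i, which should be confirmed on your RHEL 6 client before relying on it.

# Stop tracking every certificate by its certmonger request ID.
for ID in $(ipa-getcert list | sed -n "s/^ *Request ID '\(.*\)':.*/\1/p"); do
    ipa-getcert stop-tracking -i "$ID"
done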
Chapter 1. Introduction | Chapter 1. Introduction Red Hat's Trusted Profile Analyzer (RHTPA) is a proactive service that assists in risk management of Open Source Software (OSS) packages and dependencies. The Trusted Profile Analyzer service brings awareness to and remediation of OSS vulnerabilities discovered within the software supply chain. The Red Hat Trusted Profile Analyzer documentation is available here . | null | https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/release_notes/introduction |
Chapter 4. Installing a cluster on IBM Power Virtual Server with customizations | Chapter 4. Installing a cluster on IBM Power Virtual Server with customizations In OpenShift Container Platform version 4.17, you can install a customized cluster on infrastructure that the installation program provisions on IBM Power Virtual Server. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.5. 
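After downloading and extracting the installation program as described above, a couple of optional sanity checks can confirm the archive and binary are intact; the exact version output varies by release.

# Optional sanity checks after the download and extraction steps above.
sha256sum openshift-install-linux.tar.gz   # compare against the published checksum, if provided
./openshift-install version                # confirms the binary runs on this host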
Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 4.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select powervs as the platform to target. Select the region to deploy the cluster to. Select the zone to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 4.6.1. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id region: powervs-region zone: powervs-zone powervsResourceGroup: "ibmcloud-resource-group" 10 serviceInstanceGUID: "powervs-region-service-instance-guid" vpcRegion : vpc-region publish: External pullSecret: '{"auths": ...}' 11 sshKey: ssh-ed25519 AAAA... 12 1 5 If you do not provide these parameters and values, the installation program provides the default value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 The smtLevel specifies the level of SMT to set to the control plane and compute machines. The supported values are 1, 2, 4, 8, 'off' and 'on' . The default value is 8. The smtLevel 'off' sets SMT to off and smtlevel 'on' sets SMT to the default value 8 on the cluster nodes. Note When simultaneous multithreading (SMT), or hyperthreading is not enabled, one vCPU is equivalent to one physical core. When enabled, total vCPUs is computed as: (Thread(s) per core * Core(s) per socket) * Socket(s). The smtLevel controls the threads per core. Lower SMT levels may require additional assigned cores when deploying the cluster nodes. You can do this by setting the 'processors' parameter in the install-config.yaml file to an appropriate value to meet the requirements for deploying OpenShift Container Platform successfully. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 10 The name of an existing resource group. 11 Required. The installation program prompts you for this value. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 4.6.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. 
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.7. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. 
While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 
4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 4.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.9. 
Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 4.11. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 4.12. Next steps Customize your cluster If necessary, you can opt out of remote health reporting | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IBMCLOUD_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id region: powervs-region zone: powervs-zone powervsResourceGroup: \"ibmcloud-resource-group\" 10 serviceInstanceGUID: \"powervs-region-service-instance-guid\" vpcRegion : vpc-region publish: External pullSecret: '{\"auths\": ...}' 11 sshKey: ssh-ed25519 AAAA... 12",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_ibm_power_virtual_server/installing-ibm-power-vs-customizations |
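To tie the client installation and cluster login steps above together, the following is a minimal, non-authoritative shell sketch for a Linux host. The archive name, target directory, and installation directory are assumptions and must be adjusted to match the file you downloaded and the directory your installer used.

# Assumed archive name and install locations; adjust for your environment.
tar xvf openshift-client-linux.tar.gz        # unpack the downloaded oc client archive
sudo mv oc /usr/local/bin/                   # place oc in a directory that is on PATH
command -v oc                                # confirm oc resolves through PATH
oc version --client                          # verify the client binary runs
export KUBECONFIG=~/ocp-install/auth/kubeconfig   # assumed installation directory path
oc whoami                                    # expected output: system:admin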
Configuration Guide | Configuration Guide Red Hat Ceph Storage 5 Configuration settings for Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/configuration_guide/index |
8.5. Remediating the System to Align with a Specific Baseline Using the SSG Ansible Playbook | 8.5. Remediating the System to Align with a Specific Baseline Using the SSG Ansible Playbook Use this procedure to remediate your system with a specific baseline using the Ansible playbook file from the SCAP Security Guide project. This example uses the Protection Profile for General Purpose Operating Systems (OSPP). Warning If not used carefully, running the system evaluation with the Remediate option enabled might render the system non-functional. Red Hat does not provide any automated method to revert changes made by security-hardening remediations. Remediations are supported on RHEL systems in the default configuration. If your system has been altered after the installation, running remediation might not make it compliant with the required security profile. Prerequisites The scap-security-guide package is installed on your RHEL 7 system. The ansible package is installed. See the Ansible Installation Guide for more information. Procedure Remediate your system to align with OSPP using Ansible: Restart the system. Verification Evaluate compliance of the system with the OSPP profile, and save scan results in the ospp_report.html file: Additional Resources scap-security-guide(8) and oscap(8) man pages Ansible Documentation | [
"~]# ansible-playbook -i localhost, -c local /usr/share/scap-security-guide/ansible/ssg-rhel7-role-ospp.yml",
"~]# oscap xccdf eval --profile ospp --report ospp_report.html /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/remediating-the-system-to-align-with-baseline-using-the-ssg-ansible-playbook_scanning-the-system-for-configuration-compliance-and-vulnerabilities |
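The commands above remediate and scan the local system. As a hedged sketch only, the same SCAP Security Guide playbook can also be run against remote machines through an ordinary Ansible inventory; the inventory file name and host group below are assumptions, not part of the documented procedure.

# Hypothetical inventory of systems to remediate
cat > inventory.ini <<'EOF'
[rhel7_hosts]
server1.example.com
server2.example.com
EOF

# Preview the changes first, then apply them; afterwards run the oscap
# evaluation from the procedure above on each remediated host.
ansible-playbook -i inventory.ini --check /usr/share/scap-security-guide/ansible/ssg-rhel7-role-ospp.yml
ansible-playbook -i inventory.ini /usr/share/scap-security-guide/ansible/ssg-rhel7-role-ospp.yml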
2.2. Using authconfig | 2.2. Using authconfig The authconfig tool can help configure what kind of data store to use for user credentials, such as LDAP. On Red Hat Enterprise Linux, authconfig has both GUI and command-line options to configure any user data stores. The authconfig tool can configure the system to use specific services - SSSD, LDAP, NIS, or Winbind - for its user database, along with using different forms of authentication mechanisms. Important To configure Identity Management systems, Red Hat recommends using the ipa-client-install utility or the realmd system instead of authconfig . The authconfig utilities are limited and substantially less flexible. For more information, see Section 2.1, "Identity Management Tools for System Authentication" . The following three authconfig utilities are available for configuring authentication settings: authconfig-gtk provides a full graphical interface. authconfig provides a command-line interface for manual configuration. authconfig-tui provides a text-based UI. Note that this utility has been deprecated. All of these configuration utilities must be run as root . 2.2.1. Tips for Using the authconfig CLI The authconfig command-line tool updates all of the configuration files and services required for system authentication, according to the settings passed to the script. Along with providing even more identity and authentication configuration options than can be set through the UI, the authconfig tool can also be used to create backup and kickstart files. For a complete list of authconfig options, check the help output and the man page. There are some things to remember when running authconfig : With every command, use either the --update or --test option. One of those options is required for the command to run successfully. Using --update writes the configuration changes. The --test option displays the changes but does not apply the changes to the configuration. If the --update option is not used, then the changes are not written to the system configuration files. The command line can be used to update existing configuration as well as to set new configuration. Because of this, the command line does not enforce that required attributes are used with a given invocation (because the command may be updating otherwise complete settings). When editing the authentication configuration, be very careful that the configuration is complete and accurate. Changing the authentication settings to incomplete or wrong values can lock users out of the system. Use the --test option to confirm that the settings are proper before using the --update option to write them. Each enable option has a corresponding disable option. 2.2.2. Installing the authconfig UI The authconfig UI is not installed by default, but it can be useful for administrators to make quick changes to the authentication configuration. To install the UI, install the authconfig-gtk package. This has dependencies on some common system packages, such as the authconfig command-line tool, Bash, and Python. Most of those are installed by default. 2.2.3. Launching the authconfig UI Open the terminal and log in as root. Run the system-config-authentication command. Important Any changes take effect immediately when the authconfig UI is closed. There are three configuration tabs in the Authentication dialog box: Identity & Authentication , which configures the resource used as the identity store (the data repository where the user IDs and corresponding credentials are stored). 
Advanced Options , which configures authentication methods other than passwords or certificates, like smart cards and fingerprint. Password Options , which configures password authentication methods. Figure 2.1. authconfig Window 2.2.4. Testing Authentication Settings It is critical that authentication is fully and properly configured. Otherwise all users (even root) could be locked out of the system, or some users blocked. The --test option prints all of the authentication configuration for the system, for every possible identity and authentication mechanism. This shows both the settings for what is enabled and what areas are disabled. The test option can be run by itself to show the full, current configuration or it can be used with an authconfig command to show how the configuration will be changed (without actually changing it). This can be very useful in verifying that the proposed authentication settings are complete and correct. 2.2.5. Saving and Restoring Configuration Using authconfig Changing authentication settings can be problematic. Improperly changing the configuration can wrongly exclude users who should have access, can cause connections to the identity store to fail, or can even lock all access to a system. Before editing the authentication configuration, it is strongly recommended that administrators take a backup of all configuration files. This is done with the --savebackup option. The authentication configuration can be restored to any saved version using the --restorebackup option, with the name of the backup to use. The authconfig command saves an automatic backup every time the configuration is altered. It is possible to restore the last backup using the --restorelastbackup option. | [
"yum install authconfig-gtk Loaded plugins: langpacks, product-id, subscription-manager Resolving Dependencies --> Running transaction check ---> Package authconfig-gtk.x86_64 0:6.2.8-8.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: authconfig-gtk x86_64 6.2.8-8.el7 RHEL-Server 105 k Transaction Summary ================================================================================ Install 1 Package ... 8<",
"authconfig --test caching is disabled nss_files is always enabled nss_compat is disabled nss_db is disabled nss_hesiod is disabled hesiod LHS = \"\" hesiod RHS = \"\" nss_ldap is disabled LDAP+TLS is disabled LDAP server = \"\" LDAP base DN = \"\" nss_nis is disabled NIS server = \"\" NIS domain = \"\" nss_nisplus is disabled nss_winbind is disabled SMB workgroup = \"MYGROUP\" SMB servers = \"\" SMB security = \"user\" SMB realm = \"\" Winbind template shell = \"/bin/false\" SMB idmap range = \"16777216-33554431\" nss_sss is enabled by default nss_wins is disabled nss_mdns4_minimal is disabled DNS preference over NSS or WINS is disabled pam_unix is always enabled shadow passwords are enabled password hashing algorithm is sha512 pam_krb5 is disabled krb5 realm = \"#\" krb5 realm via dns is disabled krb5 kdc = \"\" krb5 kdc via dns is disabled krb5 admin server = \"\" pam_ldap is disabled LDAP+TLS is disabled LDAP server = \"\" LDAP base DN = \"\" LDAP schema = \"rfc2307\" pam_pkcs11 is disabled use only smartcard for login is disabled smartcard module = \"\" smartcard removal action = \"\" pam_fprintd is disabled pam_ecryptfs is disabled pam_winbind is disabled SMB workgroup = \"MYGROUP\" SMB servers = \"\" SMB security = \"user\" SMB realm = \"\" pam_sss is disabled by default credential caching in SSSD is enabled SSSD use instead of legacy services if possible is enabled IPAv2 is disabled IPAv2 domain was not joined IPAv2 server = \"\" IPAv2 realm = \"\" IPAv2 domain = \"\" pam_pwquality is enabled (try_first_pass local_users_only retry=3 authtok_type=) pam_passwdqc is disabled () pam_access is disabled () pam_mkhomedir or pam_oddjob_mkhomedir is disabled (umask=0077) Always authorize local users is enabled () Authenticate system accounts against network services is disabled",
"authconfig --savebackup=/backups/authconfigbackup20200701",
"authconfig --restorebackup=/backups/authconfigbackup20200701",
"authconfig --restorelastbackup"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/authconfig-install |
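To make the --test and --update workflow concrete, here is a hedged example that points system authentication at an LDAP directory. The server URI and base DN are placeholders, and the options your environment needs may differ; taking a backup first follows the guidance above.

# Back up the current configuration before changing anything
authconfig --savebackup=/backups/pre-ldap-change
# Preview the proposed settings without writing them
authconfig --enableldap --enableldapauth \
           --ldapserver=ldap://ldap.example.com \
           --ldapbasedn="dc=example,dc=com" --test
# Apply the same settings once the preview looks correct
authconfig --enableldap --enableldapauth \
           --ldapserver=ldap://ldap.example.com \
           --ldapbasedn="dc=example,dc=com" --update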
14.8.2. make_smbcodepage make_smbcodepage <c|d> <codepage_number> <inputfile> <outputfile> The make_smbcodepage program compiles a binary codepage file from a text-format definition. It can also perform the reverse operation, decompiling a binary codepage file into a text-format definition. This obsolete program is part of the internationalization features of previous versions of Samba, which are included by default with the current version of Samba. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-programs-make_smbcodepage
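Purely as an illustrative sketch of the syntax shown above, the following compiles a hypothetical text-format definition for codepage 850 into a binary codepage file and then decompiles it back; the file names are assumptions.

make_smbcodepage c 850 codepage_def.850 codepage.850   # c: compile the text definition to binary
make_smbcodepage d 850 codepage.850 codepage_def.850   # d: decompile the binary back to text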
Chapter 6. Docker SELinux Security Policy The Docker SELinux security policy is similar to, and based on, the libvirt security policy. The libvirt security policy is a series of SELinux policies that defines two ways of isolating virtual machines. Generally, virtual machines are prevented from accessing parts of the network. Specifically, individual virtual machines are denied access to one another's resources. Red Hat extends the libvirt-SELinux model to Docker. The Docker SELinux role and Docker SELinux types are based on libvirt. For example, by default, Docker has access to /usr/var/ and some other locations, but it has complete access to things that are labeled with svirt_sandbox_file_t . https://www.mankier.com/8/docker_selinux - this explains the entire Docker SELinux policy. It is not in layman's terms, but it is complete. svirt_sandbox_file_t If a file is labeled svirt_sandbox_file_t , then by default all containers can read it. But if the containers write into a directory that has svirt_sandbox_file_t ownership, they write using their own category (which in this case is "c186,c641"). If you start the same container twice, it will get a new category the second time (a different category than it had the first time). The category system isolates containers from one another. Types can be applied to processes and to files. 6.1. MCS - Multi-Category Security MCS - Multi-Category Security - this is similar to Multi-Level Security (MLS). Each container is given a unique ID at startup, and each file that a container writes carries that unique ID. Although this is an opt-in system, failure to make use of it means that you will have no isolation between containers. If you do not make use of MCS, you will have isolation between containers and the host, but you will not have isolation of containers from one another. That means that one container could access another container's files. https://securityblog.redhat.com/2015/04/29/container-security-just-the-good-parts/ - this will be used later to build the MCS example that we will include in this guide. 6.2. Leveraging the Docker SELinux Security Model Properly Labeling Content - By default, docker gets access to everything in /usr and most things in /etc . To give docker access to more than that, relabel content on the host. To restrict access to things in /usr or things in /etc , relabel them. If you want to restrict access to only one or two containers, then you'll need to use the opt-in MCS system. Important Booleans and Other Restrictions - "privileged" under docker is not really privileged. Even privileged docker processes cannot access arbitrary socket files. An SELinux Boolean, docker_connect_any , makes it possible for privileged docker processes to access arbitrary socket files. Even if run privileged, docker is restricted by the Booleans that are in effect. restricting kernel capabilities - docker supports two options as part of "docker run": (1) "--cap-add=" and (2) "--cap-drop=". These allow us to add and drop kernel capabilities to and from the containers. Root powers have been broken up into a number of groups of capabilities (for instance "cap-chown", which lets you change the ownership of files). By default, docker has a very restricted list of capabilities. The capabilities(7) man page provides more information about capabilities. Capabilities constitute the heart of the isolation of containers. 
If you have used capabilities in the manner described in this guide, an attacker who does not have a kernel exploit will be able to do nothing even if they have root on your system. restricting kernel calls with seccomp - seccomp is a kernel facility through which a process renounces the ability to make certain kernel calls: a capable process creates a restricted process, and it is the restricted process that makes the kernel calls. "seccomp" is an abbreviation of "secure computing mode". http://man7.org/linux/man-pages/man2/seccomp.2.html - seccomp is even more fine-grained than capabilities. This feature restricts the kernel calls that containers can make. This is useful for general security reasons, because (for instance) you can prevent a container from calling "chdir". Almost all kernel exploits rely on making kernel calls (usually to rarely used parts of the kernel). With seccomp you can drop lots of kernel calls, and dropped kernel calls can't be exploited as attack vectors. docker network security and routing - By default, docker creates a virtual ethernet card for each container. Each container has its own routing tables and iptables. When specific ports are forwarded, docker creates certain host iptables rules. The docker daemon itself does some of the proxying. If you map applications to containers, you provide flexibility to yourself by limiting network access on a per-application basis. Because containers have their own routing tables, they can be used to limit incoming and outgoing traffic: use the ip route command in the same way you would use it on a host. scenario: using a containerized firewall to segregate a particular kind of internet traffic from other kinds of internet traffic. This is an easy and potentially diverting exercise for the reader, and might involve concocting scenarios in which certain kinds of traffic are to be kept separate from an official network (one that is constrained, for instance, by the surveillance of a spouse or an employer). cgroups - "control groups". cgroups provides the core functionality that permits docker to work. In its original implementation, cgroups controlled access only to resources like the CPU. You could put a process in a cgroup, and then instruct the kernel to give that cgroup only up to 10 percent of the CPU. This functions as a way of providing an SLA or quota. By default, docker creates a unique cgroup for each container. If you have existing cgroup policy on the docker daemon host, you can make use of that existing cgroup policy to control the resource consumption of the specified container. freezing and unfreezing a container - You can completely stall a container in the state that it is in at any given moment, and then restart it at that point later. This is done by giving the container zero percent CPU. cgroups is the protection that docker provides against DDoS attacks. We could host a service on a machine and give it a cgroup priority so that the service can never get less than ten percent of the CPU: then if other services became compromised, they would be unable to stall out the service, because the service is guaranteed to get a minimum of ten percent of the CPU. This makes it possible to ensure that essential processes need never relinquish control of a part of the CPU, no matter how strongly they are attacked. | [
"system_u:system_r:svirt_lxc_net_t:s0:c186,c641",
"^ ^ ^ ^ ^--- unique category | | | |---- secret-level 0 | | |--- a shared type | |---SELinux role |------ SELinux user"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/container_security_guide/docker_selinux_security_policy |
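A hedged example of combining these controls in a single docker run invocation is shown below. The image name, host directory, capability choice, and MCS level are assumptions chosen for illustration rather than values taken from this guide.

# Relabel a host directory with the type discussed above so containers may use it
chcon -Rt svirt_sandbox_file_t /srv/shared-data
# Drop all capabilities, add back only CHOWN, pin an explicit MCS category,
# and let the :Z volume option relabel the content for this container alone
docker run --rm -it \
    --cap-drop=ALL --cap-add=CHOWN \
    --security-opt label=level:s0:c100,c200 \
    -v /srv/shared-data:/data:Z \
    rhel7 /bin/bash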
Chapter 2. Opting out of Telemetry | Chapter 2. Opting out of Telemetry The decision to opt out of telemetry should be based on your specific needs and requirements, as well as any applicable regulations or policies that you need to comply with. 2.1. Consequences of disabling Telemetry In Red Hat Advanced Cluster Security for Kubernetes (RHACS) version 4.0, you can opt out of Telemetry. However, telemetry is embedded as a core component, so opting out is strongly discouraged. Opting out of telemetry limits the ability of Red Hat to understand how everyone uses the product and which areas to prioritize for improvements. 2.2. Disabling Telemetry If you have configured Telemetry by setting the key in your environment, you can disable Telemetry data collection from the Red Hat Advanced Cluster Security for Kubernetes (RHACS) user interface (UI). Procedure In the RHACS portal, go to Platform Configuration > System Configuration . In the System Configuration header, click Edit . Scroll down and ensure that Online Telemetry Data Collection is set to Disabled. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/telemetry/opting-out-of-telemetry |
Storage APIs | Storage APIs OpenShift Container Platform 4.18 Reference guide for storage APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/storage_apis/index |
Chapter 3. Setting up the environment for an OpenShift installation | Chapter 3. Setting up the environment for an OpenShift installation 3.1. Installing RHEL on the provisioner node With the configuration of the prerequisites complete, the step is to install RHEL 9.x on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the OpenShift Container Platform cluster. For the purposes of this document, installing RHEL on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media. 3.2. Preparing the provisioner node for OpenShift Container Platform installation Perform the following steps to prepare the environment. Procedure Log in to the provisioner node via ssh . Create a non-root user ( kni ) and provide that user with sudo privileges: # useradd kni # passwd kni # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni # chmod 0440 /etc/sudoers.d/kni Create an ssh key for the new user: # su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''" Log in as the new user on the provisioner node: # su - kni Use Red Hat Subscription Manager to register the provisioner node: USD sudo subscription-manager register --username=<user> --password=<pass> --auto-attach USD sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms Note For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager . Install the following packages: USD sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool Modify the user to add the libvirt group to the newly created user: USD sudo usermod --append --groups libvirt <user> Restart firewalld and enable the http service: USD sudo systemctl start firewalld USD sudo firewall-cmd --zone=public --add-service=http --permanent USD sudo firewall-cmd --reload Start and enable the libvirtd service: USD sudo systemctl enable libvirtd --now Create the default storage pool and start it: USD sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images USD sudo virsh pool-start default USD sudo virsh pool-autostart default Create a pull-secret.txt file: USD vim pull-secret.txt In a web browser, navigate to Install OpenShift on Bare Metal with installer-provisioned infrastructure . Click Copy pull secret . Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory. 3.3. Checking NTP server synchronization The OpenShift Container Platform installation program installs the chrony Network Time Protocol (NTP) service on the cluster nodes. To complete installation, each node must have access to an NTP time server. You can verify NTP server synchronization by using the chrony service. For disconnected clusters, you must configure the NTP servers on the control plane nodes. For more information see the Additional resources section. Prerequisites You installed the chrony package on the target node. Procedure Log in to the node by using the ssh command. 
View the NTP servers available to the node by running the following command: USD chronyc sources Example output MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms Use the ping command to ensure that the node can access an NTP server, for example: USD ping time.cloudflare.com Example output PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms ... Additional resources Optional: Configuring NTP for disconnected clusters Network Time Protocol (NTP) 3.4. Configuring networking Before installation, you must configure the networking on the provisioner node. Installer-provisioned clusters deploy with a bare-metal bridge and network, and an optional provisioning bridge and network. Note You can also configure networking from the web console. Procedure Export the bare-metal network NIC name by running the following command: USD export PUB_CONN=<baremetal_nic_name> Configure the bare-metal network: Note The SSH connection might disconnect after executing these steps. For a network using DHCP, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal pkill dhclient;dhclient baremetal " 1 Replace <con_name> with the connection name. For a network using static IP addressing and no DHCP network, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr "x.x.x.x/yy" ipv4.gateway "a.a.a.a" ipv4.dns "b.b.b.b" 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal nmcli con up baremetal " 1 Replace <con_name> with the connection name. Replace x.x.x.x/yy with the IP address and CIDR for the network. Replace a.a.a.a with the network gateway. Replace b.b.b.b with the IP address of the DNS server. 
Optional: If you are deploying with a provisioning network, export the provisioning network NIC name by running the following command: USD export PROV_CONN=<prov_nic_name> Optional: If you are deploying with a provisioning network, configure the provisioning network by running the following command: USD sudo nohup bash -c " nmcli con down \"USDPROV_CONN\" nmcli con delete \"USDPROV_CONN\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \"USDPROV_CONN\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning " Note The SSH connection might disconnect after executing these steps. The IPv6 address can be any address that is not routable through the bare-metal network. Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing. Optional: If you are deploying with a provisioning network, configure the IPv4 address on the provisioning network connection by running the following command: USD nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual SSH back into the provisioner node (if required) by running the following command: # ssh kni@provisioner.<cluster-name>.<domain> Verify that the connection bridges have been properly created by running the following command: USD sudo nmcli con show Example output NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2 3.5. Creating a manifest object that includes a customized br-ex bridge As an alternative to using the configure-ovs.sh shell script to set a br-ex bridge on a bare-metal platform, you can create a MachineConfig object that includes an NMState configuration file. The NMState configuration file creates a customized br-ex bridge network configuration on each node in your cluster. Consider the following use cases for creating a manifest object that includes a customized br-ex bridge: You want to make postinstallation changes to the bridge, such as changing the Open vSwitch (OVS) or OVN-Kubernetes br-ex bridge network. The configure-ovs.sh shell script does not support making postinstallation changes to the bridge. You want to deploy the bridge on a different interface than the interface available on a host or server IP address. You want to make advanced configurations to the bridge that are not possible with the configure-ovs.sh shell script. Using the script for these configurations might result in the bridge failing to connect multiple network interfaces and facilitating data forwarding between the interfaces. Note If you require an environment with a single network interface controller (NIC) and default network settings, use the configure-ovs.sh shell script. After you install Red Hat Enterprise Linux CoreOS (RHCOS) and the system reboots, the Machine Config Operator injects Ignition configuration files into each node in your cluster, so that each node received the br-ex bridge network configuration. To prevent configuration conflicts, the configure-ovs.sh shell script receives a signal to not configure the br-ex bridge. 
Prerequisites Optional: You have installed the nmstate API so that you can validate the NMState configuration. Procedure Create a NMState configuration file that has decoded base64 information for your customized br-ex bridge network: Example of an NMState configuration for a customized br-ex bridge network interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false # ... 1 Name of the interface. 2 The type of ethernet. 3 The requested state for the interface after creation. 4 Disables IPv4 and IPv6 in this example. 5 The node NIC to which the bridge attaches. Use the cat command to base64-encode the contents of the NMState configuration: USD cat <nmstate_configuration>.yaml | base64 1 1 Replace <nmstate_configuration> with the name of your NMState resource YAML file. Create a MachineConfig manifest file and define a customized br-ex bridge network configuration analogous to the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml # ... 1 For each node in your cluster, specify the hostname path to your node and the base-64 encoded Ignition configuration file data for the machine type. If you have a single global configuration specified in an /etc/nmstate/openshift/cluster.yml configuration file that you want to apply to all nodes in your cluster, you do not need to specify the hostname path for each node. The worker role is the default role for nodes in your cluster. The .yaml extension does not work when specifying the hostname path for each node or all nodes in the MachineConfig manifest file. 2 The name of the policy. 3 Writes the encoded base64 information to the specified path. 3.5.1. Scaling each machine set to compute nodes To apply a customized br-ex bridge configuration to all compute nodes in your OpenShift Container Platform cluster, you must edit your MachineConfig custom resource (CR) and modify its roles. Additionally, you must create a BareMetalHost CR that defines information for your bare-metal machine, such as hostname, credentials, and so on. After you configure these resources, you must scale machine sets, so that the machine sets can apply the resource configuration to each compute node and reboot the nodes. Prerequisites You created a MachineConfig manifest object that includes a customized br-ex bridge configuration. Procedure Edit the MachineConfig CR by entering the following command: USD oc edit mc <machineconfig_custom_resource_name> Add each compute node configuration to the CR, so that the CR can manage roles for each defined compute node in your cluster. Create a Secret object named extraworker-secret that has a minimal static IP configuration. Apply the extraworker-secret secret to each node in your cluster by entering the following command. This step provides each compute node access to the Ignition config file. 
USD oc apply -f ./extraworker-secret.yaml Create a BareMetalHost resource and specify the network secret in the preprovisioningNetworkDataName parameter: Example BareMetalHost resource with an attached network secret apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: # ... preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret # ... To manage the BareMetalHost object within the openshift-machine-api namespace of your cluster, change to the namespace by entering the following command: USD oc project openshift-machine-api Get the machine sets: USD oc get machinesets Scale each machine set by entering the following command. You must run this command for each machine set. USD oc scale machineset <machineset_name> --replicas=<n> 1 1 Where <machineset_name> is the name of the machine set and <n> is the number of compute nodes. 3.6. Establishing communication between subnets In a typical OpenShift Container Platform cluster setup, all nodes, including the control plane and compute nodes, reside in the same network. However, for edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. This often involves using different network segments or subnets for the remote nodes than the subnet used by the control plane and local compute nodes. Such a setup can reduce latency for the edge and allow for enhanced scalability. Before installing OpenShift Container Platform, you must configure the network properly to ensure that the edge subnets containing the remote nodes can reach the subnet containing the control plane nodes and receive traffic from the control plane too. You can run control plane nodes in the same subnet or multiple subnets by configuring a user-managed load balancer in place of the default load balancer. With a multiple subnet environment, you can reduce the risk of your OpenShift Container Platform cluster from failing because of a hardware failure or a network outage. For more information, see "Services for a user-managed load balancer" and "Configuring a user-managed load balancer". Running control plane nodes in a multiple subnet environment requires completion of the following key tasks: Configuring a user-managed load balancer instead of the default load balancer by specifying UserManaged in the loadBalancer.type parameter of the install-config.yaml file. Configuring a user-managed load balancer address in the ingressVIPs and apiVIPs parameters of the install-config.yaml file. Adding the multiple subnet Classless Inter-Domain Routing (CIDR) and the user-managed load balancer IP addresses to the networking.machineNetworks parameter in the install-config.yaml file. Note Deploying a cluster with multiple subnets requires using virtual media, such as redfish-virtualmedia and idrac-virtualmedia . This procedure details the network configuration required to allow the remote compute nodes in the second subnet to communicate effectively with the control plane nodes in the first subnet and to allow the control plane nodes in the first subnet to communicate effectively with the remote compute nodes in the second subnet. In this procedure, the cluster spans two subnets: The first subnet ( 10.0.0.0 ) contains the control plane and local compute nodes. The second subnet ( 192.168.0.0 ) contains the edge compute nodes. 
Procedure Configure the first subnet to communicate with the second subnet: Log in as root to a control plane node by running the following command: USD sudo su - Get the name of the network interface by running the following command: # nmcli dev status Add a route to the second subnet ( 192.168.0.0 ) via the gateway by running the following command: # nmcli connection modify <interface_name> +ipv4.routes "192.168.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "192.168.0.0/24 via 192.168.0.1" Apply the changes by running the following command: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully: # ip route Repeat the steps for each control plane node in the first subnet. Note Adjust the commands to match your actual interface names and gateway. Configure the second subnet to communicate with the first subnet: Log in as root to a remote compute node by running the following command: USD sudo su - Get the name of the network interface by running the following command: # nmcli dev status Add a route to the first subnet ( 10.0.0.0 ) via the gateway by running the following command: # nmcli connection modify <interface_name> +ipv4.routes "10.0.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "10.0.0.0/24 via 10.0.0.1" Apply the changes by running the following command: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully by running the following command: # ip route Repeat the steps for each compute node in the second subnet. Note Adjust the commands to match your actual interface names and gateway. After you have configured the networks, test the connectivity to ensure the remote nodes can reach the control plane nodes and the control plane nodes can reach the remote nodes. From the control plane nodes in the first subnet, ping a remote node in the second subnet by running the following command: USD ping <remote_node_ip_address> If the ping is successful, it means the control plane nodes in the first subnet can reach the remote nodes in the second subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node. From the remote nodes in the second subnet, ping a control plane node in the first subnet by running the following command: USD ping <control_plane_node_ip_address> If the ping is successful, it means the remote compute nodes in the second subnet can reach the control plane in the first subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node. 3.7. Retrieving the OpenShift Container Platform installer Use the stable-4.x version of the installation program and your selected architecture to deploy the generally available stable version of OpenShift Container Platform: USD export VERSION=stable-4.17 USD export RELEASE_ARCH=<architecture> USD export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}') 3.8. 
Extracting the OpenShift Container Platform installer After retrieving the installer, the next step is to extract it. Procedure Set the environment variables: USD export cmd=openshift-baremetal-install USD export pullsecret_file=~/pull-secret.txt USD export extract_dir=USD(pwd) Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc Extract the installer: USD sudo cp oc /usr/local/bin USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=USDcmd --to "USD{extract_dir}" USD{RELEASE_IMAGE} USD sudo cp openshift-baremetal-install /usr/local/bin 3.9. Creating an RHCOS images cache To employ image caching, you must download the Red Hat Enterprise Linux CoreOS (RHCOS) image used by the bootstrap VM to provision the cluster nodes. Image caching is optional, but it is especially useful when running the installation program on a network with limited bandwidth. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. If you are running the installation program on a network with limited bandwidth and the RHCOS images download takes more than 15 to 20 minutes, the installation program will time out. Caching images on a web server will help in such scenarios. Warning If you enable TLS for the HTTPD server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and spoke clusters and the HTTPD server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported. Install a container that contains the images. Procedure Install podman : USD sudo dnf install -y podman Open firewall port 8080 to be used for RHCOS image caching: USD sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent USD sudo firewall-cmd --reload Create a directory to store the bootstrapOSImage : USD mkdir /home/kni/rhcos_image_cache Set the appropriate SELinux context for the newly created directory: USD sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?"
USD sudo restorecon -Rv /home/kni/rhcos_image_cache/ Get the URI for the RHCOS image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk.location') Get the name of the image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/} Get the SHA hash for the RHCOS image that will be deployed on the bootstrap VM: USD export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk["uncompressed-sha256"]') Download the image and place it in the /home/kni/rhcos_image_cache directory: USD curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME} Confirm SELinux type is of httpd_sys_content_t for the new file: USD ls -Z /home/kni/rhcos_image_cache Create the pod: USD podman run -d --name rhcos_image_cache \ 1 -v /home/kni/rhcos_image_cache:/var/www/html \ -p 8080:8080/tcp \ registry.access.redhat.com/ubi9/httpd-24 1 Creates a caching webserver with the name rhcos_image_cache . This pod serves the bootstrapOSImage image in the install-config.yaml file for deployment. Generate the bootstrapOSImage configuration: USD export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d"/" -f1) USD export BOOTSTRAP_OS_IMAGE="http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}" USD echo " bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}" Add the required configuration to the install-config.yaml file under platform.baremetal : platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1 1 Replace <bootstrap_os_image> with the value of USDBOOTSTRAP_OS_IMAGE . See the "Configuring the install-config.yaml file" section for additional details. 3.10. Services for a user-managed load balancer You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer. Important Configuring a user-managed load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for a user-managed load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for a user-managed load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 3.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 3.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 3.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for user-managed load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. 
You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure a user-managed load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the user-managed load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the user-managed load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 3.10.1. Configuring a user-managed load balancer You can configure an OpenShift Container Platform cluster to use a user-managed load balancer in place of the default load balancer. Important Before you configure a user-managed load balancer, ensure that you read the "Services for a user-managed load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your user-managed load balancer. Note MetalLB, which runs on a cluster, functions as a user-managed load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on port 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are be reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable to all nodes that operate in your OpenShift Container Platform cluster. 
The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. The following examples show health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80. Depending on your needs, you can specify the IP address of a single subnet or IP addresses from multiple subnets in your HAProxy configuration. Example HAProxy configuration with one listed subnet # ... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... Example HAProxy configuration with multiple listed subnets # ... 
listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s # ... Use the curl CLI command to verify that the user-managed load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff 
x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. For your OpenShift Container Platform cluster to use the user-managed load balancer, you must specify the following configuration in your cluster's install-config.yaml file: # ... platform: baremetal: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3 # ... 1 Set UserManaged for the type parameter to specify a user-managed load balancer for your cluster. The parameter defaults to OpenShiftManagedDefault , which denotes the default internal load balancer. For services defined in an openshift-kni-infra namespace, a user-managed load balancer can deploy the coredns service to pods in your cluster but ignores keepalived and haproxy services. 2 Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the Kubernetes API can communicate with the user-managed load balancer. 3 Required parameter when you specify a user-managed load balancer. Specify the user-managed load balancer's public IP address, so that the user-managed load balancer can manage ingress traffic for your cluster. 
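Before you run the verification steps that follow, confirm that the DNS records described above resolve to the load balancer front-end IP address. As an illustration only, a BIND-style zone file fragment for the <cluster_name>.<base_domain> zone might look like the following sketch; the TTL, the 192.168.1.100 address, and the wildcard *.apps record are placeholders and assumptions, not values taken from this procedure:
$TTL 300
; API record pointing to the load balancer front end
api     IN  A   192.168.1.100
; Wildcard record for application (Ingress) routes, one common way to realize the apps record listed above
*.apps  IN  A   192.168.1.100
Adjust the record names, TTL, and addresses to match your DNS implementation and environment.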
Verification Use the curl CLI command to verify that the user-managed load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster_name>.<base_domain>/ cache-control: no-cache HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 3.11. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, NetworkManager sets the hostnames. By default, DHCP provides the hostnames to NetworkManager , which is the recommended method. NetworkManager gets the hostnames through a reverse DNS lookup in the following cases: If DHCP does not provide the hostnames If you use kernel arguments to set the hostnames If you use another method to set the hostnames Reverse DNS lookup occurs after the network has been initialized on a node, and can increase the time it takes NetworkManager to set the hostname. Other system services can start prior to NetworkManager setting the hostname, which can cause those services to use a default hostname such as localhost .
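To use the recommended DHCP approach, configure your DHCP server to supply a hostname with each lease. As a hedged sketch only, an ISC DHCP ( dhcpd ) per-host declaration might look like the following; the MAC address, IP address, and hostname are placeholders, and other DHCP servers use different syntax:
host openshift-master-0 {
  hardware ethernet 52:54:00:00:00:01;    # placeholder MAC address of the node NIC
  fixed-address 192.168.1.101;            # placeholder address reserved for the node
  option host-name "openshift-master-0";  # hostname that NetworkManager receives over DHCP
}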
Tip You can avoid the delay in setting hostnames by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.12. Configuring the install-config.yaml file 3.12.1. Configuring the install-config.yaml file The install-config.yaml file requires some additional details. Most of the information teaches the installation program and the resulting cluster enough about the available hardware that it is able to fully manage it. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. Configure install-config.yaml . Change the appropriate variables to match the environment, including pullSecret and sshKey : apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>' 1 Scale the compute machines based on the number of compute nodes that are part of the OpenShift Container Platform cluster. Valid options for the replicas value are 0 and integers greater than or equal to 2 . Set the number of replicas to 0 to deploy a three-node cluster, which contains only three control plane machines. A three-node cluster is a smaller, more resource-efficient cluster that can be used for testing, development, and production. You cannot install the cluster with only one compute node. 2 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticIP configuration setting to specify the static IP address of the bootstrap VM when there is no DHCP server on the bare-metal network. 3 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticGateway configuration setting to specify the gateway IP address for the bootstrap VM when there is no DHCP server on the bare-metal network. 
4 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticDNS configuration setting to specify the DNS address for the bootstrap VM when there is no DHCP server on the bare-metal network. 5 See the BMC addressing sections for more options. 6 To set the path to the installation disk drive, enter the kernel name of the disk. For example, /dev/sda . Important Because the disk discovery order is not guaranteed, the kernel name of the disk can change across booting options for machines with multiple disks. For example, /dev/sda becomes /dev/sdb and vice versa. To avoid this issue, you must use persistent disk attributes, such as the disk World Wide Name (WWN) or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. To use the disk WWN, replace the deviceName parameter with the wwnWithExtension parameter. Depending on the parameter that you use, enter either of the following values: The disk name. For example, /dev/sda , or /dev/disk/by-path/ . The disk WWN. For example, "0x64cd98f04fde100024684cf3034da5c2" . Ensure that you enter the disk WWN value within quotes so that it is used as a string value and not a hexadecimal value. Failure to meet these requirements for the rootDeviceHints parameter might result in the following error: ironic-inspector inspection failed: No disks satisfied root device hints Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP and ingressVIP configuration settings. In OpenShift Container Platform 4.12 and later, these configuration settings are deprecated. Instead, use a list format in the apiVIPs and ingressVIPs configuration settings to specify IPv4 addresses, IPv6 addresses, or both IP address formats. Create a directory to store the cluster configuration: USD mkdir ~/clusterconfigs Copy the install-config.yaml file to the new directory: USD cp install-config.yaml ~/clusterconfigs Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster: USD ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off Remove old bootstrap resources if any are left over from a deployment attempt: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done 3.12.2. Additional install-config parameters See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file. Table 3.1. Required parameters Parameters Default Description baseDomain The domain name for the cluster. For example, example.com . bootMode UEFI The boot mode for a node. Options are legacy , UEFI , and UEFISecureBoot . If bootMode is not set, Ironic sets it while inspecting the node. bootstrapExternalStaticDNS The static network DNS of the bootstrap node. You must set this value when deploying a cluster with static IP addresses when there is no Dynamic Host Configuration Protocol (DHCP) server on the bare-metal network. If you do not set this value, the installation program will use the value from bootstrapExternalStaticGateway , which causes problems when the IP address values of the gateway and DNS are different. bootstrapExternalStaticIP The static IP address for the bootstrap VM. 
You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. bootstrapExternalStaticGateway The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. sshKey The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and compute nodes. Typically, this key is from the provisioner node. pullSecret The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. The name to be given to the OpenShift Container Platform cluster. For example, openshift . The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24 . The OpenShift Container Platform cluster requires a name be provided for compute nodes even if there are zero nodes. Replicas sets the number of compute nodes in the OpenShift Container Platform cluster. The OpenShift Container Platform cluster requires a name for control plane nodes. Replicas sets the number of control plane nodes included as part of the OpenShift Container Platform cluster. provisioningNetworkInterface The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. apiVIPs (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP configuration setting. From OpenShift Container Platform 4.12 or later, the apiVIP configuration setting is deprecated. Instead, use a list format for the apiVIPs configuration setting to specify an IPv4 address, an IPv6 address or both IP address formats. disableCertificateVerification False redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. ingressVIPs (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. 
If not set, the installation program uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the ingressVIP configuration setting. In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a list format for the ingressVIPs configuration setting to specify an IPv4 addresses, an IPv6 addresses or both IP address formats. Table 3.2. Optional Parameters Parameters Default Description provisioningDHCPRange 172.22.0.10,172.22.0.100 Defines the IP range for nodes on the provisioning network. provisioningNetworkCIDR 172.22.0.0/24 The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. clusterProvisioningIP The third IP address of the provisioningNetworkCIDR . The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 . bootstrapProvisioningIP The second IP address of the provisioningNetworkCIDR . The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2 . externalBridge baremetal The name of the bare-metal bridge of the hypervisor attached to the bare-metal network. provisioningBridge provisioning The name of the provisioning bridge on the provisioner host attached to the provisioning network. architecture Defines the host architecture for your cluster. Valid values are amd64 or arm64 . defaultMachinePlatform The default configuration used for machine pools without a platform configuration. bootstrapOSImage A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256> . provisioningNetwork The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the bare-metal network. If Disabled , you must provide two IP addresses on the bare-metal network that are used for the provisioning services. Managed : Set this parameter to Managed , which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on. Unmanaged : Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required. httpProxy Set this parameter to the appropriate HTTP proxy used within your environment. httpsProxy Set this parameter to the appropriate HTTPS proxy used within your environment. noProxy Set this parameter to the appropriate list of exclusions for proxy usage within your environment. Hosts The hosts parameter is a list of separate bare metal assets used to build the cluster. Table 3.3. 
Hosts Name Default Description name The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0 . role The role of the bare metal node. Either master (control plane node) or worker (compute node). bmc Connection details for the baseboard management controller. See the BMC addressing section for additional details. bootMACAddress The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host. Note You must provide a valid MAC address from the host if you disabled the provisioning network. networkConfig Set this optional parameter to configure the network interface of a host. See "(Optional) Configuring host network interfaces" for additional details. 3.12.3. BMC addressing Most vendors support Baseboard Management Controller (BMC) addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common internet and web services standards to expose information directly to the modern tool chain. If your hardware does not support Redfish network boot, use IPMI. You can modify the BMC address during installation while the node is in the Registering state. If you need to modify the BMC address after the node leaves the Registering state, you must disconnect the node from Ironic, edit the BareMetalHost resource, and reconnect the node to Ironic. See the Editing a BareMetalHost resource section for details. IPMI Hosts using IPMI use the ipmi://<out-of-band-ip>:<port> address format, which defaults to port 623 if not specified. The following example demonstrates an IPMI configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password> Important The provisioning network is required when PXE booting using IPMI for BMC addressing. It is not possible to PXE boot hosts without a provisioning network. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. Redfish network boot To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. 
The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Additional resources Editing a BareMetalHost resource 3.12.4. Verifying support for Redfish APIs When installing using the Redfish API, the installation program calls several Redfish endpoints on the baseboard management controller (BMC) when using installer-provisioned infrastructure on bare metal. If you use Redfish, ensure that your BMC supports all of the Redfish APIs before installation. Procedure Set the IP address or hostname of the BMC by running the following command: USD export SERVER=<ip_address> 1 1 Replace <ip_address> with the IP address or hostname of the BMC. Set the ID of the system by running the following command: USD export SystemID=<system_id> 1 1 Replace <system_id> with the system ID. For example, System.Embedded.1 or 1 . See the following vendor-specific BMC sections for details. List of Redfish APIs Check power on support by running the following command: USD curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "On"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Check power off support by running the following command: USD curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "ForceOff"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Check the temporary boot implementation that uses pxe by running the following command: USD curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: <ETAG>" https://USDSERVER/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "pxe", "BootSourceOverrideEnabled": "Once"}}' Check the status of setting the firmware boot mode that uses Legacy or UEFI by running the following command: USD curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: <ETAG>" https://USDSERVER/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideMode":"UEFI"}}' List of Redfish virtual media APIs Check the ability to set the temporary boot device that uses cd or dvd by running the following command: USD curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: <ETAG>" https://USDSERVER/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "cd", "BootSourceOverrideEnabled": "Once"}}' Virtual media might use POST or PATCH , depending on your hardware. Check the ability to mount virtual media by running one of the following commands: USD curl -u USDUSER:USDPASS -X POST -H "Content-Type: application/json" https://USDSERVER/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{"Image": "https://example.com/test.iso", "TransferProtocolType": "HTTPS", "UserName": "", "Password":""}' USD curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: <ETAG>" https://USDSERVER/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{"Image": "https://example.com/test.iso", "TransferProtocolType": "HTTPS", "UserName": "", "Password":""}' Note The PowerOn and PowerOff commands for Redfish APIs are the same for the Redfish virtual media APIs.
In some hardware, you might only find the VirtualMedia resource under Systems/USDSystemID instead of Managers/USDManagerID . For the VirtualMedia resource, the UserName and Password fields are optional. Important HTTPS and HTTP are the only supported parameter types for TransferProtocolTypes . 3.12.5. BMC addressing for Dell iDRAC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Dell hardware, Red Hat supports integrated Dell Remote Access Controller (iDRAC) virtual media, Redfish network boot, and IPMI. BMC address formats for Dell iDRAC Protocol Address Format iDRAC virtual media idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 IPMI ipmi://<out-of-band-ip> Important Use idrac-virtualmedia as the protocol for Redfish virtual media. redfish-virtualmedia will not work on Dell hardware. Dell's idrac-virtualmedia uses the Redfish standard with Dell's OEM extensions. See the following sections for additional details. Redfish virtual media for Dell iDRAC For Redfish virtual media on Dell servers, use idrac-virtualmedia:// in the address setting. Using redfish-virtualmedia:// will not work. Note Use idrac-virtualmedia:// as the protocol for Redfish virtual media. Using redfish-virtualmedia:// will not work on Dell hardware, because the idrac-virtualmedia:// protocol corresponds to the idrac hardware type and the Redfish protocol in Ironic. Dell's idrac-virtualmedia:// protocol uses the Redfish standard with Dell's OEM extensions. Ironic also supports the idrac type with the WSMAN protocol. Therefore, you must specify idrac-virtualmedia:// to avoid unexpected behavior when electing to use Redfish with virtual media on Dell hardware. The following example demonstrates using iDRAC virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. Note Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Redfish network boot for iDRAC To enable Redfish, use redfish:// or redfish+http:// to disable transport layer security (TLS). The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. 
platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Note There is a known issue on Dell iDRAC 9 with firmware version 04.40.00.00 and all releases up to including the 5.xx series for installer-provisioned installations on bare metal deployments. The virtual console plugin defaults to eHTML5, an enhanced version of HTML5, which causes problems with the InsertVirtualMedia workflow. Set the plugin to use HTML5 to avoid this issue. The menu path is Configuration Virtual console Plug-in Type HTML5 . Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . 3.12.6. BMC addressing for HPE iLO The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For HPE integrated Lights Out (iLO), Red Hat supports Redfish virtual media, Redfish network boot, and IPMI. Table 3.4. BMC address formats for HPE iLO Protocol Address Format Redfish virtual media redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/1 IPMI ipmi://<out-of-band-ip> See the following sections for additional details. Redfish virtual media for HPE iLO To enable Redfish virtual media for HPE servers, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Note Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media. Redfish network boot for HPE iLO To enable Redfish, use redfish:// or redfish+http:// to disable TLS. 
The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True 3.12.7. BMC addressing for Fujitsu iRMC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Fujitsu hardware, Red Hat supports integrated Remote Management Controller (iRMC) and IPMI. Table 3.5. BMC address formats for Fujitsu iRMC Protocol Address Format iRMC irmc://<out-of-band-ip> IPMI ipmi://<out-of-band-ip> iRMC Fujitsu nodes can use irmc://<out-of-band-ip> and defaults to port 443 . The following example demonstrates an iRMC configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password> Note Currently Fujitsu supports iRMC S5 firmware version 3.05P and above for installer-provisioned installation on bare metal. 3.12.8. BMC addressing for Cisco CIMC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Cisco UCS C-Series and X-Series servers, Red Hat supports Cisco Integrated Management Controller (CIMC). Table 3.6. BMC address format for Cisco CIMC Protocol Address Format Redfish virtual media redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> To enable Redfish virtual media for Cisco UCS C-Series and X-Series servers, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration by using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. 
platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> disableCertificateVerification: True 3.12.9. Root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 3.7. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. wwnVendorExtension A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: "/dev/sda" 3.12.10. Setting proxy settings To deploy an OpenShift Container Platform cluster while using a proxy, make the following changes to the install-config.yaml file. Procedure Add proxy values under the proxy key mapping: apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR> The following is an example of noProxy with values. noProxy: .example.com,172.22.0.0/24,10.10.0.0/24 With a proxy enabled, set the appropriate values of the proxy in the corresponding key/value pair. Key considerations: If the proxy does not have an HTTPS proxy, change the value of httpsProxy from https:// to http:// . If the cluster uses a provisioning network, include it in the noProxy setting, otherwise the installation program fails. Set all of the proxy settings as environment variables within the provisioner node. For example, HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . 3.12.11. Deploying with no provisioning network To deploy an OpenShift Container Platform cluster without a provisioning network, make the following changes to the install-config.yaml file. 
platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: "Disabled" 1 1 Add the provisioningNetwork configuration setting, if needed, and set it to Disabled . Important The provisioning network is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. 3.12.12. Deploying with dual-stack networking For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. To configure IPv4 and IPv6 address endpoints for cluster nodes, edit the machineNetwork , clusterNetwork , and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries each. For a cluster with the IPv4 family as the primary address family, specify the IPv4 setting first. For a cluster with the IPv6 family as the primary address family, specify the IPv6 setting first. machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112 Important On a bare-metal platform, if you specified an NMState configuration in the networkConfig section of your install-config.yaml file, add interfaces.wait-ip: ipv4+ipv6 to the NMState YAML file to resolve an issue that prevents your cluster from deploying on a dual-stack network. Example NMState YAML configuration file that includes the wait-ip parameter networkConfig: nmstate: interfaces: - name: <interface_name> # ... wait-ip: ipv4+ipv6 # ... To provide an interface to the cluster for applications that use IPv4 and IPv6 addresses, configure IPv4 and IPv6 virtual IP (VIP) address endpoints for the Ingress VIP and API VIP services. To configure IPv4 and IPv6 address endpoints, edit the apiVIPs and ingressVIPs configuration settings in the install-config.yaml file . The apiVIPs and ingressVIPs configuration settings use a list format. The order of the list indicates the primary and secondary VIP address for each service. platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6> Note For a cluster with dual-stack networking configuration, you must assign both IPv4 and IPv6 addresses to the same interface. 3.12.13. Configuring host network interfaces Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces using NMState. The most common use case for this functionality is to specify a static IP address on the bare-metal network, but you can also configure other networks such as a storage network. This functionality supports other NMState features such as VLAN, VXLAN, bridges, bonds, routes, MTU, and DNS resolver settings. Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Procedure Optional: Consider testing the NMState syntax with nmstatectl gc before including it in the install-config.yaml file, because the installer will not check the NMState YAML syntax. Note Errors in the YAML syntax might result in a failure to apply the network configuration. 
Additionally, maintaining the validated YAML syntax is useful when applying changes using Kubernetes NMState after deployment or when expanding the cluster. Create an NMState YAML file: interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5 1 2 3 4 5 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> Replace <nmstate_yaml_file> with the configuration file name. Use the networkConfig configuration setting by adding the NMState configuration to hosts within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6 1 Add the NMState YAML syntax to configure the host interfaces. 2 3 4 5 6 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. Important After deploying the cluster, you cannot modify the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. 3.12.14. Configuring host network interfaces for subnets For edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. To locate remote nodes in subnets, you might use different network segments or subnets for the remote nodes than you used for the control plane subnet and local compute nodes. You can reduce latency for the edge and allow for enhanced scalability by setting up subnets for edge computing scenarios. Important When using the default load balancer, OpenShiftManagedDefault , and adding remote nodes to your OpenShift Container Platform cluster, all control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details. If you have established different network segments or subnets for remote nodes as described in the section on "Establishing communication between subnets", you must specify the subnets in the machineNetwork configuration setting if the workers are using static IP addresses, bonds, or other advanced networking. When setting the node IP address in the networkConfig parameter for each remote node, you must also specify the gateway and the DNS server for the subnet containing the control plane nodes when using static IP addresses. This ensures that the remote nodes can reach the subnet containing the control plane and that they can receive network traffic from the control plane.
Note Deploying a cluster with multiple subnets requires using virtual media, such as redfish-virtualmedia or idrac-virtualmedia , because remote nodes cannot access the local provisioning network. Procedure Add the subnets to the machineNetwork in the install-config.yaml file when using static IP addresses: networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes Add the gateway and DNS configuration to the networkConfig parameter of each edge compute node using NMState syntax when using a static IP address or advanced networking such as bonds: networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4 1 Replace <interface_name> with the interface name. 2 Replace <node_ip> with the IP address of the node. 3 Replace <gateway_ip> with the IP address of the gateway. 4 Replace <dns_ip> with the IP address of the DNS server. 3.12.15. Configuring address generation modes for SLAAC in dual-stack networks For dual-stack clusters that use Stateless Address AutoConfiguration (SLAAC), you must specify a global value for the ipv6.addr-gen-mode network setting. You can set this value using NMState to configure the RAM disk and the cluster configuration files. If you do not configure a consistent ipv6.addr-gen-mode in these locations, IPv6 address mismatches can occur between CSR resources and BareMetalHost resources in the cluster. Prerequisites Install the NMState CLI ( nmstate ). Procedure Optional: Consider testing the NMState YAML syntax with the nmstatectl gc command before including it in the install-config.yaml file because the installation program will not check the NMState YAML syntax. Create an NMState YAML file: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> 1 1 Replace <nmstate_yaml_file> with the name of the test configuration file. Add the NMState configuration to the hosts.networkConfig section within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 ... 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . 3.12.16. Configuring host network interfaces for dual port NIC Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces by using NMState to support dual port NIC. OpenShift Virtualization only supports the following bond modes: mode=1 active-backup mode=2 balance-xor mode=4 802.3ad Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Note Errors in the YAML syntax might result in a failure to apply the network configuration. 
Additionally, maintaining the validated YAML syntax is useful when applying changes by using Kubernetes NMState after deployment or when expanding the cluster. Procedure Add the NMState configuration to the networkConfig field for hosts within the install-config.yaml file: hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254 1 The networkConfig field has information about the network configuration of the host, with subfields including interfaces , dns-resolver , and routes . 2 The interfaces field is an array of network interfaces defined for the host. 3 The name of the interface. 4 The type of interface. This example creates an ethernet interface. 5 Set this to false to disable DHCP for the physical function (PF) if it is not strictly required. 6 Set to the number of SR-IOV virtual functions (VFs) to instantiate. 7 Set this to up . 8 Set this to false to disable IPv4 addressing for the VF attached to the bond. 9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. This value must be less than or equal to the maximum transmission rate. Intel NICs do not support the min-tx-rate parameter. For more information, see BZ#1772847 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. 11 Sets the desired bond mode. 12 Sets the preferred port of the bonding interface. The bond uses the primary device as the first device of the bonding interfaces. The bond does not abandon the primary device interface unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode (mode 1) and balance-tlb (mode 5). 13 Sets a static IP address for the bond interface. This is the node IP address. 14 Sets bond0 as the gateway for the default route. Important After deploying the cluster, you cannot change the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. Additional resources Configuring network bonding 3.12.17.
Configuring multiple cluster nodes You can simultaneously configure OpenShift Container Platform cluster nodes with identical settings. Configuring multiple cluster nodes avoids adding redundant information for each node to the install-config.yaml file. This file contains specific parameters to apply an identical configuration to multiple nodes in the cluster. Compute nodes are configured separately from the controller node. However, configurations for both node types use the highlighted parameters in the install-config.yaml file to enable multi-node configuration. Set the networkConfig parameters to BOND , as shown in the following example: hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND Note Configuration of multiple cluster nodes is only available for initial deployments on installer-provisioned infrastructure. 3.12.18. Configuring managed Secure Boot You can enable managed Secure Boot when deploying an installer-provisioned cluster using Redfish BMC addressing, such as redfish , redfish-virtualmedia , or idrac-virtualmedia . To enable managed Secure Boot, add the bootMode configuration setting to each node: Example hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "/dev/sda" bootMode: UEFISecureBoot 2 1 Ensure the bmc.address setting uses redfish , redfish-virtualmedia , or idrac-virtualmedia as the protocol. See "BMC addressing for HPE iLO" or "BMC addressing for Dell iDRAC" for additional details. 2 The bootMode setting is UEFI by default. Change it to UEFISecureBoot to enable managed Secure Boot. Note See "Configuring nodes" in the "Prerequisites" to ensure the nodes can support managed Secure Boot. If the nodes do not support managed Secure Boot, see "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section. Configuring Secure Boot manually requires Redfish virtual media. Note Red Hat does not support Secure Boot with IPMI, because IPMI does not provide Secure Boot management facilities. 3.13. Manifest configuration files 3.13.1. Creating the OpenShift Container Platform manifests Create the OpenShift Container Platform manifests. USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 3.13.2. Configuring NTP for disconnected clusters OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. OpenShift Container Platform nodes must agree on a date and time to run properly. When compute nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server. 
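After you apply the configuration in the following procedure and deploy the cluster, you can spot-check that a compute node is synchronizing time from the control plane NTP servers. The following command is a hedged example only; the node name is a placeholder and the exact output depends on your environment:
oc debug node/<compute_node_name> -- chroot /host chronyc sources
The output should list the control plane nodes as time sources for the compute node.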
Procedure Install Butane on your installation host by using the following command: USD sudo dnf -y install butane Create a Butane config, 99-master-chrony-conf-override.bu , including the contents of the chrony.conf file for the control plane nodes. Note See "Creating machine configs with Butane" for information about Butane. Butane config example variant: openshift version: 4.17.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all compute nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml , containing the configuration to be delivered to the control plane nodes: USD butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml Create a Butane config, 99-worker-chrony-conf-override.bu , including the contents of the chrony.conf file for the compute nodes that references the NTP servers on the control plane nodes. Butane config example variant: openshift version: 4.17.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml , containing the configuration to be delivered to the worker nodes: USD butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml 3.13.3. Configuring network components to run on the control plane You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy compute nodes in separate subnets from the control plane nodes, which requires configuring the ingressVIP virtual IP address to run on the control plane nodes. 
Important When deploying remote nodes in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes. Procedure Change to the directory storing the install-config.yaml file: USD cd ~/clusterconfigs Switch to the manifests subdirectory: USD cd manifests Create a file named cluster-network-avoid-workers-99-config.yaml : USD touch cluster-network-avoid-workers-99-config.yaml Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:, This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only: openshift-ingress-operator keepalived Save the cluster-network-avoid-workers-99-config.yaml file. Create a manifests/cluster-ingress-default-ingresscontroller.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: "" Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster. Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true . Control plane nodes are not schedulable by default. For example: Note If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail. 3.13.4. Deploying routers on compute nodes During installation, the installation program deploys router pods on compute nodes. By default, the installation program installs two router pods. If a deployed cluster requires additional routers to handle external traffic loads destined for services within the OpenShift Container Platform cluster, you can create a yaml file to set an appropriate number of router replicas. Important Deploying a cluster with only one compute node is not supported. While modifying the router replicas will address issues with the degraded state when deploying with one compute node, the cluster loses high availability for the ingress API, which is not suitable for production environments. Note By default, the installation program deploys two routers. If the cluster has no compute nodes, the installation program deploys the two routers on the control plane nodes by default. Procedure Create a router-replicas.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" Note Replace <num-of-router-pods> with an appropriate value. If working with just one compute node, set replicas: to 1 . If working with more than 3 compute nodes, you can increase replicas: from the default value 2 as appropriate. Save and copy the router-replicas.yaml file to the clusterconfigs/openshift directory: USD cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml 3.13.5. 
Configuring the BIOS The following procedure configures the BIOS during the installation process. Procedure Create the manifests. Modify the BareMetalHost resource file corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Add the BIOS configuration to the spec section of the BareMetalHost resource: spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true Note Red Hat supports three BIOS configurations. Only servers with BMC type irmc are supported. Other types of servers are currently not supported. Create the cluster. Additional resources Configuration using the Bare Metal Operator 3.13.6. Configuring the RAID The following procedure configures a redundant array of independent disks (RAID) using baseboard management controllers (BMCs) during the installation process. Note If you want to configure a hardware RAID for the node, verify that the node has a supported RAID controller. OpenShift Container Platform 4.17 does not support software RAID. Table 3.8. Hardware RAID support by vendor Vendor BMC and protocol Firmware version RAID levels Fujitsu iRMC N/A 0, 1, 5, 6, and 10 Dell iDRAC with Redfish Version 6.10.30.20 or later 0, 1, and 5 Procedure Create the manifests. Modify the BareMetalHost resource corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Note The following example uses a hardware RAID configuration because OpenShift Container Platform 4.17 does not support software RAID. If you add a specific RAID configuration to the spec section, the node deletes the original RAID configuration in the preparing phase and performs the specified configuration on the RAID. For example: spec: raid: hardwareRAIDVolumes: - level: "0" 1 name: "sda" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0 1 level is a required field, and the others are optional fields. If you add an empty RAID configuration to the spec section, the node deletes the original RAID configuration during the preparing phase, but does not perform a new configuration. For example: spec: raid: hardwareRAIDVolumes: [] If you do not add a raid field in the spec section, the original RAID configuration is not deleted, and no new configuration will be performed. Create the cluster. 3.13.7. Configuring storage on nodes You can make changes to operating systems on OpenShift Container Platform nodes by creating MachineConfig objects that are managed by the Machine Config Operator (MCO). The MachineConfig specification includes an ignition config for configuring the machines at first boot. This config object can be used to modify files, systemd services, and other operating system features running on OpenShift Container Platform machines. Procedure Use the ignition config to configure storage on nodes. The following MachineConfig manifest example demonstrates how to add a partition to a device on a primary node. In this example, apply the manifest before installation to have a partition named recovery with a size of 16 GiB on the primary node.
Create a custom-partitions.yaml file and include a MachineConfig object that contains your partition layout: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs Save and copy the custom-partitions.yaml file to the clusterconfigs/openshift directory: USD cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift Additional resources Configuration using the Bare Metal Operator Partition naming scheme 3.14. Creating a disconnected registry In some cases, you might want to install an OpenShift Container Platform cluster using a local copy of the installation registry. This could be for enhancing network efficiency because the cluster nodes are on a network that does not have access to the internet. A local, or mirrored, copy of the registry requires the following: A certificate for the registry node. This can be a self-signed certificate. A web server that a container on a system will serve. An updated pull secret that contains the certificate and local repository information. Note Creating a disconnected registry on a registry node is optional. If you need to create a disconnected registry on a registry node, you must complete all of the following sub-sections. Prerequisites If you have already prepared a mirror registry for Mirroring images for a disconnected installation , you can skip directly to Modify the install-config.yaml file to use the disconnected registry . 3.14.1. Preparing the registry node to host the mirrored registry The following steps must be completed prior to hosting a mirrored registry on bare metal. Procedure Open the firewall port on the registry node: USD sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent USD sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent USD sudo firewall-cmd --reload Install the required packages for the registry node: USD sudo yum -y install python3 podman httpd httpd-tools jq Create the directory structure where the repository information will be held: USD sudo mkdir -p /opt/registry/{auth,certs,data} 3.14.2. Mirroring the OpenShift Container Platform image repository for a disconnected registry Complete the following steps to mirror the OpenShift Container Platform image repository for a disconnected registry. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. Procedure Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . 
Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. 
The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}" If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-baremetal-install 3.14.3. Modify the install-config.yaml file to use the disconnected registry On the provisioner node, the install-config.yaml file should use the newly created pull-secret from the pull-secret-update.txt file. The install-config.yaml file must also contain the disconnected registry node's certificate and registry information. Procedure Add the disconnected registry node's certificate to the install-config.yaml file: USD echo "additionalTrustBundle: |" >> install-config.yaml The certificate should follow the "additionalTrustBundle: |" line and be properly indented, usually by two spaces. USD sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml Add the mirror information for the registry to the install-config.yaml file: USD echo "imageContentSources:" >> install-config.yaml USD echo "- mirrors:" >> install-config.yaml USD echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. USD echo " source: quay.io/openshift-release-dev/ocp-release" >> install-config.yaml USD echo "- mirrors:" >> install-config.yaml USD echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. USD echo " source: quay.io/openshift-release-dev/ocp-v4.0-art-dev" >> install-config.yaml 3.15. Validation checklist for installation β OpenShift Container Platform installer has been retrieved. β OpenShift Container Platform installer has been extracted. β Required parameters for the install-config.yaml have been configured. β The hosts parameter for the install-config.yaml has been configured. β The bmc parameter for the install-config.yaml has been configured. β Conventions for the values configured in the bmc address field have been applied. β Created the OpenShift Container Platform manifests. β (Optional) Deployed routers on compute nodes. β (Optional) Created a disconnected registry. β (Optional) Validate disconnected registry settings if in use. | [
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt <user>",
"sudo systemctl start firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images",
"sudo virsh pool-start default",
"sudo virsh pool-autostart default",
"vim pull-secret.txt",
"chronyc sources",
"MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms",
"ping time.cloudflare.com",
"PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms",
"export PUB_CONN=<baremetal_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal pkill dhclient;dhclient baremetal \"",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr \"x.x.x.x/yy\" ipv4.gateway \"a.a.a.a\" ipv4.dns \"b.b.b.b\" 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal nmcli con up baremetal \"",
"export PROV_CONN=<prov_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPROV_CONN\\\" nmcli con delete \\\"USDPROV_CONN\\\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \\\"USDPROV_CONN\\\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning \"",
"nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2",
"interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false",
"cat <nmstate_configuration>.yaml | base64 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml",
"oc edit mc <machineconfig_custom_resource_name>",
"oc apply -f ./extraworker-secret.yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret",
"oc project openshift-machine-api",
"oc get machinesets",
"oc scale machineset <machineset_name> --replicas=<n> 1",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"192.168.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"192.168.0.0/24 via 192.168.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"10.0.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"10.0.0.0/24 via 10.0.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"ping <remote_node_ip_address>",
"ping <control_plane_node_ip_address>",
"export VERSION=stable-4.17",
"export RELEASE_ARCH=<architecture>",
"export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"sudo dnf install -y podman",
"sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"mkdir /home/kni/rhcos_image_cache",
"sudo semanage fcontext -a -t httpd_sys_content_t \"/home/kni/rhcos_image_cache(/.*)?\"",
"sudo restorecon -Rv /home/kni/rhcos_image_cache/",
"export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk.location')",
"export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/}",
"export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk[\"uncompressed-sha256\"]')",
"curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME}",
"ls -Z /home/kni/rhcos_image_cache",
"podman run -d --name rhcos_image_cache \\ 1 -v /home/kni/rhcos_image_cache:/var/www/html -p 8080:8080/tcp registry.access.redhat.com/ubi9/httpd-24",
"export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d\"/\" -f1)",
"export BOOTSTRAP_OS_IMAGE=\"http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}\"",
"echo \" bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}\"",
"platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"platform: baremetal: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ironic-inspector inspection failed: No disks satisfied root device hints",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfigs",
"ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"export SERVER=<ip_address> 1",
"export SystemID=<system_id> 1",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"On\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"ForceOff\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"pxe\", \"BootSourceOverrideEnabled\": \"Once\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideMode\":\"UEFI\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"cd\", \"BootSourceOverrideEnabled\": \"Once\"}}'",
"curl -u USDUSER:USDPASS -X POST -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{\"Image\": \"https://example.com/test.iso\", \"TransferProtocolType\": \"HTTPS\", \"UserName\": \"\", \"Password\":\"\"}'",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: <ETAG>\" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{\"Image\": \"https://example.com/test.iso\", \"TransferProtocolType\": \"HTTPS\", \"UserName\": \"\", \"Password\":\"\"}'",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> disableCertificateVerification: True",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR>",
"noProxy: .example.com,172.22.0.0/24,10.10.0.0/24",
"platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: \"Disabled\" 1",
"machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112",
"networkConfig: nmstate: interfaces: - name: <interface_name> wait-ip: ipv4+ipv6",
"platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6>",
"interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5",
"nmstatectl gc <nmstate_yaml_file>",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6",
"networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes",
"networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4",
"interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"nmstatectl gc <nmstate_yaml_file> 1",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254",
"hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"/dev/sda\" bootMode: UEFISecureBoot 2",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"sudo dnf -y install butane",
"variant: openshift version: 4.17.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all compute nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan",
"butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml",
"variant: openshift version: 4.17.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony",
"butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml",
"cd ~/clusterconfigs",
"cd manifests",
"touch cluster-network-avoid-workers-99-config.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"",
"sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\"",
"cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: raid: hardwareRAIDVolumes: - level: \"0\" 1 name: \"sda\" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0",
"spec: raid: hardwareRAIDVolumes: []",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs",
"cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift",
"sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent",
"sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"sudo yum -y install python3 podman httpd httpd-tools jq",
"sudo mkdir -p /opt/registry/{auth,certs,data}",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-baremetal-install",
"echo \"additionalTrustBundle: |\" >> install-config.yaml",
"sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml",
"echo \"imageContentSources:\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-release\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-v4.0-art-dev\" >> install-config.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-installation-workflow |
Chapter 3. Managing Service Registry content using the web console | Chapter 3. Managing Service Registry content using the web console You can manage schema and API artifacts stored in Service Registry by using the Service Registry web console. This includes uploading and browsing Service Registry content, configuring optional rules for content, and generating client SDK code: Section 3.1, "Viewing artifacts using the Service Registry web console" Section 3.2, "Adding artifacts using the Service Registry web console" Section 3.3, "Configuring content rules using the Service Registry web console" Section 3.4, "Generating client SDKs for OpenAPI artifacts using the Service Registry web console" Section 3.5, "Changing an artifact owner using the Service Registry web console" Section 3.6, "Configuring Service Registry instance settings using the web console" Section 3.7, "Exporting and importing data using the Service Registry web console" 3.1. Viewing artifacts using the Service Registry web console You can use the Service Registry web console to browse the schema and API artifacts stored in Service Registry. This section shows a simple example of viewing Service Registry artifacts, groups, versions, and artifact rules. Prerequisites Service Registry is installed and running in your environment. You are logged in to the Service Registry web console: http://MY_REGISTRY_URL/ui Artifacts have been added to Service Registry using the web console, command line, Maven plug-in, or a Java client application. Procedure On the Artifacts tab, browse the list of artifacts stored in Service Registry, or enter a search string to find an artifact. You can select from the list to search by specific criteria such as name, group, labels, or global ID. Figure 3.1. Artifacts in Service Registry web console Click an artifact to view the following details: Overview : Displays artifact version metadata such as artifact name, artifact ID, global ID, content ID, labels, properties, and so on. Also displays rules for validity and compatibility that you can configure for artifact content. Documentation (OpenAPI and AsyncAPI only): Displays automatically-generated REST API documentation. Content : Displays a read-only view of the full artifact content. For JSON content, you can click JSON or YAML to display your preferred format. References : Displays a read-only view of all artifacts referenced by this artifact. You can also click View artifacts that reference this artifact . If additional versions of this artifact have been added, you can select them from the Version list in the page header. To save the artifact contents to a local file, for example, my-openapi.json or my-protobuf-schema.proto , click Download at the end of the page. Additional resources Section 3.2, "Adding artifacts using the Service Registry web console" Section 3.3, "Configuring content rules using the Service Registry web console" Chapter 10, Service Registry content rule reference 3.2. Adding artifacts using the Service Registry web console You can use the Service Registry web console to upload schema and API artifacts to Service Registry. This section shows simple examples of uploading Service Registry artifacts and adding new artifact versions. Prerequisites Service Registry is installed and running in your environment.
You are logged in to the Service Registry web console: http://MY_REGISTRY_URL/ui Procedure On the Artifacts tab, click Upload artifact , and specify the following details: Group & ID : Use the default empty settings to automatically generate an artifact ID and add the artifact to the default artifact group. Alternatively, you can enter an optional artifact group name or ID. Type : Use the default Auto-Detect setting to automatically detect the artifact type, or select the artifact type from the list, for example, Avro Schema or OpenAPI . You must manually select the Kafka Connect Schema artifact type, which cannot be automatically detected. Artifact : Specify the artifact location using either of the following options: From file : Click Browse , and select a file, or drag and drop a file. For example, my-openapi.json or my-schema.proto . Alternatively, you can enter the file contents in the text box. From URL : Enter a valid and accessible URL, and click Fetch . For example: https://petstore3.swagger.io/api/v3/openapi.json . Click Upload and view the artifact details: Overview : Displays artifact version metadata such as artifact name, artifact ID, global ID, content ID, labels, properties, and so on. Also displays rules for validity and compatibility that you can configure for artifact content. Documentation (OpenAPI and AsyncAPI only): Displays automatically-generated REST API documentation. Content : Displays a read-only view of the full artifact content. For JSON content, you can click JSON or YAML to display your preferred format. References : Displays a read-only view of all artifacts referenced by this artifact. You can also click View artifacts that reference this artifact . You can add artifact references using the Service Registry Maven plug-in or REST API only. The following example shows an example OpenAPI artifact: Figure 3.2. Artifact details in Service Registry web console On the Overview tab, click the Edit pencil icon to edit artifact metadata such as name or description. You can also enter an optional comma-separated list of labels for searching, or add key-value pairs of arbitrary properties associated with the artifact. To add properties, perform the following steps: Click Add property . Enter the key name and the value. Repeat the first two steps to add multiple properties. Click Save . To save the artifact contents to a local file, for example, my-protobuf-schema.proto or my-openapi.json , click Download at the end of the page. To add a new artifact version, click Upload new version in the page header, and drag and drop or click Browse to upload the file, for example, my-avro-schema.json or my-openapi.json . To delete an artifact, click Delete in the page header. Warning Deleting an artifact deletes the artifact and all of its versions, and cannot be undone. Additional resources Section 3.1, "Viewing artifacts using the Service Registry web console" Section 3.3, "Configuring content rules using the Service Registry web console" Chapter 10, Service Registry content rule reference 3.3. Configuring content rules using the Service Registry web console You can use the Service Registry web console to configure optional rules to prevent invalid or incompatible content from being added to Service Registry. All configured artifact-specific rules or global rules must pass before a new artifact version can be uploaded to Service Registry. Configured artifact-specific rules override any configured global rules. 
This section shows a simple example of configuring global and artifact-specific rules. Prerequisites Service Registry is installed and running in your environment. You are logged in to the Service Registry web console: http://MY_REGISTRY_URL/ui Artifacts have been added to Service Registry using the web console, command line, Maven plug-in, or a Java client application. When role-based authorization is enabled, you have administrator access for global rules and artifact-specific rules, or developer access for artifact-specific rules only. Procedure On the Artifacts tab, browse the list of artifacts in Service Registry, or enter a search string to find an artifact. You can select from the list to search by specific criteria such as artifact name, group, labels, or global ID. Click an artifact to view its version details and content rules. In Artifact-specific rules , click Enable to configure a validity, compatibility, or integrity rule for artifact content, and select the appropriate rule configuration from the list. For example, for Validity rule , select Full . Figure 3.3. Artifact content rules in Service Registry web console To access global rules, click the Global rules tab. Click Enable to configure global validity, compatibility, or integrity rules for all artifact content, and select the appropriate rule configuration from the list. To disable an artifact rule or global rule, click the trash icon to the rule. Additional resources Section 3.2, "Adding artifacts using the Service Registry web console" Chapter 10, Service Registry content rule reference 3.4. Generating client SDKs for OpenAPI artifacts using the Service Registry web console You can use the Service Registry web console to configure, generate, and download client software development kits (SDKs) for OpenAPI artifacts. You can then use the generated client SDKs to build your client applications for specific platforms based on the OpenAPI. Service Registry generates client SDKs for the following programming languages: C# Go Java PHP Python Ruby Swift TypeScript Note Client SDK generation for OpenAPI artifacts runs in your browser only, and cannot be automated by using an API. You must regenerate the client SDK each time a new artifact version is added in Service Registry. Prerequisites Service Registry is installed and running in your environment. You are logged in to the Service Registry web console: http://MY_REGISTRY_URL/ui An OpenAPI artifact has been added to Service Registry using the web console, command line, Maven plug-in, or a Java client application. Procedure On the Artifacts tab, browse the list of artifacts stored in Service Registry, or enter a search string to find a specific OpenAPI artifact. You can select from the list to search by criteria such as name, group, labels, or global ID. Click the OpenAPI artifact in the list to view its details. In the Version metadata section, click Generate client SDK , and configure the following settings in the dialog: Language : Select the programming language in which to generate the client SDK, for example, Java . Generated client class name : Enter the class name for the client SDK, for example, MyJavaClientSDK. Generated client package name : Enter the package name for the client SDK, for example, io.my.example.sdk Click Show advanced settings to configure optional comma-separated lists of path patterns to include or exclude: Include path patterns : Enter specific paths to include when generating the client SDK, for example, **/.*, **/my-path/* . 
If this field is empty, all paths are included. Exclude path patterns : Enter specific paths to exclude when generating the client SDK, for example, **/my-other-path/* . If this field is empty, no paths are excluded. Figure 3.4. Generate a Java client SDK in Service Registry web console When you have configured the settings in the dialog, click Generate and download . Enter a file name for the client SDK in the dialog, for example, my-client-java.zip , and click Save to download. Additional resources Service Registry uses Kiota from Microsoft to generate the client SDKs. For more information, see the Kiota project in GitHub . For more details and examples of using the generated SDKs to build client applications, see the Kiota documentation . 3.5. Changing an artifact owner using the Service Registry web console As an administrator or as an owner of a schema or API artifact, you can use the Service Registry web console to change the artifact owner to another user account. For example, this feature is useful if the Artifact owner-only authorization option is set for the Service Registry instance on the Settings tab so that only owners or administrators can modify artifacts. You might need to change owner if the owner user leaves the organization or the owner account is deleted. Note The Artifact owner-only authorization setting and the artifact Owner field are displayed only if authentication was enabled when the Service Registry instance was deployed. For more details, see Installing and deploying Service Registry on OpenShift . Prerequisites The Service Registry instance is deployed and the artifact is created. You are logged in to the Service Registry web console as the artifact's current owner or as an administrator: http://MY_REGISTRY_URL/ui Procedure On the Artifacts tab, browse the list of artifacts stored in Service Registry, or enter a search string to find the artifact. You can select from the list to search by criteria such as name, group, labels, or global ID. Click the artifact that you want to reassign. In the Version metadata section, click the pencil icon to the Owner field. In the New owner field, select or enter an account name. Click Change owner . Additional resources Installing and deploying Service Registry on OpenShift 3.6. Configuring Service Registry instance settings using the web console As an administrator, you can use the Service Registry web console to configure dynamic settings for Service Registry instances at runtime. You can manage configuration options for features such as authentication, authorization, and API compatibility. Note Authentication and authorization settings are only displayed in the web console if authentication was already enabled when the Service Registry instance was deployed. For more details, see the Installing and deploying Service Registry on OpenShift . Prerequisites The Service Registry instance is already deployed. You are logged in to the Service Registry web console with administrator access: http://MY_REGISTRY_URL/ui Procedure In the Service Registry web console, click the Settings tab. Select the settings that you want to configure for this Service Registry instance: Table 3.1. Authentication settings Setting Description HTTP basic authentication Displayed only when authentication is already enabled. When selected, Service Registry users can authenticate using HTTP basic authentication, in addition to OAuth. Not selected by default. Table 3.2. 
Authorization settings Setting Description Anonymous read access Displayed only when authentication is already selected. When selected, Service Registry grants read-only access to requests from anonymous users without any credentials. This setting is useful if you want to use this instance to publish schemas or APIs externally. Not selected by default. Artifact owner-only authorization Displayed only when authentication is already enabled. When selected, only the user who created an artifact can modify that artifact. Not selected by default. Artifact group owner-only authorization Displayed only when authentication is already enabled and Artifact owner-only authorization is selected. When selected, only the user who created an artifact group has write access to that artifact group, for example, to add or remove artifacts in that group. Not selected by default. Authenticated read access Displayed only when authentication is already enabled. When selected, Service Registry grants at least read-only access to requests from any authenticated user regardless of their user role. Not selected by default. Table 3.3. Compatibility settings Setting Description Legacy ID mode (compatibility API) When selected, the Confluent Schema Registry compatibility API uses globalId instead of contentId as an artifact identifier. This setting is useful when migrating from legacy Service Registry instances based on the v1 Core Registry API. Not selected by default. Table 3.4. Web console settings Setting Description Download link expiry The number of seconds that a generated link to a .zip download file is active before expiring for security reasons, for example, when exporting artifact data from the instance. Defaults to 30 seconds. UI read-only mode When selected, the Service Registry web console is set to read-only, preventing create, read, update, or delete operations. Changes made using the Core Registry API are not affected by this setting. Not selected by default. Table 3.5. Additional properties Setting Description Delete artifact version When selected, users are permitted to delete artifact versions in this instance by using the Core Registry API. Not selected by default. Additional resources Installing and deploying Service Registry on OpenShift 3.7. Exporting and importing data using the Service Registry web console As an administrator, you can use the Service Registry web console to export data from one Service Registry instance, and import this data into another Service Registry instance. You can use this feature to easily migrate data between different instances. The following example shows how to export and import existing data in a .zip file from one Service Registry instance to another instance. All of the artifact data contained in the Service Registry instance is exported in the .zip file. Note You can import only Service Registry data that has been exported from another Service Registry instance. Prerequisites Service Registry instances have been created as follows: The source instance that you are exporting from contains at least one schema or API artifact The target instance that you are importing into is empty to preserve unique IDs You are logged into the Service Registry web console with administrator access: http://MY_REGISTRY_URL/ui Procedure In the web console for the source Service Registry instance, view the Artifacts tab. 
Click the options icon (three vertical dots) next to Upload artifact , and select Download all artifacts (.zip file) to export the data for this Service Registry instance to a .zip download file. In the web console for the target Service Registry instance, view the Artifacts tab. Click the options icon next to Upload artifact , and select Upload multiple artifacts . Drag and drop or browse to the .zip download file that you exported earlier. Click Upload and wait for the data to be imported. | null | https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/service_registry_user_guide/managing-registry-artifacts-ui_registry |
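For teams that prefer to script this migration instead of using the web console, the same export and import can usually be driven with curl against the Core Registry API. The sketch below is illustrative only: the /apis/registry/v2/admin/export and /apis/registry/v2/admin/import endpoint paths and the MY_SOURCE_REGISTRY_URL and MY_TARGET_REGISTRY_URL placeholders are assumptions, so verify them against the REST API reference for your Service Registry version, and add credentials to each request if authentication is enabled on either instance.

# Export all artifact data from the source instance to a local .zip file
# (assumed v2 admin endpoint; verify against your REST API reference).
curl -sf -o registry-export.zip "http://MY_SOURCE_REGISTRY_URL/apis/registry/v2/admin/export"

# Import the exported .zip file into the empty target instance.
curl -sf -X POST -H "Content-Type: application/zip" --data-binary @registry-export.zip "http://MY_TARGET_REGISTRY_URL/apis/registry/v2/admin/import"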
Providing feedback on Red Hat build of Quarkus documentation | Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/service_binding/proc_providing-feedback-on-red-hat-documentation_quarkus-service-binding |
Chapter 5. Multisite | Chapter 5. Multisite A single zone configuration typically consists of one zone group containing one zone and one or more ceph-radosgw instances where you may load-balance gateway client requests between the instances. In a single zone configuration, typically multiple gateway instances point to a single Ceph storage cluster. However, Red Hat supports several multi-site configuration options for the Ceph Object Gateway: Multi-zone: A more advanced configuration consists of one zone group and multiple zones, each zone with one or more ceph-radosgw instances. Each zone is backed by its own Ceph Storage Cluster. Multiple zones in a zone group provide disaster recovery for the zone group should one of the zones experience a significant failure. Each zone is active and may receive write operations. In addition to disaster recovery, multiple active zones may also serve as a foundation for content delivery networks. To configure multiple zones without replication, see Section 5.12, "Configuring Multiple Zones without Replication" . Multi-zone-group: Formerly called 'regions', the Ceph Object Gateway can also support multiple zone groups, each zone group with one or more zones. Objects stored to zone groups within the same realm share a global namespace, ensuring unique object IDs across zone groups and zones. Multiple Realms: The Ceph Object Gateway supports the notion of realms, which can be a single zone group or multiple zone groups and a globally unique namespace for the realm. Multiple realms provide the ability to support numerous configurations and namespaces. 5.1. Requirements and Assumptions A multi-site configuration requires at least two Ceph storage clusters and at least two Ceph object gateway instances, one for each Ceph storage cluster. This guide assumes at least two Ceph storage clusters in geographically separate locations; however, the configuration can work on the same physical site. This guide also assumes four Ceph object gateway servers named rgw1 , rgw2 , rgw3 and rgw4 respectively. A multi-site configuration requires a master zone group and a master zone. Additionally, each zone group requires a master zone. Zone groups may have one or more secondary or non-master zones. Important The master zone within the master zone group of a realm is responsible for storing the master copy of the realm's metadata, including users, quotas and buckets (created by the radosgw-admin CLI). This metadata gets synchronized to secondary zones and secondary zone groups automatically. Metadata operations executed with the radosgw-admin CLI MUST be executed on a host within the master zone of the master zone group in order to ensure that they get synchronized to the secondary zone groups and zones. Currently, it is possible to execute metadata operations on secondary zones and zone groups, but it is NOT recommended because they WILL NOT be synchronized, leading to fragmented metadata. In the following examples, the rgw1 host will serve as the master zone of the master zone group; the rgw2 host will serve as the secondary zone of the master zone group; the rgw3 host will serve as the master zone of the secondary zone group; and the rgw4 host will serve as the secondary zone of the secondary zone group. 5.2. Pools Red Hat recommends using the Ceph Placement Group's per Pool Calculator to calculate a suitable number of placement groups for the pools the ceph-radosgw daemon will create. Set the calculated values as defaults in your Ceph configuration file.
For example: Note Make this change to the Ceph configuration file on your storage cluster; then, either make a runtime change to the configuration so that it will use those defaults when the gateway instance creates the pools. Alternatively, create the pools manually. See Pools chapter in the Storage Strategies guide for details on creating pools. Pool names particular to a zone follow the naming convention {zone-name}.pool-name . For example, a zone named us-east will have the following pools: .rgw.root us-east.rgw.control us-east.rgw.meta us-east.rgw.log us-east.rgw.buckets.index us-east.rgw.buckets.data us-east.rgw.buckets.non-ec us-east.rgw.meta:users.keys us-east.rgw.meta:users.email us-east.rgw.meta:users.swift us-east.rgw.meta:users.uid 5.3. Installing an Object Gateway To install the Ceph Object Gateway, see the Red Hat Ceph Storage Installation Guide for details. All Ceph Object Gateway nodes must follow the tasks listed in the Requirements for Installing Red Hat Ceph Storage section. Ansible can install and configure Ceph Object Gateways for use with a Ceph Storage cluster. For multi-site and multi-site group deployments, you should have an Ansible configuration for each zone. If you install Ceph Object Gateway with Ansible, the Ansible playbooks will handle the initial configuration for you. To install the Ceph Object Gateway with Ansible, add your hosts to the /etc/ansible/hosts file. Add the Ceph Object Gateway hosts under an [rgws] section to identify their roles to Ansible. If your hosts have sequential naming, you may use a range. For example: Once you have added the hosts, you may rerun your Ansible playbooks. Note Ansible will ensure your gateway is running, so the default zones and pools may need to be deleted manually. This guide provides those steps. When updating an existing multi-site cluster with an asynchronous update, follow the installation instruction for the update. Then, restart the gateway instances. Note There is no required order for restarting the instances. Red Hat recommends restarting the master zone group and master zone first, followed by the secondary zone groups and secondary zones. 5.4. Establish a Multisite Realm All gateways in a cluster have a configuration. In a multi-site realm, these gateways may reside in different zone groups and zones. Yet, they must work together within the realm. In a multi-site realm, all gateway instances MUST retrieve their configuration from a ceph-radosgw daemon on a host within the master zone group and master zone. Consequently, the first step in creating a multi-site cluster involves establishing the realm, master zone group and master zone. To configure your gateways in a multi-site configuration, choose a ceph-radosgw instance that will hold the realm configuration, master zone group and master zone. 5.4.1. Create a Realm A realm contains the multi-site configuration of zone groups and zones and also serves to enforce a globally unique namespace within the realm. Create a new realm for the multi-site configuration by opening a command line interface on a host identified to serve in the master zone group and zone. Then, execute the following: For example: If the cluster will have a single realm, specify the --default flag. If --default is specified, radosgw-admin will use this realm by default. If --default is not specified, adding zone-groups and zones requires specifying either the --rgw-realm flag or the --realm-id flag to identify the realm when adding zone groups and zones. 
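To make the effect of the --default flag concrete, the following sketch contrasts the two styles, reusing the movies realm and us zone group from the examples in this chapter; treat it as an illustration rather than a complete bootstrap.

# Style 1: make the realm the default, so later calls use it implicitly.
radosgw-admin realm create --rgw-realm=movies --default
radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --master --default

# Style 2: create the realm without --default, then name it explicitly
# with --rgw-realm (or --realm-id) on every subsequent call.
radosgw-admin realm create --rgw-realm=movies
radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --rgw-realm=movies --master --default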
After creating the realm, radosgw-admin will echo back the realm configuration. For example: Note Ceph generates a unique ID for the realm, which allows the renaming of a realm if the need arises. 5.4.2. Create a Master Zone Group A realm must have at least one zone group, which will serve as the master zone group for the realm. Create a new master zone group for the multi-site configuration by opening a command line interface on a host identified to serve in the master zone group and zone. Then, execute the following: For example: If the realm will only have a single zone group, specify the --default flag. If --default is specified, radosgw-admin will use this zone group by default when adding new zones. If --default is not specified, adding zones will require either the --rgw-zonegroup flag or the --zonegroup-id flag to identify the zone group when adding or modifying zones. After creating the master zone group, radosgw-admin will echo back the zone group configuration. For example: 5.4.3. Create a Master Zone Important Zones must be created on a Ceph Object Gateway node that will be within the zone. Create a master zone for the multi-site configuration by opening a command line interface on a host identified to serve in the master zone group and zone. Then, execute the following: For example: Note The --access-key and --secret aren't specified. These settings will be added to the zone once the user is created in the section. 5.4.4. Delete the Default Zone Group and Zone Delete the default zone if it exists. Make sure to remove it from the default zone group first. Important The following steps assume a multi-site configuration using newly installed systems that aren't storing data yet. DO NOT DELETE the default zonegroup, zone, and its pools if you are already using it to store data, or the data will be deleted and unrecoverable. In order to access old data in the default zone and zonegroup, use --rgw-zone default and --rgw-zonegroup default in radosgw-admin commands. Remove the zonegroup and the zone: Example Update and commit the period if the cluster is in a multi-site configuration: Example Delete the default pools in your Ceph storage cluster if they exist. Example Important After deleting the pools, restart the Ceph Object Gateway process. 5.4.5. Create a System User The ceph-radosgw daemons must authenticate before pulling realm and period information. In the master zone, create a system user to facilitate authentication between daemons. For example: Make a note of the access_key and secret_key , as the secondary zones will require them to authenticate with the master zone. Finally, add the system user to the master zone. 5.4.6. Update the Period After updating the master zone configuration, update the period. Note Updating the period changes the epoch, and ensures that other zones will receive the updated configuration. 5.4.7. Update the Ceph Configuration File Update the Ceph configuration file on master zone hosts by adding the rgw_zone configuration option and the name of the master zone to the instance entry. For example: 5.4.8. Start the Gateway On the object gateway host, start and enable the Ceph Object Gateway service: If the service is already running, restart the service instead of starting and enabling it: 5.5. Establish a Secondary Zone Zones within a zone group replicate all data to ensure that each zone has the same data. When creating the secondary zone, execute ALL of the radosgw-admin zone operations on a host identified to serve the secondary zone. 
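Condensed into one place, the secondary zone bootstrap described in the following subsections looks roughly like the sketch below when run on the secondary host ( rgw2 in this guide). The master zone URL, the {access-key} and {secret} placeholders, and the gateway instance name are the examples used elsewhere in this chapter, so substitute your own values.

# Pull the realm and period from the master zone using the system user credentials.
radosgw-admin realm pull --url=http://rgw1:80 --access-key={access-key} --secret={secret}
radosgw-admin realm default --rgw-realm=movies
radosgw-admin period pull --url=http://rgw1:80 --access-key={access-key} --secret={secret}

# Create the secondary zone (no --master or --default), then commit the period.
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west --access-key={system-key} --secret={secret} --endpoints=http://rgw2:80
radosgw-admin period update --commit

# Restart the gateway so the new zone configuration takes effect.
systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0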
Note To add additional zones, follow the same procedures as for adding the secondary zone. Use a different zone name. Important You must execute metadata operations, such as user creation and quotas, on a host within the master zone of the master zonegroup. The master zone and the secondary zone can receive bucket operations from the RESTful APIs, but the secondary zone redirects bucket operations to the master zone. If the master zone is down, bucket operations will fail. If you create a bucket using the radosgw-admin CLI, you must execute it on a host within the master zone of the master zone group, or the buckets will not synchronize to other zone groups and zones. 5.5.1. Pull the Realm Using the URL path, access key and secret of the master zone in the master zone group, pull the realm to the host. To pull a non-default realm, specify the realm using the --rgw-realm or --realm-id configuration options. If this realm is the default realm or the only realm, make the realm the default realm. 5.5.2. Pull the Period Using the URL path, access key and secret of the master zone in the master zone group, pull the period to the host. To pull a period from a non-default realm, specify the realm using the --rgw-realm or --realm-id configuration options. Note Pulling the period retrieves the latest version of the zone group and zone configurations for the realm. 5.5.3. Create a Secondary Zone Important Zones must be created on a Ceph Object Gateway node that will be within the zone. Create a secondary zone for the multi-site configuration by opening a command line interface on a host identified to serve the secondary zone. Specify the zone group ID, the new zone name and an endpoint for the zone. DO NOT use the --master or --default flags. All zones run in an active-active configuration by default; that is, a gateway client may write data to any zone and the zone will replicate the data to all other zones within the zone group. If the secondary zone should not accept write operations, specify the --read-only flag to create an active-passive configuration between the master zone and the secondary zone. Additionally, provide the access_key and secret_key of the generated system user stored in the master zone of the master zone group. Execute the following: Syntax Example Important The following steps assume a multi-site configuration using newly installed systems that aren't storing data. DO NOT DELETE the default zone and its pools if you are already using them to store data, or the data will be lost and unrecoverable. Delete the default zone if needed. Finally, delete the default pools in your Ceph storage cluster if needed. Important After deleting the pools, restart the RGW process. 5.5.4. Update the Period After updating the master zone configuration, update the period. Note Updating the period changes the epoch, and ensures that other zones will receive the updated configuration. 5.5.5. Update the Ceph Configuration File Update the Ceph configuration file on the secondary zone hosts by adding the rgw_zone configuration option and the name of the secondary zone to the instance entry. For example: 5.5.6. Start the Gateway On the object gateway host, start and enable the Ceph Object Gateway service: If the service is already running, restart the service instead of starting and enabling it: 5.6. Configuring the archive sync module (Technology Preview) The archive sync module leverages the versioning feature of S3 objects in Ceph object gateway to have an archive zone.
The archive zone has a history of versions of S3 objects that can only be eliminated through the gateways associated with the archive zone. It captures all the data updates and metadata to consolidate them as versions of S3 objects. Important The archive sync module is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details. Prerequisites A running Red Hat Ceph Storage cluster. root or sudo access. Installation of the Ceph Object Gateway. Procedure Configure the archive sync module when creating a new zone by using the archive tier: Syntax Example Additional resources See the Establish a Multisite Realm section in the Red Hat Ceph Storage Object Gateway Guide for more details. 5.7. Failover and Disaster Recovery If the master zone would fail, failover to the secondary zone for disaster recovery. Make the secondary zone the master and default zone. For example: By default, Ceph Object Gateway runs in an active-active configuration. If the cluster was configured to run in an active-passive configuration, the secondary zone is a read-only zone. Remove the --read-only status to allow the zone to receive write operations. For example: Update the period to make the changes take effect. Restart the Ceph Object Gateway. If the former master zone recovers, revert the operation. From the recovered zone, pull the realm from the current master zone. Make the recovered zone the master and default zone. Update the period to make the changes take effect. Restart the Ceph Object Gateway in the recovered zone. If the secondary zone needs to be a read-only configuration, update the secondary zone. Update the period to make the changes take effect. Restart the Ceph Object Gateway in the secondary zone. 5.8. Migrating a Single Site System to Multi-Site To migrate from a single site system with a default zone group and zone to a multi site system, use the following steps: Create a realm. Replace <name> with the realm name. Rename the default zone and zonegroup. Replace <name> with the zonegroup or zone name. Configure the master zonegroup. Replace <name> with the realm or zonegroup name. Replace <fqdn> with the fully qualified domain name(s) in the zonegroup. Configure the master zone. Replace <name> with the realm, zonegroup or zone name. Replace <fqdn> with the fully qualified domain name(s) in the zonegroup. Create a system user. Replace <user-id> with the username. Replace <display-name> with a display name. It may contain spaces. Commit the updated configuration. Restart the Ceph Object Gateway. After completing this procedure, proceed to Establish a Secondary Zone to create a secondary zone in the master zone group. 5.9. Multisite Command Line Usage 5.9.1. Realms A realm represents a globally unique namespace consisting of one or more zonegroups containing one or more zones, and zones containing buckets, which in turn contain objects. A realm enables the Ceph Object Gateway to support multiple namespaces and their configuration on the same hardware. A realm contains the notion of periods. Each period represents the state of the zone group and zone configuration in time. 
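Because almost every later subsection ends with the same step, it helps to see the period workflow once on its own. The following sketch simply commits pending configuration changes and then inspects the result, and assumes it is run on a host that already holds the realm configuration.

# Commit staged zonegroup or zone changes so every zone in the realm receives them.
radosgw-admin period update --commit

# Inspect the current period and the list of periods recorded for the realm.
radosgw-admin period get
radosgw-admin realm list-periods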
Each time you make a change to a zonegroup or zone, update the period and commit it. By default, the Ceph Object Gateway version 2 does not create a realm for backward compatibility with version 1.3 and earlier releases. However, as a best practice, Red Hat recommends creating realms for new clusters. 5.9.1.1. Creating a Realm To create a realm, execute realm create and specify the realm name. If the realm is the default, specify --default . For example: By specifying --default , the realm will be called implicitly with each radosgw-admin call unless --rgw-realm and the realm name are explicitly provided. 5.9.1.2. Making a Realm the Default One realm in the list of realms should be the default realm. There may be only one default realm. If there is only one realm and it wasn't specified as the default realm when it was created, make it the default realm. Alternatively, to change which realm is the default, execute: Note When the realm is default, the command line assumes --rgw-realm=<realm-name> as an argument. 5.9.1.3. Deleting a Realm To delete a realm, execute realm delete and specify the realm name. For example: 5.9.1.4. Getting a Realm To get a realm, execute realm get and specify the realm name. For example: The CLI will echo a JSON object with the realm properties. Use > and an output file name to output the JSON object to a file. 5.9.1.5. Setting a Realm To set a realm, execute realm set , specify the realm name, and --infile= with an input file name. For example: 5.9.1.6. Listing Realms To list realms, execute realm list . 5.9.1.7. Listing Realm Periods To list realm periods, execute realm list-periods . 5.9.1.8. Pulling a Realm To pull a realm from the node containing the master zone group and master zone to a node containing a secondary zone group or zone, execute realm pull on the node that will receive the realm configuration. 5.9.1.9. Renaming a Realm A realm is not part of the period. Consequently, renaming the realm is only applied locally, and will not get pulled with realm pull . When renaming a realm with multiple zones, run the command on each zone. To rename a realm, execute the following: Note Do NOT use realm set to change the name parameter. That changes the internal name only. Specifying --rgw-realm would still use the old realm name. 5.9.2. Zone Groups The Ceph Object Gateway supports multi-site deployments and a global namespace by using the notion of zone groups. Formerly called a region, a zone group defines the geographic location of one or more Ceph Object Gateway instances within one or more zones. Configuring zone groups differs from typical configuration procedures, because not all of the settings end up in a Ceph configuration file. You can list zone groups, get a zone group configuration, and set a zone group configuration. Note The radosgw-admin zonegroup operations can be performed on any node within the realm, because the step of updating the period propagates the changes throughout the cluster. However, radosgw-admin zone operations MUST be performed on a host within the zone. 5.9.2.1. Creating a Zone Group Creating a zone group consists of specifying the zone group name. Creating a zone assumes it will live in the default realm unless --rgw-realm=<realm-name> is specified. If the zonegroup is the default zonegroup, specify the --default flag. If the zonegroup is the master zonegroup, specify the --master flag. For example: Note Use zonegroup modify --rgw-zonegroup=<zonegroup-name> to modify an existing zone group's settings. 5.9.2.2. 
Making a Zone Group the Default One zonegroup in the list of zonegroups should be the default zonegroup. There may be only one default zonegroup. If there is only one zonegroup and it wasn't specified as the default zonegroup when it was created, make it the default zonegroup. Alternatively, to change which zonegroup is the default, execute: Note When the zonegroup is default, the command line assumes --rgw-zonegroup=<zonegroup-name> as an argument. Then, update the period: 5.9.2.3. Adding a Zone to a Zone Group To add a zone to a zonegroup, you MUST execute this step on a host that will be in the zone. To add a zone to a zonegroup, execute the following: Then, update the period: 5.9.2.4. Removing a Zone from a Zone Group To remove a zone from a zonegroup, execute the following: Then, update the period: 5.9.2.5. Renaming a Zone Group To rename a zonegroup, execute the following: Then, update the period: 5.9.2.6. Deleting a Zone Group To delete a zonegroup, execute the following: Then, update the period: 5.9.2.7. Listing Zone Groups A Ceph cluster contains a list of zone groups. To list the zone groups, execute: The radosgw-admin returns a JSON formatted list of zone groups. 5.9.2.8. Getting a Zone Group To view the configuration of a zone group, execute: The zone group configuration looks like this: 5.9.2.9. Setting a Zone Group Defining a zone group consists of creating a JSON object, specifying at least the required settings: name : The name of the zone group. Required. api_name : The API name for the zone group. Optional. is_master : Determines if the zone group is the master zone group. Required. note: You can only have one master zone group. endpoints : A list of all the endpoints in the zone group. For example, you may use multiple domain names to refer to the same zone group. Remember to escape the forward slashes ( \/ ). You may also specify a port ( fqdn:port ) for each endpoint. Optional. hostnames : A list of all the hostnames in the zone group. For example, you may use multiple domain names to refer to the same zone group. Optional. The rgw dns name setting will automatically be included in this list. You should restart the gateway daemon(s) after changing this setting. master_zone : The master zone for the zone group. Optional. Uses the default zone if not specified. note: You can only have one master zone per zone group. zones : A list of all zones within the zone group. Each zone has a name (required), a list of endpoints (optional), and whether or not the gateway will log metadata and data operations (false by default). placement_targets : A list of placement targets (optional). Each placement target contains a name (required) for the placement target and a list of tags (optional) so that only users with the tag can use the placement target (i.e., the user's placement_tags field in the user info). default_placement : The default placement target for the object index and object data. Set to default-placement by default. You may also set a per-user default placement in the user info for each user. To set a zone group, create a JSON object consisting of the required fields, save the object to a file (e.g., zonegroup.json ); then, execute the following command: Where zonegroup.json is the JSON file you created. Important The default zone group is_master setting is true by default. If you create a new zone group and want to make it the master zone group, you must either set the default zone group is_master setting to false , or delete the default zone group. 
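In practice, rather than writing zonegroup.json by hand, it is often easier to start from the live configuration and edit it; the round trip below is a sketch that assumes a zone group named us, as in the earlier examples.

# Dump the current zone group configuration to a file.
radosgw-admin zonegroup get --rgw-zonegroup=us > zonegroup.json

# Edit zonegroup.json: adjust name, api_name, is_master, endpoints, master_zone,
# zones, placement_targets, and default_placement as required, then load it back.
radosgw-admin zonegroup set --infile zonegroup.json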
Finally, update the period: 5.9.2.10. Setting a Zone Group Map Setting a zone group map consists of creating a JSON object consisting of one or more zone groups, and setting the master_zonegroup for the cluster. Each zone group in the zone group map consists of a key/value pair, where the key setting is equivalent to the name setting for an individual zone group configuration, and the val is a JSON object consisting of an individual zone group configuration. You may only have one zone group with is_master equal to true , and it must be specified as the master_zonegroup at the end of the zone group map. The following JSON object is an example of a default zone group map. To set a zone group map, execute the following: Where zonegroupmap.json is the JSON file you created. Ensure that you have zones created for the ones specified in the zone group map. Finally, update the period. 5.9.3. Zones Ceph Object Gateway supports the notion of zones. A zone defines a logical group consisting of one or more Ceph Object Gateway instances. Configuring zones differs from typical configuration procedures, because not all of the settings end up in a Ceph configuration file. You can list zones, get a zone configuration and set a zone configuration. Important All radosgw-admin zone operations MUST be executed on a host that operates or will operate within the zone. 5.9.3.1. Creating a Zone To create a zone, specify a zone name. If it is a master zone, specify the --master option. Only one zone in a zone group may be a master zone. To add the zone to a zonegroup, specify the --rgw-zonegroup option with the zonegroup name. Important Zones must be created on a Ceph Object Gateway node that will be within the zone. Then, update the period: 5.9.3.2. Deleting a Zone To delete a zone, first remove it from the zonegroup. Then, update the period: Next, delete the zone. Important This procedure MUST be executed on a host within the zone. Execute the following: Finally, update the period: Important Do not delete a zone without removing it from a zone group first. Otherwise, updating the period will fail. If the pools for the deleted zone will not be used anywhere else, consider deleting the pools. Replace <del-zone> in the example below with the deleted zone's name. Important Once Ceph deletes the zone pools, it deletes all of the data within them in an unrecoverable manner. Only delete the zone pools if Ceph clients no longer need the pool contents. Important In a multi-realm cluster, deleting the .rgw.root pool along with the zone pools will remove ALL the realm information for the cluster. Ensure that .rgw.root does not contain other active realms before deleting the .rgw.root pool. Important After deleting the pools, restart the RGW process. 5.9.3.3. Modifying a Zone To modify a zone, specify the zone name and the parameters you wish to modify. Important Zones should be modified on a Ceph Object Gateway node that will be within the zone. --access-key=<key> --secret/--secret-key=<key> --master --default --endpoints=<list> Then, update the period: 5.9.3.4. Listing Zones As root , to list the zones in a cluster, execute: 5.9.3.5. Getting a Zone As root , to get the configuration of a zone, execute: The default zone looks like this: 5.9.3.6. Setting a Zone Configuring a zone involves specifying a series of Ceph Object Gateway pools. For consistency, we recommend using a pool prefix that is the same as the zone name. See Pools for details of configuring pools.
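As a quick orientation for the procedure that follows, one common way to build zone.json is to export the existing zone, point its pool entries at pools that follow the {zone-name}.pool-name convention, and load the file back; the sketch below assumes the us-east zone used throughout this chapter.

# On a host within the zone, dump the current zone configuration.
radosgw-admin zone get --rgw-zone=us-east > zone.json

# Edit zone.json so the pool entries follow the {zone-name}.pool-name convention,
# for example us-east.rgw.control and us-east.rgw.log, then load it back.
radosgw-admin zone set --rgw-zone=us-east --infile zone.json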
Important Zones should be set on a Ceph Object Gateway node that will be within the zone. To set a zone, create a JSON object consisting of the pools, save the object to a file (e.g., zone.json ); then, execute the following command, replacing {zone-name} with the name of the zone: Where zone.json is the JSON file you created. Then, as root , update the period: 5.9.3.7. Renaming a Zone To rename a zone, specify the zone name and the new zone name. Execute the following on a host within the zone: Then, update the period: 5.10. Zone Group and Zone Configuration Settings When configuring a default zone group and zone, the pool name includes the zone name. For example: default.rgw.control To change the defaults, include the following settings in your Ceph configuration file under each [client.rgw.{instance-name}] instance. Name Description Type Default rgw_zone The name of the zone for the gateway instance. String None rgw_zonegroup The name of the zone group for the gateway instance. String None rgw_zonegroup_root_pool The root pool for the zone group. String .rgw.root rgw_zone_root_pool The root pool for the zone. String .rgw.root rgw_default_zone_group_info_oid The OID for storing the default zone group. We do not recommend changing this setting. String default.zonegroup 5.11. Manually Resharding Buckets with Multisite To manually reshard buckets in a multisite cluster, use the following procedure. Note Manual resharding is a very expensive process, especially for huge buckets that warrant manual resharding. Every secondary zone deletes all of the objects, and then resynchronizes them from the master zone. Prerequisites Stop all Ceph Object Gateway instances. Procedure On a node within the master zone of the master zone group, execute the following command: Syntax Wait for sync status on all zones to report that data synchronization is up to date. Stop ALL ceph-radosgw daemons in ALL zones. On a node within the master zone of the master zone group, reshard the bucket. Syntax On EACH secondary zone, execute the following: Syntax Restart ALL ceph-radosgw daemons in ALL zones. On a node within the master zone of the master zone group, execute the following command: Syntax The metadata synchronization process will fetch the updated bucket entry point and bucket instance metadata. The data synchronization process will perform a full synchronization. Additional resources See the Configuring Bucket Index Sharding in Multi-site Configurations in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for more details. 5.12. Configuring Multiple Zones without Replication You can configure multiple zones that will not replicate each other. For example you can create a dedicated zone for each team in a company. Prerequisites A Ceph Storage Cluster with the Ceph Object Gateway installed. Procedure Create a realm. For example: Create a zone group. For example: Create one or more zones depending on your use case. For example: Get the JSON file with the configuration of the zone group. For example: In the file, set the log_meta , log_data , and sync_from_all parameters to false . Use the updated JSON file. For example: Update the period. Additional Resources Realms Zone Groups Zones Installation Guide 5.13. Configuring multiple realms in the same storage cluster This section discusses how to configure multiple realms in the same storage cluster. This is a more advanced use case for multi-site. 
Configuring multiple realms in the same storage cluster enables you to use a local realm to handle local Ceph Object Gateway client traffic, as well as a replicated realm for data that will be replicated to a secondary site. Note Red Hat recommends that each realm has its own Ceph Object Gateway. Prerequisites The access key and secret key for each data center in the storage cluster. Two running Red Hat Ceph Storage data centers in a storage cluster. Root-level or sudo access to all the nodes. Each data center has its own local realm. They share a realm that replicates on both sites. On the Ceph Object Gateway nodes, perform the tasks listed in the Requirements for Installing Red Hat Ceph Storage found in the Red Hat Ceph Storage Installation Guide . For each Ceph Object Gateway node, perform steps 1-7 in the Installing the Ceph Object Gateway section of the Red Hat Ceph Storage Installation Guide . Procedure Create one local realm on the first data center in the storage cluster: Syntax Example Create one local master zonegroup on the first data center: Syntax Example Create one local zone on the first data center: Syntax Example Commit the period: Example Update ceph.conf with the rgw_realm , rgw_zonegroup and rgw_zone names: Syntax Example Restart the RGW daemon: Syntax Create one local realm on the second data center in the storage cluster: Syntax Example Create one local master zonegroup on the second data center: Syntax Example Create one local zone on the second data center: Syntax Example Commit the period: Example Update ceph.conf with the rgw_realm , rgw_zonegroup and rgw_zone names: Syntax Example Restart the RGW daemon: Syntax Create a replicated realm on the first data center in the storage cluster: Syntax Example Use the --default flag to make the replicated realm default on the primary site. Create a master zonegroup for the first data center: Syntax Example Create a master zone on the first data center: Syntax Example Create a replication/synchronization user and add the system user to the master zone for multi-site: Syntax Example Commit the period: Syntax Update ceph.conf with the rgw_realm , rgw_zonegroup and rgw_zone names for the first data center: Syntax Example Restart the RGW daemon: Syntax Pull the replicated realm on the second data center: Syntax Example Pull the period from the first data center: Syntax Example Create the secondary zone on the second data center: Syntax Example Commit the period: Syntax Update ceph.conf with the rgw_realm , rgw_zonegroup and rgw_zone names for the second data center: Syntax Example Restart the Ceph Object Gateway daemon: Syntax Log in to the second data center and verify the synchronization status on the master realm: Syntax Example Log in to the first data center and verify the synchronization status for the replication-synchronization realm: Syntax Example To store and access data in the local site, create the user for local realm: Syntax Example Important By default, users are created under the default realm. For the users to access data in the local realm, the radosgw-admin command requires the --rgw-realm argument. | [
"osd pool default pg num = 50 osd pool default pgp num = 50",
"[rgws] <rgw-host-name-1> <rgw-host-name-2> <rgw-host-name[3..10]>",
"radosgw-admin realm create --rgw-realm={realm-name} [--default]",
"radosgw-admin realm create --rgw-realm=movies --default",
"{ \"id\": \"0956b174-fe14-4f97-8b50-bb7ec5e1cf62\", \"name\": \"movies\", \"current_period\": \"1950b710-3e63-4c41-a19e-46a715000980\", \"epoch\": 1 }",
"radosgw-admin zonegroup create --rgw-zonegroup={name} --endpoints={url} [--rgw-realm={realm-name}|--realm-id={realm-id}] --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --rgw-realm=movies --master --default",
"{ \"id\": \"f1a233f5-c354-4107-b36c-df66126475a6\", \"name\": \"us\", \"api_name\": \"us\", \"is_master\": \"true\", \"endpoints\": [ \"http:\\/\\/rgw1:80\" ], \"hostnames\": [], \"hostnames_s3webzone\": [], \"master_zone\": \"\", \"zones\": [], \"placement_targets\": [], \"default_placement\": \"\", \"realm_id\": \"0956b174-fe14-4f97-8b50-bb7ec5e1cf62\" }",
"radosgw-admin zone create --rgw-zonegroup={zone-group-name} --rgw-zone={zone-name} --master --default --endpoints={http://fqdn:port}[,{http://fqdn:port}]",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east --master --default --endpoints={http://fqdn:port}[,{http://fqdn:port}]",
"radosgw-admin zonegroup remove --rgw-zonegroup=default --rgw-zone=default radosgw-admin zone delete --rgw-zone=default radosgw-admin zonegroup delete --rgw-zonegroup=default",
"radosgw-admin period update --commit",
"ceph osd pool delete default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool delete default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool delete default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool delete default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it",
"radosgw-admin user create --uid=\"{user-name}\" --display-name=\"{Display Name}\" --system",
"radosgw-admin user create --uid=\"synchronization-user\" --display-name=\"Synchronization User\" --system",
"radosgw-admin zone modify --rgw-zone=us-east --access-key={access-key} --secret={secret} radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"[client.rgw.{instance-name}] rgw_zone={zone-name}",
"[client.rgw.rgw1.rgw0] host = rgw1 rgw frontends = \"civetweb port=80\" rgw_zone=us-east",
"systemctl start ceph-radosgw@rgw.`hostname -s`.rgw0 systemctl enable ceph-radosgw@rgw.`hostname -s`.rgw0",
"systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0",
"radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}",
"radosgw-admin realm default --rgw-realm={realm-name}",
"radosgw-admin period pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}",
"radosgw-admin zone create --rgw-zonegroup={zone-group-name} --rgw-zone={zone-name} --access-key={system-key} --secret={secret} --endpoints=http://{fqdn}:80 [--read-only]",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west --access-key={system-key} --secret={secret} --endpoints=http://rgw2:80",
"radosgw-admin zone delete --rgw-zone=default",
"ceph osd pool delete default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool delete default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool delete default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool delete default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it",
"radosgw-admin period update --commit",
"[client.rgw.{instance-name}] rgw_zone={zone-name}",
"[client.rgw.rgw2.rgw0] host = rgw2 rgw frontends = \"civetweb port=80\" rgw_zone=us-west",
"systemctl start ceph-radosgw@rgw.`hostname -s`.rgw0 systemctl enable ceph-radosgw@rgw.`hostname -s`.rgw0",
"systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0",
"radosgw-admin zone create --rgw-zonegroup={ ZONE_GROUP_NAME } --rgw-zone={ ZONE_NAME } --endpoints={http://fqdn:port}[,{http://fqdn:port] --tier-type=archive",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east --endpoints={http://fqdn:port}[,{http://fqdn:port}] --tier-type=archive",
"radosgw-admin zone modify --rgw-zone={zone-name} --master --default",
"radosgw-admin zone modify --rgw-zone={zone-name} --master --default --read-only=false",
"radosgw-admin period update --commit",
"systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0",
"radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}",
"radosgw-admin zone modify --rgw-zone={zone-name} --master --default",
"radosgw-admin period update --commit",
"systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0",
"radosgw-admin zone modify --rgw-zone={zone-name} --read-only",
"radosgw-admin period update --commit",
"systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0",
"radosgw-admin realm create --rgw-realm=<name> --default",
"radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=<name> radosgw-admin zone rename --rgw-zone default --zone-new-name us-east-1 --rgw-zonegroup=<name>",
"radosgw-admin zonegroup modify --rgw-realm=<name> --rgw-zonegroup=<name> --endpoints http://<fqdn>:80 --master --default",
"radosgw-admin zone modify --rgw-realm=<name> --rgw-zonegroup=<name> --rgw-zone=<name> --endpoints http://<fqdn>:80 --access-key=<access-key> --secret=<secret-key> --master --default",
"radosgw-admin user create --uid=<user-id> --display-name=\"<display-name>\" --access-key=<access-key> --secret=<secret-key> \\ --system",
"radosgw-admin period update --commit",
"systemctl restart ceph-radosgw@rgw.`hostname -s`.rgw0",
"radosgw-admin realm create --rgw-realm={realm-name} [--default]",
"radosgw-admin realm create --rgw-realm=movies --default",
"radosgw-admin realm default --rgw-realm=movies",
"radosgw-admin realm delete --rgw-realm={realm-name}",
"radosgw-admin realm delete --rgw-realm=movies",
"radosgw-admin realm get --rgw-realm=<name>",
"radosgw-admin realm get --rgw-realm=movies [> filename.json]",
"{ \"id\": \"0a68d52e-a19c-4e8e-b012-a8f831cb3ebc\", \"name\": \"movies\", \"current_period\": \"b0c5bbef-4337-4edd-8184-5aeab2ec413b\", \"epoch\": 1 }",
"radosgw-admin realm set --rgw-realm=<name> --infile=<infilename>",
"radosgw-admin realm set --rgw-realm=movies --infile=filename.json",
"radosgw-admin realm list",
"radosgw-admin realm list-periods",
"radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}",
"radosgw-admin realm rename --rgw-realm=<current-name> --realm-new-name=<new-realm-name>",
"radosgw-admin zonegroup create --rgw-zonegroup=<name> [--rgw-realm=<name>][--master] [--default]",
"radosgw-admin zonegroup default --rgw-zonegroup=comedy",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup add --rgw-zonegroup=<name> --rgw-zone=<name>",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup rename --rgw-zonegroup=<name> --zonegroup-new-name=<name>",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup delete --rgw-zonegroup=<name>",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup list",
"{ \"default_info\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"zonegroups\": [ \"us\" ] }",
"radosgw-admin zonegroup get [--rgw-zonegroup=<zonegroup>]",
"{ \"id\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"name\": \"us\", \"api_name\": \"us\", \"is_master\": \"true\", \"endpoints\": [ \"http:\\/\\/rgw1:80\" ], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"zones\": [ { \"id\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"name\": \"us-east\", \"endpoints\": [ \"http:\\/\\/rgw1\" ], \"log_meta\": \"true\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" }, { \"id\": \"d1024e59-7d28-49d1-8222-af101965a939\", \"name\": \"us-west\", \"endpoints\": [ \"http:\\/\\/rgw2:80\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"ae031368-8715-4e27-9a99-0c9468852cfe\" }",
"radosgw-admin zonegroup set --infile zonegroup.json",
"radosgw-admin period update --commit",
"{ \"zonegroups\": [ { \"key\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"val\": { \"id\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"name\": \"us\", \"api_name\": \"us\", \"is_master\": \"true\", \"endpoints\": [ \"http:\\/\\/rgw1:80\" ], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"zones\": [ { \"id\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"name\": \"us-east\", \"endpoints\": [ \"http:\\/\\/rgw1\" ], \"log_meta\": \"true\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" }, { \"id\": \"d1024e59-7d28-49d1-8222-af101965a939\", \"name\": \"us-west\", \"endpoints\": [ \"http:\\/\\/rgw2:80\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"ae031368-8715-4e27-9a99-0c9468852cfe\" } } ], \"master_zonegroup\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 } }",
"radosgw-admin zonegroup-map set --infile zonegroupmap.json",
"radosgw-admin period update --commit",
"[root@zone] radosgw-admin zone create --rgw-zone=<name> [--zonegroup=<zonegroup-name] [--endpoints=<endpoint:port>[,<endpoint:port>] [--master] [--default] --access-key USDSYSTEM_ACCESS_KEY --secret USDSYSTEM_SECRET_KEY",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>",
"radosgw-admin period update --commit",
"radosgw-admin zone delete --rgw-zone<name>",
"radosgw-admin period update --commit",
"ceph osd pool delete <del-zone>.rgw.control <del-zone>.rgw.control --yes-i-really-really-mean-it ceph osd pool delete <del-zone>.rgw.data.root <del-zone>.rgw.data.root --yes-i-really-really-mean-it ceph osd pool delete <del-zone>.rgw.log <del-zone>.rgw.log --yes-i-really-really-mean-it ceph osd pool delete <del-zone>.rgw.users.uid <del-zone>.rgw.users.uid --yes-i-really-really-mean-it",
"radosgw-admin zone modify [options]",
"radosgw-admin period update --commit",
"radosgw-admin zone list",
"radosgw-admin zone get [--rgw-zone=<zone>]",
"{ \"domain_root\": \".rgw\", \"control_pool\": \".rgw.control\", \"gc_pool\": \".rgw.gc\", \"log_pool\": \".log\", \"intent_log_pool\": \".intent-log\", \"usage_log_pool\": \".usage\", \"user_keys_pool\": \".users\", \"user_email_pool\": \".users.email\", \"user_swift_pool\": \".users.swift\", \"user_uid_pool\": \".users.uid\", \"system_key\": { \"access_key\": \"\", \"secret_key\": \"\"}, \"placement_pools\": [ { \"key\": \"default-placement\", \"val\": { \"index_pool\": \".rgw.buckets.index\", \"data_pool\": \".rgw.buckets\"} } ] }",
"radosgw-admin zone set --rgw-zone={zone-name} --infile zone.json",
"radosgw-admin period update --commit",
"radosgw-admin zone rename --rgw-zone=<name> --zone-new-name=<name>",
"radosgw-admin period update --commit",
"radosgw-admin bucket sync disable --bucket= BUCKET_NAME",
"radosgw-admin bucket reshard --bucket= BUCKET_NAME --num-shards= NEW_SHARDS_NUMBER",
"radosgw-admin bucket rm --purge-objects --bucket= BUCKET_NAME",
"radosgw-admin bucket sync enable --bucket= BUCKET_NAME",
"radosgw-admin realm create --rgw-realm= realm-name [--default]",
"radosgw-admin realm create --rgw-realm=movies --default { \"id\": \"0956b174-fe14-4f97-8b50-bb7ec5e1cf62\", \"name\": \"movies\", \"current_period\": \"1950b710-3e63-4c41-a19e-46a715000980\", \"epoch\": 1 }",
"radosgw-admin zonegroup create --rgw-zonegroup= zone-group-name --endpoints= url [--rgw-realm= realm-name |--realm-id= realm-id ] --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --rgw-realm=movies --master --default { \"id\": \"f1a233f5-c354-4107-b36c-df66126475a6\", \"name\": \"us\", \"api_name\": \"us\", \"is_master\": \"true\", \"endpoints\": [ \"http:\\/\\/rgw1:80\" ], \"hostnames\": [], \"hostnames_s3webzone\": [], \"master_zone\": \"\", \"zones\": [], \"placement_targets\": [], \"default_placement\": \"\", \"realm_id\": \"0956b174-fe14-4f97-8b50-bb7ec5e1cf62\" }",
"radosgw-admin zone create --rgw-zonegroup= zone-group-name --rgw-zone= zone-name --master --default --endpoints= http://fqdn:port [, http://fqdn:port ]",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east --master --default --endpoints=http://rgw1:80",
"radosgw-admin zonegroup get --rgw-zonegroup= zone-group-name > zonegroup.json",
"radosgw-admin zonegroup get --rgw-zonegroup=us > zonegroup.json",
"{ \"id\": \"72f3a886-4c70-420b-bc39-7687f072997d\", \"name\": \"default\", \"api_name\": \"\", \"is_master\": \"true\", \"endpoints\": [], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"a5e44ecd-7aae-4e39-b743-3a709acb60c5\", \"zones\": [ { \"id\": \"975558e0-44d8-4866-a435-96d3e71041db\", \"name\": \"testzone\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"false\", \"sync_from\": [] }, { \"id\": \"a5e44ecd-7aae-4e39-b743-3a709acb60c5\", \"name\": \"default\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"false\", \"sync_from\": [] } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"2d988e7d-917e-46e7-bb18-79350f6a5155\" }",
"radosgw-admin zonegroup set --rgw-zonegroup= zone-group-name --infile=zonegroup.json",
"radosgw-admin zonegroup set --rgw-zonegroup=us --infile=zonegroup.json",
"radosgw-admin period update --commit",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=ldc1 --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_NODE_NAME :80 --rgw-realm= REALM_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=ldc1zg --endpoints=http://rgw1:80 --rgw-realm=ldc1 --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]",
"radosgw-admin zone create --rgw-zonegroup=ldc1zg --rgw-zone=ldc1z --master --default --endpoints=http://rgw.example.com",
"radosgw-admin period update --commit",
"rgw_realm = REALM_NAME rgw_zonegroup = ZONE_GROUP_NAME rgw_zone = ZONE_NAME",
"rgw_realm = ldc1 rgw_zonegroup = ldc1zg rgw_zone = ldc1z",
"systemctl restart [email protected](hostname -s).rgw0.service",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=ldc2 --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_NODE_NAME :80 --rgw-realm= REALM_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=ldc2zg --endpoints=http://rgw2:80 --rgw-realm=ldc2 --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]",
"radosgw-admin zone create --rgw-zonegroup=ldc2zg --rgw-zone=ldc2z --master --default --endpoints=http://rgw.example.com",
"radosgw-admin period update --commit",
"rgw_realm = REALM_NAME rgw_zonegroup = ZONE_GROUP_NAME rgw_zone = ZONE_NAME",
"rgw_realm = ldc2 rgw_zonegroup = ldc2zg rgw_zone = ldc2z",
"systemctl restart [email protected](hostname -s).rgw0.service",
"radosgw-admin realm create --rgw-realm= REPLICATED_REALM_1 --default",
"[user@rgw1 ~] radosgw-admin realm create --rgw-realm=rdc1 --default",
"radosgw-admin zonegroup create --rgw-zonegroup= RGW_ZONE_GROUP --endpoints=http://_RGW_NODE_NAME :80 --rgw-realm=_RGW_REALM_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=rdc1zg --endpoints=http://rgw1:80 --rgw-realm=rdc1 --master --default",
"radosgw-admin zone create --rgw-zonegroup= RGW_ZONE_GROUP --rgw-zone=_MASTER_RGW_NODE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]",
"radosgw-admin zone create --rgw-zonegroup=rdc1zg --rgw-zone=rdc1z --master --default --endpoints=http://rgw.example.com",
"radosgw-admin user create --uid=\"r_REPLICATION_SYNCHRONIZATION_USER_\" --display-name=\"Replication-Synchronization User\" --system radosgw-admin zone modify --rgw-zone= RGW_ZONE --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin zone modify --rgw-zone=rdc1zg --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin period update --commit",
"rgw_realm = REALM_NAME rgw_zonegroup = ZONE_GROUP_NAME rgw_zone = ZONE_NAME",
"rgw_realm = rdc1 rgw_zonegroup = rdc1zg rgw_zone = rdc1z",
"systemctl restart [email protected](hostname -s).rgw0.service",
"radosgw-admin realm pull --url=https://tower-osd1.cephtips.com --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin realm pull --url=https://tower-osd1.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin period pull --url=https://tower-osd1.cephtips.com --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin period pull --url=https://tower-osd1.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin zone create --rgw-zone= RGW_ZONE --rgw-zonegroup= RGW_ZONE_GROUP --endpoints=https://tower-osd4.cephtips.com --access-key=_ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin zone create --rgw-zone=rdc2z --rgw-zonegroup=rdc1zg --endpoints=https://tower-osd4.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin period update --commit",
"rgw_realm = REALM_NAME rgw_zonegroup = ZONE_GROUP_NAME rgw_zone = ZONE_NAME",
"rgw realm = rdc1 rgw zonegroup = rdc1zg rgw zone = rdc2z",
"systemctl restart [email protected](hostname -s).rgw0.service",
"radosgw-admin sync status",
"radosgw-admin sync status realm 59762f08-470c-46de-b2b1-d92c50986e67 (ldc2) zonegroup 7cf8daf8-d279-4d5c-b73e-c7fd2af65197 (ldc2zg) zone 034ae8d3-ae0c-4e35-8760-134782cb4196 (ldc2z) metadata sync no sync (zone is master)",
"radosgw-admin sync status --rgw-realm RGW_REALM_NAME",
"radosgw-admin sync status --rgw-realm rdc1 realm 73c7b801-3736-4a89-aaf8-e23c96e6e29d (rdc1) zonegroup d67cc9c9-690a-4076-89b8-e8127d868398 (rdc1zg) zone 67584789-375b-4d61-8f12-d1cf71998b38 (rdc2z) metadata sync syncing full sync: 0/64 shards incremental sync: 64/64 shards metadata is caught up with master data sync source: 705ff9b0-68d5-4475-9017-452107cec9a0 (rdc1z) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source realm 73c7b801-3736-4a89-aaf8-e23c96e6e29d (rdc1) zonegroup d67cc9c9-690a-4076-89b8-e8127d868398 (rdc1zg) zone 67584789-375b-4d61-8f12-d1cf71998b38 (rdc2z) metadata sync syncing full sync: 0/64 shards incremental sync: 64/64 shards metadata is caught up with master data sync source: 705ff9b0-68d5-4475-9017-452107cec9a0 (rdc1z) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source",
"radosgw-admin user create --uid=\" LOCAL_USER\" --display-name=\"Local user\" --rgw-realm=_REALM_NAME --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME",
"radosgw-admin user create --uid=\"local-user\" --display-name=\"Local user\" --rgw-realm=ldc1 --rgw-zonegroup=ldc1zg --rgw-zone=ldc1z"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/object_gateway_configuration_and_administration_guide/rgw-multisite-rgw |
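The radosgw-admin commands above follow a fixed ordering whenever a secondary zone joins an existing realm: pull the realm and the current period from the master zone gateway, create the local zone, commit the period, and restart the gateway service. The shell sketch below only illustrates that ordering; the gateway URLs, zone group, zone name, and credentials are placeholders and must be replaced with values from your own deployment.
# Pull the realm and period from the master zone gateway (placeholder URL and keys)
radosgw-admin realm pull --url=http://rgw-master.example.com:80 --access-key=ACCESS_KEY --secret-key=SECRET_KEY
radosgw-admin period pull --url=http://rgw-master.example.com:80 --access-key=ACCESS_KEY --secret-key=SECRET_KEY
# Create the secondary zone in the existing zone group
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west --endpoints=http://rgw-secondary.example.com:80 --access-key=ACCESS_KEY --secret-key=SECRET_KEY
# Commit the period so every gateway sees the new zone, then restart the local gateway instance
radosgw-admin period update --commit
systemctl restart ceph-radosgw@rgw.$(hostname -s).rgw0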
Anaconda Customization Guide | Anaconda Customization Guide Red Hat Enterprise Linux 7 Changing the installer appearance and creating custom add-ons Vladimir Slavik Sharon Moroney Petr Bokoc Vratislav Podzimek | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/anaconda_customization_guide/index
Chapter 100. KafkaTopicStatus schema reference | Chapter 100. KafkaTopicStatus schema reference Used in: KafkaTopic Property Property type Description conditions Condition array List of status conditions. observedGeneration integer The generation of the CRD that was last reconciled by the operator. topicName string Topic name. topicId string The topic's id. For a KafkaTopic with the ready condition, this will change only if the topic gets deleted and recreated with the same name. replicasChange ReplicasChangeStatus Replication factor change status. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaTopicStatus-reference |
Chapter 111. ReplicasChangeStatus schema reference | Chapter 111. ReplicasChangeStatus schema reference Used in: KafkaTopicStatus Property Property type Description targetReplicas integer The target replicas value requested by the user. This may be different from .spec.replicas when a change is ongoing. state string (one of [ongoing, pending]) Current state of the replicas change operation. This can be pending , when the change has been requested, or ongoing , when the change has been successfully submitted to Cruise Control. message string Message for the user related to the replicas change request. This may contain transient error messages that would disappear on periodic reconciliations. sessionId string The session identifier for replicas change requests pertaining to this KafkaTopic resource. This is used by the Topic Operator to track the status of ongoing replicas change operations. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-ReplicasChangeStatus-reference |
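Both schema references above describe fields that surface in the status subresource of a KafkaTopic custom resource. A quick way to inspect those fields from the command line is sketched below; the topic name my-topic and the kafka namespace are assumptions and should be replaced with the values used in your cluster.
# Show the full status, including conditions, observedGeneration, topicName, topicId and any replicasChange block
oc get kafkatopic my-topic -n kafka -o yaml
# Print only the state of an in-flight replication factor change (pending or ongoing), if one exists
oc get kafkatopic my-topic -n kafka -o jsonpath='{.status.replicasChange.state}'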
Chapter 2. Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode | Chapter 2. Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode Deploying OpenShift Data Foundation on OpenShift Container Platform in internal mode using dynamic storage devices provided by Red Hat OpenStack Platform installer-provisioned infrastructure (IPI) enables you to create internal cluster resources. This results in internal provisioning of the base services, which helps to make additional storage classes available to applications. Ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. 
Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.3.1. Enabling and disabling key rotation when using KMS Security common practices require periodic encryption of key rotation. You can enable or disable key rotation when using KMS. 2.3.1.1. Enabling key rotation To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to PersistentVolumeClaims , Namespace , or StorageClass (in the decreasing order of precedence). <value> can be @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The below examples use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 2.3.1.2. 
Disabling key rotation You can disable key rotation for the following: All the persistent volume claims (PVCs) of storage class A specific PVC Disabling key rotation for all PVCs of a storage class To disable key rotation for all PVCs, update the annotation of the storage class: Disabling key rotation for a specific persistent volume claim Identify the EncryptionKeyRotationCronJob CR for the PVC you want to disable key rotation on: Where <PVC_NAME> is the name of the PVC that you want to disable. Apply the following to the EncryptionKeyRotationCronJob CR from the step to disable the key rotation: Update the csiaddons.openshift.io/state annotation from managed to unmanaged : Where <encryptionkeyrotationcronjob_name> is the name of the EncryptionKeyRotationCronJob CR. Add suspend: true under the spec field: Save and exit. The key rotation will be disabled for the PVC. 2.4. Creating OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to standard . Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. 
Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Note In case you need to enable key rotation for Vault KMS, run the following command in the OpenShift web console after the storage cluster is created: Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . 
In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that OpenShift Data Foundation is successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 2.5.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) ux-backend-server- * (1 pod on any storage node) * ocs-client-operator -* (1 pod on any storage node) ocs-client-operator-console -* (1 pod on any storage node) ocs-provider-server -* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) ceph-csi-operator ceph-csi-controller-manager-* (1 pod for each device) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . 
In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 2.5.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io 2.6. Uninstalling OpenShift Data Foundation 2.6.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation . | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"oc get namespace default NAME STATUS AGE default Active 5d2h",
"oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" --overwrite=true persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/enable: false\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get encryptionkeyrotationcronjob -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim==\"<PVC_NAME>\")]}{.metadata.name}{\"\\n\"}{end}'",
"oc annotate encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> \"csiaddons.openshift.io/state=unmanaged\" --overwrite=true",
"oc patch encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -p '{\"spec\": {\"suspend\": true}}' --type=merge.",
"patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{\"op\": \"add\", \"path\":\"/spec/encryption/keyRotation/enable\", \"value\": true}]'"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/deploying_openshift_data_foundation_on_red_hat_openstack_platform_in_internal_mode |
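The verification steps in the chapter above are performed through the OpenShift Web Console. The same checks can be approximated from the command line; the sketch below assumes the default openshift-storage namespace and the ocs-storagecluster name used throughout this guide.
# Operator, MON, MGR, OSD and CSI pods should all be Running or Completed
oc get pods -n openshift-storage
# The StorageCluster should report a Ready phase
oc get storagecluster ocs-storagecluster -n openshift-storage
# The three storage classes created by the deployment should be present
oc get storageclass | grep -E 'ocs-storagecluster-ceph-rbd|ocs-storagecluster-cephfs|openshift-storage.noobaa.io'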
Chapter 1. Overview | Chapter 1. Overview Camel K is set to deprecate in favor of an unified Camel approach to OpenShift. Targeting the Red Hat Build of Camel for Quarkus, we aim to provide existing customers with a migration path to transition their Camel K integrations. This approach ensures a seamless migration to the Red Hat Build of Apache Camel for Quarkus, requiring minimal effort while considering the supported features of both Camel K and the Red Hat Build of Apache Camel for Quarkus. You must understand the Quarkus way to build, configure, deploy and run applications. Section 1.1, "Assumptions" Section 1.2, "Traits" Section 1.3, "Kamel run configuration" Section 1.4, "Kamelets, KameletBindings and Pipes" Section 1.5, "Migration Process" Section 1.6, "Troubleshooting" Section 1.7, "Known Issues" Section 1.8, "Reference documentation" 1.1. Assumptions The required source files to migrate are in java, xml or yaml. The target system to deploy is an OpenShift Cluster 4.15+. Camel K version is 1.10.7. The migration is to the Red Hat build of Apache Camel for Quarkus. Camel K operates using the Kamel CLI to run integrations, while the Camel K Operator manages and deploys them as running pods along with various Kubernetes objects, including Deployment, Service, Route, ConfigMap, Secret, and Knative. Note The running java program is a Camel on Quarkus application. When using the Red Hat build of Apache Camel for Quarkus, the starting point is a Maven project that contains all the artifacts needed to build and run the integration. This project will include a Deployment, Service, ConfigMap, and other resources, although their configurations may differ from those in Camel K. For instance, properties might be stored in an application.properties file, and Knative configurations may require separate files. The main goal is to ensure the integration route is deployed and running in an OpenShift cluster. 1.1.1. Requirements To perform the migration, following set of tools and configurations are required. Camel JBang 4.7.0 . JDK 17 or 21. Maven (mvn cli) 3.9.5. oc cli . OpenShift cluster 4.12+. Explore the Supported Configurations and Component Details about Red Hat build of Apache Camel. 1.1.2. Out of scope Use of Camel Spring Boot (CSB) as a target. The migration path is similar but should be tailored for CSB and JKube. Refer the documentation for numerous examples . OpenShift management. Customization of maven project. 1.1.3. Use cases Camel K integrations can vary, typically consisting of several files that correspond to integration routes and configurations. The integration routes may be defined in Java, XML, or YAML, while configurations can be specified in properties files or as parameters in the kamel run command. This migration document addresses use cases involving KameletBinding, Kamelet, Knative, and properties in ConfigMap. 1.1.4. Versions Note Camel K 1.10.7 uses different versions of Camel and Quarkus that the Red Hat build of Apache Camel for Quarkus. Table 1.1. Camel K Artifact Camel K Red Hat build of Apache Camel for Quarkus JDK 11 21 (preferred), 17 (supported) Camel 3.18.6.redhat-00009 4.4.0.redhat-00025 Camel for Quarkus 2.13.3.redhat-00011 3.8.0.redhat-00006 Quarkus Platform 2.13.9.SP2-redhat-00003 3.8.5.redhat-00003 Kamelet Catalog 1.10.7 2.3.x Migrating from Camel K to Red Hat build of Apache Camel for Quarkus updates several libraries simultaneously. 
Therefore, you may encounter some errors when building or running the integration in Red Hat build of Apache Camel for Quarkus, due to differences in the underlying libraries. 1.1.5. Project and Organization Camel K integration routes originate from a single file in java, yaml or xml. There is no concept of a project to organize the dependencies and builds. At the end, each kamel run <my app> results in a running pod. Red Hat build of Apache Camel for Quarkus requires a maven project. Use the camel export <many files> to generate the maven project. On building the project, the container image contains all the integration routes defined in the project. If you want one pod for each integration route, you must create a maven project for each integration route. While there are many complex ways to use a single maven project with multiple integration routes and custom builds to generate container images with different run entrypoints to start the pod, this is beyond the scope of this migration guide. 1.2. Traits Traits in Camel K provide an easy way for the operator, to materialize parameters from kamel cli to kubernetes objects and configurations. Only a few traits are supported in Camel K 1.10, that are covered in this migration path. There is no need to cover the configuration in the migration path for the following traits: camel, platform, deployment, dependencies, deployer, openapi. The following list contains the traits with their parameters and equivalents in Red Hat build of Apache Camel for Quarkus. Note The properties for Red Hat build of Apache Camel for Quarkus must be set in application.properties . On building the project, kubernetes appearing in target/kubernetes/openshift.yml must contain the properties. For more information about properties, see Quarkus OpenShift Extension . Table 1.2. Builder Trait Trait Parameter Quarkus Parameter builder.properties Add the properties to application.properties Table 1.3. Container Trait Trait Parameter Quarkus Parameter container.expose The Service kubernetes object is created automatically. container.image No replacement in Quarkus, since this property was meant for sourceless Camel K integrations, which are not supported in Red Hat build of Apache Camel for Quarkus. container.limit-cpu quarkus.openshift.resources.limits.cpu container.limit-memory quarkus.openshift.resources.limits.memory container.liveness-failure-threshold quarkus.openshift.liveness-probe.failure-threshold container.liveness-initial-delay quarkus.openshift.liveness-probe.initial-delay container.liveness-period quarkus.openshift.liveness-probe.period container.liveness-success-threshold quarkus.openshift.liveness-probe.success-threshold container.liveness-timeout quarkus.openshift.liveness-probe.timeout container.name quarkus.openshift.container-name container.port quarkus.openshift.ports."<port name>".container-port container.port-name Set the port name in the property name. The syntax is: quarkus.openshift.ports."<port name>".container-port . Example for https port is quarkus.openshift.ports.https.container-port . container.probes-enabled Add the quarkus maven dependency to the pom.xml <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency> It will also add the startup probe to the container. Note that, delay, timeout and period values may be different. 
container.readiness-failure-threshold quarkus.openshift.readiness-probe.failure-threshold container.readiness-initial-delay quarkus.openshift.readiness-probe.initial-delay container.readiness-period quarkus.openshift.readiness-probe.period container.readiness-success-threshold quarkus.openshift.readiness-probe.success-threshold container.readiness-timeout quarkus.openshift.readiness-probe.timeout container.request-cpu quarkus.openshift.resources.requests.cpu container.request-memory quarkus.openshift.resources.requests.memory container.service-port quarkus.openshift.ports.<port-name>.host-port container.service-port-name Set the port name in the property name. The syntax is: quarkus.openshift.ports."<port name>".host-port . Example for https port is quarkus.openshift.ports.https.host-port . Also, ensure to set the route port name to quarkus.openshift.route.target-port . Table 1.4. Environment Trait Trait Parameter Quarkus Parameter environment.vars quarkus.openshift.env.vars.<key>=<value> environment.http-proxy You must set the proxy host with the values of: quarkus.kubernetes-client.http-proxy quarkus.kubernetes-client.https-proxy quarkus.kubernetes-client.no-proxy Table 1.5. Error Handler Trait Trait Parameter Quarkus Parameter error-handler.ref You must manually add the Error Handler in the integration route. Table 1.6. JVM Trait Trait Parameter Quarkus Parameter jvm.debug quarkus.openshift.remote-debug.enabled jvm.debug-suspend quarkus.openshift.remote-debug.suspend jvm.print-command No replacement. jvm.debug-address quarkus.openshift.remote-debug.address-port jvm.options Edit src/main/docker/Dockerfile.jvm and change the JAVA_OPTS value to set the desired values. Example to increase the camel log level to debug: Note: The Docker configuration is dependent on the base image, configuration for OpenJDK 21 . jvm.classpath You must set the classpath at the maven project, so the complete list of dependencies are collected in the target/quarkus-app/ and later packaged in the containter image. Table 1.7. Node Affinity Trait Trait Parameter Quarkus Parameter There is no affinity configuration in Quarkus. Table 1.8. Owner Trait Trait Parameter Quarkus Parameter owner.enabled There is no owner configuration in Quarkus. Table 1.9. Quarkus Trait Trait Parameter Quarkus Parameter quarkus.package-type For native builds, use -Dnative . Table 1.10. Knative Trait Trait Parameter Quarkus Parameter knative.enabled Add the maven dependency org.apache.camel.quarkus:camel-quarkus-knative to the pom.xml, and set the following properties: The quarkus.container-image.* properties are required by the quarkus maven plugin to set the image url in the generated knative.yml. knative.configuration camel.component.knative.environmentPath knative.channel-sources Configurable in the knative.json. knative.channel-sinks Configurable in the knative.json. knative.endpoint-sources Configurable in the knative.json. knative.endpoint-sinks Configurable in the knative.json. knative.event-sources Configurable in the knative.json. knative.event-sinks Configurable in the knative.json. knative.filter-source-channels Configurable in the knative.json. knative.sink-binding No replacement, you must create the SinkBinding object. knative.auto No replacement. knative.namespace-label You must set the label bindings.knative.dev/include=true manually to the desired namespace. Table 1.11. 
Knative Service Trait Trait Parameter Quarkus Parameter knative-service.enabled quarkus.kubernetes.deployment-target=knative knative-service.annotations quarkus.knative.annotations.<annotation-name>=<value> knative-service.autoscaling-class quarkus.knative.revision-auto-scaling.auto-scaler-class knative-service.autoscaling-metric quarkus.knative.revision-auto-scaling.metric knative-service.autoscaling-target quarkus.knative.revision-auto-scaling.target knative-service.min-scale quarkus.knative.min-scale knative-service.max-scale quarkus.knative.max-scale knative-service.rollout-duration quarkus.knative.annotations."serving.knative.dev/rollout-duration" knative-service.visibility quarkus.knative.labels."networking.knative.dev/visibility" It must be in quotation marks. knative-service.auto This behavior is unnecessary in Red Hat build of Apache Camel for Quarkus. Table 1.12. Prometheus Trait Trait Parameter Quarkus Parameter prometheus.enabled Add the following maven dependencies to pom.xml <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-micrometer</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency> Note: Camel K creates a PodMonitor object, while Quarkus creates a ServiceMonitor object, both are correct to configure the monitoring feature. prometheus.pod-monitor quarkus.openshift.prometheus.generate-service-monitor prometheus.pod-monitor-labels No quarkus property is available to set custom labels, but you can configure the labels in ServiceMonitor object in target/kubernetes/openshift.yml before deploying. Table 1.13. PodDisruptionBudget (PDB) Trait Trait Parameter Quarkus Parameter There is no Quarkus configuration for PodDisruptionBudget objects. Table 1.14. Pull Secret Trait Trait Parameter Quarkus Parameter pull-secret.secret-name quarkus.openshift.image-pull-secrets Table 1.15. Route Trait Trait Parameter Quarkus Parameter route.enabled quarkus.openshift.route.expose route.annotations quarkus.openshift.route.annotations.<key>=<value route.host quarkus.openshift.route.host route.tls-termination quarkus.openshift.route.tls.termination route.tls-certificate quarkus.openshift.route.tls.certificate route.tls-certificate-secret There is no quarkus property to read the certificate from a secret. route.tls-key quarkus.openshift.route.tls.key route.tls-key-secret There is no quarkus property to read the key from a secret. route.tls-ca-certificate quarkus.openshift.route.tls.ca-certificate route.tls-ca-certificate-secret There is no quarkus property to read the CA certificate from a secret. route.tls-destination-ca-certificate quarkus.openshift.route.tls.destination-ca-certificate route.tls-destination-ca-certificate-secret There is no quarkus property to read the destination certificate from a secret. route.tls-insecure-edge-termination-policy quarkus.openshift.route.tls.insecure-edge-termination-policy Table 1.16. Service Trait Trait Parameter Quarkus Parameter service.enabled The Service kubernetes object is created automatically. To disable it, you must remove the kind: Service from target/kubernetes/openshift.yml before deployment. 1.3. Kamel run configuration There are additional configuration parameters in the kamel run command listed below, along with their equivalents in the Red Hat build of Apache Camel for Quarkus, which must be added in src/main/resources/application.properties or pom.xml . 
kamel run parameter Quarkus Parameter --annotation quarkus.openshift.annotations.<annotation-name>=<value> --build-property Add the property in the <properties> tag of the pom.xml . --dependency Add the dependency in pom.xml . --env quarkus.openshift.env.vars.<env-name>=<value> --label quarkus.openshift.labels.<label-name>=<value> --maven-repository Add the repository in pom.xml or use the camel export --repos=<my repo> . --logs oc logs -f `oc get pod -l app.kubernetes.io/name=<artifact name> -oname` --volume quarkus.openshift.mounts.<my-volume>.path=</where/to/mount > 1.4. Kamelets, KameletBindings and Pipes Camel K operator bundles the Kamelets and installs them as kubernetes objects. For Red Hat build of Apache Camel for Quarkus project, you must manage kamelets yaml files in the maven project. There are two ways to manage the kamelets yaml files. Kamelets are packaged and released as maven artifact org.apache.camel.kamelets:camel-kamelets . You can add this dependency to pom.xml , and when the camel route starts, it loads the kamelet yaml files from that jar file in classpath. There are opensource kamelets and the ones produced by Red Hat, whose artifact suffix is redhat-000nnn . For example:`1.10.7.redhat-00015`. These are available from the Red Hat maven repository . 2.Add the kamelet yaml files in src/main/resources/kamelets directory, that are later packaged in the final deployable artifact. Do not declare the org.apache.camel.kamelets:camel-kamelets in pom.xml . This way, the camel route loads the Kamelet yaml file from the packaged project. KameletBinding was renamed to Pipe . So consider this to understand the use case 3. While the kubernetes resource name KameletBinding is still supported, it is deprecated. We recommend renaming it to Pipe as soon as possible. We recommend to update the Kamelets, as there were many updates since Camel K 1.10.7. For example, you can compare the jms-amqp-10-sink.kamelet.yaml of 1.10 and 2.3 If you have custom Kamelets, you must update them accordingly. rename flow to template in Kamelet files. rename property to properties for the bean properties. 1.4.1. Knative When running integration routes with Knative endpoints in Camel K, the Camel K Operator creates some Knative objects such as: SinkBindings , Trigger , Subscription . Also, Camel K Operator creates the knative.json environment file, required for camel-knative component to interact with the Knative objects deployed in the cluster. Example of a knative.json { "services": [ { "type": "channel", "name": "messages", "url": "{{k.sink}}", "metadata": { "camel.endpoint.kind": "sink", "knative.apiVersion": "messaging.knative.dev/v1", "knative.kind": "Channel", "knative.reply": "false" } } ] } Red Hat build of Apache Camel for Quarkus is a maven project. You must create those Knative files manually and provide additional configuration. See use case 2 for the migration of an integration route with Knative endpoints. 1.4.2. Monitoring We recommend you to add custom labels to identify the kubernetes objects installed in the cluster, to allow an easier way to locate these kubernetes. By default, the quarkus openshift extension adds the label app.kubernetes.io/name=<app name> , so you can search the objects created using this label. For monitoring purposes, you can use the HawtIO Diagnostic Console to monitor the Camel applications. 1.5. Migration Process The migration process is composed of the following steps. 
1.5. Migration Process The migration process is composed of the following steps. Task Description Create the maven project Use the camel CLI from Camel JBang to export the files; this creates a maven project. Adjust the configuration Configure the project by adding and changing files. Build Building the project generates the JAR files. Build the container image and push it to a container registry. Deploy Deploy the kubernetes objects to the Openshift cluster and run the pod. 1.5.1. Migration Steps 1.5.1.1. Use Case 1 - Simple Integration Route with Configuration Given the following integration route, featuring rest and kamelet endpoints. import org.apache.camel.builder.RouteBuilder; public class Http2Jms extends RouteBuilder { @Override public void configure() throws Exception { rest() .post("/message") .id("rest") .to("direct:jms"); from("direct:jms") .log("Sending message to JMS {{broker}}: ${body}") .to("kamelet:jms-amqp-10-sink?remoteURI=amqp://myhost:61616&destinationName=queue"); } } The http2jms.properties file The kamel run command It builds and runs the pod with the annotations. The environment variable and the properties file are added as a ConfigMap and mounted in the pod. 1.5.1.1.1. Step 1 - Create the maven project Use camel jbang to export the file into a maven project. camel export \ --runtime=quarkus \ --quarkus-group-id=com.redhat.quarkus.platform \ --quarkus-version=3.8.5.redhat-00003 \ --repos=https://maven.repository.redhat.com/ga \ --dep=io.quarkus:quarkus-openshift \ --gav=com.mycompany:ceq-app:1.0 \ --dir=ceq-app1 \ Http2Jms.java Description of the parameters: Parameter Description --runtime=quarkus Use the Quarkus runtime. The generated project contains the quarkus BOM. --quarkus-group-id=com.redhat.quarkus.platform The Red Hat supported quarkus platform maven artifact group is com.redhat.quarkus.platform . --quarkus-version=3.8.5.redhat-00003 This is the latest supported version at the time of writing. Check the Quarkus documentation for a recent release version. --repos=https://maven.repository.redhat.com/ga Use the Red Hat Maven repository with the GA releases. --dep=io.quarkus:quarkus-openshift Adds the quarkus-openshift dependency to pom.xml , to build on Openshift. --gav=com.mycompany:ceq-app:1.0 Sets the GAV in the generated pom.xml. You must set a GAV according to your project. --dir=ceq-app1 The maven project directory. You can see more parameters with camel export --help . If you are using Kamelets, they must be part of the maven project. You can download the Kamelet repository and unzip it. If you have any custom Kamelets, add them to this Kamelet directory. While using camel export , you can use the parameter --local-kamelet-dir=<kamelet directory> , which copies all Kamelets to src/main/resources/kamelets so that they are later packaged into the final archive. If you choose not to use the --local-kamelet-dir=<kamelet directory> parameter, then you must manually copy the desired Kamelet YAML files to the above mentioned directory. Note the artifact name in the generated pom, as the artifact name is used in the generated Openshift files (Deployment, Service, Route, etc.). 1.5.1.1.2. Step 2 - Configure the project This is the step to configure the maven project and artifacts to suit your environment. Get into the maven project cd ceq-app1 Set the docker build strategy.
echo quarkus.openshift.build-strategy=docker >> src/main/resources/application.properties Change the base image to OpenJDK 21 in src/main/docker (optional) FROM registry.access.redhat.com/ubi9/openjdk-21:1.20 Change the compiler version to 21 in pom.xml (optional) <maven.compiler.release>21</maven.compiler.release> Set the environment variables, labels and annotations in src/main/resources/application.properties , if you need them. If you want to customize the image and container registry settings, use these parameters: quarkus.container-image.registry quarkus.container-image.group quarkus.container-image.name quarkus.container-image.tag Because there is an http2jms.properties file with configuration used at runtime, the kamel CLI creates a ConfigMap and mounts it in the pod. You must achieve the same with Red Hat build of Apache Camel for Quarkus. Create a local ConfigMap file named ceq-app in src/main/kubernetes/common.yml , which becomes part of the build process. The following command sets the ConfigMap key to application.properties . oc create configmap ceq-app --from-file application.properties=http2jms.properties --dry-run=client -oyaml > src/main/kubernetes/common.yml Add the following property to application.properties , for Quarkus to mount the ConfigMap . 1.5.1.1.3. Step 3 - Build Build the package for local inspection. ./mvnw -ntp package This step builds the maven artifacts (JAR files) locally and generates the Openshift files in the target/kubernetes directory. Review target/kubernetes/openshift.yml to understand the resources that are deployed to the Openshift cluster. 1.5.1.1.4. Step 4 - Build and Deploy Build the package and deploy to Openshift ./mvnw -ntp package -Dquarkus.openshift.deploy=true You can follow the image build in the maven output. After the build, you can see the pod running. 1.5.1.1.5. Step 5 - Test Verify that the integration route is working.
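For a quick smoke test of the REST endpoint, you can port-forward to the deployment and post a message. This is a sketch: the deployment name ceq-app and the /message path are assumptions based on the example route, so adjust them to the names generated in your project (check target/kubernetes/openshift.yml for the real names).
# Forward a local port to the container port 8080
oc port-forward deployment/ceq-app 8080:8080 &
# Send a test message to the REST endpoint defined in Http2Jms.java
curl -X POST -H "Content-Type: text/plain" -d "hello from curl" http://localhost:8080/message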
If the project can run locally, you can try the following. mvn -ntp quarkus:run Follow the pod container log oc logs -f `oc get pod -l app.kubernetes.io/name=app -oname` The output looks similar to the following: INFO exec -a "java" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -cp "." -jar /deployments/quarkus-run.jar INFO running in /deployments __ ____ __ _____ ___ __ ____ ______ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.mai.BaseMainSupport] (main) Property-placeholders summary [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] broker=amqp://172.30.177.216:61616 [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] queue=qtest [org.apa.cam.mai.BaseMainSupport] (main) [ms-amqp-10-sink.kamelet.yaml] destinationName=qtest [org.apa.cam.mai.BaseMainSupport] (main) [ms-amqp-10-sink.kamelet.yaml] connectionFactoryBean=connectionFactoryBean-1 [org.apa.cam.mai.BaseMainSupport] (main) [ms-amqp-10-sink.kamelet.yaml] remoteURI=amqp://172.30.177.216:61616 [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:3) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (direct://jms) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started rest (rest://post:/message) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started jms-amqp-10-sink-1 (kamelet://source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 17ms (build:0ms init:0ms start:17ms) [io.quarkus] (main) app 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.115s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-amqp, camel-attachments, camel-core, camel-direct, camel-jms, camel-kamelet, camel-microprofile-health, camel-platform-http, camel-rest, camel-rest-openapi, camel-yaml-dsl, cdi, kubernetes, qpid-jms, smallrye-context-propagation, smallrye-health, vertx] See the MicroProfilePropertiesSource lines; they show the content of the properties file that was added as a ConfigMap and mounted into the pod. 1.5.1.2. Use Case 2 - Knative Integration Route This use case features two Knative integration routes. The Feed route periodically sends a text message to a Knative channel. The second route, Printer , receives the message from the Knative channel and prints it. For Camel K, there are two pods, each one running a single integration route. So, this migration must create two projects, each one having one integration route. Later, if you want, you can consolidate them into a single maven project with both integration routes in a single pod. The Feed integration route. import org.apache.camel.builder.RouteBuilder; public class Feed extends RouteBuilder { @Override public void configure() throws Exception { from("timer:clock?period=15s") .setBody().simple("Hello World from Camel - ${date:now}") .log("sent message to messages channel: ${body}") .to("knative:channel/messages"); } } The Printer integration route. import org.apache.camel.builder.RouteBuilder; public class Printer extends RouteBuilder { @Override public void configure() throws Exception { from("knative:channel/messages") .convertBodyTo(String.class) .to("log:info"); } } The kamel run command shows you how this runs with Camel K. kamel run Feed.java kamel run Printer.java Two pods will be running.
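Before migrating, it can help to record what Camel K created for these integrations so that you can compare it with the migrated deployments later. A possible check, assuming the kamel CLI and the Knative messaging CRDs are available in the namespace:
# List the Camel K integrations and their pods
kamel get
oc get pods
# Inspect the Knative channel used by the routes
oc get channels.messaging.knative.dev messages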
1.5.1.2.1. Step 1 - Create the maven project Use camel jbang to export each file into a full maven project. Export the Feed integration. camel export \ --runtime=quarkus \ --quarkus-group-id=com.redhat.quarkus.platform \ --quarkus-version=3.8.5.redhat-00003 \ --repos=https://maven.repository.redhat.com/ga \ --dep=io.quarkus:quarkus-openshift \ --gav=com.mycompany:ceq-feed:1.0 \ --dir=ceq-feed \ Feed.java Export the Printer integration. camel export \ --runtime=quarkus \ --quarkus-group-id=com.redhat.quarkus.platform \ --quarkus-version=3.8.5.redhat-00003 \ --repos=https://maven.repository.redhat.com/ga \ --dep=io.quarkus:quarkus-openshift \ --gav=com.mycompany:ceq-printer:1.0 \ --dir=ceq-printer \ Printer.java A maven project is created for each integration. 1.5.1.2.2. Step 2 - Configure the project This step is to configure the maven project and the artifacts to suit your environment. Use case 1 contains information about labels, annotations and configuration in ConfigMaps. Get into the maven project Set the docker build strategy. Change the base image to OpenJDK 21 in src/main/docker (optional) Change the compiler version to 21 in pom.xml (optional) Add openshift as a deployment target. You must set these container image properties, to set the image address in the generated openshift.yml and knative.yml files. Add the following property in application.properties to allow the Knative controller to inject the K_SINK environment variable into the deployment. Add the knative.json in src/main/resources . This is a required configuration for Camel to connect to the Knative channel. Note There is a k.sink property placeholder. When the pod is running, Camel looks up the environment variable named K_SINK and uses its value for the url field. { "services": [ { "type": "channel", "name": "messages", "url": "{{k.sink}}", "metadata": { "camel.endpoint.kind": "sink", "knative.apiVersion": "messaging.knative.dev/v1", "knative.kind": "Channel", "knative.reply": "false" } } ] } Add the following property to allow Camel to load the Knative environment configuration. To make the injection work, you must create a Knative SinkBinding object. Add the SinkBinding file to src/main/kubernetes/openshift.yml cat <<EOF >> src/main/kubernetes/openshift.yml apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: finalizers: - sinkbindings.sources.knative.dev name: ceq-feed spec: sink: ref: apiVersion: messaging.knative.dev/v1 kind: Channel name: messages subject: apiVersion: apps/v1 kind: Deployment name: ceq-feed EOF Now, configure the ceq-printer project. Set the docker build strategy. Change the base image to OpenJDK 21 in src/main/docker (optional) Change the compiler version to 21 in pom.xml (optional) Set knative as a deployment target. You must set these container image properties, to correctly set the image address in the generated openshift.yml and knative.yml files. Add the knative.json in src/main/resources . This is a required configuration for Camel to connect to the Knative channel. { "services": [ { "type": "channel", "name": "messages", "path": "/channels/messages", "metadata": { "camel.endpoint.kind": "source", "knative.apiVersion": "messaging.knative.dev/v1", "knative.kind": "Channel", "knative.reply": "false" } } ] } Add the following property to allow Camel to load the Knative environment configuration.
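Putting the ceq-printer settings together, the resulting application.properties might look like the following sketch; the registry address and the <namespace> group are placeholders that must match your cluster.
quarkus.openshift.build-strategy=docker
quarkus.kubernetes.deployment-target=knative
quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000
quarkus.container-image.group=<namespace>
camel.component.knative.environmentPath=classpath:knative.json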
A Knative Subscription is required for the message delivery from the channel to a sink. Add the Subscription file to src/main/kubernetes/knative.yml apiVersion: messaging.knative.dev/v1 kind: Subscription metadata: finalizers: - subscriptions.messaging.knative.dev name: ceq-printer spec: channel: apiVersion: messaging.knative.dev/v1 kind: Channel name: messages subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: ceq-printer uri: /channels/messages 1.5.1.2.3. Step 3 - Build Build the package for local inspection. ./mvnw -ntp package This step builds the maven artifacts (JAR files) locally and generates the Openshift files in the target/kubernetes directory. Review target/kubernetes/openshift.yml and target/kubernetes/knative.yml to understand the resources that are deployed to the Openshift cluster. 1.5.1.2.4. Step 4 - Build and Deploy Build the package and deploy to Openshift. You can follow the image build in the maven output. After the build, you can see the pod running. 1.5.1.2.5. Step 5 - Test Verify that the integration route is working. Follow the pod container log The output looks similar to the following: ceq-feed pod INFO exec -a "java" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -cp "." -jar /deployments/quarkus-run.jar INFO running in /deployments __ ____ __ _____ ___ __ ____ ______ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.mai.BaseMainSupport] (main) Auto-configuration summary [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] camel.component.knative.environmentPath=classpath:knative.json [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.mai.BaseMainSupport] (main) Property-placeholders summary [org.apa.cam.mai.BaseMainSupport] (main) [OS Environment Variable] k.sink=http://hello-kn-channel.cmiranda-camel.svc.cluster.local [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:1) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (timer://clock) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 43ms (build:0ms init:0ms start:43ms) [io.quarkus] (main) ceq-feed 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.386s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-attachments, camel-cloudevents, camel-core, camel-knative, camel-platform-http, camel-rest, camel-rest-openapi, camel-timer, cdi, kubernetes, smallrye-context-propagation, vertx] [route1] (Camel (camel-1) thread #1 - timer://clock) sent message to hello channel: Hello World from Camel - Thu Aug 01 13:54:41 UTC 2024 [route1] (Camel (camel-1) thread #1 - timer://clock) sent message to hello channel: Hello World from Camel - Thu Aug 01 13:54:56 UTC 2024 [route1] (Camel (camel-1) thread #1 - timer://clock) sent message to hello channel: Hello World from Camel - Thu Aug 01 13:55:11 UTC 2024 See the Property-placeholders summary; it shows the k.sink property value.
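To confirm that the SinkBinding injected the channel address, you can inspect the environment of the ceq-feed deployment after it is deployed. This is a sketch; the resource names follow the examples above.
oc get sinkbinding ceq-feed
# Show the environment variables injected into the deployment
oc set env deployment/ceq-feed --list | grep K_SINK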
ceq-printer pod INFO exec -a "java" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -cp "." -jar /deployments/quarkus-run.jar INFO running in /deployments __ ____ __ _____ ___ __ ____ ______ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.mai.BaseMainSupport] (main) Auto-configuration summary [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] camel.component.knative.environmentPath=classpath:knative.json [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:1) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (knative://channel/hello) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 10ms (build:0ms init:0ms start:10ms) [io.quarkus] (main) ceq-printer 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.211s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-attachments, camel-cloudevents, camel-core, camel-knative, camel-log, camel-platform-http, camel-rest, camel-rest-openapi, cdi, kubernetes, smallrye-context-propagation, vertx] [info] (executor-thread-1) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello World from Camel - Thu Aug 01 13:54:41 UTC 2024] [info] (executor-thread-1) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello World from Camel - Thu Aug 01 13:54:56 UTC 2024] [info] (executor-thread-1) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello World from Camel - Thu Aug 01 13:55:11 UTC 2024] 1.5.1.3. Use Case 3 - Pipe Given the following integration route as a KameletBinding. apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sample spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: period: 5000 contentType: application/json message: '{"id":"1","field":"hello","message":"Camel Rocks"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: extract-field-action properties: field: "message" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: log-sink properties: showStreams: true 1.5.1.3.1. Step 1 - Create the maven project Use camel jbang to export the file into a maven project. camel export \ --runtime=quarkus \ --quarkus-group-id=com.redhat.quarkus.platform \ --quarkus-version=3.8.5.redhat-00003 \ --repos=https://maven.repository.redhat.com/ga \ --dep=io.quarkus:quarkus-openshift \ --gav=com.mycompany:ceq-timer2log-kbind:1.0 \ --dir=ceq-timer2log-kbind \ timer-2-log-kbind.yaml You can see more parameters with camel export --help 1.5.1.3.2. Step 2 - Configure the project This is the step to configure the maven project and the artifacts to suit your environment. Note You can follow use cases 1 and 2 for the common configuration; this section provides the steps required for the KameletBinding configuration. You can try to run the integration route locally with camel jbang to see how it works, before building and deploying to Openshift. Get into the maven project cd ceq-timer2log-kbind See the note at the beginning about how to manage Kamelets. This migration use case uses the org.apache.camel.kamelets:camel-kamelets dependency in pom.xml .
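For example, the dependency entry might look like the following sketch. The version shown is an assumption; align it with the Camel version of your platform, or omit it if the version is managed by the imported BOM.
<dependency>
    <groupId>org.apache.camel.kamelets</groupId>
    <artifactId>camel-kamelets</artifactId>
    <!-- assumed version; check the Red Hat Maven repository for the supported release -->
    <version>4.4.0.redhat-00025</version>
</dependency>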
When exporting, the following properties are added to application.properties , but you can remove them. Set the docker build strategy. If your Kamelet or KameletBinding has trait annotations like the following: trait.camel.apache.org/environment.vars: "my_key=my_val" , then you must follow the trait configuration section about how to set them using Quarkus properties. 1.5.1.3.3. Step 3 - Build Build the package for local inspection. ./mvnw -ntp package This step builds the maven artifacts (JAR files) locally and generates the Openshift manifest files in the target/kubernetes directory. Review target/kubernetes/openshift.yml to understand the resources that are deployed to the Openshift cluster. 1.5.1.3.4. Step 4 - Build and Deploy Build the package and deploy to Openshift. ./mvnw -ntp package -Dquarkus.openshift.deploy=true You can follow the image build in the maven output. After the build, you can see the pod running. 1.5.1.3.5. Step 5 - Test Verify that the integration route is working. Follow the pod container log oc logs -f `oc get pod -l app.kubernetes.io/name=ceq-timer2log-kbind -oname` The output looks similar to the following: [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.cli.con.LocalCliConnector] (main) Management from Camel JBang enabled [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.mai.BaseMainSupport] (main) Property-placeholders summary [org.apa.cam.mai.BaseMainSupport] (main) [timer-source.kamelet.yaml] period=5000 [org.apa.cam.mai.BaseMainSupport] (main) [timer-source.kamelet.yaml] message={"id":"1","field":"hello","message":"Camel Rocks"} [org.apa.cam.mai.BaseMainSupport] (main) [timer-source.kamelet.yaml] contentType=application/json [org.apa.cam.mai.BaseMainSupport] (main) [log-sink.kamelet.yaml] showStreams=true [org.apa.cam.mai.BaseMainSupport] (main) [ct-field-action.kamelet.yaml] extractField=extractField-1 [org.apa.cam.mai.BaseMainSupport] (main) [ct-field-action.kamelet.yaml] field=message [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:4) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started sample (kamelet://timer-source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started timer-source-1 (timer://tick) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started log-sink-2 (kamelet://source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started extract-field-action-3 (kamelet://source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 276ms (build:0ms init:0ms start:276ms) [io.quarkus] (main) ceq-timer2log-kbind 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.867s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-attachments, camel-cli-connector, camel-console, camel-core, camel-direct, camel-jackson, camel-kamelet, camel-log, camel-management, camel-microprofile-health, camel-platform-http, camel-rest, camel-rest-openapi, camel-timer, camel-xml-jaxb, camel-yaml-dsl, cdi, kubernetes, smallrye-context-propagation, smallrye-health, vertx] [log-sink] (Camel (camel-1) thread #2 - timer://tick) Exchange[ExchangePattern: InOnly, BodyType: org.apache.camel.converter.stream.InputStreamCache, Body: "Camel Rocks"]
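If you later rename the KameletBinding to a Pipe , as recommended in the Kamelets section, the resource keeps the same structure. A sketch of the equivalent Pipe , assuming the camel.apache.org/v1 API is available:
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: sample
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      period: 5000
      contentType: application/json
      message: '{"id":"1","field":"hello","message":"Camel Rocks"}'
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1
        name: extract-field-action
      properties:
        field: "message"
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: log-sink
    properties:
      showStreams: true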
1.5.2. Undeploy kubernetes resources To delete all resources installed by the quarkus-maven-plugin, you must run the following command. 1.5.3. Kubernetes CronJob Camel K has a feature for routes that start with a consumer of type cron, quartz, or timer. In some circumstances, it creates a kubernetes CronJob object instead of a regular Deployment . This saves computing resources by not running the Deployment Pod all the time. To obtain the same outcome in Red Hat build of Apache Camel for Quarkus, you must set the following properties in src/main/resources/application.properties . You must also set the timer consumer to execute only once, as follows: from("timer:java?delay=0&period=1&repeatCount=1") The following are the timer parameters. delay=0 : Starts the consumer with no delay. period=1 : Uses a minimal period of 1 millisecond. repeatCount=1 : Fires the timer only once and does not run it again. 1.6. Troubleshooting 1.6.1. Product Support If you encounter any problems during the migration process, you can open a support case and we will help you resolve the issue. 1.6.2. Ignore loading errors when exporting with camel jbang When using camel jbang export, the export may fail to load the routes. In that case, you can use the --ignore-loading-error parameter, as follows: 1.6.3. Increase logging You can set per-category logging. For example, the following property in application.properties sets the org.apache.camel.component.knative category to debug level. 1.6.4. Disable health checks Your application pod may fail with CrashLoopBackOff and the following error appears in the pod log. If you do not want the container health checks, you can disable them by removing this maven dependency from pom.xml <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-microprofile-health</artifactId> </dependency> 1.7. Known Issues There are a few known issues related to migrating integration routes, along with their workarounds. These workarounds are not limitations of the Red Hat build of Apache Camel for Quarkus, but rather part of the migration process. Once the migration is complete, the resulting Maven project is customizable to meet customer needs. 1.7.1. Camel K features not available in Camel for Quarkus Some Camel K features are not available in Quarkus or Camel as a quarkus property. These features may require additional configuration steps to achieve the same functionality when building and deploying in Red Hat build of Apache Camel for Quarkus. 1.7.1.1. Owner Trait The owner trait sets the kubernetes owner fields for all created resources, simplifying the process of tracking who created a kubernetes resource. There is an open Quarkus issue #13952 requesting this feature. There is no workaround to set the owner fields. 1.7.1.2. Affinity Trait The node affinity trait enables you to constrain the nodes on which the integration pods are scheduled to run. There is an open Quarkus issue #13596 requesting this feature. The workaround would be to implement a post processing task after the maven package step, to add the affinity configuration to target/kubernetes/openshift.yml . 1.7.1.3. PodDisruptionBudget Trait The PodDisruptionBudget trait allows you to configure the PodDisruptionBudget resource for the Integration pods. There is no configuration in Quarkus to generate the PodDisruptionBudget resource. The workaround would be to implement a post processing task after the maven package step, to add the PodDisruptionBudget configuration to target/kubernetes/openshift.yml .
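A sketch of such a post-processing step: after ./mvnw -ntp package , append a PodDisruptionBudget to target/kubernetes/openshift.yml before deploying. The selector assumes the default app.kubernetes.io/name label and an application named ceq-app , so adjust both to your project.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ceq-app-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ceq-app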
1.7.2. Camel Jbang fails to add camel-quarkus-direct dependency If the integration route contains a rest and a direct endpoint, as shown in the example below, verify that pom.xml contains the camel-quarkus-direct dependency. If it is missing, you must add it. rest() .post("/message") .id("rest") .to("direct:foo"); from("direct:foo") .log("hello"); The camel-quarkus-direct dependency to add to the pom.xml <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-direct</artifactId> </dependency> 1.7.3. Quarkus build fails with The server certificate is not trusted by the client. Therefore, you must either add the server public key to the client or trust the server certificate. If you are testing, you can add the following property to src/main/resources/application.properties and rebuild. 1.7.4. Camel Jbang fails to export a route Camel Jbang fails to export a route when the route contains a Kamelet endpoint that is backed by a bean. If the endpoint references a Kamelet with property placeholders, such as {{broker}} , and the Kamelet uses a type: "#class:org.apache.qpid.jms.JmsConnectionFactory" bean to initialize the camel component, the export may fail. The failure produces a chain of errors similar to the following. How to fix: Replace the property placeholders {{broker}} and {{queue}} in the Kamelet endpoint with any value, for example remoteURI=broker&destinationName=queue . Then export the file, and add the property placeholders back in the exported route in the src/main/ directory. 1.8. Reference documentation For more details about Camel products, refer to the following links. Red Hat build of Apache Camel for Quarkus releases Red Hat build of Apache Camel for Quarkus Documentation, including migration to Camel Spring Boot Camel K documentation Deploying a Camel Spring Boot application to OpenShift Deploying Red Hat build of Apache Camel for Quarkus applications Deploying your Red Hat build of Quarkus applications to OpenShift Container Platform Developer Resources for Red Hat Build of Quarkus Quarkus Configuration for Kubernetes Quarkus Configuration for Openshift | [
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency>",
"ENV JAVA_OPTS=\"USDJAVA_OPTS -Dquarkus.log.category.\\\"org.apache.camel\\\".level=debug\"",
"affinity.pod-affinity affinity.pod-affinity-labels affinity.pod-anti-affinity affinity.pod-anti-affinity-labels affinity.node-affinity-labels",
"quarkus.kubernetes.deployment-target=knative quarkus.container-image.group=<group-name> quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000",
"<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-micrometer</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency>",
"pdb.enabled pdb.min-available pdb.max-unavailable",
"{ \"services\": [ { \"type\": \"channel\", \"name\": \"messages\", \"url\": \"{{k.sink}}\", \"metadata\": { \"camel.endpoint.kind\": \"sink\", \"knative.apiVersion\": \"messaging.knative.dev/v1\", \"knative.kind\": \"Channel\", \"knative.reply\": \"false\" } } ] }",
"import org.apache.camel.builder.RouteBuilder; public class Http2Jms extends RouteBuilder { @Override public void configure() throws Exception { rest() .post(\"/message\") .id(\"rest\") .to(\"direct:jms\"); from(\"direct:jms\") .log(\"Sending message to JMS {{broker}}: USD{body}\") .to(\"kamelet:jms-amqp-10-sink?remoteURI=amqp://myhost:61616&destinationName=queue\"); } }",
"broker=amqp://172.30.177.216:61616 queue=qtest",
"kamel run Http2Jms.java -p file://USDPWD/http2jms.properties --annotation some_annotation=foo --env MY_ENV1=VAL1",
"camel export --runtime=quarkus --quarkus-group-id=com.redhat.quarkus.platform --quarkus-version=3.8.5.redhat-00003 --repos=https://maven.repository.redhat.com/ga --dep=io.quarkus:quarkus-openshift --gav=com.mycompany:ceq-app:1.0 --dir=ceq-app1 Http2Jms.java",
"cd ceq-app1",
"echo quarkus.openshift.build-strategy=docker >> src/main/resources/application.properties",
"FROM registry.access.redhat.com/ubi9/openjdk-21:1.20",
"<maven.compiler.release>21</maven.compiler.release>",
"quarkus.openshift.annotations.sample_annotation=sample_value1 quarkus.openshift.env.vars.SAMPLE_KEY=sample_value2 quarkus.openshift.labels.sample_label=sample_value3",
"quarkus.container-image.registry quarkus.container-image.group quarkus.container-image.name quarkus.container-image.tag",
"create configmap ceq-app --from-file application.properties=http2jms.properties --dry-run=client -oyaml > src/main/kubernetes/common.yml",
"quarkus.openshift.app-config-map=ceq-app",
"./mvnw -ntp package",
"./mvnw -ntp package -Dquarkus.openshift.deploy=true",
"mvn -ntp quarkus:run",
"logs -f `oc get pod -l app.kubernetes.io/name=app -oname`",
"INFO exec -a \"java\" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -cp \".\" -jar /deployments/quarkus-run.jar INFO running in /deployments __ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.mai.BaseMainSupport] (main) Property-placeholders summary [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] broker=amqp://172.30.177.216:61616 [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] queue=qtest [org.apa.cam.mai.BaseMainSupport] (main) [ms-amqp-10-sink.kamelet.yaml] destinationName=qtest [org.apa.cam.mai.BaseMainSupport] (main) [ms-amqp-10-sink.kamelet.yaml] connectionFactoryBean=connectionFactoryBean-1 [org.apa.cam.mai.BaseMainSupport] (main) [ms-amqp-10-sink.kamelet.yaml] remoteURI=amqp://172.30.177.216:61616 [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:3) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (direct://jms) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started rest (rest://post:/message) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started jms-amqp-10-sink-1 (kamelet://source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 17ms (build:0ms init:0ms start:17ms) [io.quarkus] (main) app 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.115s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-amqp, camel-attachments, camel-core, camel-direct, camel-jms, camel-kamelet, camel-microprofile-health, camel-platform-http, camel-rest, camel-rest-openapi, camel-yaml-dsl, cdi, kubernetes, qpid-jms, smallrye-context-propagation, smallrye-health, vertx]",
"import org.apache.camel.builder.RouteBuilder; public class Feed extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:clock?period=15s\") .setBody().simple(\"Hello World from Camel - USD{date:now}\") .log(\"sent message to messages channel: USD{body}\") .to(\"knative:channel/messages\"); } }",
"import org.apache.camel.builder.RouteBuilder; public class Printer extends RouteBuilder { @Override public void configure() throws Exception { from(\"knative:channel/messages\") .convertBodyTo(String.class) .to(\"log:info\"); } }",
"kamel run Feed.java kamel run Printer.java",
"camel export --runtime=quarkus --quarkus-group-id=com.redhat.quarkus.platform --quarkus-version=3.8.5.redhat-00003 --repos=https://maven.repository.redhat.com/ga --dep=io.quarkus:quarkus-openshift --gav=com.mycompany:ceq-feed:1.0 --dir=ceq-feed Feed.java",
"camel export --runtime=quarkus --quarkus-group-id=com.redhat.quarkus.platform --quarkus-version=3.8.5.redhat-00003 --repos=https://maven.repository.redhat.com/ga --dep=io.quarkus:quarkus-openshift --gav=com.mycompany:ceq-printer:1.0 --dir=ceq-printer Printer.java",
"cd ceq-feed",
"echo quarkus.openshift.build-strategy=docker >> src/main/resources/application.properties",
"FROM registry.access.redhat.com/ubi9/openjdk-21:1.20",
"<maven.compiler.release>21</maven.compiler.release>",
"quarkus.kubernetes.deployment-target=openshift",
"quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000 quarkus.container-image.group=<namespace>",
"quarkus.openshift.labels.\"bindings.knative.dev/include\"=true",
"{ \"services\": [ { \"type\": \"channel\", \"name\": \"messages\", \"url\": \"{{k.sink}}\", \"metadata\": { \"camel.endpoint.kind\": \"sink\", \"knative.apiVersion\": \"messaging.knative.dev/v1\", \"knative.kind\": \"Channel\", \"knative.reply\": \"false\" } } ] }",
"camel.component.knative.environmentPath=classpath:knative.json",
"cat <<EOF >> src/main/kubernetes/openshift.yml apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: finalizers: - sinkbindings.sources.knative.dev name: ceq-feed spec: sink: ref: apiVersion: messaging.knative.dev/v1 kind: Channel name: messages subject: apiVersion: apps/v1 kind: Deployment name: ceq-feed EOF",
"cd ceq-printer",
"echo quarkus.openshift.build-strategy=docker >> src/main/resources/application.properties",
"FROM registry.access.redhat.com/ubi9/openjdk-21:1.20",
"<maven.compiler.release>21</maven.compiler.release>",
"quarkus.kubernetes.deployment-target=knative",
"quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000 quarkus.container-image.group=<namespace>",
"{ \"services\": [ { \"type\": \"channel\", \"name\": \"messages\", \"path\": \"/channels/messages\", \"metadata\": { \"camel.endpoint.kind\": \"source\", \"knative.apiVersion\": \"messaging.knative.dev/v1\", \"knative.kind\": \"Channel\", \"knative.reply\": \"false\" } } ] }",
"camel.component.knative.environmentPath=classpath:knative.json",
"apiVersion: messaging.knative.dev/v1 kind: Subscription metadata: finalizers: - subscriptions.messaging.knative.dev name: ceq-printer spec: channel: apiVersion: messaging.knative.dev/v1 kind: Channel name: messages subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: ceq-printer uri: /channels/messages",
"./mvnw -ntp package",
"./mvnw -ntp package -Dquarkus.openshift.deploy=true",
"logs -f `oc get pod -l app.kubernetes.io/name=ceq-feed -oname`",
"INFO exec -a \"java\" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -cp \".\" -jar /deployments/quarkus-run.jar INFO running in /deployments __ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.mai.BaseMainSupport] (main) Auto-configuration summary [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] camel.component.knative.environmentPath=classpath:knative.json [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.mai.BaseMainSupport] (main) Property-placeholders summary [org.apa.cam.mai.BaseMainSupport] (main) [OS Environment Variable] k.sink=http://hello-kn-channel.cmiranda-camel.svc.cluster.local [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:1) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (timer://clock) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 43ms (build:0ms init:0ms start:43ms) [io.quarkus] (main) ceq-feed 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.386s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-attachments, camel-cloudevents, camel-core, camel-knative, camel-platform-http, camel-rest, camel-rest-openapi, camel-timer, cdi, kubernetes, smallrye-context-propagation, vertx] [route1] (Camel (camel-1) thread #1 - timer://clock) sent message to hello channel: Hello World from Camel - Thu Aug 01 13:54:41 UTC 2024 [route1] (Camel (camel-1) thread #1 - timer://clock) sent message to hello channel: Hello World from Camel - Thu Aug 01 13:54:56 UTC 2024 [route1] (Camel (camel-1) thread #1 - timer://clock) sent message to hello channel: Hello World from Camel - Thu Aug 01 13:55:11 UTC 2024",
"INFO exec -a \"java\" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -cp \".\" -jar /deployments/quarkus-run.jar INFO running in /deployments __ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.mai.BaseMainSupport] (main) Auto-configuration summary [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] camel.component.knative.environmentPath=classpath:knative.json [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:1) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (knative://channel/hello) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 10ms (build:0ms init:0ms start:10ms) [io.quarkus] (main) ceq-printer 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.211s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-attachments, camel-cloudevents, camel-core, camel-knative, camel-log, camel-platform-http, camel-rest, camel-rest-openapi, cdi, kubernetes, smallrye-context-propagation, vertx] [info] (executor-thread-1) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello World from Camel - Thu Aug 01 13:54:41 UTC 2024] [info] (executor-thread-1) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello World from Camel - Thu Aug 01 13:54:56 UTC 2024] [info] (executor-thread-1) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello World from Camel - Thu Aug 01 13:55:11 UTC 2024]",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sample spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: period: 5000 contentType: application/json message: '{\"id\":\"1\",\"field\":\"hello\",\"message\":\"Camel Rocks\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: extract-field-action properties: field: \"message\" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: log-sink properties: showStreams: true",
"camel export --runtime=quarkus --quarkus-group-id=com.redhat.quarkus.platform --quarkus-version=3.8.5.redhat-00003 --repos=https://maven.repository.redhat.com/ga --dep=io.quarkus:quarkus-openshift --gav=com.mycompany:ceq-timer2log-kbind:1.0 --dir=ceq-timer2log-kbind timer-2-log-kbind.yaml",
"cd ceq-timer2log-kbind",
"quarkus.native.resources.includes camel.main.routes-include-pattern",
"echo quarkus.openshift.build-strategy=docker >> src/main/resources/application.properties",
"./mvnw -ntp package",
"./mvnw -ntp package -Dquarkus.openshift.deploy=true",
"logs -f `oc get pod -l app.kubernetes.io/name=ceq-timer2log-kbind -oname`",
"[org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.cli.con.LocalCliConnector] (main) Management from Camel JBang enabled [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.mai.BaseMainSupport] (main) Property-placeholders summary [org.apa.cam.mai.BaseMainSupport] (main) [timer-source.kamelet.yaml] period=5000 [org.apa.cam.mai.BaseMainSupport] (main) [timer-source.kamelet.yaml] message={\"id\":\"1\",\"field\":\"hello\",\"message\":\"Camel Rocks\"} [org.apa.cam.mai.BaseMainSupport] (main) [timer-source.kamelet.yaml] contentType=application/json [org.apa.cam.mai.BaseMainSupport] (main) [log-sink.kamelet.yaml] showStreams=true [org.apa.cam.mai.BaseMainSupport] (main) [ct-field-action.kamelet.yaml] extractField=extractField-1 [org.apa.cam.mai.BaseMainSupport] (main) [ct-field-action.kamelet.yaml] field=message [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:4) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started sample (kamelet://timer-source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started timer-source-1 (timer://tick) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started log-sink-2 (kamelet://source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started extract-field-action-3 (kamelet://source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 276ms (build:0ms init:0ms start:276ms) [io.quarkus] (main) ceq-timer2log-kbind 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.867s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-attachments, camel-cli-connector, camel-console, camel-core, camel-direct, camel-jackson, camel-kamelet, camel-log, camel-management, camel-microprofile-health, camel-platform-http, camel-rest, camel-rest-openapi, camel-timer, camel-xml-jaxb, camel-yaml-dsl, cdi, kubernetes, smallrye-context-propagation, smallrye-health, vertx] [log-sink] (Camel (camel-1) thread #2 - timer://tick) Exchange[ExchangePattern: InOnly, BodyType: org.apache.camel.converter.stream.InputStreamCache, Body: \"Camel Rocks\"]",
"delete -f target/kubernetes/openshift.yml",
"quarkus.openshift.deployment-kind=CronJob quarkus.openshift.cron-job.schedule=<your cron schedule> camel.main.duration-max-idle-seconds=1",
"from(\"timer:java?delay=0&period=1&repeatCount=1\")",
"camel export --ignore-loading-error <parameters>",
"quarkus.log.category.\"org.apache.camel.component.knative\".level=debug",
"Get \"http://127.0.0.1:8080/q/health/ready\": dial tcp 127.0.0.1:8080: connect: connection refused",
"<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-microprofile-health</artifactId> </dependency>",
"rest() .post(\"/message\") .id(\"rest\") .to(\"direct:foo\"); from(\"direct:foo\") .log(\"hello\");",
"<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-direct</artifactId> </dependency>",
"PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target",
"quarkus.kubernetes-client.trust-certs=true",
"from(\"direct:jms\") .to(\"kamelet:jms-amqp-10-sink?remoteURI={{broker}}&destinationName={{queue}}\");",
"org.apache.camel.RuntimeCamelException: org.apache.camel.VetoCamelContextStartException: Failure creating route from template: jms-amqp-10-sink Caused by: org.apache.camel.VetoCamelContextStartException: Failure creating route from template: jms-amqp-10-sink Caused by: org.apache.camel.component.kamelet.FailedToCreateKameletException: Error creating or loading Kamelet with id jms-amqp-10-sink (locations: classpath:kamelets,github:apache:camel-kamelets/kamelets) Caused by: org.apache.camel.FailedToCreateRouteException: Failed to create route jms-amqp-10-sink-1 at: >>> To[jms:{{destinationType}}:{{destinationName}}?connectionFactory=#bean:{{connectionFactoryBean}}] Caused by: org.apache.camel.ResolveEndpointFailedException: Failed to resolve endpoint: jms://Queue:USD%7Bqueue%7D?connectionFactory=%23bean%3AconnectionFactoryBean-1 due to: Error binding property (connectionFactory=#bean:connectionFactoryBean-1) Caused by: org.apache.camel.PropertyBindingException: Error binding property (connectionFactory=#bean:connectionFactoryBean-1) with name: connectionFactory on bean: Caused by: java.lang.IllegalStateException: Cannot create bean: #class:org.apache.qpid.jms.JmsConnectionFactory Caused by: org.apache.camel.PropertyBindingException: Error binding property (remoteURI=@@[broker]@@) with name: remoteURI on bean: org.apache.qpid.jms.JmsConnectionFactory@a2b54e3 with value: @@[broker]@@ Caused by: java.lang.IllegalArgumentException: Invalid remote URI: @@[broker]@@ Caused by: java.net.URISyntaxException: Illegal character in path at index 2: @@[broker]@@"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/migration_guide_camel_k_to_camel_extensions_for_quarkus/overview |
1.7. Red Hat Gluster Storage Support Matrix | 1.7. Red Hat Gluster Storage Support Matrix This section lists all the supported Red Hat Enterprise Linux version for a particular Red Hat Gluster Storage release. Table 1.7. Red Hat Gluster Storage Support Matrix Red Hat Enterprise Linux version Red Hat Gluster Storage version 6.5 3.0 6.6 3.0.2, 3.0.3, 3.0.4 6.7 3.1, 3.1.1, 3.1.2 6.8 3.1.3 6.9 3.2 6.9 3.3 6.9 3.3.1 6.10 3.4, 3.5 7.1 3.1, 3.1.1 7.2 3.1.2 7.2 3.1.3 7.3 3.2 7.4 3.2 7.4 3.3 7.4 3.3.1 7.5 3.3.1, 3.4 7.6 3.3.1, 3.4 7.7 3.5, 3.5.1 7.8 3.5.1, 3.5.2 7.9 3.5.3, 3.5.4, 3.5.5, 3.5.6, 3.5.7 8.2 3.5.2, 3.5.3 8.3 3.5.3 8.4 3.5.4 8.5 3.5.5, 3.5.6 8.6 3.5.7 | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/installation_guide/red_hat_gluster_storage_support_matrix |
Chapter 2. New features | Chapter 2. New features This part describes new features and major enhancements introduced in Red Hat Satellite 6.15. Command to refresh all/bulk ACS's using hammer Due to a potential conflict with an existing API endpoint, the correct command is: Jira:SAT-23132 Setting's behavior for Append domain names to the host changed The behavior of the Append domain names to the host setting was changed to save hosts with full names in the database and displaying the value. Jira:SAT-24730 New actions added to the host details page The vertical ellipses on the host details page have been updated. These actions will force a package profile upload on the host through remote execution, ensuring that the applicability calculation is up-to-date. Refresh applicability is added to the main vertical ellipsis on the top right of the page Refresh package applicability is added on the vertical ellipsis above the Content > Packages table Refresh errata applicability replaces the Recalculate action on the vertical ellipsis menu on the Content > Errata tab Jira:SAT-22617 Provisioning templates now use the Global Registration method to register hosts Previously, provisioning templates used the Katello CA Consumer to register hosts during provisioning, which is deprecated and not compatible with newer RHEL systems. With this release, provisioning templates register hosts by using the same method as the Global Registration template, because they include the shared subscription_manager_setup snippet. Bugzilla:2153548 Refresh counts available for Capsule packages If your Capsules have synchronized content enabled, you can refresh the number of content counts available to the environments associated with the Capsule. This displays the Content Views inside those environments available to the Capsule. You can then expand the Content View to view the repositories associated with that Content View version. Jira:SAT-17368 New report template for hosts in SCA organizations Host - Installed Products Use this template for hosts in Simple Content Access (SCA) organizations. It generates a report with installed product information along with other metrics included in Subscription - Entitlement Report except information about subscriptions. Subscription - Entitlement Report Use this template for hosts that are not in SCA organizations. It generates a report with information about subscription entitlements including when they expire. It only outputs information for hosts in organizations that do not use SCA. Jira:SAT-20479 RHEL end of support visible in Satellite Satellite provides multiple mechanisms to display information about upcoming End of Support (EOS) events for your Red Hat Enterprise Linux hosts: Notification banner A column on the Hosts index page Search field In the Satellite web UI, navigate to Hosts > All Hosts . Click Manage columns . Select the Content column to expand it. Select RHEL Lifecycle status . Click Save to generate a new column that displays the Red Hat Enterprise Linux lifecycle status. You can use the Search field to search hosts by rhel_lifecycle_status . It can have the following values: full_support maintenance_support approaching_end_of_maintenance extended_support approaching_end_of_support support_ended You can also find the RHEL lifecycle status on the Host status card on the Host Details page. Jira:SAT-20480 fapolicyd on Satellite and Capsule is now available You can now install and enable fapolicyd on Satellite Server and Capsule Server. 
The fapolicyd software framework is one of the most efficient ways to prevent running untrusted and possibly malicious applications on the system. Jira:SAT-20753 Vertical navigation changes This release brings the following vertical navigation changes: New search bar at the top of vertical navigation enables you to quickly find menu items. You can focus the search bar by clicking on it or by pressing Ctrl + Shift + F . Some menu items in the vertical navigation have been grouped into expandable submenus. For example, Config Management and Report Templates under Monitor have been grouped into Reports . Click the submenu to expand it. The order of the menu items remains unchanged. Menu and submenu items, such as Monitor or Reports , now expand when you click them instead of when you hover over them. Jira:SAT-20947 Satellite EOL date in the web UI Admin users can now see the end of life (EOL) date in the Satellite web UI if the EOL date of the Satellite version is within the 6 months. This information displays as a warning banner. The warning banner changes to an error banner if the Satellite version is past the EOL date. You can dismiss the banners and they reappear after one month or on the EOL date. Jira:SAT-20990 Satellite auto-selects the activation key during host registration When you register a host using Hosts > Register Host in the Satellite web UI and there is only one activation key available for the organization and location selected in the registration form, Satellite selects the activation key automatically. Bugzilla:1994654 Satellite sends email notifications after certain background actions fail Previously, when background actions such as repository synchronization failed, users had to log in to the Satellite Web UI to learn about the failures. With this update, you can configure email notifications for the following events: failed content view promotion, failed content view publish, failed Capsule sync, and failed repository sync. To start receiving the notifications, log in to the Satellite Web UI and navigate to Administer > Users . Select the required user, switch to the Email Preferences tab, and specify the required notifications. Make sure that the Mail Enabled checkbox on the Email Preferences tab is selected. Note that users whose accounts are disabled do not receive any notification emails. Jira:SAT-20393 Satellite installer now automatically determines the most appropriate logging layout Previously, you had to configure a layout for Satellite logs manually by passing the --foreman-logging-layout option to satellite-installer . With this release, satellite-installer automatically selects the most appropriate layout type if you do not specify a layout type manually. For file-based logging, the multiline_request_pattern layout is used by default. For logging to journald, the pattern layout is used by default. To specify the required logging layout manually, pass the --foreman-logging-layout option to satellite-installer . Jira:SAT-20206 Redis cache Satellite now includes the ability to configure redis as the cache for the Satellite WebUI. Use redis cache if you have a large number of hosts registered to the Satellite Server or if you use the extra-large tuning profile and that is causing issues. 
To use redis cache: To revert back to file based caching: Jira:SAT-20910 display_fqdn_for_hosts replaces append_domain_name_for_hosts in Satellite settings Previously, you were able to configure whether Satellite stores the name of the host with the domain name appended in the database. With this update, the name property of the host in the database always contains the fully qualified domain name (FQDN). As a result, the following settings are no longer available in Satellite: Append domain names to the host in Satellite Web UI append_domain_name_for_hosts in Hammer and API The settings above have been replaced with the following settings. The new settings only control how host names are displayed: Display FQDN for hosts in the Satellite Web UI display_fqdn_for_hosts in Hammer and API Jira:SAT-19793 Permissions The following permissions have been added: create_lookup_values destroy_lookup_values edit_lookup_values view_lookup_values These permissions were created to solve an issue with overriding Ansible variables and are automatically assigned to relevant roles. For more information, see Non-admin users can override Ansible variables . Jira:SAT-18126 New Hammer subcommands and options New subcommands hammer alternate-content-source bulk hammer capsule content reclaim-space hammer capsule content update-counts hammer proxy content reclaim-space hammer proxy content update-counts New options file-id and nondefault added to hammer content-view version list lifecycle-environment , environment , and environment-id added to hammer erratum list delete-empty-repo-filters added to hammer repository delete Jira:SAT-24698 New API endpoints The following API endpoints have been added: /katello/api/alternate_content_sources/bulk/refresh_all /katello/api/capsules/:id/content/counts /katello/api/capsules/:id/content/update_counts /katello/api/capsules/:id/content/reclaim_space /api/v2/hosts/bulk Jira:SAT-24552 | [
"hammer alternate-content-source bulk refresh-all",
"satellite-installer --foreman-rails-cache-store type:redis",
"satellite-installer --foreman-rails-cache-store type:file"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/release_notes/new-features |
A.5. Kerberos Errors | A.5. Kerberos Errors Kerberos errors frequently become apparent when trying to connect to the realm using kinit or a similar client. For information related to Kerberos, first check the Kerberos manpages, help files, and other resources. Important Identity Management has its own command-line tools to use to manage Kerberos policies. Do not use kadmin or kadmin.local to manage IdM Kerberos settings. There are several places to look for Kerberos error log information: For kinit problems or other Kerberos server problems, look at the KDC log in /var/log/krb5kdc.log . For IdM-specific errors, look in /var/log/httpd/error_log . The IdM logs, both for the server and for IdM-associated services, are covered in Section 28.1.4, "Checking IdM Server Logs" . A.5.1. Problems making connections with SSH when using GSS-API If there are bad reverse DNS entries in the DNS configuration, then it may not be possible to log into IdM resources using SSH. When SSH attempts to connect to a resource using GSS-API as its security method, GSS-API first checks the DNS records. The bad records prevent SSH from locating the resource. It is possible to disable reverse DNS lookups in the SSH configuration. Rather than using reverse DNS records, SSH passes the given username directly to GSS-API. To disable reverse DNS lookups with SSH, add or edit the GSSAPITrustDNS directive and set the value to no . A.5.2. There are problems connecting to an NFS server after changing a keytab Clients attempting to mount NFS exports rely on the existence of a valid principal and secret key on both the NFS server and the client host. Clients themselves should not have access to the NFS keytab. The ticket for the NFS connection will be given to clients from the KDC. Failure to export an updated keytab can cause problems that are difficult to isolate. For example, existing service connections may continue to function, but no new connections may be possible. | [
"vim /etc/ssh/ssh_config GSSAPITrustDNS no"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/kerberos_errors |
Chapter 28. Configuring ingress cluster traffic | Chapter 28. Configuring ingress cluster traffic 28.1. Configuring ingress cluster traffic overview OpenShift Container Platform provides the following methods for communicating from outside the cluster with services running in the cluster. The methods are recommended, in order of preference: If you have HTTP/HTTPS, use an Ingress Controller. If you have a TLS-encrypted protocol other than HTTPS, for example, TLS with the SNI header, use an Ingress Controller. Otherwise, use a Load Balancer, an External IP, or a NodePort . Method Purpose Use an Ingress Controller Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS (for example, TLS with the SNI header). Automatically assign an external IP using a load balancer service Allows traffic to non-standard ports through an IP address assigned from a pool. Most cloud platforms offer a method to start a service with a load-balancer IP address. About MetalLB and the MetalLB Operator Allows traffic to a specific IP address or an address from a pool on the machine network. For bare-metal installations or platforms that are like bare metal, MetalLB provides a way to start a service with a load-balancer IP address. Manually assign an external IP to a service Allows traffic to non-standard ports through a specific IP address. Configure a NodePort Expose a service on all nodes in the cluster. 28.1.1. Comparison: Fault tolerant access to external IP addresses For the communication methods that provide access to an external IP address, fault tolerant access to the IP address is another consideration. The following features provide fault tolerant access to an external IP address. IP failover IP failover manages a pool of virtual IP addresses for a set of nodes. It is implemented with Keepalived and Virtual Router Redundancy Protocol (VRRP). IP failover is a layer 2 mechanism only and relies on multicast. Multicast can have disadvantages for some networks. MetalLB MetalLB has a layer 2 mode, but it does not use multicast. Layer 2 mode has a disadvantage that it transfers all traffic for an external IP address through one node. Manually assigning external IP addresses You can configure your cluster with an IP address block that is used to assign external IP addresses to services. By default, this feature is disabled. This feature is flexible, but places the largest burden on the cluster or network administrator. The cluster is prepared to receive traffic that is destined for the external IP, but each customer has to decide how they want to route traffic to nodes. 28.2. Configuring ExternalIPs for services As a cluster administrator, you can designate an IP address block that is external to the cluster and that can send traffic to services in the cluster. This functionality is generally most useful for clusters installed on bare-metal hardware. 28.2.1. Prerequisites Your network infrastructure must route traffic for the external IP addresses to your cluster. 28.2.2. About ExternalIP For non-cloud environments, OpenShift Container Platform supports the use of the ExternalIP facility to specify external IP addresses in the spec.externalIPs[] parameter of the Service object. A service configured with an ExternalIP functions similarly to a service with type=NodePort , whereby traffic is directed to a local node for load balancing. Important For cloud environments, use the load balancer services for automatic deployment of a cloud load balancer to target the endpoints of a service.
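As a minimal sketch of that cloud path, with a hypothetical name, selector, and ports, a LoadBalancer-type Service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
The cloud provider then provisions the load balancer and records its address under the status.loadBalancer.ingress field of the Service.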
After you specify a value for the parameter, OpenShift Container Platform assigns an additional virtual IP address to the service. The IP address can exist outside of the service network that you defined for your cluster. Warning Because ExternalIP is disabled by default, enabling the ExternalIP functionality might introduce security risks for the service, because in-cluster traffic to an external IP address is directed to that service. This configuration means that cluster users could intercept sensitive traffic destined for external resources. You can use either a MetalLB implementation or an IP failover deployment to attach an ExternalIP resource to a service in the following ways: Automatic assignment of an external IP OpenShift Container Platform automatically assigns an IP address from the autoAssignCIDRs CIDR block to the spec.externalIPs[] array when you create a Service object with spec.type=LoadBalancer set. For this configuration, OpenShift Container Platform implements a cloud version of the load balancer service type and assigns IP addresses to the services. Automatic assignment is disabled by default and must be configured by a cluster administrator as described in the "Configuration for ExternalIP" section. Manual assignment of an external IP OpenShift Container Platform uses the IP addresses assigned to the spec.externalIPs[] array when you create a Service object. You cannot specify an IP address that is already in use by another service. After using either the MetalLB implementation or an IP failover deployment to host external IP address blocks, you must configure your networking infrastructure to ensure that the external IP address blocks are routed to your cluster. This configuration means that the IP address is not configured in the network interfaces from nodes. To handle the traffic, you must configure the routing and access to the external IP by using a method, such as static Address Resolution Protocol (ARP) entries. OpenShift Container Platform extends the ExternalIP functionality in Kubernetes by adding the following capabilities: Restrictions on the use of external IP addresses by users through a configurable policy Allocation of an external IP address automatically to a service upon request 28.2.3. Additional resources Configuring IP failover About MetalLB and the MetalLB Operator 28.2.4. Configuration for ExternalIP Use of an external IP address in OpenShift Container Platform is governed by the following parameters in the Network.config.openshift.io custom resource (CR) that is named cluster : spec.externalIP.autoAssignCIDRs defines an IP address block used by the load balancer when choosing an external IP address for the service. OpenShift Container Platform supports only a single IP address block for automatic assignment. This configuration requires less steps than manually assigning ExternalIPs to services, which requires managing the port space of a limited number of shared IP addresses. If you enable automatic assignment, a Service object with spec.type=LoadBalancer is allocated an external IP address. spec.externalIP.policy defines the permissible IP address blocks when manually specifying an IP address. OpenShift Container Platform does not apply policy rules to IP address blocks that you defined in the spec.externalIP.autoAssignCIDRs parameter. If routed correctly, external traffic from the configured external IP address block can reach service endpoints through any TCP or UDP port that the service exposes. 
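As an illustrative sketch only, with hypothetical addresses, routing for a manually assigned external IP can be provided by a static route on an upstream router that points the address at a cluster node that accepts the traffic, for example:
ip route add 192.0.2.10/32 via 10.0.128.4
A static ARP entry on the local segment, an IP failover deployment, or MetalLB can serve the same purpose, as noted earlier.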
Important As a cluster administrator, you must configure routing to externalIPs. You must also ensure that the IP address block you assign terminates at one or more nodes in your cluster. For more information, see Kubernetes External IPs . OpenShift Container Platform supports both the automatic and manual assignment of IP addresses, where each address is guaranteed to be assigned to a maximum of one service. This configuration ensures that each service can expose its chosen ports regardless of the ports exposed by other services. Note To use IP address blocks defined by autoAssignCIDRs in OpenShift Container Platform, you must configure the necessary IP address assignment and routing for your host network. The following YAML describes a service with an external IP address configured: Example Service object with spec.externalIPs[] set apiVersion: v1 kind: Service metadata: name: http-service spec: clusterIP: 172.30.163.110 externalIPs: - 192.168.132.253 externalTrafficPolicy: Cluster ports: - name: highport nodePort: 31903 port: 30102 protocol: TCP targetPort: 30102 selector: app: web sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.132.253 # ... 28.2.5. Restrictions on the assignment of an external IP address As a cluster administrator, you can specify IP address blocks to allow and to reject IP addresses for a service. Restrictions apply only to users without cluster-admin privileges. A cluster administrator can always set the service spec.externalIPs[] field to any IP address. You configure an IP address policy by specifying Classless Inter-Domain Routing (CIDR) address blocks for the spec.ExternalIP.policy parameter in the policy object. Example in JSON form of a policy object and its CIDR parameters { "policy": { "allowedCIDRs": [], "rejectedCIDRs": [] } } When configuring policy restrictions, the following rules apply: If policy is set to {} , creating a Service object with spec.ExternalIPs[] results in a failed service. This setting is the default for OpenShift Container Platform. The same behavior exists for policy: null . If policy is set and either policy.allowedCIDRs[] or policy.rejectedCIDRs[] is set, the following rules apply: If allowedCIDRs[] and rejectedCIDRs[] are both set, rejectedCIDRs[] has precedence over allowedCIDRs[] . If allowedCIDRs[] is set, creating a Service object with spec.ExternalIPs[] succeeds only if the specified IP addresses are allowed. If rejectedCIDRs[] is set, creating a Service object with spec.ExternalIPs[] succeeds only if the specified IP addresses are not rejected. 28.2.6. Example policy objects The examples in this section show different spec.externalIP.policy configurations. In the following example, the policy prevents OpenShift Container Platform from creating any service with a specified external IP address. Example policy to reject any value specified for Service object spec.externalIPs[] apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: {} # ... In the following example, both the allowedCIDRs and rejectedCIDRs fields are set. Example policy that includes both allowed and rejected CIDR blocks apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 172.16.66.10/23 rejectedCIDRs: - 172.16.66.10/24 # ... In the following example, policy is set to {} . 
With this configuration, using the oc get networks.config.openshift.io -o yaml command to view the configuration means policy parameter does not show on the command output. The same behavior exists for policy: null . Example policy to allow any value specified for Service object spec.externalIPs[] apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 externalIP: policy: {} # ... 28.2.7. ExternalIP address block configuration The configuration for ExternalIP address blocks is defined by a Network custom resource (CR) named cluster . The Network CR is part of the config.openshift.io API group. Important During cluster installation, the Cluster Version Operator (CVO) automatically creates a Network CR named cluster . Creating any other CR objects of this type is not supported. The following YAML describes the ExternalIP configuration: Network.config.openshift.io CR named cluster apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: [] 1 policy: 2 ... 1 Defines the IP address block in CIDR format that is available for automatic assignment of external IP addresses to a service. Only a single IP address range is allowed. 2 Defines restrictions on manual assignment of an IP address to a service. If no restrictions are defined, specifying the spec.externalIP field in a Service object is not allowed. By default, no restrictions are defined. The following YAML describes the fields for the policy stanza: Network.config.openshift.io policy stanza policy: allowedCIDRs: [] 1 rejectedCIDRs: [] 2 1 A list of allowed IP address ranges in CIDR format. 2 A list of rejected IP address ranges in CIDR format. Example external IP configurations Several possible configurations for external IP address pools are displayed in the following examples: The following YAML describes a configuration that enables automatically assigned external IP addresses: Example configuration with spec.externalIP.autoAssignCIDRs set apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: autoAssignCIDRs: - 192.168.132.254/29 The following YAML configures policy rules for the allowed and rejected CIDR ranges: Example configuration with spec.externalIP.policy set apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: policy: allowedCIDRs: - 192.168.132.0/29 - 192.168.132.8/29 rejectedCIDRs: - 192.168.132.7/32 28.2.8. Configure external IP address blocks for your cluster As a cluster administrator, you can configure the following ExternalIP settings: An ExternalIP address block used by OpenShift Container Platform to automatically populate the spec.clusterIP field for a Service object. A policy object to restrict what IP addresses may be manually assigned to the spec.clusterIP array of a Service object. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. Procedure Optional: To display the current external IP configuration, enter the following command: USD oc describe networks.config cluster To edit the configuration, enter the following command: USD oc edit networks.config cluster Modify the ExternalIP configuration, as in the following example: apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: 1 ... 1 Specify the configuration for the externalIP stanza. 
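For illustration only, a completed externalIP stanza might look like the following; the CIDR values are hypothetical and must be ranges that your network actually routes to the cluster:
externalIP:
  autoAssignCIDRs:
  - 192.0.2.0/29
  policy:
    allowedCIDRs:
    - 192.0.2.8/29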
To confirm the updated ExternalIP configuration, enter the following command: USD oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{"\n"}}' 28.2.9. steps Configuring ingress cluster traffic for a service external IP 28.3. Configuring ingress cluster traffic using an Ingress Controller OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses an Ingress Controller. 28.3.1. Using Ingress Controllers and routes The Ingress Operator manages Ingress Controllers and wildcard DNS. Using an Ingress Controller is the most common way to allow external access to an OpenShift Container Platform cluster. An Ingress Controller is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP, HTTPS using SNI, and TLS using SNI, which is sufficient for web applications and services that work over TLS with SNI. Work with your administrator to configure an Ingress Controller to accept external requests and proxy them based on the configured routes. The administrator can create a wildcard DNS entry and then set up an Ingress Controller. Then, you can work with the edge Ingress Controller without having to contact the administrators. By default, every Ingress Controller in the cluster can admit any route created in any project in the cluster. The Ingress Controller: Has two replicas by default, which means it should be running on two worker nodes. Can be scaled up to have more replicas on more nodes. Note The procedures in this section require prerequisites performed by the cluster administrator. 28.3.2. Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: You have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 28.3.3. Creating a project and service If the project and service that you want to expose does not exist, create the project and then create the service. If the project and service already exists, skip to the procedure on exposing the service to create a route. Prerequisites Install the OpenShift CLI ( oc ) and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: USD oc new-project <project_name> Use the oc new-app command to create your service: USD oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: USD oc get svc -n <project_name> Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s Note By default, the new service does not have an external IP address. 28.3.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Prerequisites You logged into OpenShift Container Platform. 
Procedure Log in to the project where the service you want to expose is located: USD oc project <project_name> Run the oc expose service command to expose the route: USD oc expose service nodejs-ex Example output route.route.openshift.io/nodejs-ex exposed To verify that the service is exposed, you can use a tool, such as curl to check that the service is accessible from outside the cluster. To find the hostname of the route, enter the following command: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None To check that the host responds to a GET request, enter the following command: Example curl command USD curl --head nodejs-ex-myproject.example.com Example output HTTP/1.1 200 OK ... 28.3.5. Ingress sharding in OpenShift Container Platform In OpenShift Container Platform, an Ingress Controller can serve all routes, or it can serve a subset of routes. By default, the Ingress Controller serves any route created in any namespace in the cluster. You can add additional Ingress Controllers to your cluster to optimize routing by creating shards , which are subsets of routes based on selected characteristics. To mark a route as a member of a shard, use labels in the route or namespace metadata field. The Ingress Controller uses selectors , also known as a selection expression , to select a subset of routes from the entire pool of routes to serve. Ingress sharding is useful in cases where you want to load balance incoming traffic across multiple Ingress Controllers, when you want to isolate traffic to be routed to a specific Ingress Controller, or for a variety of other reasons described in the section. By default, each route uses the default domain of the cluster. However, routes can be configured to use the domain of the router instead. 28.3.6. Ingress Controller sharding You can use Ingress sharding, also known as router sharding, to distribute a set of routes across multiple routers by adding labels to routes, namespaces, or both. The Ingress Controller uses a corresponding set of selectors to admit only the routes that have a specified label. Each Ingress shard comprises the routes that are filtered by using a given selection expression. As the primary mechanism for traffic to enter the cluster, the demands on the Ingress Controller can be significant. As a cluster administrator, you can shard the routes to: Balance Ingress Controllers, or routers, with several routes to accelerate responses to changes. Assign certain routes to have different reliability guarantees than other routes. Allow certain Ingress Controllers to have different policies defined. Allow only specific routes to use additional features. Expose different routes on different addresses so that internal and external users can see different routes, for example. Transfer traffic from one version of an application to another during a blue-green deployment. When Ingress Controllers are sharded, a given route is admitted to zero or more Ingress Controllers in the group. The status of a route describes whether an Ingress Controller has admitted the route. An Ingress Controller only admits a route if the route is unique to a shard. With sharding, you can distribute subsets of routes over multiple Ingress Controllers. These subsets can be nonoverlapping, also called traditional sharding, or overlapping, otherwise known as overlapped sharding. 
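For instance, the labels that a shard selects on can be applied with ordinary oc label commands; the namespace and route names here are hypothetical:
oc label namespace finance name=finance
oc label route myroute type=sharded -n finance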
The following table outlines three sharding methods: Sharding method Description Namespace selector After you add a namespace selector to the Ingress Controller, all routes in a namespace that have matching labels for the namespace selector are included in the Ingress shard. Consider this method when an Ingress Controller serves all routes created in a namespace. Route selector After you add a route selector to the Ingress Controller, all routes with labels that match the route selector are included in the Ingress shard. Consider this method when you want an Ingress Controller to serve only a subset of routes or a specific route in a namespace. Namespace and route selectors Provides your Ingress Controller scope for both namespace selector and route selector methods. Consider this method when you want the flexibility of both the namespace selector and the route selector methods. 28.3.6.1. Traditional sharding example An example of a configured Ingress Controller finops-router that has the label selector spec.namespaceSelector.matchExpressions with key values set to finance and ops : Example YAML definition for finops-router apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: finops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - finance - ops An example of a configured Ingress Controller dev-router that has the label selector spec.namespaceSelector.matchLabels.name with the key value set to dev : Example YAML definition for dev-router apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: dev-router namespace: openshift-ingress-operator spec: namespaceSelector: matchLabels: name: dev If all application routes are in separate namespaces, such as each labeled with name:finance , name:ops , and name:dev , the configuration effectively distributes your routes between the two Ingress Controllers. OpenShift Container Platform routes for console, authentication, and other purposes should not be handled. In the scenario, sharding becomes a special case of partitioning, with no overlapping subsets. Routes are divided between router shards. Warning The default Ingress Controller continues to serve all routes unless the namespaceSelector or routeSelector fields contain routes that are meant for exclusion. See this Red Hat Knowledgebase solution and the section "Sharding the default Ingress Controller" for more information on how to exclude routes from the default Ingress Controller. 28.3.6.2. Overlapped sharding example An example of a configured Ingress Controller devops-router that has the label selector spec.namespaceSelector.matchExpressions with key values set to dev and ops : Example YAML definition for devops-router apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: devops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - dev - ops The routes in the namespaces labeled name:dev and name:ops are now serviced by two different Ingress Controllers. With this configuration, you have overlapping subsets of routes. With overlapping subsets of routes you can create more complex routing rules. For example, you can divert higher priority traffic to the dedicated finops-router while sending lower priority traffic to devops-router . 28.3.6.3. 
Sharding the default Ingress Controller After creating a new Ingress shard, there might be routes that are admitted to your new Ingress shard that are also admitted by the default Ingress Controller. This is because the default Ingress Controller has no selectors and admits all routes by default. You can restrict an Ingress Controller from servicing routes with specific labels using either namespace selectors or route selectors. The following procedure restricts the default Ingress Controller from servicing your newly sharded finance , ops , and dev , routes using a namespace selector. This adds further isolation to Ingress shards. Important You must keep all of OpenShift Container Platform's administration routes on the same Ingress Controller. Therefore, avoid adding additional selectors to the default Ingress Controller that exclude these essential routes. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as a project administrator. Procedure Modify the default Ingress Controller by running the following command: USD oc edit ingresscontroller -n openshift-ingress-operator default Edit the Ingress Controller to contain a namespaceSelector that excludes the routes with any of the finance , ops , and dev labels: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: NotIn values: - finance - ops - dev The default Ingress Controller will no longer serve the namespaces labeled name:finance , name:ops , and name:dev . 28.3.6.4. Ingress sharding and DNS The cluster administrator is responsible for making a separate DNS entry for each router in a project. A router will not forward unknown routes to another router. Consider the following example: Router A lives on host 192.168.0.5 and has routes with *.foo.com . Router B lives on host 192.168.1.9 and has routes with *.example.com . Separate DNS entries must resolve *.foo.com to the node hosting Router A and *.example.com to the node hosting Router B: *.foo.com A IN 192.168.0.5 *.example.com A IN 192.168.1.9 28.3.6.5. Configuring Ingress Controller sharding by using route labels Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector. Figure 28.1. Ingress sharding using route labels Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Procedure Edit the router-internal.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" routeSelector: matchLabels: type: sharded 1 Specify a domain to be used by the Ingress Controller. This domain must be different from the default Ingress Controller domain. Apply the Ingress Controller router-internal.yaml file: # oc apply -f router-internal.yaml The Ingress Controller selects routes in any namespace that have the label type: sharded . Create a new route using the domain configured in the router-internal.yaml : USD oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net 28.3.6.6. 
Configuring Ingress Controller sharding by using namespace labels Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector. Figure 28.2. Ingress sharding using namespace labels Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Procedure Edit the router-internal.yaml file: USD cat router-internal.yaml Example output apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" namespaceSelector: matchLabels: type: sharded 1 Specify a domain to be used by the Ingress Controller. This domain must be different from the default Ingress Controller domain. Apply the Ingress Controller router-internal.yaml file: USD oc apply -f router-internal.yaml The Ingress Controller selects routes in any namespace that is selected by the namespace selector that have the label type: sharded . Create a new route using the domain configured in the router-internal.yaml : USD oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net 28.3.6.7. Creating a route for Ingress Controller sharding A route allows you to host your application at a URL. In this case, the hostname is not set and the route uses a subdomain instead. When you specify a subdomain, you automatically use the domain of the Ingress Controller that exposes the route. For situations where a route is exposed by multiple Ingress Controllers, the route is hosted at multiple URLs. The following procedure describes how to create a route for Ingress Controller sharding, using the hello-openshift application as an example. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as a project administrator. You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port. You have configured the Ingress Controller for sharding. Procedure Create a project called hello-openshift by running the following command: USD oc new-project hello-openshift Create a pod in the project by running the following command: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json Create a service called hello-openshift by running the following command: USD oc expose pod/hello-openshift Create a route definition called hello-openshift-route.yaml : YAML definition of the created route for sharding: apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift 1 Both the label key and its corresponding label value must match the ones specified in the Ingress Controller. In this example, the Ingress Controller has the label key and value type: sharded . 2 The route will be exposed using the value of the subdomain field. 
When you specify the subdomain field, you must leave the hostname unset. If you specify both the host and subdomain fields, then the route will use the value of the host field, and ignore the subdomain field. Use hello-openshift-route.yaml to create a route to the hello-openshift application by running the following command: USD oc -n hello-openshift create -f hello-openshift-route.yaml Verification Get the status of the route with the following command: USD oc -n hello-openshift get routes/hello-openshift-edge -o yaml The resulting Route resource should look similar to the following: Example output apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3 1 The hostname the Ingress Controller, or router, uses to expose the route. The value of the host field is automatically determined by the Ingress Controller, and uses its domain. In this example, the domain of the Ingress Controller is <apps-sharded.basedomain.example.net> . 2 The hostname of the Ingress Controller. 3 The name of the Ingress Controller. In this example, the Ingress Controller has the name sharded . Additional resources Baseline Ingress Controller (router) performance Ingress Operator in OpenShift Container Platform . Installing a cluster on bare metal . Installing a cluster on vSphere . About network policy 28.4. Configuring the Ingress Controller endpoint publishing strategy The endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems. Important On Red Hat OpenStack Platform (RHOSP), the LoadBalancerService endpoint publishing strategy is supported only if a cloud provider is configured to create health monitors. For RHOSP 16.2, this strategy is possible only if you use the Amphora Octavia provider. For more information, see the "Setting RHOSP Cloud Controller Manager options" section of the RHOSP installation documentation. 28.4.1. Ingress Controller endpoint publishing strategy NodePortService endpoint publishing strategy The NodePortService endpoint publishing strategy publishes the Ingress Controller using a Kubernetes NodePort service. In this configuration, the Ingress Controller deployment uses container networking. A NodePortService is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift Container Platform; however, to support static port allocations, your changes to the node port field of the managed NodePortService are preserved. Figure 28.3. Diagram of NodePortService The preceding graphic shows the following concepts pertaining to OpenShift Container Platform Ingress NodePort endpoint publishing strategy: All the available nodes in the cluster have their own, externally accessible IP addresses. The service running in the cluster is bound to the unique NodePort for all the nodes. When the client connects to a node that is down, for example, by connecting the 10.0.128.4 IP address in the graphic, the node port directly connects the client to an available node that is running the service. In this scenario, no load balancing is required. 
As the image shows, the 10.0.128.4 address is down and another IP address must be used instead. Note The Ingress Operator ignores any updates to .spec.ports[].nodePort fields of the service. By default, ports are allocated automatically and you can access the port allocations for integrations. However, sometimes static port allocations are necessary to integrate with existing infrastructure which may not be easily reconfigured in response to dynamic ports. To achieve integrations with static node ports, you can update the managed service resource directly. For more information, see the Kubernetes Services documentation on NodePort . HostNetwork endpoint publishing strategy The HostNetwork endpoint publishing strategy publishes the Ingress Controller on node ports where the Ingress Controller is deployed. An Ingress Controller with the HostNetwork endpoint publishing strategy can have only one pod replica per node. If you want n replicas, you must use at least n nodes where those replicas can be scheduled. Because each pod replica requests ports 80 and 443 on the node host where it is scheduled, a replica cannot be scheduled to a node if another pod on the same node is using those ports. The HostNetwork object has a hostNetwork field with the following default values for the optional binding ports: httpPort: 80 , httpsPort: 443 , and statsPort: 1936 . By specifying different binding ports for your network, you can deploy multiple Ingress Controllers on the same node for the HostNetwork strategy. Example apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: internal namespace: openshift-ingress-operator spec: domain: example.com endpointPublishingStrategy: type: HostNetwork hostNetwork: httpPort: 80 httpsPort: 443 statsPort: 1936 28.4.1.1. Configuring the Ingress Controller endpoint publishing scope to Internal When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External . Cluster administrators can change an External scoped Ingress Controller to Internal . Prerequisites You installed the oc CLI. Procedure To change an External scoped Ingress Controller to Internal , enter the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"Internal"}}}}' To check the status of the Ingress Controller, enter the following command: USD oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml The Progressing status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command: USD oc -n openshift-ingress delete services/router-default If you delete the service, the Ingress Operator recreates it as Internal . 28.4.1.2. Configuring the Ingress Controller endpoint publishing scope to External When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External . The Ingress Controller's scope can be configured to be Internal during installation or after, and cluster administrators can change an Internal Ingress Controller to External . Important On some platforms, it is necessary to delete and recreate the service. 
Changing the scope can cause disruption to Ingress traffic, potentially for several minutes. This applies to platforms where it is necessary to delete and recreate the service, because the procedure can cause OpenShift Container Platform to deprovision the existing service load balancer, provision a new one, and update DNS. Prerequisites You installed the oc CLI. Procedure To change an Internal scoped Ingress Controller to External , enter the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/private --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"External"}}}}' To check the status of the Ingress Controller, enter the following command: USD oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml The Progressing status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command: USD oc -n openshift-ingress delete services/router-default If you delete the service, the Ingress Operator recreates it as External . 28.4.1.3. Adding a single NodePort service to an Ingress Controller Instead of creating a NodePort -type Service for each project, you can create a custom Ingress Controller to use the NodePortService endpoint publishing strategy. To prevent port conflicts, consider this configuration for your Ingress Controller when you want to apply a set of routes, through Ingress sharding, to nodes that might already have a HostNetwork Ingress Controller. Before you set a NodePort -type Service for each project, read the following considerations: You must create a wildcard DNS record for the Nodeport Ingress Controller domain. A Nodeport Ingress Controller route can be reached from the address of a worker node. For more information about the required DNS records for routes, see "User-provisioned DNS requirements". You must expose a route for your service and specify the --hostname argument for your custom Ingress Controller domain. You must append the port that is assigned to the NodePort -type Service in the route so that you can access application pods. Prerequisites You installed the OpenShift CLI ( oc ). Logged in as a user with cluster-admin privileges. You created a wildcard DNS record. Procedure Create a custom resource (CR) file for the Ingress Controller: Example of a CR file that defines information for the IngressController object apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: <custom_ic_name> 1 namespace: openshift-ingress-operator spec: replicas: 1 domain: <custom_ic_domain_name> 2 nodePlacement: nodeSelector: matchLabels: <key>: <value> 3 namespaceSelector: matchLabels: <key>: <value> 4 endpointPublishingStrategy: type: NodePortService # ... 1 Specify the a custom name for the IngressController CR. 2 The DNS name that the Ingress Controller services. As an example, the default ingresscontroller domain is apps.ipi-cluster.example.com , so you would specify the <custom_ic_domain_name> as nodeportsvc.ipi-cluster.example.com . 3 Specify the label for the nodes that include the custom Ingress Controller. 4 Specify the label for a set of namespaces. Substitute <key>:<value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value. For example: ingresscontroller: custom-ic . 
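Continuing the hypothetical values above, the filled-in placement and selector stanzas of the custom Ingress Controller might read:
spec:
  replicas: 1
  domain: nodeportsvc.ipi-cluster.example.com
  nodePlacement:
    nodeSelector:
      matchLabels:
        ingresscontroller: custom-ic
  namespaceSelector:
    matchLabels:
      ingresscontroller: custom-ic
  endpointPublishingStrategy:
    type: NodePortService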
Add a label to a node by using the oc label node command: USD oc label node <node_name> <key>=<value> 1 1 Where <value> must match the key-value pair specified in the nodePlacement section of your IngressController CR. Create the IngressController object: USD oc create -f <ingress_controller_cr>.yaml Find the port for the service created for the IngressController CR: USD oc get svc -n openshift-ingress Example output that shows port 80:32432/TCP for the router-nodeport-custom-ic3 service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-internal-default ClusterIP 172.30.195.74 <none> 80/TCP,443/TCP,1936/TCP 223d router-nodeport-custom-ic3 NodePort 172.30.109.219 <none> 80:32432/TCP,443:31366/TCP,1936:30499/TCP 155m To create a new project, enter the following command: USD oc new-project <project_name> To label the new namespace, enter the following command: USD oc label namespace <project_name> <key>=<value> 1 1 Where <key>=<value> must match the value in the namespaceSelector section of your Ingress Controller CR. Create a new application in your cluster: USD oc new-app --image=<image_name> 1 1 An example of <image_name> is quay.io/openshifttest/hello-openshift:multiarch . Create a Route object for a service, so that the pod can use the service to expose the application external to the cluster. USD oc expose svc/<service_name> --hostname=<svc_name>-<project_name>.<custom_ic_domain_name> 1 Note You must specify the domain name of your custom Ingress Controller in the --hostname argument. If you do not do this, the Ingress Operator uses the default Ingress Controller to serve all the routes for your cluster. Check that the route has the Admitted status and that it includes metadata for the custom Ingress Controller: USD oc get route/hello-openshift -o json | jq '.status.ingress' Example output # ... [ { "conditions": [ { "lastTransitionTime": "2024-05-17T18:25:41Z", "status": "True", "type": "Admitted" } ], "host": "hello-openshift.nodeportsvc.ipi-cluster.example.com", "routerCanonicalHostname": "router-nodeportsvc.nodeportsvc.ipi-cluster.example.com", "routerName": "nodeportsvc", "wildcardPolicy": "None" } ] Update the default IngressController CR to prevent the default Ingress Controller from managing the NodePort -type Service . The default Ingress Controller will continue to monitor all other cluster traffic. USD oc patch --type=merge -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"namespaceSelector":{"matchExpressions":[{"key":"<key>","operator":"NotIn","values":["<value>"]}]}}}' Verification Verify that the DNS entry can route inside and outside of your cluster by entering the following command. The command outputs the IP address of the node that received the label from running the oc label node command earlier in the procedure. USD dig +short <svc_name>-<project_name>.<custom_ic_domain_name> To verify that your cluster uses the IP addresses from external DNS servers for DNS resolution, check the connection of your cluster by entering the following command: USD curl <svc_name>-<project_name>.<custom_ic_domain_name>:<port> 1 1 Where <port> is the node port from the NodePort -type Service . Based on example output from the oc get svc -n openshift-ingress command, the 80:32432/TCP HTTP route means that 32432 is the node port. Output example Hello OpenShift! 28.4.2. Additional resources Ingress Controller configuration parameters Setting RHOSP Cloud Controller Manager options User-provisioned DNS requirements 28.5.
Configuring ingress cluster traffic using a load balancer OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a load balancer. 28.5.1. Using a load balancer to get traffic into the cluster If you do not need a specific external IP address, you can configure a load balancer service to allow external access to an OpenShift Container Platform cluster. A load balancer service allocates a unique IP. The load balancer has a single edge router IP, which can be a virtual IP (VIP), but is still a single machine for initial load balancing. Note If a pool is configured, it is done at the infrastructure level, not by a cluster administrator. Note The procedures in this section require prerequisites performed by the cluster administrator. 28.5.2. Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 28.5.3. Creating a project and service If the project and service that you want to expose does not exist, create the project and then create the service. If the project and service already exists, skip to the procedure on exposing the service to create a route. Prerequisites Install the OpenShift CLI ( oc ) and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: USD oc new-project <project_name> Use the oc new-app command to create your service: USD oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: USD oc get svc -n <project_name> Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s Note By default, the new service does not have an external IP address. 28.5.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Prerequisites You logged into OpenShift Container Platform. Procedure Log in to the project where the service you want to expose is located: USD oc project <project_name> Run the oc expose service command to expose the route: USD oc expose service nodejs-ex Example output route.route.openshift.io/nodejs-ex exposed To verify that the service is exposed, you can use a tool, such as curl to check that the service is accessible from outside the cluster. To find the hostname of the route, enter the following command: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None To check that the host responds to a GET request, enter the following command: Example curl command USD curl --head nodejs-ex-myproject.example.com Example output HTTP/1.1 200 OK ... 28.5.5. Creating a load balancer service Use the following procedure to create a load balancer service. Prerequisites Make sure that the project and service you want to expose exist. 
Your cloud provider supports load balancers. Procedure To create a load balancer service: Log in to OpenShift Container Platform. Load the project where the service you want to expose is located. USD oc project project1 Open a text file on the control plane node and paste the following text, editing the file as needed: Sample load balancer configuration file 1 Enter a descriptive name for the load balancer service. 2 Enter the same port that the service you want to expose is listening on. 3 Enter a list of specific IP addresses to restrict traffic through the load balancer. This field is ignored if the cloud-provider does not support the feature. 4 Enter Loadbalancer as the type. 5 Enter the name of the service. Note To restrict the traffic through the load balancer to specific IP addresses, it is recommended to use the Ingress Controller field spec.endpointPublishingStrategy.loadBalancer.allowedSourceRanges . Do not set the loadBalancerSourceRanges field. Save and exit the file. Run the following command to create the service: USD oc create -f <file-name> For example: USD oc create -f mysql-lb.yaml Execute the following command to view the new service: USD oc get svc Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m The service has an external IP address automatically assigned if there is a cloud provider enabled. On the master, use a tool, such as cURL, to make sure you can reach the service using the public IP address: USD curl <public-ip>:<port> For example: USD curl 172.29.121.74:3306 The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the Got packets out of order message, you are connecting with the service: If you have a MySQL client, log in with the standard CLI command: USD mysql -h 172.30.131.89 -u admin -p Example output Enter password: Welcome to the MariaDB monitor. Commands end with ; or \g. MySQL [(none)]> 28.6. Configuring ingress cluster traffic on AWS OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses load balancers on AWS, specifically a Network Load Balancer (NLB) or a Classic Load Balancer (CLB). Both types of load balancers can forward the client's IP address to the node, but a CLB requires proxy protocol support, which OpenShift Container Platform automatically enables. There are two ways to configure an Ingress Controller to use an NLB: By force replacing the Ingress Controller that is currently using a CLB. This deletes the IngressController object and an outage will occur while the new DNS records propagate and the NLB is being provisioned. By editing an existing Ingress Controller that uses a CLB to use an NLB. This changes the load balancer without having to delete and recreate the IngressController object. Both methods can be used to switch from an NLB to a CLB. You can configure these load balancers on a new or existing AWS cluster. 28.6.1. Configuring Classic Load Balancer timeouts on AWS OpenShift Container Platform provides a method for setting a custom timeout period for a specific route or Ingress Controller. Additionally, an AWS Classic Load Balancer (CLB) has its own timeout period with a default time of 60 seconds. If the timeout period of the CLB is shorter than the route timeout or Ingress Controller timeout, the load balancer can prematurely terminate the connection. 
You can prevent this problem by increasing both the timeout period of the route and CLB. 28.6.1.1. Configuring route timeouts You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end. Prerequisites You need a deployed Ingress Controller on a running cluster. Procedure Using the oc annotate command, add the timeout to the route: USD oc annotate route <route_name> \ --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1 1 Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d). The following example sets a timeout of two seconds on a route named myroute : USD oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s 28.6.1.2. Configuring Classic Load Balancer timeouts You can configure the default timeouts for a Classic Load Balancer (CLB) to extend idle connections. Prerequisites You must have a deployed Ingress Controller on a running cluster. Procedure Set an AWS connection idle timeout of five minutes for the default ingresscontroller by running the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default \ --type=merge --patch='{"spec":{"endpointPublishingStrategy": \ {"type":"LoadBalancerService", "loadBalancer": \ {"scope":"External", "providerParameters":{"type":"AWS", "aws": \ {"type":"Classic", "classicLoadBalancer": \ {"connectionIdleTimeout":"5m"}}}}}}}' Optional: Restore the default value of the timeout by running the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default \ --type=merge --patch='{"spec":{"endpointPublishingStrategy": \ {"loadBalancer":{"providerParameters":{"aws":{"classicLoadBalancer": \ {"connectionIdleTimeout":null}}}}}}}' Note You must specify the scope field when you change the connection timeout value unless the current scope is already set. When you set the scope field, you do not need to do so again if you restore the default timeout value. 28.6.2. Configuring ingress cluster traffic on AWS using a Network Load Balancer OpenShift Container Platform provides methods for communicating from outside the cluster with services that run in the cluster. One such method uses a Network Load Balancer (NLB). You can configure an NLB on a new or existing AWS cluster. 28.6.2.1. Switching the Ingress Controller from using a Classic Load Balancer to a Network Load Balancer You can switch the Ingress Controller that is using a Classic Load Balancer (CLB) to one that uses a Network Load Balancer (NLB) on AWS. Switching between these load balancers will not delete the IngressController object. Warning This procedure might cause the following issues: An outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure. Leaked load balancer resources due to a change in the annotation of the service. Procedure Modify the existing Ingress Controller that you want to switch to using an NLB. 
This example assumes that your default Ingress Controller has an External scope and no other customizations: Example ingresscontroller.yaml file apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService Note If you do not specify a value for the spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.type field, the Ingress Controller uses the spec.loadBalancer.platform.aws.type value from the cluster Ingress configuration that was set during installation. Tip If your Ingress Controller has other customizations that you want to update, such as changing the domain, consider force replacing the Ingress Controller definition file instead. Apply the changes to the Ingress Controller YAML file by running the command: $ oc apply -f ingresscontroller.yaml Expect several minutes of outages while the Ingress Controller updates. 28.6.2.2. Switching the Ingress Controller from using a Network Load Balancer to a Classic Load Balancer You can switch the Ingress Controller that is using a Network Load Balancer (NLB) to one that uses a Classic Load Balancer (CLB) on AWS. Switching between these load balancers will not delete the IngressController object. Warning This procedure might cause an outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure. Procedure Modify the existing Ingress Controller that you want to switch to using a CLB. This example assumes that your default Ingress Controller has an External scope and no other customizations: Example ingresscontroller.yaml file apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: Classic type: LoadBalancerService Note If you do not specify a value for the spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.type field, the Ingress Controller uses the spec.loadBalancer.platform.aws.type value from the cluster Ingress configuration that was set during installation. Tip If your Ingress Controller has other customizations that you want to update, such as changing the domain, consider force replacing the Ingress Controller definition file instead. Apply the changes to the Ingress Controller YAML file by running the command: $ oc apply -f ingresscontroller.yaml Expect several minutes of outages while the Ingress Controller updates. 28.6.2.3. Replacing Ingress Controller Classic Load Balancer with Network Load Balancer You can replace an Ingress Controller that is using a Classic Load Balancer (CLB) with one that uses a Network Load Balancer (NLB) on AWS. Warning This procedure might cause the following issues: An outage that can last several minutes due to new DNS records propagation, new load balancers provisioning, and other factors. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure. Leaked load balancer resources due to a change in the annotation of the service. Procedure Create a file with a new default Ingress Controller.
The following example assumes that your default Ingress Controller has an External scope and no other customizations: Example ingresscontroller.yml file apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService If your default Ingress Controller has other customizations, ensure that you modify the file accordingly. Tip If your Ingress Controller has no other customizations and you are only updating the load balancer type, consider following the procedure detailed in "Switching the Ingress Controller from using a Classic Load Balancer to a Network Load Balancer". Force replace the Ingress Controller YAML file: $ oc replace --force --wait -f ingresscontroller.yml Wait until the Ingress Controller is replaced. Expect several minutes of outages. 28.6.2.4. Configuring an Ingress Controller Network Load Balancer on an existing AWS cluster You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on an existing cluster. Prerequisites You must have an installed AWS cluster. PlatformStatus of the infrastructure resource must be AWS. To verify that the PlatformStatus is AWS, run: $ oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}' AWS Procedure Create an Ingress Controller backed by an AWS NLB on an existing cluster. Create the Ingress Controller manifest: $ cat ingresscontroller-aws-nlb.yaml Example output apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: $my_ingress_controller 1 namespace: openshift-ingress-operator spec: domain: $my_unique_ingress_domain 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: AWS aws: type: NLB 1 Replace $my_ingress_controller with a unique name for the Ingress Controller. 2 Replace $my_unique_ingress_domain with a domain name that is unique among all Ingress Controllers in the cluster. This variable must be a subdomain of the DNS name <clustername>.<domain> . 3 You can replace External with Internal to use an internal NLB. Create the resource in the cluster: $ oc create -f ingresscontroller-aws-nlb.yaml Important Before you can configure an Ingress Controller NLB on a new AWS cluster, you must complete the Creating the installation configuration file procedure. 28.6.2.5. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Create an Ingress Controller backed by an AWS NLB on a new cluster. Change to the directory that contains the installation program and create the manifests: $ ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: $ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster.
After creating the file, several network configuration files are in the manifests/ directory, as shown: $ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor. Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster. 28.6.3. Additional resources Installing a cluster on AWS with network customizations . For more information on support for NLBs, see Network Load Balancer support on AWS . For more information on proxy protocol support for CLBs, see Configure proxy protocol support for your Classic Load Balancer 28.7. Configuring ingress cluster traffic for a service external IP You can use either a MetalLB implementation or an IP failover deployment to attach an ExternalIP resource to a service so that the service is available to traffic outside your OpenShift Container Platform cluster. Hosting an external IP address in this way is only applicable for a cluster installed on bare-metal hardware. You must ensure that you correctly configure the external network infrastructure to route traffic to the service. 28.7.1. Prerequisites Your cluster is configured with ExternalIPs enabled. For more information, read Configuring ExternalIPs for services . Note Do not use the same ExternalIP for the egress IP. 28.7.2. Attaching an ExternalIP to a service You can attach an ExternalIP resource to a service. If you configured your cluster to automatically attach the resource to a service, you might not need to manually attach an ExternalIP to the service. The examples in the procedure use a scenario that manually attaches an ExternalIP resource to a service in a cluster with an IP failover configuration. Procedure Confirm compatible IP address ranges for the ExternalIP resource by entering the following command in your CLI: $ oc get networks.config cluster -o jsonpath='{.spec.externalIP}{"\n"}' Note If autoAssignCIDRs is set and you did not specify a value for spec.externalIPs in the ExternalIP resource, OpenShift Container Platform automatically assigns ExternalIP to a new Service object. Choose one of the following options to attach an ExternalIP resource to the service: If you are creating a new service, specify a value in the spec.externalIPs field and an array of one or more valid IP addresses in the allowedCIDRs parameter. Example of service YAML configuration file that supports an ExternalIP resource apiVersion: v1 kind: Service metadata: name: svc-with-externalip spec: externalIPs: policy: allowedCIDRs: - 192.168.123.0/28 If you are attaching an ExternalIP to an existing service, enter the following command. Replace <name> with the service name. Replace <ip_address> with a valid ExternalIP address. You can provide multiple IP addresses separated by commas.
$ oc patch svc <name> -p \ '{ "spec": { "externalIPs": [ "<ip_address>" ] } }' For example: $ oc patch svc mysql-55-rhel7 -p '{"spec":{"externalIPs":["192.174.120.10"]}}' Example output "mysql-55-rhel7" patched To confirm that an ExternalIP address is attached to the service, enter the following command. If you specified an ExternalIP for a new service, you must create the service first. $ oc get svc Example output NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql-55-rhel7 172.30.131.89 192.174.120.10 3306/TCP 13m 28.7.3. Additional resources About MetalLB and the MetalLB Operator Configuring IP failover Configuring ExternalIPs for services 28.8. Configuring ingress cluster traffic by using a NodePort OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a NodePort . 28.8.1. Using a NodePort to get traffic into the cluster Use a NodePort-type Service resource to expose a service on a specific port on all nodes in the cluster. The port is specified in the Service resource's .spec.ports[*].nodePort field. Important Using a node port requires additional port resources. A NodePort exposes the service on a static port on the node's IP address. NodePorts are in the 30000 to 32767 range by default, which means a NodePort is unlikely to match a service's intended port. For example, port 8080 may be exposed as port 31020 on the node. The administrator must ensure the external IP addresses are routed to the nodes. NodePorts and external IPs are independent and both can be used concurrently. Note The procedures in this section require prerequisites performed by the cluster administrator. 28.8.2. Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 28.8.3. Creating a project and service If the project and service that you want to expose do not exist, create the project and then create the service. If the project and service already exist, skip to the procedure on exposing the service to create a route. Prerequisites Install the OpenShift CLI ( oc ) and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: $ oc new-project <project_name> Use the oc new-app command to create your service: $ oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: $ oc get svc -n <project_name> Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s Note By default, the new service does not have an external IP address. 28.8.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Prerequisites You logged into OpenShift Container Platform.
Procedure Log in to the project where the service you want to expose is located: $ oc project <project_name> To expose a node port for the application, modify the custom resource definition (CRD) of a service by entering the following command: $ oc edit svc <service_name> Example output spec: ports: - name: 8443-tcp nodePort: 30327 1 port: 8443 protocol: TCP targetPort: 8443 sessionAffinity: None type: NodePort 2 1 Optional: Specify the node port range for the application. By default, OpenShift Container Platform selects an available port in the 30000-32767 range. 2 Define the service type. Optional: To confirm the service is available with a node port exposed, enter the following command: $ oc get svc -n myproject Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.217.127 <none> 3306/TCP 9m44s nodejs-ex-ingress NodePort 172.30.107.72 <none> 3306:31345/TCP 39s Optional: To remove the service created automatically by the oc new-app command, enter the following command: $ oc delete svc nodejs-ex Verification To check that the service node port is updated with a port in the 30000-32767 range, enter the following command: $ oc get svc In the following example output, the updated port is 30327 : Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE httpd NodePort 172.xx.xx.xx <none> 8443:30327/TCP 109s 28.8.5. Additional resources Configuring the node port service range Adding a single NodePort service to an Ingress Controller 28.9. Configuring ingress cluster traffic using load balancer allowed source ranges You can specify a list of IP address ranges for the IngressController . This restricts access to the load balancer service when the endpointPublishingStrategy is LoadBalancerService . 28.9.1. Configuring load balancer allowed source ranges You can enable and configure the spec.endpointPublishingStrategy.loadBalancer.allowedSourceRanges field. By configuring load balancer allowed source ranges, you can limit the access to the load balancer for the Ingress Controller to a specified list of IP address ranges. The Ingress Operator reconciles the load balancer Service and sets the spec.loadBalancerSourceRanges field based on AllowedSourceRanges . Note If you have already set the spec.loadBalancerSourceRanges field or the load balancer service annotation service.beta.kubernetes.io/load-balancer-source-ranges in a previous version of OpenShift Container Platform, the Ingress Controller starts reporting Progressing=True after an upgrade. To fix this, set AllowedSourceRanges that overwrites the spec.loadBalancerSourceRanges field and clears the service.beta.kubernetes.io/load-balancer-source-ranges annotation. The Ingress Controller starts reporting Progressing=False again. Prerequisites You have a deployed Ingress Controller on a running cluster. Procedure Set the allowed source ranges API for the Ingress Controller by running the following command: $ oc -n openshift-ingress-operator patch ingresscontroller/default \ --type=merge --patch='{"spec":{"endpointPublishingStrategy": \ {"type":"LoadBalancerService", "loadBalancer": \ {"scope":"External", "allowedSourceRanges":["0.0.0.0/0"]}}}}' 1 1 The example value 0.0.0.0/0 specifies the allowed source range. 28.9.2. Migrating to load balancer allowed source ranges If you have already set the annotation service.beta.kubernetes.io/load-balancer-source-ranges , you can migrate to load balancer allowed source ranges.
When you set the AllowedSourceRanges , the Ingress Controller sets the spec.loadBalancerSourceRanges field based on the AllowedSourceRanges value and unsets the service.beta.kubernetes.io/load-balancer-source-ranges annotation. Note If you have already set the spec.loadBalancerSourceRanges field or the load balancer service annotation service.beta.kubernetes.io/load-balancer-source-ranges in a previous version of OpenShift Container Platform, the Ingress Controller starts reporting Progressing=True after an upgrade. To fix this, set AllowedSourceRanges that overwrites the spec.loadBalancerSourceRanges field and clears the service.beta.kubernetes.io/load-balancer-source-ranges annotation. The Ingress Controller starts reporting Progressing=False again. Prerequisites You have set the service.beta.kubernetes.io/load-balancer-source-ranges annotation. Procedure Ensure that the service.beta.kubernetes.io/load-balancer-source-ranges is set: $ oc get svc router-default -n openshift-ingress -o yaml Example output apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/load-balancer-source-ranges: 192.168.0.1/32 Ensure that the spec.loadBalancerSourceRanges field is unset: $ oc get svc router-default -n openshift-ingress -o yaml Example output ... spec: loadBalancerSourceRanges: - 0.0.0.0/0 ... Update your cluster to OpenShift Container Platform 4.13. Set the allowed source ranges API for the ingresscontroller by running the following command: $ oc -n openshift-ingress-operator patch ingresscontroller/default \ --type=merge --patch='{"spec":{"endpointPublishingStrategy": \ {"loadBalancer":{"allowedSourceRanges":["0.0.0.0/0"]}}}}' 1 1 The example value 0.0.0.0/0 specifies the allowed source range. 28.9.3. Additional resources Updating your cluster 28.10. Patching existing ingress objects You can update or modify the following fields of existing Ingress objects without recreating the objects or disrupting services to them: Specifications Host Path Backend services SSL/TLS settings Annotations 28.10.1. Patching Ingress objects to resolve an ingressWithoutClassName alert The ingressClassName field specifies the name of the IngressClass object. You must define the ingressClassName field for each Ingress object. If you have not defined the ingressClassName field for an Ingress object, you could experience routing issues. After 24 hours, you will receive an ingressWithoutClassName alert to remind you to set the ingressClassName field. Procedure Patch the Ingress objects with a completed ingressClassName field to ensure proper routing and functionality. List all IngressClass objects: $ oc get ingressclass List all Ingress objects in all namespaces: $ oc get ingress -A Patch the Ingress object: $ oc patch ingress/<ingress_name> --type=merge --patch '{"spec":{"ingressClassName":"openshift-default"}}' Replace <ingress_name> with the name of the Ingress object. This command patches the Ingress object to include the desired ingress class name. An illustrative Ingress manifest with the ingressClassName field set is shown after the command listing below. | [
"apiVersion: v1 kind: Service metadata: name: http-service spec: clusterIP: 172.30.163.110 externalIPs: - 192.168.132.253 externalTrafficPolicy: Cluster ports: - name: highport nodePort: 31903 port: 30102 protocol: TCP targetPort: 30102 selector: app: web sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.132.253",
"{ \"policy\": { \"allowedCIDRs\": [], \"rejectedCIDRs\": [] } }",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: {}",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 172.16.66.10/23 rejectedCIDRs: - 172.16.66.10/24",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 externalIP: policy: {}",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: [] 1 policy: 2",
"policy: allowedCIDRs: [] 1 rejectedCIDRs: [] 2",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: - 192.168.132.254/29",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 192.168.132.0/29 - 192.168.132.8/29 rejectedCIDRs: - 192.168.132.7/32",
"oc describe networks.config cluster",
"oc edit networks.config cluster",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: 1",
"oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{\"\\n\"}}'",
"oc adm policy add-cluster-role-to-user cluster-admin username",
"oc new-project <project_name>",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n <project_name>",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project <project_name>",
"oc expose service nodejs-ex",
"route.route.openshift.io/nodejs-ex exposed",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None",
"curl --head nodejs-ex-myproject.example.com",
"HTTP/1.1 200 OK",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: finops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - finance - ops",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: dev-router namespace: openshift-ingress-operator spec: namespaceSelector: matchLabels: name: dev",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: devops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - dev - ops",
"oc edit ingresscontroller -n openshift-ingress-operator default",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: NotIn values: - finance - ops - dev",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" routeSelector: matchLabels: type: sharded",
"oc apply -f router-internal.yaml",
"oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net",
"cat router-internal.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" namespaceSelector: matchLabels: type: sharded",
"oc apply -f router-internal.yaml",
"oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net",
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift",
"oc -n hello-openshift create -f hello-openshift-route.yaml",
"oc -n hello-openshift get routes/hello-openshift-edge -o yaml",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: internal namespace: openshift-ingress-operator spec: domain: example.com endpointPublishingStrategy: type: HostNetwork hostNetwork: httpPort: 80 httpsPort: 443 statsPort: 1936",
"oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"Internal\"}}}}'",
"oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml",
"oc -n openshift-ingress delete services/router-default",
"oc -n openshift-ingress-operator patch ingresscontrollers/private --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"External\"}}}}'",
"oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml",
"oc -n openshift-ingress delete services/router-default",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: <custom_ic_name> 1 namespace: openshift-ingress-operator spec: replicas: 1 domain: <custom_ic_domain_name> 2 nodePlacement: nodeSelector: matchLabels: <key>: <value> 3 namespaceSelector: matchLabels: <key>: <value> 4 endpointPublishingStrategy: type: NodePortService",
"oc label node <node_name> <key>=<value> 1",
"oc create -f <ingress_controller_cr>.yaml",
"oc get svc -n openshift-ingress",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-internal-default ClusterIP 172.30.195.74 <none> 80/TCP,443/TCP,1936/TCP 223d router-nodeport-custom-ic3 NodePort 172.30.109.219 <none> 80:32432/TCP,443:31366/TCP,1936:30499/TCP 155m",
"oc new-project <project_name>",
"oc label namespace <project_name> <key>=<value> 1",
"oc new-app --image=<image_name> 1",
"oc expose svc/<service_name> --hostname=<svc_name>-<project_name>.<custom_ic_domain_name> 1",
"oc get route/hello-openshift -o json | jq '.status.ingress'",
"{ \"conditions\": [ { \"lastTransitionTime\": \"2024-05-17T18:25:41Z\", \"status\": \"True\", \"type\": \"Admitted\" } ], [ { \"host\": \"hello-openshift.nodeportsvc.ipi-cluster.example.com\", \"routerCanonicalHostname\": \"router-nodeportsvc.nodeportsvc.ipi-cluster.example.com\", \"routerName\": \"nodeportsvc\", \"wildcardPolicy\": \"None\" } ], }",
"oc patch --type=merge -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"namespaceSelector\":{\"matchExpressions\":[{\"key\":\"<key>\",\"operator\":\"NotIn\",\"values\":[\"<value>]}]}}}'",
"dig +short <svc_name>-<project_name>.<custom_ic_domain_name>",
"curl <svc_name>-<project_name>.<custom_ic_domain_name>:<port> 1",
"Hello OpenShift!",
"oc adm policy add-cluster-role-to-user cluster-admin username",
"oc new-project <project_name>",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n <project_name>",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project <project_name>",
"oc expose service nodejs-ex",
"route.route.openshift.io/nodejs-ex exposed",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None",
"curl --head nodejs-ex-myproject.example.com",
"HTTP/1.1 200 OK",
"oc project project1",
"apiVersion: v1 kind: Service metadata: name: egress-2 1 spec: ports: - name: db port: 3306 2 loadBalancerIP: loadBalancerSourceRanges: 3 - 10.0.0.0/8 - 192.168.0.0/16 type: LoadBalancer 4 selector: name: mysql 5",
"oc create -f <file-name>",
"oc create -f mysql-lb.yaml",
"oc get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m",
"curl <public-ip>:<port>",
"curl 172.29.121.74:3306",
"mysql -h 172.30.131.89 -u admin -p",
"Enter password: Welcome to the MariaDB monitor. Commands end with ; or \\g. MySQL [(none)]>",
"oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1",
"oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"type\":\"LoadBalancerService\", \"loadBalancer\": {\"scope\":\"External\", \"providerParameters\":{\"type\":\"AWS\", \"aws\": {\"type\":\"Classic\", \"classicLoadBalancer\": {\"connectionIdleTimeout\":\"5m\"}}}}}}}'",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"loadBalancer\":{\"providerParameters\":{\"aws\":{\"classicLoadBalancer\": {\"connectionIdleTimeout\":null}}}}}}}'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"oc apply -f ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: Classic type: LoadBalancerService",
"oc apply -f ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"oc replace --force --wait -f ingresscontroller.yml",
"oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}' AWS",
"cat ingresscontroller-aws-nlb.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: USDmy_ingress_controller 1 namespace: openshift-ingress-operator spec: domain: USDmy_unique_ingress_domain 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: AWS aws: type: NLB",
"oc create -f ingresscontroller-aws-nlb.yaml",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1",
"ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml",
"cluster-ingress-default-ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"oc get networks.config cluster -o jsonpath='{.spec.externalIP}{\"\\n\"}'",
"apiVersion: v1 kind: Service metadata: name: svc-with-externalip spec: externalIPs: policy: allowedCIDRs: - 192.168.123.0/28",
"oc patch svc <name> -p '{ \"spec\": { \"externalIPs\": [ \"<ip_address>\" ] } }'",
"oc patch svc mysql-55-rhel7 -p '{\"spec\":{\"externalIPs\":[\"192.174.120.10\"]}}'",
"\"mysql-55-rhel7\" patched",
"oc get svc",
"NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql-55-rhel7 172.30.131.89 192.174.120.10 3306/TCP 13m",
"oc adm policy add-cluster-role-to-user cluster-admin <user_name>",
"oc new-project <project_name>",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n <project_name>",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project <project_name>",
"oc edit svc <service_name>",
"spec: ports: - name: 8443-tcp nodePort: 30327 1 port: 8443 protocol: TCP targetPort: 8443 sessionAffinity: None type: NodePort 2",
"oc get svc -n myproject",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.217.127 <none> 3306/TCP 9m44s nodejs-ex-ingress NodePort 172.30.107.72 <none> 3306:31345/TCP 39s",
"oc delete svc nodejs-ex",
"oc get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE httpd NodePort 172.xx.xx.xx <none> 8443:30327/TCP 109s",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"type\":\"LoadBalancerService\", \"loadbalancer\": {\"scope\":\"External\", \"allowedSourceRanges\":[\"0.0.0.0/0\"]}}}}' 1",
"oc get svc router-default -n openshift-ingress -o yaml",
"apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/load-balancer-source-ranges: 192.168.0.1/32",
"oc get svc router-default -n openshift-ingress -o yaml",
"spec: loadBalancerSourceRanges: - 0.0.0.0/0",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"loadBalancer\":{\"allowedSourceRanges\":[\"0.0.0.0/0\"]}}}}' 1",
"oc get ingressclass",
"oc get ingress -A",
"oc patch ingress/<ingress_name> --type=merge --patch '{\"spec\":{\"ingressClassName\":\"openshift-default\"}}'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/configuring-ingress-cluster-traffic |
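The patching procedure above sets the ingressClassName field on existing Ingress objects but never shows where that field sits in a full manifest. The following minimal Ingress definition is an illustrative sketch only; the object name, host, and backend service are placeholders and are not taken from the original documentation.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress                        # placeholder name
spec:
  ingressClassName: openshift-default          # the field set by the oc patch command above
  rules:
  - host: example.apps.mycluster.example.com   # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service              # placeholder backend service
            port:
              number: 8080

With ingressClassName present from the start, the Ingress object is admitted by the intended Ingress Controller and the ingressWithoutClassName alert does not fire.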
A.2. The dmsetup Command | A.2. The dmsetup Command The dmsetup command is a command line wrapper for communication with the Device Mapper. For general system information about LVM devices, you may find the info , ls , status , and deps options of the dmsetup command to be useful, as described in the following subsections. For information about additional options and capabilities of the dmsetup command, see the dmsetup (8) man page. A.2.1. The dmsetup info Command The dmsetup info device command provides summary information about Device Mapper devices. If you do not specify a device name, the output is information about all of the currently configured Device Mapper devices. If you specify a device, then this command yields information for that device only. The dmsetup info command provides information in the following categories: Name The name of the device. An LVM device is expressed as the volume group name and the logical volume name separated by a hyphen. A hyphen in the original name is translated to two hyphens. During standard LVM operations, you should not use the name of an LVM device in this format to specify an LVM device directly, but instead you should use the vg / lv alternative. State Possible device states are SUSPENDED , ACTIVE , and READ-ONLY . The dmsetup suspend command sets a device state to SUSPENDED . When a device is suspended, all I/O operations to that device stop. The dmsetup resume command restores a device state to ACTIVE . Read Ahead The number of data blocks that the system reads ahead for any open file on which read operations are ongoing. By default, the kernel chooses a suitable value automatically. You can change this value with the --readahead option of the dmsetup command. Tables present Possible states for this category are LIVE and INACTIVE . An INACTIVE state indicates that a table has been loaded which will be swapped in when a dmsetup resume command restores a device state to ACTIVE , at which point the table's state becomes LIVE . For information, see the dmsetup man page. Open count The open reference count indicates how many times the device is opened. A mount command opens a device. Event number The current number of events received. Issuing a dmsetup wait n command allows you to wait for the n'th event, blocking the call until it is received. Major, minor Major and minor device number. Number of targets The number of segments that make up a device. For example, a linear device spanning 3 disks would have 3 targets. A linear device composed of the beginning and end of a disk, but not the middle would have 2 targets. UUID UUID of the device. The following example shows partial output for the dmsetup info command. A.2.2. The dmsetup ls Command You can list the device names of mapped devices with the dmsetup ls command. You can list devices that have at least one target of a specified type with the dmsetup ls --target target_type command. For other options of the dmsetup ls command, see the dmsetup man page. The following example shows the command to list the device names of currently configured mapped devices. The following example shows the command to list the device names of currently configured mirror mappings. LVM configurations that are stacked on multipath or other device mapper devices can be complex to sort out. The dmsetup ls command provides a --tree option that displays dependencies between devices as a tree, as in the following example. A.2.3. 
The dmsetup status Command The dmsetup status device command provides status information for each target in a specified device. If you do not specify a device name, the output is information about all of the currently configured Device Mapper devices. You can list the status only of devices that have at least one target of a specified type with the dmsetup status --target target_type command. The following example shows the command to list the status of the targets in all currently configured mapped devices. A.2.4. The dmsetup deps Command The dmsetup deps device command provides a list of (major, minor) pairs for devices referenced by the mapping table for the specified device. If you do not specify a device name, the output is information about all of the currently configured Device Mapper devices. The following example shows the command to list the dependencies of all currently configured mapped devices. The following example shows the command to list the dependencies only of the device lock_stress-grant--02.1722 : | [
"dmsetup info Name: testgfsvg-testgfslv1 State: ACTIVE Read Ahead: 256 Tables present: LIVE Open count: 0 Event number: 0 Major, minor: 253, 2 Number of targets: 2 UUID: LVM-K528WUGQgPadNXYcFrrf9LnPlUMswgkCkpgPIgYzSvigM7SfeWCypddNSWtNzc2N Name: VolGroup00-LogVol00 State: ACTIVE Read Ahead: 256 Tables present: LIVE Open count: 1 Event number: 0 Major, minor: 253, 0 Number of targets: 1 UUID: LVM-tOcS1kqFV9drb0X1Vr8sxeYP0tqcrpdegyqj5lZxe45JMGlmvtqLmbLpBcenh2L3",
"dmsetup ls testgfsvg-testgfslv3 (253:4) testgfsvg-testgfslv2 (253:3) testgfsvg-testgfslv1 (253:2) VolGroup00-LogVol01 (253:1) VolGroup00-LogVol00 (253:0)",
"dmsetup ls --target mirror lock_stress-grant--02.1722 (253, 34) lock_stress-grant--01.1720 (253, 18) lock_stress-grant--03.1718 (253, 52) lock_stress-grant--02.1716 (253, 40) lock_stress-grant--03.1713 (253, 47) lock_stress-grant--02.1709 (253, 23) lock_stress-grant--01.1707 (253, 8) lock_stress-grant--01.1724 (253, 14) lock_stress-grant--03.1711 (253, 27)",
"dmsetup ls --tree vgtest-lvmir (253:13) ββvgtest-lvmir_mimage_1 (253:12) β ββmpathep1 (253:8) β ββmpathe (253:5) β ββ (8:112) β ββ (8:64) ββvgtest-lvmir_mimage_0 (253:11) β ββmpathcp1 (253:3) β ββmpathc (253:2) β ββ (8:32) β ββ (8:16) ββvgtest-lvmir_mlog (253:4) ββmpathfp1 (253:10) ββmpathf (253:6) ββ (8:128) ββ (8:80)",
"dmsetup status testgfsvg-testgfslv3: 0 312352768 linear testgfsvg-testgfslv2: 0 312352768 linear testgfsvg-testgfslv1: 0 312352768 linear testgfsvg-testgfslv1: 312352768 50331648 linear VolGroup00-LogVol01: 0 4063232 linear VolGroup00-LogVol00: 0 151912448 linear",
"dmsetup deps testgfsvg-testgfslv3: 1 dependencies : (8, 16) testgfsvg-testgfslv2: 1 dependencies : (8, 16) testgfsvg-testgfslv1: 1 dependencies : (8, 16) VolGroup00-LogVol01: 1 dependencies : (8, 2) VolGroup00-LogVol00: 1 dependencies : (8, 2)",
"dmsetup deps lock_stress-grant--02.1722 3 dependencies : (253, 33) (253, 32) (253, 31)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/dmsetup |
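The description of the Event number field above notes that dmsetup wait blocks until a given event is reached, but no example is shown. The following sketch is illustrative only; the device name is reused from the earlier sample output and the event number is an assumption.

# Report the current event counter for a device (also visible in 'dmsetup info')
dmsetup info -c --noheadings -o events VolGroup00-LogVol00

# Block until the event counter for the device exceeds event number 0,
# for example while another process suspends and resumes the device
dmsetup wait VolGroup00-LogVol00 0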
Chapter 1. Overview of images | Chapter 1. Overview of images 1.1. Understanding containers, images, and image streams Containers, images, and image streams are important concepts to understand when you set out to create and manage containerized software. An image holds a set of software that is ready to run, while a container is a running instance of a container image. An image stream provides a way of storing different versions of the same basic image. Those different versions are represented by different tags on the same image name. 1.2. Images Containers in OpenShift Container Platform are based on OCI- or Docker-formatted container images . An image is a binary that includes all of the requirements for running a single container, as well as metadata describing its needs and capabilities. You can think of it as a packaging technology. Containers only have access to resources defined in the image unless you give the container additional access when creating it. By deploying the same image in multiple containers across multiple hosts and load balancing between them, OpenShift Container Platform can provide redundancy and horizontal scaling for a service packaged into an image. You can use the podman or docker CLI directly to build images, but OpenShift Container Platform also supplies builder images that assist with creating new images by adding your code or configuration to existing images. Because applications develop over time, a single image name can actually refer to many different versions of the same image. Each different image is referred to uniquely by its hash, a long hexadecimal number such as fd44297e2ddb050ec4f... , which is usually shortened to 12 characters, such as fd44297e2ddb . You can create , manage , and use container images. 1.3. Image registry An image registry is a content server that can store and serve container images. For example: registry.redhat.io A registry contains a collection of one or more image repositories, which contain one or more tagged images. Red Hat provides a registry at registry.redhat.io for subscribers. OpenShift Container Platform can also supply its own OpenShift image registry for managing custom container images. 1.4. Image repository An image repository is a collection of related container images and tags identifying them. For example, the OpenShift Container Platform Jenkins images are in the repository: docker.io/openshift/jenkins-2-centos7 1.5. Image tags An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag: registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest . OpenShift Container Platform provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images. 1.6. Image IDs An image ID is a SHA (Secure Hash Algorithm) code that can be used to pull an image. A SHA image ID cannot change. A specific SHA identifier always references the exact same container image content. For example: docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324 1.7. Containers The basic units of OpenShift Container Platform applications are called containers. 
Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. The word container is defined as a specific running or paused instance of a container image. Many application instances can be running in containers on a single host without visibility into each other's processes, files, network, and so on. Typically, each container provides a single service, often called a micro-service, such as a web server or a database, though containers can be used for arbitrary workloads. The Linux kernel has been incorporating capabilities for container technologies for years. The Docker project developed a convenient management interface for Linux containers on a host. More recently, the Open Container Initiative has developed open standards for container formats and container runtimes. OpenShift Container Platform and Kubernetes add the ability to orchestrate OCI- and Docker-formatted containers across multi-host installations. Though you do not directly interact with container runtimes when using OpenShift Container Platform, understanding their capabilities and terminology is important for understanding their role in OpenShift Container Platform and how your applications function inside of containers. Tools such as podman can be used to replace docker command-line tools for running and managing containers directly. Using podman , you can experiment with containers separately from OpenShift Container Platform. 1.8. Why use imagestreams An image stream and its associated tags provide an abstraction for referencing container images from within OpenShift Container Platform. The image stream and its tags allow you to see what images are available and ensure that you are using the specific image you need even if the image in the repository changes. Image streams do not contain actual image data, but present a single virtual view of related images, similar to an image repository. You can configure builds and deployments to watch an image stream for notifications when new images are added and react by performing a build or deployment, respectively. For example, if a deployment is using a certain image and a new version of that image is created, a deployment could be automatically performed to pick up the new version of the image. However, if the image stream tag used by the deployment or build is not updated, then even if the container image in the container image registry is updated, the build or deployment continues using the previous, presumably known good image. The source images can be stored in any of the following: OpenShift Container Platform's integrated registry. An external registry, for example registry.redhat.io or quay.io. Other image streams in the OpenShift Container Platform cluster. When you define an object that references an image stream tag, such as a build or deployment configuration, you point to an image stream tag and not the repository. When you build or deploy your application, OpenShift Container Platform queries the repository using the image stream tag to locate the associated ID of the image and uses that exact image. The image stream metadata is stored in the etcd instance along with other cluster information. Using image streams has several significant benefits: You can tag, rollback a tag, and quickly deal with images, without having to re-push using the command line. You can trigger builds and deployments when a new image is pushed to the registry.
Also, OpenShift Container Platform has generic triggers for other resources, such as Kubernetes objects. You can mark a tag for periodic re-import. If the source image has changed, that change is picked up and reflected in the image stream, which triggers the build or deployment flow, depending upon the build or deployment configuration. You can share images using fine-grained access control and quickly distribute images across your teams. If the source image changes, the image stream tag still points to a known-good version of the image, ensuring that your application does not break unexpectedly. You can configure security around who can view and use the images through permissions on the image stream objects. Users that lack permission to read or list images on the cluster level can still retrieve the images tagged in a project using image streams. You can manage image streams, use image streams with Kubernetes resources , and trigger updates on image stream updates . 1.9. Image stream tags An image stream tag is a named pointer to an image in an image stream. An image stream tag is similar to a container image tag. 1.10. Image stream images An image stream image allows you to retrieve a specific container image from a particular image stream where it is tagged. An image stream image is an API resource object that pulls together some metadata about a particular image SHA identifier. 1.11. Image stream triggers An image stream trigger causes a specific action when an image stream tag changes. For example, importing can cause the value of the tag to change, which causes a trigger to fire when there are deployments, builds, or other resources listening for those. 1.12. How you can use the Cluster Samples Operator During the initial startup, the Operator creates the default samples resource to initiate the creation of the image streams and templates. You can use the Cluster Samples Operator to manage the sample image streams and templates stored in the openshift namespace. As a cluster administrator, you can use the Cluster Samples Operator to: Configure the Operator . Use the Operator with an alternate registry . 1.13. About templates A template is a definition of an object to be replicated. You can use templates to build and deploy configurations. 1.14. How you can use Ruby on Rails As a developer, you can use Ruby on Rails to: Write your application: Set up a database. Create a welcome page. Configure your application for OpenShift Container Platform. Store your application in Git. Deploy your application in OpenShift Container Platform: Create the database service. Create the frontend service. Create a route for your application. | [
"registry.redhat.io",
"docker.io/openshift/jenkins-2-centos7",
"registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2",
"docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/images/overview-of-images |
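The overview above mentions the oc tag command and marking a tag for periodic re-import without showing either in use. The commands below are a hedged sketch; the jenkins image stream name and the tag values are assumptions based on the repository already quoted in the text, not commands taken from the original document.

# Tag an external image into the 'jenkins' image stream of the current project
oc tag registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 jenkins:v3.11.59-2

# Point the image stream's 'latest' tag at that version
oc tag jenkins:v3.11.59-2 jenkins:latest

# Mark the source tag for periodic re-import so upstream image changes are picked up
oc tag registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 jenkins:v3.11.59-2 --scheduled=true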
Chapter 5. Installing the Apache HTTP Server on RHEL 9 by using Application Streams | Chapter 5. Installing the Apache HTTP Server on RHEL 9 by using Application Streams The Red Hat Enterprise Linux (RHEL) Application Streams feature delivers and updates multiple versions of user-space components such as applications, runtime languages, and databases in an AppStream repository. On RHEL 9, if you want to install the Apache HTTP Server from an RPM package, you must install the RHEL distribution of the Apache HTTP Server by using Application Streams. Important Red Hat JBoss Core Services (JBCS) does not provide an RPM distribution of the Apache HTTP Server for RHEL 9. The Apache HTTP Server httpd package that the RHEL AppStream repository provides is the only supported RPM distribution of the Apache HTTP Server for RHEL 9 systems. Note Installing the RHEL distribution of the Apache HTTP Server does not automatically install the mod_jk and mod_proxy_cluster packages. For more information about installing mod_jk and mod_proxy_cluster from RPM packages on RHEL 9, see the Apache HTTP Server Connectors and Load Balancing Guide . 5.1. Installation of the Apache HTTP Server when using Application Streams You can install the RHEL 9 distribution of the Apache HTTP Server from an RPM package by using the standard dnf install command. You can subsequently start and stop the Apache HTTP Server from the command line as the root user. Alternatively, you can enable the Apache HTTP Server to start automatically at system startup. For more information about installing, starting, and stopping the RHEL distribution of the Apache HTTP Server, see Setting up the Apache HTTP web server . Additional resources Application Streams Managing software with the DNF tool 5.2. SELinux policies for the Apache HTTP Server You can use Security-Enhanced Linux (SELinux) policies to define access controls for the Apache HTTP Server. These policies are a set of rules that determine access rights to the product. The Apache HTTP Server has an SELinux type name of httpd_t . By default, the Apache HTTP Server can access files and directories in /var/www/html and other web server directories that have an SELinux type context of httpd_sys_content_t . You can also customize the SELinux policy for the Apache HTTP Server if you want to use a non-standard configuration. Additional resources Using SELinux Customizing the SElinux policy for the Apache HTTP Server in a non-standard configuration | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/apache_http_server_installation_guide/rhel_appstream |
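The chapter above points to the RHEL documentation for the actual installation steps. As a quick reference, the following commands show the usual sequence on RHEL 9; they assume the standard httpd package and service names and are a sketch rather than part of the original text.

# Install the Apache HTTP Server from the RHEL 9 AppStream repository
dnf install httpd

# Start the server immediately and enable it at system startup
systemctl enable --now httpd

# Optional: confirm the SELinux context of the default content directory
ls -Zd /var/www/html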
Chapter 18. Log Record Fields | Chapter 18. Log Record Fields The following fields can be present in log records exported by the logging subsystem. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings. To search these fields from Elasticsearch and Kibana, use the full dotted field name when searching. For example, with an Elasticsearch /_search URL , to look for a Kubernetes pod name, use /_search/q=kubernetes.pod_name:name-of-my-pod . The top level fields may be present in every record. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/cluster-logging-exported-fields |
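The chapter above gives the Elasticsearch query string for searching log record fields but not a way to run it. The following is a hedged example of issuing that query from inside the cluster; the openshift-logging namespace, the component=elasticsearch label, the app-* index pattern, and the es_util helper reflect a typical OpenShift Logging deployment and are assumptions rather than text from the original document.

# Pick one Elasticsearch pod in the logging namespace
ES_POD=$(oc -n openshift-logging get pods -l component=elasticsearch -o name | head -n 1)

# Search the application log indices for records from a specific pod
oc -n openshift-logging exec -c elasticsearch "$ES_POD" -- \
  es_util --query="app-*/_search?q=kubernetes.pod_name:name-of-my-pod&pretty"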
25.14. Configuring iSCSI Offload and Interface Binding | 25.14. Configuring iSCSI Offload and Interface Binding This chapter describes how to set up iSCSI interfaces in order to bind a session to a NIC port when using software iSCSI. It also describes how to set up interfaces for use with network devices that support offloading. The network subsystem can be configured to determine the path/NIC that iSCSI interfaces should use for binding. For example, if portals and NICs are set up on different subnets, then it is not necessary to manually configure iSCSI interfaces for binding. Before attempting to configure an iSCSI interface for binding, run the following command first: If ping fails, then you will not be able to bind a session to a NIC. If this is the case, check the network settings first. 25.14.1. Viewing Available iface Configurations iSCSI offload and interface binding is supported for the following iSCSI initiator implementations: Software iSCSI This stack allocates an iSCSI host instance (that is, scsi_host ) per session, with a single connection per session. As a result, /sys/class_scsi_host and /proc/scsi will report a scsi_host for each connection/session you are logged into. Offload iSCSI This stack allocates a scsi_host for each PCI device. As such, each port on a host bus adapter will show up as a different PCI device, with a different scsi_host per HBA port. To manage both types of initiator implementations, iscsiadm uses the iface structure. With this structure, an iface configuration must be entered in /var/lib/iscsi/ifaces for each HBA port, software iSCSI, or network device ( eth X ) used to bind sessions. To view available iface configurations, run iscsiadm -m iface . This will display iface information in the following format: Refer to the following table for an explanation of each value/setting. Table 25.2. iface Settings Setting Description iface_name iface configuration name. transport_name Name of driver hardware_address MAC address ip_address IP address to use for this port net_iface_name Name used for the vlan or alias binding of a software iSCSI session. For iSCSI offloads, net_iface_name will be <empty> because this value is not persistent across reboots. initiator_name This setting is used to override a default name for the initiator, which is defined in /etc/iscsi/initiatorname.iscsi Example 25.6. Sample Output of the iscsiadm -m iface Command The following is a sample output of the iscsiadm -m iface command: For software iSCSI, each iface configuration must have a unique name (with less than 65 characters). The iface_name for network devices that support offloading appears in the format transport_name . hardware_name . Example 25.7. iscsiadm -m iface Output with a Chelsio Network Card For example, the sample output of iscsiadm -m iface on a system using a Chelsio network card might appear as: It is also possible to display the settings of a specific iface configuration in a more friendly way. To do so, use the option -I iface_name . This will display the settings in the following format: Example 25.8. Using iface Settings with a Chelsio Converged Network Adapter Using the example, the iface settings of the same Chelsio converged network adapter (i.e. iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07 ) would appear as: 25.14.2. Configuring an iface for Software iSCSI As mentioned earlier, an iface configuration is required for each network object that will be used to bind a session. 
Before To create an iface configuration for software iSCSI, run the following command: This will create a new empty iface configuration with a specified iface_name . If an existing iface configuration already has the same iface_name , then it will be overwritten with a new, empty one. To configure a specific setting of an iface configuration, use the following command: Example 25.9. Set MAC Address of iface0 For example, to set the MAC address ( hardware_address ) of iface0 to 00:0F:1F:92:6B:BF , run: Warning Do not use default or iser as iface names. Both strings are special values used by iscsiadm for backward compatibility. Any manually-created iface configurations named default or iser will disable backwards compatibility. 25.14.3. Configuring an iface for iSCSI Offload By default, iscsiadm creates an iface configuration for each port. To view available iface configurations, use the same command for doing so in software iSCSI: iscsiadm -m iface . Before using the iface of a network card for iSCSI offload, first set the iface.ipaddress value of the offload interface to the initiator IP address that the interface should use: For devices that use the be2iscsi driver, the IP address is configured in the BIOS setup screen. For all other devices, to configure the IP address of the iface , use: Example 25.10. Set the iface IP Address of a Chelsio Card For example, to set the iface IP address to 20.15.0.66 when using a card with the iface name of cxgb3i.00:07:43:05:97:07 , use: 25.14.4. Binding/Unbinding an iface to a Portal Whenever iscsiadm is used to scan for interconnects, it will first check the iface.transport settings of each iface configuration in /var/lib/iscsi/ifaces . The iscsiadm utility will then bind discovered portals to any iface whose iface.transport is tcp . This behavior was implemented for compatibility reasons. To override this, use the -I iface_name to specify which portal to bind to an iface , as in: By default, the iscsiadm utility will not automatically bind any portals to iface configurations that use offloading. This is because such iface configurations will not have iface.transport set to tcp . As such, the iface configurations need to be manually bound to discovered portals. It is also possible to prevent a portal from binding to any existing iface . To do so, use default as the iface_name , as in: To remove the binding between a target and iface , use: To delete all bindings for a specific iface , use: To delete bindings for a specific portal (e.g. for Equalogic targets), use: Note If there are no iface configurations defined in /var/lib/iscsi/iface and the -I option is not used, iscsiadm will allow the network subsystem to decide which device a specific portal should use. [6] Refer to Section 25.15, "Scanning iSCSI Interconnects" for information on proper_target_name . | [
"ping -I eth X target_IP",
"iface_name transport_name , hardware_address , ip_address , net_ifacename , initiator_name",
"iface0 qla4xxx,00:c0:dd:08:63:e8,20.15.0.7,default,iqn.2005-06.com.redhat:madmax iface1 qla4xxx,00:c0:dd:08:63:ea,20.15.0.9,default,iqn.2005-06.com.redhat:madmax",
"default tcp,<empty>,<empty>,<empty>,<empty> iser iser,<empty>,<empty>,<empty>,<empty> cxgb3i.00:07:43:05:97:07 cxgb3i,00:07:43:05:97:07,<empty>,<empty>,<empty>",
"iface. setting = value",
"BEGIN RECORD 2.0-871 iface.iscsi_ifacename = cxgb3i.00:07:43:05:97:07 iface.net_ifacename = <empty> iface.ipaddress = <empty> iface.hwaddress = 00:07:43:05:97:07 iface.transport_name = cxgb3i iface.initiatorname = <empty> END RECORD",
"iscsiadm -m iface -I iface_name --op=new",
"iscsiadm -m iface -I iface_name --op=update -n iface. setting -v hw_address",
"iscsiadm -m iface -I iface0 --op=update -n iface.hwaddress -v 00:0F:1F:92:6B:BF",
"iscsiadm -m iface -I iface_name -o update -n iface.ipaddress -v initiator_ip_address",
"iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07 -o update -n iface.ipaddress -v 20.15.0.66",
"iscsiadm -m discovery -t st -p target_IP:port -I iface_name -P 1 [5]",
"iscsiadm -m discovery -t st -p IP:port -I default -P 1",
"iscsiadm -m node -targetname proper_target_name -I iface0 --op=delete [6]",
"iscsiadm -m node -I iface_name --op=delete",
"iscsiadm -m node -p IP:port -I iface_name --op=delete"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/iscsi-offload-config |
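Tying the commands in this section together, a minimal end-to-end sketch for binding a software iSCSI session to a NIC might look as follows; the portal address is a placeholder, and the MAC address is reused from the example above.

# Create an empty iface configuration and bind it to a NIC by MAC address
iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface0 --op=update -n iface.hwaddress -v 00:0F:1F:92:6B:BF
# Discover targets through that iface, then log in to the discovered node records
iscsiadm -m discovery -t st -p 192.168.1.100:3260 -I iface0 -P 1
iscsiadm -m node -p 192.168.1.100:3260 -I iface0 --login

Remember to verify connectivity with ping -I before binding, as noted at the start of the section.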
Chapter 13. Installing a three-node cluster on Azure | Chapter 13. Installing a three-node cluster on Azure In OpenShift Container Platform version 4.14, you can install a three-node cluster on Microsoft Azure. A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. Note Deploying a three-node cluster using an Azure Marketplace image is not supported. 13.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in the cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests. For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on Azure using ARM templates". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 13.2. Next steps Installing a cluster on Azure with customizations Installing a cluster on Azure using ARM templates | [
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_azure/installing-azure-three-node |
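A quick, hedged way to confirm the two settings this chapter relies on is sketched below; the installation directory path is a placeholder and the grep checks are illustrative rather than part of the documented procedure.

# Confirm the compute machine pool requests zero replicas
grep -A3 'compute:' <installation_directory>/install-config.yaml
# After generating manifests, confirm that control plane machines remain schedulable
grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml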
Chapter 14. Deploying nodes with spine-leaf configuration by using director Operator | Chapter 14. Deploying nodes with spine-leaf configuration by using director Operator Deploy nodes with spine-leaf networking architecture to replicate an extensive network topology within your environment. Current restrictions allow only one provisioning network for Metal3 . 14.1. Creating or updating the OpenStackNetConfig custom resource to define all subnets Define your OpenStackNetConfig custom resource and specify the subnets for the overcloud networks. Director Operator then renders the configuration and creates, or updates, the network topology. Prerequisites Your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly. You have installed the oc command line tool on your workstation. Procedure Create a configuration file called openstacknetconfig.yaml : Create the internal API network: Verification View the resources and child resources for OpenStackNetConfig: 14.2. Add roles for leaf networks to your deployment To add roles for the leaf networks to your deployment, update the roles_data.yaml configuration file and create the ConfigMap. Note You must use roles_data.yaml as the filename. Prerequisites Your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly. You have installed the oc command line tool on your workstation. Procedure Update the roles_data.yaml file: In the ~/custom_environment_files directory, archive the templates into a tarball: Create the tripleo-tarball-config ConfigMap: 14.3. Creating NIC templates for the new roles In Red Hat OpenStack Platform (RHOSP) 16.2, the tripleo NIC templates include the InterfaceRoutes parameter by default. You usually set up the routes parameter that you rendered in the environments/network-environment.yaml configuration file on the host_routes property of the Networking service (neutron) network. You then add it to the InterfaceRoutes parameter. In director Operator the Networking service (neutron) is not present. To create new NIC templates for new roles, you must add the routes for a specific network to the NIC template and concatenate the lists. 14.3.1. Creating default network routes Create the default network routes by adding the networking routes to the NIC template, and then concatenate the lists. Procedure Open the NIC template. Add the network routes to the template, and then concatenate the lists: 14.3.2. Subnet routes Routes subnet information is auto rendered to the tripleo environment file environments/network-environment.yaml that is used by the Ansible playbooks. In the NIC templates use the Routes_<subnet_name> parameter to set the correct routing on the host, for example, StorageRoutes_storage_leaf1 . 14.3.3. Modifying NIC templates for spine-leaf networking To configure spine-leaf networking, modify the NIC templates for each role and re-create the ConfigMap. Prerequisites Your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly. You have installed the oc command line tool on your workstation. Procedure Create NIC templates for each Compute role: In the ~/custom_environment_files directory, archive the templates into a tarball: Create the tripleo-tarball-config ConfigMap: 14.3.4. 
Creating or updating an environment file to register the NIC templates To create or update your environment file, add the NIC templates for the new nodes to the resource registry and re-create the ConfigMap. Prerequisites Your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly. You have installed the oc command line tool on your workstation. The tripleo-tarball-config ConfigMap was updated with the required roles_data.yaml and NIC template for the role. Procedure Add the NIC templates for the new nodes to an environment file in the resource_registry section: In the ~/custom_environment_files directory archive the templates into a tarball: Create the tripleo-tarball-config ConfigMap: 14.4. Deploying the overcloud with multiple routed networks To deploy the overcloud with multiple sets of routed networking, create the control plane and the compute nodes for spine-leaf networking, and then render the Ansible playbooks and apply them. 14.4.1. Creating the control plane To create the control plane, specify the resources for the Controller nodes and director Operator will create the openstackclient pod for remote shell access. Prerequisites Your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly. You have installed the oc command line tool on your workstation. You have used the OpenStackNetConfig resource to create a control plane network and any additional network resources. Procedure Create a file named openstack-controller.yaml on your workstation. Include the resource specification for the Controller nodes. The following example shows a specification for a control plane that consists of three Controller nodes: Create the control plane: Wait until OCP creates the resources related to OpenStackControlPlane resource. The director Operator also creates an openstackclient pod providing remote shell access to run Red Hat OpenStack Platform (RHOSP) commands. Verification View the resource for the control plane: View the OpenStackVMSet resources to verify the creation of the control plane virtual machine set: View the virtual machine resources to verify the creation of the control plane virtual machines in OpenShift Virtualization: Test access to the openstackclient pod remote shell: 14.4.2. Creating the compute nodes for the leafs To create the Compute nodes from baremetal machines, include the resource specification in the OpenStackBaremetalSet custom resource. Prerequisites Your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly. You have installed the oc command line tool on your workstation. You have used the OpenStackNetConfig resource to create a control plane network and any additional network resources. Procedure Create a file named openstack-computeleaf1.yaml on your workstation. Include the resource specification for the Compute nodes. The following example shows a specification for one Compute leaf node: Create the Compute nodes: Verification View the resource for the Compute node: View the baremetal machines managed by OpenShift to verify the creation of the Compute node: 14.5. Render playbooks and apply them You can now configure your overcloud. For more information, see Configuring overcloud software with the director Operator . | [
"apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackNetConfig metadata: name: openstacknetconfig spec: attachConfigurations: br-osp: nodeNetworkConfigurationPolicy: nodeSelector: node-role.kubernetes.io/worker: \"\" desiredState: interfaces: - bridge: options: stp: enabled: false port: - name: enp7s0 description: Linux bridge with enp7s0 as a port name: br-osp state: up type: linux-bridge mtu: 1500 br-ex: nodeNetworkConfigurationPolicy: nodeSelector: node-role.kubernetes.io/worker: \"\" desiredState: interfaces: - bridge: options: stp: enabled: false port: - name: enp6s0 description: Linux bridge with enp6s0 as a port name: br-ex state: up type: linux-bridge mtu: 1500 # optional DnsServers list dnsServers: - 192.168.25.1 # optional DnsSearchDomains list dnsSearchDomains: - osptest.test.metalkube.org - some.other.domain # DomainName of the OSP environment domainName: osptest.test.metalkube.org networks: - name: Control nameLower: ctlplane subnets: - name: ctlplane ipv4: allocationEnd: 192.168.25.250 allocationStart: 192.168.25.100 cidr: 192.168.25.0/24 gateway: 192.168.25.1 attachConfiguration: br-osp - name: InternalApi nameLower: internal_api mtu: 1350 subnets: - name: internal_api ipv4: allocationEnd: 172.17.0.250 allocationStart: 172.17.0.10 cidr: 172.17.0.0/24 routes: - destination: 172.17.1.0/24 nexthop: 172.17.0.1 - destination: 172.17.2.0/24 nexthop: 172.17.0.1 vlan: 20 attachConfiguration: br-osp - name: internal_api_leaf1 ipv4: allocationEnd: 172.17.1.250 allocationStart: 172.17.1.10 cidr: 172.17.1.0/24 routes: - destination: 172.17.0.0/24 nexthop: 172.17.1.1 - destination: 172.17.2.0/24 nexthop: 172.17.1.1 vlan: 21 attachConfiguration: br-osp - name: internal_api_leaf2 ipv4: allocationEnd: 172.17.2.250 allocationStart: 172.17.2.10 cidr: 172.17.2.0/24 routes: - destination: 172.17.1.0/24 nexthop: 172.17.2.1 - destination: 172.17.0.0/24 nexthop: 172.17.2.1 vlan: 22 attachConfiguration: br-osp - name: External nameLower: external subnets: - name: external ipv4: allocationEnd: 10.0.0.250 allocationStart: 10.0.0.10 cidr: 10.0.0.0/24 gateway: 10.0.0.1 attachConfiguration: br-ex - name: Storage nameLower: storage mtu: 1350 subnets: - name: storage ipv4: allocationEnd: 172.18.0.250 allocationStart: 172.18.0.10 cidr: 172.18.0.0/24 routes: - destination: 172.18.1.0/24 nexthop: 172.18.0.1 - destination: 172.18.2.0/24 nexthop: 172.18.0.1 vlan: 30 attachConfiguration: br-osp - name: storage_leaf1 ipv4: allocationEnd: 172.18.1.250 allocationStart: 172.18.1.10 cidr: 172.18.1.0/24 routes: - destination: 172.18.0.0/24 nexthop: 172.18.1.1 - destination: 172.18.2.0/24 nexthop: 172.18.1.1 vlan: 31 attachConfiguration: br-osp - name: storage_leaf2 ipv4: allocationEnd: 172.18.2.250 allocationStart: 172.18.2.10 cidr: 172.18.2.0/24 routes: - destination: 172.18.0.0/24 nexthop: 172.18.2.1 - destination: 172.18.1.0/24 nexthop: 172.18.2.1 vlan: 32 attachConfiguration: br-osp - name: StorageMgmt nameLower: storage_mgmt mtu: 1350 subnets: - name: storage_mgmt ipv4: allocationEnd: 172.19.0.250 allocationStart: 172.19.0.10 cidr: 172.19.0.0/24 routes: - destination: 172.19.1.0/24 nexthop: 172.19.0.1 - destination: 172.19.2.0/24 nexthop: 172.19.0.1 vlan: 40 attachConfiguration: br-osp - name: storage_mgmt_leaf1 ipv4: allocationEnd: 172.19.1.250 allocationStart: 172.19.1.10 cidr: 172.19.1.0/24 routes: - destination: 172.19.0.0/24 nexthop: 172.19.1.1 - destination: 172.19.2.0/24 nexthop: 172.19.1.1 vlan: 41 attachConfiguration: br-osp - name: storage_mgmt_leaf2 ipv4: allocationEnd: 172.19.2.250 
allocationStart: 172.19.2.10 cidr: 172.19.2.0/24 routes: - destination: 172.19.0.0/24 nexthop: 172.19.2.1 - destination: 172.19.1.0/24 nexthop: 172.19.2.1 vlan: 42 attachConfiguration: br-osp - name: Tenant nameLower: tenant vip: False mtu: 1350 subnets: - name: tenant ipv4: allocationEnd: 172.20.0.250 allocationStart: 172.20.0.10 cidr: 172.20.0.0/24 routes: - destination: 172.20.1.0/24 nexthop: 172.20.0.1 - destination: 172.20.2.0/24 nexthop: 172.20.0.1 vlan: 50 attachConfiguration: br-osp - name: tenant_leaf1 ipv4: allocationEnd: 172.20.1.250 allocationStart: 172.20.1.10 cidr: 172.20.1.0/24 routes: - destination: 172.20.0.0/24 nexthop: 172.20.1.1 - destination: 172.20.2.0/24 nexthop: 172.20.1.1 vlan: 51 attachConfiguration: br-osp - name: tenant_leaf2 ipv4: allocationEnd: 172.20.2.250 allocationStart: 172.20.2.10 cidr: 172.20.2.0/24 routes: - destination: 172.20.0.0/24 nexthop: 172.20.2.1 - destination: 172.20.1.0/24 nexthop: 172.20.2.1 vlan: 52 attachConfiguration: br-osp",
"oc create -f openstacknetconfig.yaml -n openstack",
"oc get openstacknetconfig/openstacknetconfig -n openstack oc get openstacknetattachment -n openstack oc get openstacknet -n openstack",
"############################################################################### Role: ComputeLeaf1 # ############################################################################### - name: ComputeLeaf1 description: | Basic ComputeLeaf1 Node role # Create external Neutron bridge (unset if using ML2/OVS without DVR) tags: - external_bridge networks: InternalApi: subnet: internal_api_leaf1 Tenant: subnet: tenant_leaf1 Storage: subnet: storage_leaf1 HostnameFormatDefault: '%stackname%-novacompute-leaf1-%index%' ############################################################################### Role: ComputeLeaf2 # ############################################################################### - name: ComputeLeaf2 description: | Basic ComputeLeaf1 Node role # Create external Neutron bridge (unset if using ML2/OVS without DVR) tags: - external_bridge networks: InternalApi: subnet: internal_api_leaf2 Tenant: subnet: tenant_leaf2 Storage: subnet: storage_leaf2 HostnameFormatDefault: '%stackname%-novacompute-leaf2-%index%'",
"tar -cvzf custom-config.tar.gz *.yaml",
"oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack",
"parameters: {{ USDnet.Name }}Routes: default: [] description: > Routes for the storage network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json - type: interface routes: list_concat_unique: - get_param: {{ USDnet.Name }}Routes - get_param: {{ USDnet.Name }}InterfaceRoutes",
"StorageRoutes_storage_leaf1: default: [] description: > Routes for the storage network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json InternalApiRoutes_internal_api_leaf1: default: [] description: > Routes for the internal_api network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json TenantRoutes_tenant_leaf1: default: [] description: > Routes for the internal_api network traffic. JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}] Unless the default is changed, the parameter is automatically resolved from the subnet host_routes attribute. type: json get_param: StorageIpSubnet routes: list_concat_unique: - get_param: StorageRoutes_storage_leaf1 - type: vlan get_param: InternalApiIpSubnet routes: list_concat_unique: - get_param: InternalApiRoutes_internal_api_leaf1 get_param: TenantIpSubnet routes: list_concat_unique: - get_param: TenantRoutes_tenant_leaf1 - type: ovs_bridge",
"tar -cvzf custom-config.tar.gz *.yaml",
"oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack",
"resource_registry: OS::TripleO::Compute::Net::SoftwareConfig: net-config-two-nic-vlan-compute.yaml OS::TripleO::ComputeLeaf1::Net::SoftwareConfig: net-config-two-nic-vlan-compute_leaf1.yaml OS::TripleO::ComputeLeaf2::Net::SoftwareConfig: net-config-two-nic-vlan-compute_leaf2.yaml",
"tar -cvzf custom-config.tar.gz *.yaml",
"oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack",
"apiVersion: osp-director.openstack.org/v1beta2 kind: OpenStackControlPlane metadata: name: overcloud namespace: openstack spec: gitSecret: git-secret openStackClientImageURL: registry.redhat.io/rhosp-rhel8/openstack-tripleoclient:16.2 openStackClientNetworks: - ctlplane - external - internal_api - internal_api_leaf1 # optionally the openstackclient can also be connected to subnets openStackClientStorageClass: host-nfs-storageclass passwordSecret: userpassword domainName: ostest.test.metalkube.org virtualMachineRoles: Controller: roleName: Controller roleCount: 1 networks: - ctlplane - internal_api - external - tenant - storage - storage_mgmt cores: 6 memory: 20 rootDisk: diskSize: 500 baseImageVolumeName: openstack-base-img storageClass: host-nfs-storageclass storageAccessMode: ReadWriteMany storageVolumeMode: Filesystem enableFencing: False",
"oc create -f openstack-controller.yaml -n openstack",
"oc get openstackcontrolplane/overcloud -n openstack",
"oc get openstackvmsets -n openstack",
"oc get virtualmachines",
"oc rsh -n openstack openstackclient",
"apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackBaremetalSet metadata: name: computeleaf1 namespace: openstack spec: # How many nodes to provision count: 1 # The image to install on the provisioned nodes baseImageUrl: http://host/images/rhel-image-8.4.x86_64.qcow2 # The secret containing the SSH pub key to place on the provisioned nodes deploymentSSHSecret: osp-controlplane-ssh-keys # The interface on the nodes that will be assigned an IP from the mgmtCidr ctlplaneInterface: enp7s0 # Networks to associate with this host networks: - ctlplane - internal_api_leaf1 - external - tenant_leaf1 - storage_leaf1 roleName: ComputeLeaf1 passwordSecret: userpassword",
"oc create -f openstack-computeleaf1.yaml -n openstack",
"oc get openstackbaremetalset/computeleaf1 -n openstack",
"oc get baremetalhosts -n openshift-machine-api"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/rhosp_director_operator_for_openshift_container_platform/assembly_deploying-nodes-with-spine-leaf-configuration-using-director-operator_changing-service-account-passwords |
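Because this chapter re-creates the tripleo-tarball-config ConfigMap several times, the hedged sketch below shows an idempotent way to push updated templates and then watch the affected resources; the apply-based update is a workflow assumption, not a step mandated by the chapter.

cd ~/custom_environment_files
tar -cvzf custom-config.tar.gz *.yaml
# Re-render the ConfigMap and apply it, whether or not it already exists
oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack --dry-run=client -o yaml | oc apply -f -
# Check that the network configuration and node sets reconcile
oc get openstacknetconfig/openstacknetconfig -n openstack
oc get openstackbaremetalset -n openstack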
Appendix A. General configuration options | Appendix A. General configuration options These are the general configuration options for Ceph. Note Typically, these will be set automatically by deployment tools, such as cephadm. fsid Description The file system ID. One per cluster. Type UUID Required No. Default N/A. Usually generated by deployment tools. admin_socket Description The socket for executing administrative commands on a daemon, irrespective of whether Ceph monitors have established a quorum. Type String Required No Default /var/run/ceph/$cluster-$name.asok pid_file Description The file in which the monitor or OSD will write its PID. For instance, /var/run/$cluster/$type.$id.pid will create /var/run/ceph/mon.a.pid for the mon with id a running in the ceph cluster. The pid file is removed when the daemon stops gracefully. If the process is not daemonized (meaning it runs with the -f or -d option), the pid file is not created. Type String Required No Default No chdir Description The directory Ceph daemons change to once they are up and running. The default / directory is recommended. Type String Required No Default / max_open_files Description If set, when the Red Hat Ceph Storage cluster starts, Ceph sets the max_open_fds at the OS level (that is, the maximum number of file descriptors). It helps prevent Ceph OSDs from running out of file descriptors. Type 64-bit Integer Required No Default 0 fatal_signal_handlers Description If set, signal handlers are installed for the SEGV, ABRT, BUS, ILL, FPE, XCPU, XFSZ, and SYS signals to generate a useful log message. Type Boolean Default true | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/configuration_guide/general-configuration-options_conf
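For illustration only, a minimal ceph.conf fragment using a few of the options described above might look like the following; the fsid value and the raised max_open_files limit are assumptions made for the sketch, and in a cephadm-managed cluster these settings are normally generated for you.

[global]
# Placeholder cluster ID; cephadm usually generates this value
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
admin_socket = /var/run/ceph/$cluster-$name.asok
# Assumed value; the documented default is 0 (no limit adjustment)
max_open_files = 131072
fatal_signal_handlers = true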
5.4. About Managed Entries | 5.4. About Managed Entries Some clients and integration with Red Hat Directory Server require dual entries. For example, Posix systems typically have a group for each user. The Directory Server's Managed Entries Plug-in creates a new managed entry, with accurate and specific values for attributes, automatically whenever an appropriate origin entry is created. The basic idea is that there are situations when Entry A is created and there should automatically be an Entry B with related attribute values. For example, when a Posix user ( posixAccount entry) is created, a corresponding group entry ( posixGroup entry) should also be created. An instance of the Managed Entries Plug-in identifies what entry (the origin entry ) triggers the plug-in to automatically generate a new entry (the managed entry ). It also identifies a separate template entry which defines the managed entry configuration. The instance of the Managed Entries Plug-in defines three things: The search criteria to identify the origin entries (using a search scope and a search filter) The subtree under which to create the managed entries (the new entry location) The template entry to use for the managed entries Figure 5.4. Defining Managed Entries For example: The origin entry does not have to have any special configuration or settings to create a managed entry; it simply has to be created within the scope of the plug-in and match the given search filter. 5.4.1. Defining the Template for Managed Entries The template entry lays out the entire configuration of the managed entry, using static attributes (ones with pre-defined values) and mapped attributes (mapped attributes that pull their values from the origin entry). The mapped attributes in the template use tokens, prepended by a dollar sign (USD), to pull in values from the origin entry and use it in the managed entry. Figure 5.5. Managed Entries, Templates, and Origin Entries Note Make sure that the values given for static and mapped attributes comply with the required attribute syntax. 5.4.2. Entry Attributes Written by the Managed Entries Plug-in Both the origin entry and the managed entry have special managed entries attributes which indicate that they are being managed by an instance of the Managed Entries Plug-in. For the origin entry, the plug-in adds links to associated managed entries. On the managed entry, the plug-in adds attributes that point back to the origin entry, in addition to the attributes defined in the template. Using special attributes to indicate managed and origin entries makes it easy to identify the related entries and to assess the changes made by the Managed Entries Plug-in. 5.4.3. Managed Entries Plug-in and Directory Server Operations The Managed Entries Plug-in has some impact on how the Directory Server carries out common operations, like add and delete operations: Add . With every add operation, the server checks to see if the new entry is within the scope of any Managed Entries Plug-in instance. If it meets the criteria for an origin entry, then a managed entry is created and managed entry-related attributes are added to both the origin and managed entry. Modify . If an origin entry is modified, it triggers the plug-in to update the managed entry. Changing a template entry, however, does not update the managed entry automatically. Any changes to the template entry are not reflected in the managed entry until after the time the origin entry is modified. 
The mapped managed attributes within a managed entry cannot be modified manually, only by the Managed Entry Plug-in. Other attributes in the managed entry (including static attributes added by the Managed Entry Plug-in) can be modified manually. Delete . If an origin entry is deleted, then the Managed Entries Plug-in will also delete any managed entry associated with that entry. There are some limits on what entries can be deleted. A template entry cannot be deleted if it is currently referenced by a plug-in instance definition. A managed entry cannot be deleted except by the Managed Entries Plug-in. Rename . If an origin entry is renamed, then plug-in updates the corresponding managed entry. If the entry is moved out of the plug-in scope, then the managed entry is deleted, while if an entry is moved into the plug-in scope, it is treated like an add operation and a new managed entry is created. As with delete operations, there are limits on what entries can be renamed or moved. A configuration definition entry cannot be moved out of the Managed Entries Plug-in container entry. If the entry is removed, that plug-in instance is inactivated. If an entry is moved into the Managed Entries Plug-in container entry, then it is validated and treated as an active configuration definition. A template entry cannot be renamed or moved if it is currently referenced by a plug-in instance definition. A managed entry cannot be renamed or moved except by the Managed Entries Plug-in. Replication . The Managed Entries Plug-in operations are not initiated by replication updates . If an add or modify operation for an entry in the plug-in scope is replicated to another replica, that operation does not trigger the Managed Entries Plug-in instance on the replica to create or update an entry. The only way for updates for managed entries to be replicated is to replicate the final managed entry over to the replica. | [
"dn: cn=Posix User-Group,cn=Managed Entries,cn=plugins,cn=config objectclass: extensibleObject cn: Posix User-Group originScope: ou=people,dc=example,dc=com originFilter: objectclass=posixAccount managedBase: ou=groups,dc=example,dc=com managedTemplate: cn=Posix User-Group Template,ou=Templates,dc=example,dc=com",
"dn: cn=Posix User-Group Template,ou=Templates,dc=example,dc=com objectclass: mepTemplateEntry cn: Posix User-Group Template mepRDNAttr: cn mepStaticAttr: objectclass: posixGroup mepMappedAttr: cn: USDuid Group mepMappedAttr: gidNumber: USDgidNumber mepMappedAttr: memberUid: USDuid",
"dn: uid=jsmith,ou=people,dc=example,dc=com objectclass: mepOriginEntry objectclass: posixAccount sn: Smith mail: [email protected] mepManagedEntry: cn=jsmith Posix Group,ou=groups,dc=example,dc=com",
"dn: cn=jsmith Posix Group,ou=groups,dc=example,dc=com objectclass: mepManagedEntry objectclass: posixGroup mepManagedBy: uid=jsmith,ou=people,dc=example,dc=com"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/managed-entries |
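Building on the example configuration above (scope ou=people,dc=example,dc=com, filter objectclass=posixAccount), the hedged sketch below adds an origin entry that would trigger creation of a managed group entry; the user attribute values, bind DN, and host name are illustrative assumptions.

dn: uid=jdoe,ou=people,dc=example,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
cn: John Doe
uid: jdoe
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/jdoe

Saved as user.ldif, the entry could be added with, for example, ldapadd -x -D "cn=Directory Manager" -W -H ldap://server.example.com -f user.ldif, after which the plug-in would generate a corresponding managed group entry under ou=groups,dc=example,dc=com from the template.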
Chapter 6. Additional Information | Chapter 6. Additional Information 6.1. Implemented SAP Notes in sap*preconfigure The implemented SAP notes in the three preconfigure roles, along with the SAP note versions, are contained in each preconfigure role's vars files, in a variable named _<role_name>_sapnotes_versions. Sample file name: /usr/share/ansible/roles/sap_general_preconfigure/vars/RedHat_8.yml. In these files, the variable _sap_general_preconfigure_sapnotes_versions contains the implemented SAP notes along with their version numbers. 6.2. Role Variables The file README.md of each role, located in the directory /usr/share/ansible/roles/<role>, describes the purpose of all user-configurable variables as well as their default settings. The variables are defined and can be changed in several places, e.g. in an inventory file, in your playbooks, or by using the ansible-playbook command-line parameter --extra-vars or -e. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/red_hat_enterprise_linux_system_roles_for_sap/additional_information
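As a minimal, hedged illustration of the points above, a playbook that applies one of the preconfigure roles could look like this; the file name sap.yml and the inventory name are placeholders, and become: true is an assumption since the roles change system configuration.

# sap.yml - apply the general preconfiguration role to all hosts
- hosts: all
  become: true
  roles:
    - sap_general_preconfigure

It could then be run with an inline override, for example ansible-playbook -i inventory sap.yml -e "some_variable=value", where the variable name is a placeholder; the implemented SAP note versions can be inspected with grep -A 5 _sap_general_preconfigure_sapnotes_versions /usr/share/ansible/roles/sap_general_preconfigure/vars/RedHat_8.yml.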
5.12. Tools | 5.12. Tools coolkey component, BZ# 906537 Personal Identity Verification (PIV) Endpoint Cards which support both CAC and PIV interfaces might not work with the latest coolkey update; some signature operations like PKINIT can fail. To work around this problem, downgrade coolkey to the version shipped with Red Hat Enterprise Linux 6.3. libreport component Even if the stored credentials are used, the report-gtk utility can report the following error message: To work around this problem, close the dialog window; the Login=<rhn-user> and Password=<rhn-password> credentials in the /etc/libreport/plugins/rhtsupport.conf file will be used in the same way they are used by report-rhtsupport. For more information, refer to this Knowledge Base article. vlock component When a user password is used to lock a console with vlock, the console can only be unlocked with the user password, not the root password. That is, even if the first inserted password is incorrect, and the user is prompted to provide the root password, entering the root password fails with an error message. libreoffice component LibreOffice contains a number of harmless files used for testing purposes. However, on Microsoft Windows systems, these files can trigger false positive alerts on various anti-virus software, such as Microsoft Security Essentials. For example, the alerts can be triggered when scanning the Red Hat Enterprise Linux 6 ISO file. gnome-power-manager component When the computer runs on battery, the custom brightness level is not remembered and restored if power saving features like "dim display when idle" or "reduce backlight brightness when idle" are enabled. rsyslog component rsyslog does not reload its configuration after a SIGHUP signal is issued. To reload the configuration, the rsyslog daemon needs to be restarted: parted component The parted utility in Red Hat Enterprise Linux 6 cannot handle Extended Address Volumes (EAV) Direct Access Storage Devices (DASD) that have more than 65535 cylinders. Consequently, EAV DASD drives cannot be partitioned using parted, and installation on EAV DASD drives will fail. To work around this issue, complete the installation on a non-EAV DASD drive, then add the EAV device after the installation using the tools provided in the s390-utils package. | [
"Wrong settings detected for Red Hat Customer Support [..]",
"~]# service rsyslog restart"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/tools_issues |
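For the coolkey workaround noted above, a hedged command sketch follows; the exact package version shipped with Red Hat Enterprise Linux 6.3 is not stated in this section, so the version string is a placeholder.

~]# yum downgrade coolkey-<6.3-version>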
Chapter 11. Troubleshooting | Chapter 11. Troubleshooting This section describes resources for troubleshooting the Migration Toolkit for Containers (MTC). For known issues, see the MTC release notes . 11.1. MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.15 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan. Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. 
Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. About MTC custom resources The Migration Toolkit for Containers (MTC) creates the following custom resources (CRs): MigCluster (configuration, MTC cluster): Cluster definition MigStorage (configuration, MTC cluster): Storage definition MigPlan (configuration, MTC cluster): Migration plan The MigPlan CR describes the source and target clusters, replication repository, and namespaces being migrated. It is associated with 0, 1, or many MigMigration CRs. Note Deleting a MigPlan CR deletes the associated MigMigration CRs. BackupStorageLocation (configuration, MTC cluster): Location of Velero backup objects VolumeSnapshotLocation (configuration, MTC cluster): Location of Velero volume snapshots MigMigration (action, MTC cluster): Migration, created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR. Backup (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster: Backup CR #1 for Kubernetes objects Backup CR #2 for PV data Restore (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster: Restore CR #1 (using Backup CR #2) for PV data Restore CR #2 (using Backup CR #1) for Kubernetes objects 11.2. Migration Toolkit for Containers custom resource manifests Migration Toolkit for Containers (MTC) uses the following custom resource (CR) manifests for migrating applications. 11.2.1. DirectImageMigration The DirectImageMigration CR copies images directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2 1 One or more namespaces containing images to be migrated. By default, the destination namespace has the same name as the source namespace. 2 Source namespace mapped to a destination namespace with a different name. 11.2.2. DirectImageStreamMigration The DirectImageStreamMigration CR copies image stream references directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace> 11.2.3. DirectVolumeMigration The DirectVolumeMigration CR copies persistent volumes (PVs) directly from the source cluster to the destination cluster. 
apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration 1 Set to true to create namespaces for the PVs on the destination cluster. 2 Set to true to delete DirectVolumeMigrationProgress CRs after migration. The default is false so that DirectVolumeMigrationProgress CRs are retained for troubleshooting. 3 Update the cluster name if the destination cluster is not the host cluster. 4 Specify one or more PVCs to be migrated. 11.2.4. DirectVolumeMigrationProgress The DirectVolumeMigrationProgress CR shows the progress of the DirectVolumeMigration CR. apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration 11.2.5. MigAnalytic The MigAnalytic CR collects the number of images, Kubernetes resources, and the persistent volume (PV) capacity from an associated MigPlan CR. You can configure the data that it collects. apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration 1 Optional: Returns the number of images. 2 Optional: Returns the number, kind, and API version of the Kubernetes resources. 3 Optional: Returns the PV capacity. 4 Returns a list of image names. The default is false so that the output is not excessively long. 5 Optional: Specify the maximum number of image names to return if listImages is true . 11.2.6. MigCluster The MigCluster CR defines a host, local, or remote cluster. apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: "1.0" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 # The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 # The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 # The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config 1 Update the cluster name if the migration-controller pod is not running on this cluster. 2 The migration-controller pod runs on this cluster if true . 3 Microsoft Azure only: Specify the resource group. 4 Optional: If you created a certificate bundle for self-signed CA certificates and if the insecure parameter value is false , specify the base64-encoded certificate bundle. 5 Set to true to disable SSL verification. 6 Set to true to validate the cluster. 7 Set to true to restart the Restic pods on the source cluster after the Stage pods are created. 
8 Remote cluster and direct image migration only: Specify the exposed secure registry path. 9 Remote cluster only: Specify the URL. 10 Remote cluster only: Specify the name of the Secret object. 11.2.7. MigHook The MigHook CR defines a migration hook that runs custom code at a specified stage of the migration. You can create up to four migration hooks. Each hook runs during a different phase of the migration. You can configure the hook name, runtime duration, a custom image, and the cluster where the hook will run. The migration phases and namespaces of the hooks are configured in the MigPlan CR. apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7 1 Optional: A unique hash is appended to the value for this parameter so that each migration hook has a unique name. You do not need to specify the value of the name parameter. 2 Specify the migration hook name, unless you specify the value of the generateName parameter. 3 Optional: Specify the maximum number of seconds that a hook can run. The default is 1800 . 4 The hook is a custom image if true . The custom image can include Ansible or it can be written in a different programming language. 5 Specify the custom image, for example, quay.io/konveyor/hook-runner:latest . Required if custom is true . 6 Base64-encoded Ansible playbook. Required if custom is false . 7 Specify the cluster on which the hook will run. Valid values are source or destination . 11.2.8. MigMigration The MigMigration CR runs a MigPlan CR. You can configure a Migmigration CR to run a stage or incremental migration, to cancel a migration in progress, or to roll back a completed migration. apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration 1 Set to true to cancel a migration in progress. 2 Set to true to roll back a completed migration. 3 Set to true to run a stage migration. Data is copied incrementally and the pods on the source cluster are not stopped. 4 Set to true to stop the application during migration. The pods on the source cluster are scaled to 0 after the Backup stage. 5 Set to true to retain the labels and annotations applied during the migration. 6 Set to true to check the status of the migrated pods on the destination cluster are checked and to return the names of pods that are not in a Running state. 11.2.9. MigPlan The MigPlan CR defines the parameters of a migration plan. You can configure destination namespaces, hook phases, and direct or indirect migration. Note By default, a destination namespace has the same name as the source namespace. If you configure a different destination namespace, you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges are copied during migration. 
apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: "1.0" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12 1 The migration has completed if true . You cannot create another MigMigration CR for this MigPlan CR. 2 Optional: You can specify up to four migration hooks. Each hook must run during a different migration phase. 3 Optional: Specify the namespace in which the hook will run. 4 Optional: Specify the migration phase during which a hook runs. One hook can be assigned to one phase. Valid values are PreBackup , PostBackup , PreRestore , and PostRestore . 5 Optional: Specify the name of the MigHook CR. 6 Optional: Specify the namespace of MigHook CR. 7 Optional: Specify a service account with cluster-admin privileges. 8 Direct image migration is disabled if true . Images are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 9 Direct volume migration is disabled if true . PVs are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 10 Specify one or more source namespaces. If you specify only the source namespace, the destination namespace is the same. 11 Specify the destination namespace if it is different from the source namespace. 12 The MigPlan CR is validated if true . 11.2.10. MigStorage The MigStorage CR describes the object storage for the replication repository. Amazon Web Services (AWS), Microsoft Azure, Google Cloud Storage, Multi-Cloud Object Gateway, and generic S3-compatible cloud storage are supported. AWS and the snapshot copy method have additional parameters. apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: "1.0" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11 1 Specify the storage provider. 2 Snapshot copy method only: Specify the storage provider. 3 AWS only: Specify the bucket name. 4 AWS only: Specify the bucket region, for example, us-east-1 . 5 Specify the name of the Secret object that you created for the storage. 6 AWS only: If you are using the AWS Key Management Service, specify the unique identifier of the key. 7 AWS only: If you granted public access to the AWS bucket, specify the bucket URL. 8 AWS only: Specify the AWS signature version for authenticating requests to the bucket, for example, 4 . 
9 Snapshot copy method only: Specify the geographical region of the clusters. 10 Snapshot copy method only: Specify the name of the Secret object that you created for the storage. 11 Set to true to validate the cluster. 11.3. Logs and debugging tools This section describes logs and debugging tools that you can use for troubleshooting. 11.3.1. Viewing migration plan resources You can view migration plan resources to monitor a running migration or to troubleshoot a failed migration by using the MTC web console and the command line interface (CLI). Procedure In the MTC web console, click Migration Plans . Click the Migrations number to a migration plan to view the Migrations page. Click a migration to view the Migration details . Expand Migration resources to view the migration resources and their status in a tree view. Note To troubleshoot a failed migration, start with a high-level resource that has failed and then work down the resource tree towards the lower-level resources. Click the Options menu to a resource and select one of the following options: Copy oc describe command copies the command to your clipboard. Log in to the relevant cluster and then run the command. The conditions and events of the resource are displayed in YAML format. Copy oc logs command copies the command to your clipboard. Log in to the relevant cluster and then run the command. If the resource supports log filtering, a filtered log is displayed. View JSON displays the resource data in JSON format in a web browser. The data is the same as the output for the oc get <resource> command. 11.3.2. Viewing a migration plan log You can view an aggregated log for a migration plan. You use the MTC web console to copy a command to your clipboard and then run the command from the command line interface (CLI). The command displays the filtered logs of the following pods: Migration Controller Velero Restic Rsync Stunnel Registry Procedure In the MTC web console, click Migration Plans . Click the Migrations number to a migration plan. Click View logs . Click the Copy icon to copy the oc logs command to your clipboard. Log in to the relevant cluster and enter the command on the CLI. The aggregated log for the migration plan is displayed. 11.3.3. Using the migration log reader You can use the migration log reader to display a single filtered view of all the migration logs. Procedure Get the mig-log-reader pod: USD oc -n openshift-migration get pods | grep log Enter the following command to display a single migration log: USD oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1 1 The -c plain option displays the log without colors. 11.3.4. Accessing performance metrics The MigrationController custom resource (CR) records metrics and pulls them into on-cluster monitoring storage. You can query the metrics by using Prometheus Query Language (PromQL) to diagnose migration performance issues. All metrics are reset when the Migration Controller pod restarts. You can access the performance metrics and run queries by using the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console, click Observe Metrics . Enter a PromQL query, select a time window to display, and click Run Queries . If your web browser does not display all the results, use the Prometheus console. 11.3.4.1. Provided metrics The MigrationController custom resource (CR) provides metrics for the MigMigration CR count and for its API requests. 11.3.4.1.1. 
cam_app_workload_migrations This metric is a count of MigMigration CRs over time. It is useful for viewing alongside the mtc_client_request_count and mtc_client_request_elapsed metrics to collate API request information with migration status changes. This metric is included in Telemetry. Table 11.1. cam_app_workload_migrations metric Queryable label name Sample label values Label description status running , idle , failed , completed Status of the MigMigration CR type stage, final Type of the MigMigration CR 11.3.4.1.2. mtc_client_request_count This metric is a cumulative count of Kubernetes API requests that MigrationController issued. It is not included in Telemetry. Table 11.2. mtc_client_request_count metric Queryable label name Sample label values Label description cluster https://migcluster-url:443 Cluster that the request was issued against component MigPlan , MigCluster Sub-controller API that issued request function (*ReconcileMigPlan).Reconcile Function that the request was issued from kind SecretList , Deployment Kubernetes kind the request was issued for 11.3.4.1.3. mtc_client_request_elapsed This metric is a cumulative latency, in milliseconds, of Kubernetes API requests that MigrationController issued. It is not included in Telemetry. Table 11.3. mtc_client_request_elapsed metric Queryable label name Sample label values Label description cluster https://cluster-url.com:443 Cluster that the request was issued against component migplan , migcluster Sub-controller API that issued request function (*ReconcileMigPlan).Reconcile Function that the request was issued from kind SecretList , Deployment Kubernetes resource that the request was issued for 11.3.4.1.4. Useful queries The table lists some helpful queries that can be used for monitoring performance. Table 11.4. Useful queries Query Description mtc_client_request_count Number of API requests issued, sorted by request type sum(mtc_client_request_count) Total number of API requests issued mtc_client_request_elapsed API request latency, sorted by request type sum(mtc_client_request_elapsed) Total latency of API requests sum(mtc_client_request_elapsed) / sum(mtc_client_request_count) Average latency of API requests mtc_client_request_elapsed / mtc_client_request_count Average latency of API requests, sorted by request type cam_app_workload_migrations{status="running"} * 100 Count of running migrations, multiplied by 100 for easier viewing alongside request counts 11.3.5. Using the must-gather tool You can collect logs, metrics, and information about MTC custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. You can collect data for a one-hour or a 24-hour period and view the data with the Prometheus console. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command for one of the following data collection options: To collect data for the past 24 hours, run the following command: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 This command saves the data as the must-gather/must-gather.tar.gz file. You can upload this file to a support case on the Red Hat Customer Portal . 
To collect data for the past 24 hours, run the following command: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 -- /usr/bin/gather_metrics_dump This operation can take a long time. This command saves the data as the must-gather/metrics/prom_data.tar.gz file. 11.3.6. Debugging Velero resources with the Velero CLI tool You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. Syntax Use the oc exec command to run a Velero CLI command: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> <command> <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Help option Use the velero --help option to list all Velero CLI commands: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ --help Describe command Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> describe <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql The following types of restore errors and warnings are shown in the output of a velero describe request: Velero : A list of messages related to the operation of Velero itself, for example, messages related to connecting to the cloud, reading a backup file, and so on Cluster : A list of messages related to backing up or restoring cluster-scoped resources Namespaces : A list of messages related to backing up or restoring resources stored in namespaces One or more errors in one of these categories results in a Restore operation receiving the status of PartiallyFailed and not Completed . Warnings do not lead to a change in the completion status. Important For resource-specific errors, that is, Cluster and Namespaces errors, the restore describe --details output includes a resource list that lists all resources that Velero succeeded in restoring. For any resource that has such an error, check to see if the resource is actually in the cluster. If there are Velero errors, but no resource-specific errors, in the output of a describe command, it is possible that the restore completed without any actual problems in restoring workloads, but carefully validate post-restore applications. For example, if the output contains PodVolumeRestore or node agent-related errors, check the status of PodVolumeRestores and DataDownloads. If none of these are failed or still running, then volume data might have been fully restored. Logs command Use the velero logs command to retrieve the logs of a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> logs <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf 11.3.7. Debugging a partial migration failure You can debug a partial migration failure warning message by using the Velero CLI to examine the Restore custom resource (CR) logs. A partial failure occurs when Velero encounters an issue that does not cause a migration to fail. 
For example, if a custom resource definition (CRD) is missing or if there is a discrepancy between CRD versions on the source and target clusters, the migration completes but the CR is not created on the target cluster. Velero logs the issue as a partial failure and then processes the rest of the objects in the Backup CR. Procedure Check the status of a MigMigration CR: USD oc get migmigration <migmigration> -o yaml Example output status: conditions: - category: Warn durable: true lastTransitionTime: "2021-01-26T20:48:40Z" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: "True" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: "2021-01-26T20:48:42Z" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: "True" type: SucceededWithWarnings Check the status of the Restore CR by using the Velero describe command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore describe <restore> Example output Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource Check the Restore CR logs by using the Velero logs command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore logs <restore> Example output time="2021-01-26T20:48:37Z" level=info msg="Attempting to restore migration-example: migration-example" logSource="pkg/restore/restore.go:1107" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time="2021-01-26T20:48:37Z" level=info msg="error restoring migration-example: the server could not find the requested resource" logSource="pkg/restore/restore.go:1170" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf The Restore CR log error message, the server could not find the requested resource , indicates the cause of the partially failed migration. 11.3.8. Using MTC custom resources for troubleshooting You can check the following Migration Toolkit for Containers (MTC) custom resources (CRs) to troubleshoot a failed migration: MigCluster MigStorage MigPlan BackupStorageLocation The BackupStorageLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 VolumeSnapshotLocation The VolumeSnapshotLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 MigMigration Backup MTC changes the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup CR contains an openshift.io/orig-reclaim-policy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Restore Procedure List the MigMigration CRs in the openshift-migration namespace: USD oc get migmigration -n openshift-migration Example output NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s Inspect the MigMigration CR: USD oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration The output is similar to the following examples. 
MigMigration example output name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none> Velero backup CR #2 example output that describes the PV data apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: "2019-08-29T01:03:15Z" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: "87313" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: "2019-08-29T01:02:36Z" errors: 0 expiration: "2019-09-28T01:02:35Z" phase: Completed startTimestamp: "2019-08-29T01:02:35Z" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0 Velero restore CR #2 example output that describes the Kubernetes resources apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: "2019-08-28T00:09:49Z" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: "82329" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes 
- events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: "" phase: Completed validationErrors: null warnings: 15 11.4. Common issues and concerns This section describes common issues and concerns that can cause issues during migration. 11.4.1. Direct volume migration does not complete If direct volume migration does not complete, the target cluster might not have the same node-selector annotations as the source cluster. Migration Toolkit for Containers (MTC) migrates namespaces with all annotations to preserve security context constraints and scheduling requirements. During direct volume migration, MTC creates Rsync transfer pods on the target cluster in the namespaces that were migrated from the source cluster. If a target cluster namespace does not have the same annotations as the source cluster namespace, the Rsync transfer pods cannot be scheduled. The Rsync pods remain in a Pending state. You can identify and fix this issue by performing the following procedure. Procedure Check the status of the MigMigration CR: USD oc describe migmigration <pod> -n openshift-migration The output includes the following status message: Example output Some or all transfer pods are not running for more than 10 mins on destination cluster On the source cluster, obtain the details of a migrated namespace: USD oc get namespace <namespace> -o yaml 1 1 Specify the migrated namespace. On the target cluster, edit the migrated namespace: USD oc edit namespace <namespace> Add the missing openshift.io/node-selector annotations to the migrated namespace as in the following example: apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "region=east" ... Run the migration plan again. 11.4.2. Error messages and resolutions This section describes common error messages you might encounter with the Migration Toolkit for Containers (MTC) and how to resolve their underlying causes. 11.4.2.1. CA certificate error displayed when accessing the MTC console for the first time If a CA certificate error message is displayed the first time you try to access the MTC console, the likely cause is the use of self-signed CA certificates in one of the clusters. To resolve this issue, navigate to the oauth-authorization-server URL displayed in the error message and accept the certificate. To resolve this issue permanently, add the certificate to the trust store of your web browser. If an Unauthorized message is displayed after you have accepted the certificate, navigate to the MTC console and refresh the web page. 11.4.2.2. OAuth timeout error in the MTC console If a connection has timed out message is displayed in the MTC console after you have accepted a self-signed certificate, the causes are likely to be the following: Interrupted network access to the OAuth server Interrupted network access to the OpenShift Container Platform console Proxy configuration that blocks access to the oauth-authorization-server URL. See MTC console inaccessible because of OAuth timeout error for details. To determine the cause of the timeout: Inspect the MTC console web page with a browser web inspector. Check the Migration UI pod log for errors. 11.4.2.3. 
Certificate signed by unknown authority error If you use a self-signed certificate to secure a cluster or a replication repository for the MTC, certificate verification might fail with the following error message: Certificate signed by unknown authority . You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository. Procedure Download a CA certificate from a remote endpoint and save it as a CA bundle file: USD echo -n | openssl s_client -connect <host_FQDN>:<port> \ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2 1 Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443 . 2 Specify the name of the CA bundle file. 11.4.2.4. Backup storage location errors in the Velero pod log If a Velero Backup custom resource contains a reference to a backup storage location (BSL) that does not exist, the Velero pod log might display the following error messages: USD oc logs <Velero_Pod> -n openshift-migration Example output level=error msg="Error checking repository for stale locks" error="error getting backup storage location: BackupStorageLocation.velero.io \"ts-dpa-1\" not found" error.file="/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259" You can ignore these error messages. A missing BSL cannot cause a migration to fail. 11.4.2.5. Pod volume backup timeout error in the Velero pod log If a migration fails because Restic times out, the following error is displayed in the Velero pod log. level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1 The default value of restic_timeout is one hour. You can increase this parameter for large migrations, keeping in mind that a higher value may delay the return of error messages. Procedure In the OpenShift Container Platform web console, navigate to Operators Installed Operators . Click Migration Toolkit for Containers Operator . In the MigrationController tab, click migration-controller . In the YAML tab, update the following parameter value: spec: restic_timeout: 1h 1 1 Valid units are h (hours), m (minutes), and s (seconds), for example, 3h30m15s . Click Save . 11.4.2.6. Restic verification errors in the MigMigration custom resource If data verification fails when migrating a persistent volume with the file system data copy method, the following error is displayed in the MigMigration CR. Example output status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: "True" type: ResticVerifyErrors 2 1 The error message identifies the Restore CR name. 2 ResticVerifyErrors is a general error warning type that includes verification errors. Note A data verification error does not cause the migration process to fail. You can check the Restore CR to identify the source of the data verification error. Procedure Log in to the target cluster. View the Restore CR: USD oc describe <registry-example-migration-rvwcm> -n openshift-migration The output identifies the persistent volume with PodVolumeRestore errors. 
Example output status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration View the PodVolumeRestore CR: USD oc describe <migration-example-rvwcm-98t49> The output identifies the Restic pod that logged the errors. Example output completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 ... resticPod: <restic-nr2v5> View the Restic pod log to locate the errors: USD oc logs -f <restic-nr2v5> 11.4.2.7. Restic permission error when migrating from NFS storage with root_squash enabled If you are migrating data from NFS storage and root_squash is enabled, Restic maps to nfsnobody and does not have permission to perform the migration. The following error is displayed in the Restic pod log. Example output backup=openshift-migration/<backup_id> controller=pod-volume-backup error="fork/exec /usr/bin/restic: permission denied" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280" error.function="github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup" logSource="pkg/controller/pod_volume_backup_controller.go:280" name=<backup_id> namespace=openshift-migration You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the MigrationController CR manifest. Procedure Create a supplemental group for Restic on the NFS storage. Set the setgid bit on the NFS directories so that group ownership is inherited. Add the restic_supplemental_groups parameter to the MigrationController CR manifest on the source and target clusters: spec: restic_supplemental_groups: <group_id> 1 1 Specify the supplemental group ID. Wait for the Restic pods to restart so that the changes are applied. 11.4.3. Applying the Skip SELinux relabel workaround with spc_t automatically on workloads running on OpenShift Container Platform When attempting to migrate a namespace with Migration Toolkit for Containers (MTC) and a substantial volume associated with it, the rsync-server may become frozen without any further information to troubleshoot the issue. 11.4.3.1. Diagnosing the need for the Skip SELinux relabel workaround Search for an error of Unable to attach or mount volumes for pod... timed out waiting for the condition in the kubelet logs from the node where the rsync-server for the Direct Volume Migration (DVM) runs. Example kubelet log kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. 
If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29 11.4.3.2. Resolving using the Skip SELinux relabel workaround To resolve this issue, set the migration_rsync_super_privileged parameter to true in both the source and destination MigClusters using the MigrationController custom resource (CR). Example MigrationController CR apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: "" cluster_name: host mig_namespace_limit: "10" mig_pod_limit: "100" mig_pv_limit: "100" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3 1 The value of the migration_rsync_super_privileged parameter indicates whether or not to run Rsync Pods as super privileged containers ( spc_t selinux context ). Valid settings are true or false . 11.5. Rolling back a migration You can roll back a migration by using the MTC web console or the CLI. You can also roll back a migration manually . 11.5.1. Rolling back a migration by using the MTC web console You can roll back a migration by using the Migration Toolkit for Containers (MTC) web console. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure In the MTC web console, click Migration plans . Click the Options menu beside a migration plan and select Rollback under Migration . Click Rollback and wait for rollback to complete. In the migration plan details, Rollback succeeded is displayed. Verify that rollback was successful in the OpenShift Container Platform web console of the source cluster: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. 
Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volume is correctly provisioned. 11.5.2. Rolling back a migration from the command line interface You can roll back a migration by creating a MigMigration custom resource (CR) from the command line interface. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure Create a MigMigration CR based on the following example: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: ... rollback: true ... migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF 1 Specify the name of the associated MigPlan CR. In the MTC web console, verify that the migrated project resources have been removed from the target cluster. Verify that the migrated project resources are present in the source cluster and that the application is running. 11.5.3. Rolling back a migration manually You can roll back a failed migration manually by deleting the stage pods and unquiescing the application. If you run the same migration plan successfully, the resources from the failed migration are deleted automatically. Note The following resources remain in the migrated namespaces after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. Procedure Delete the stage pods on all clusters: USD oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1 1 Namespaces specified in the MigPlan CR. Unquiesce the application on the source cluster by scaling the replicas to their premigration number: USD oc scale deployment <deployment> --replicas=<premigration_replicas> The migration.openshift.io/preQuiesceReplicas annotation in the Deployment CR displays the premigration number of replicas: apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "1" migration.openshift.io/preQuiesceReplicas: "1" Verify that the application pods are running on the source cluster: USD oc get pod -n <namespace> Additional resources Deleting Operators from a cluster using the web console | [
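As a convenience for the manual rollback procedure above, the scale-back step can be scripted across all Deployment objects in a migrated namespace. The following shell sketch is not part of the MTC documentation; it assumes that the Deployments still carry the migration.openshift.io/preQuiesceReplicas annotation shown above and that <namespace> is replaced with the name of the migrated namespace:

NAMESPACE=<namespace>
# Read the premigration replica count recorded by MTC on each Deployment
# and scale the Deployment back to that value.
for deploy in $(oc get deployment -n "$NAMESPACE" -o name); do
  replicas=$(oc get "$deploy" -n "$NAMESPACE" -o jsonpath='{.metadata.annotations.migration\.openshift\.io/preQuiesceReplicas}')
  # Skip Deployments that were never quiesced and therefore have no annotation.
  if [ -n "$replicas" ]; then
    oc scale "$deploy" -n "$NAMESPACE" --replicas="$replicas"
  fi
done

Afterwards, verify that the application pods are running with oc get pod -n <namespace>, as described in the step above.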
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11",
"oc -n openshift-migration get pods | grep log",
"oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 -- /usr/bin/gather_metrics_dump",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"oc get migmigration <migmigration> -o yaml",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: \"True\" type: SucceededWithWarnings",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>",
"Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>",
"time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"oc get migmigration -n openshift-migration",
"NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s",
"oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration",
"name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>",
"apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0",
"apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15",
"oc describe migmigration <pod> -n openshift-migration",
"Some or all transfer pods are not running for more than 10 mins on destination cluster",
"oc get namespace <namespace> -o yaml 1",
"oc edit namespace <namespace>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"",
"echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2",
"oc logs <Velero_Pod> -n openshift-migration",
"level=error msg=\"Error checking repository for stale locks\" error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1",
"spec: restic_timeout: 1h 1",
"status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2",
"oc describe <registry-example-migration-rvwcm> -n openshift-migration",
"status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration",
"oc describe <migration-example-rvwcm-98t49>",
"completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>",
"oc logs -f <restic-nr2v5>",
"backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration",
"spec: restic_supplemental_groups: <group_id> 1",
"kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] \"Unable to attach or mount volumes for pod; skipping pod\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] \"Error syncing pod, skipping\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: \"\" cluster_name: host mig_namespace_limit: \"10\" mig_pod_limit: \"100\" mig_pv_limit: \"100\" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF",
"oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1",
"oc scale deployment <deployment> --replicas=<premigration_replicas>",
"apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"",
"oc get pod -n <namespace>"
] | https://docs.redhat.com/en/documentation/migration_toolkit_for_containers/1.8/html/migration_toolkit_for_containers/troubleshooting-mtc |
Chapter 22. Network APIs | Chapter 22. Network APIs 22.1. Network APIs 22.1.1. Route [route.openshift.io/v1] Description A route allows developers to expose services through an HTTP(S) aware load balancing and proxy layer via a public DNS entry. The route may further specify TLS options and a certificate, or specify a public CNAME that the router should also accept for HTTP and HTTPS traffic. An administrator typically configures their router to be visible outside the cluster firewall, and may also add additional security, caching, or traffic controls on the service content. Routers usually talk directly to the service endpoints. Once a route is created, the host field may not be changed. Generally, routers use the oldest route with a given host when resolving conflicts. Routers are subject to additional customization and may support additional controls via the annotations field. Because administrators may configure multiple routers, the route status field is used to return information to clients about the names and states of the route under each router. If a client chooses a duplicate name, for instance, the route status conditions are used to indicate the route cannot be chosen. To enable HTTP/2 ALPN on a route it requires a custom (non-wildcard) certificate. This prevents connection coalescing by clients, notably web browsers. We do not support HTTP/2 ALPN on routes that use the default certificate because of the risk of connection re-use/coalescing. Routes that do not have their own custom certificate will not be HTTP/2 ALPN-enabled on either the frontend or the backend. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 22.2. Route [route.openshift.io/v1] Description A route allows developers to expose services through an HTTP(S) aware load balancing and proxy layer via a public DNS entry. The route may further specify TLS options and a certificate, or specify a public CNAME that the router should also accept for HTTP and HTTPS traffic. An administrator typically configures their router to be visible outside the cluster firewall, and may also add additional security, caching, or traffic controls on the service content. Routers usually talk directly to the service endpoints. Once a route is created, the host field may not be changed. Generally, routers use the oldest route with a given host when resolving conflicts. Routers are subject to additional customization and may support additional controls via the annotations field. Because administrators may configure multiple routers, the route status field is used to return information to clients about the names and states of the route under each router. If a client chooses a duplicate name, for instance, the route status conditions are used to indicate the route cannot be chosen. To enable HTTP/2 ALPN on a route it requires a custom (non-wildcard) certificate. This prevents connection coalescing by clients, notably web browsers. We do not support HTTP/2 ALPN on routes that use the default certificate because of the risk of connection re-use/coalescing. Routes that do not have their own custom certificate will not be HTTP/2 ALPN-enabled on either the frontend or the backend. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 22.2.1. 
Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the desired state of the route status object status is the current state of the route 22.2.1.1. .spec Description spec is the desired state of the route Type object Required to Property Type Description alternateBackends array alternateBackends allows up to 3 additional backends to be assigned to the route. Only the Service kind is allowed, and it will be defaulted to Service. Use the weight field in RouteTargetReference object to specify relative preference. alternateBackends[] object RouteTargetReference specifies the target that resolve into endpoints. Only the 'Service' kind is allowed. Use 'weight' field to emphasize one over others. host string host is an alias/DNS that points to the service. Optional. If not specified a route name will typically be automatically chosen. Must follow DNS952 subdomain conventions. path string path that the router watches for, to route traffic for to the service. Optional port object If specified, the port to be used by the router. Most routers will use all endpoints exposed by the service by default - set this value to instruct routers which port to use. subdomain string subdomain is a DNS subdomain that is requested within the ingress controller's domain (as a subdomain). If host is set this field is ignored. An ingress controller may choose to ignore this suggested name, in which case the controller will report the assigned name in the status.ingress array or refuse to admit the route. If this value is set and the server does not support this field host will be populated automatically. Otherwise host is left empty. The field may have multiple parts separated by a dot, but not all ingress controllers may honor the request. This field may not be changed after creation except by a user with the update routes/custom-host permission. Example: subdomain frontend automatically receives the router subdomain apps.mycluster.com to have a full hostname frontend.apps.mycluster.com . tls object The tls field provides the ability to configure certificates and termination for the route. to object to is an object the route should use as the primary backend. Only the Service kind is allowed, and it will be defaulted to Service. If the weight field (0-256 default 100) is set to zero, no traffic will be sent to this backend. wildcardPolicy string Wildcard policy if any for the route. Currently only 'Subdomain' or 'None' is allowed. 22.2.1.2. .spec.alternateBackends Description alternateBackends allows up to 3 additional backends to be assigned to the route. Only the Service kind is allowed, and it will be defaulted to Service. Use the weight field in RouteTargetReference object to specify relative preference. Type array 22.2.1.3. 
.spec.alternateBackends[] Description RouteTargetReference specifies the target that resolve into endpoints. Only the 'Service' kind is allowed. Use 'weight' field to emphasize one over others. Type object Required kind name Property Type Description kind string The kind of target that the route is referring to. Currently, only 'Service' is allowed name string name of the service/target that is being referred to. e.g. name of the service weight integer weight as an integer between 0 and 256, default 100, that specifies the target's relative weight against other target reference objects. 0 suppresses requests to this backend. 22.2.1.4. .spec.port Description If specified, the port to be used by the router. Most routers will use all endpoints exposed by the service by default - set this value to instruct routers which port to use. Type object Required targetPort Property Type Description targetPort integer-or-string 22.2.1.5. .spec.tls Description The tls field provides the ability to configure certificates and termination for the route. Type object Required termination Property Type Description caCertificate string caCertificate provides the cert authority certificate contents certificate string certificate provides certificate contents. This should be a single serving certificate, not a certificate chain. Do not include a CA certificate. destinationCACertificate string destinationCACertificate provides the contents of the ca certificate of the final destination. When using reencrypt termination this file should be provided in order to have routers use it for health checks on the secure connection. If this field is not specified, the router may provide its own destination CA and perform hostname validation using the short service name (service.namespace.svc), which allows infrastructure generated certificates to automatically verify. insecureEdgeTerminationPolicy string insecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80. * Allow - traffic is sent to the server on the insecure port (edge/reencrypt terminations only) (default). * None - no traffic is allowed on the insecure port. * Redirect - clients are redirected to the secure port. key string key provides key file contents termination string termination indicates termination type. * edge - TLS termination is done by the router and http is used to communicate with the backend (default) * passthrough - Traffic is sent straight to the destination without the router providing TLS termination * reencrypt - TLS termination is done by the router and https is used to communicate with the backend 22.2.1.6. .spec.to Description to is an object the route should use as the primary backend. Only the Service kind is allowed, and it will be defaulted to Service. If the weight field (0-256 default 100) is set to zero, no traffic will be sent to this backend. Type object Required kind name Property Type Description kind string The kind of target that the route is referring to. Currently, only 'Service' is allowed name string name of the service/target that is being referred to. e.g. name of the service weight integer weight as an integer between 0 and 256, default 100, that specifies the target's relative weight against other target reference objects. 0 suppresses requests to this backend. 22.2.1.7. 
.status Description status is the current state of the route Type object Property Type Description ingress array ingress describes the places where the route may be exposed. The list of ingress points may contain duplicate Host or RouterName values. Routes are considered live once they are Ready ingress[] object RouteIngress holds information about the places where a route is exposed. 22.2.1.8. .status.ingress Description ingress describes the places where the route may be exposed. The list of ingress points may contain duplicate Host or RouterName values. Routes are considered live once they are Ready Type array 22.2.1.9. .status.ingress[] Description RouteIngress holds information about the places where a route is exposed. Type object Property Type Description conditions array Conditions is the state of the route, may be empty. conditions[] object RouteIngressCondition contains details for the current condition of this route on a particular router. host string Host is the host string under which the route is exposed; this value is required routerCanonicalHostname string CanonicalHostname is the external host name for the router that can be used as a CNAME for the host requested for this route. This value is optional and may not be set in all cases. routerName string Name is a name chosen by the router to identify itself; this value is required wildcardPolicy string Wildcard policy is the wildcard policy that was allowed where this route is exposed. 22.2.1.10. .status.ingress[].conditions Description Conditions is the state of the route, may be empty. Type array 22.2.1.11. .status.ingress[].conditions[] Description RouteIngressCondition contains details for the current condition of this route on a particular router. Type object Required status type Property Type Description lastTransitionTime string RFC 3339 date and time when this condition last transitioned message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition, and is usually a machine and human readable constant status string Status is the status of the condition. Can be True, False, Unknown. type string Type is the type of the condition. Currently only Admitted. 22.2.2. API endpoints The following API endpoints are available: /apis/route.openshift.io/v1/routes GET : list objects of kind Route /apis/route.openshift.io/v1/namespaces/{namespace}/routes DELETE : delete collection of Route GET : list objects of kind Route POST : create a Route /apis/route.openshift.io/v1/namespaces/{namespace}/routes/{name} DELETE : delete a Route GET : read the specified Route PATCH : partially update the specified Route PUT : replace the specified Route /apis/route.openshift.io/v1/namespaces/{namespace}/routes/{name}/status GET : read status of the specified Route PATCH : partially update status of the specified Route PUT : replace status of the specified Route 22.2.2.1. /apis/route.openshift.io/v1/routes Table 22.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. 
Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind Route Table 22.2. HTTP responses HTTP code Reponse body 200 - OK RouteList schema 401 - Unauthorized Empty 22.2.2.2. /apis/route.openshift.io/v1/namespaces/{namespace}/routes Table 22.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 22.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Route Table 22.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". 
This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. 
- `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 22.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Route Table 22.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 22.8. HTTP responses HTTP code Reponse body 200 - OK RouteList schema 401 - Unauthorized Empty HTTP method POST Description create a Route Table 22.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.10. Body parameters Parameter Type Description body Route schema Table 22.11. HTTP responses HTTP code Reponse body 200 - OK Route schema 201 - Created Route schema 202 - Accepted Route schema 401 - Unauthorized Empty 22.2.2.3. /apis/route.openshift.io/v1/namespaces/{namespace}/routes/{name} Table 22.12. Global path parameters Parameter Type Description name string name of the Route namespace string object name and auth scope, such as for teams and projects Table 22.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Route Table 22.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 22.15. Body parameters Parameter Type Description body DeleteOptions schema Table 22.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Route Table 22.17. 
Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 22.18. HTTP responses HTTP code Reponse body 200 - OK Route schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Route Table 22.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 22.20. Body parameters Parameter Type Description body Patch schema Table 22.21. HTTP responses HTTP code Reponse body 200 - OK Route schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Route Table 22.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.23. Body parameters Parameter Type Description body Route schema Table 22.24. HTTP responses HTTP code Reponse body 200 - OK Route schema 201 - Created Route schema 401 - Unauthorized Empty 22.2.2.4. /apis/route.openshift.io/v1/namespaces/{namespace}/routes/{name}/status Table 22.25. Global path parameters Parameter Type Description name string name of the Route namespace string object name and auth scope, such as for teams and projects Table 22.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Route Table 22.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 22.28. HTTP responses HTTP code Reponse body 200 - OK Route schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Route Table 22.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. 
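The body parameter and response tables for this operation follow below. As a brief, hedged illustration of the status subresource that these endpoints expose, the following oc commands check whether a router has admitted a route; the namespace demo and route name my-route are placeholders, and the second command assumes the jq tool is installed.
# Print each router name from .status.ingress[] together with the status of its Admitted condition.
$ oc -n demo get route my-route -o jsonpath='{range .status.ingress[*]}{.routerName}{"\t"}{.conditions[?(@.type=="Admitted")].status}{"\n"}{end}'
# The same information is available from the status endpoint documented in this section.
$ oc get --raw /apis/route.openshift.io/v1/namespaces/demo/routes/my-route/status | jq '.status.ingress'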
Table 22.30. Body parameters Parameter Type Description body Patch schema Table 22.31. HTTP responses HTTP code Response body 200 - OK Route schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Route Table 22.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.33. Body parameters Parameter Type Description body Route schema Table 22.34. HTTP responses HTTP code Response body 200 - OK Route schema 201 - Created Route schema 401 - Unauthorized Empty
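As a hedged, end-to-end sketch that ties the schema fields above to the endpoints in this section, the following shell commands create a Route with weighted backends and edge TLS termination through the REST API and then read it back. The namespace demo, route name hello, and the two service names are illustrative placeholders, not values taken from this reference.
# Reuse the token and API server URL from the current oc session.
$ TOKEN=$(oc whoami -t)
$ APISERVER=$(oc whoami --show-server)
# POST /apis/route.openshift.io/v1/namespaces/{namespace}/routes: send 80% of traffic to hello-svc,
# 20% to hello-canary, terminate TLS at the router (edge), and redirect insecure requests.
$ curl -sk -X POST "${APISERVER}/apis/route.openshift.io/v1/namespaces/demo/routes" \
    -H "Authorization: Bearer ${TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{
          "apiVersion": "route.openshift.io/v1",
          "kind": "Route",
          "metadata": {"name": "hello"},
          "spec": {
            "to": {"kind": "Service", "name": "hello-svc", "weight": 80},
            "alternateBackends": [{"kind": "Service", "name": "hello-canary", "weight": 20}],
            "port": {"targetPort": 8080},
            "tls": {"termination": "edge", "insecureEdgeTerminationPolicy": "Redirect"}
          }
        }'
# GET /apis/route.openshift.io/v1/namespaces/{namespace}/routes/{name}: read the created Route.
$ curl -sk -H "Authorization: Bearer ${TOKEN}" "${APISERVER}/apis/route.openshift.io/v1/namespaces/demo/routes/hello"
The -k flag skips TLS verification only to keep the sketch short; in practice, pass the cluster CA bundle with --cacert instead.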
Release notes builds for Red Hat OpenShift 1.3 Highlights of what is new and what has changed with this OpenShift Builds release Red Hat OpenShift Documentation Team
Data Grid Spring Boot Starter Red Hat Data Grid 8.4 Use Data Grid with your Spring Boot project Red Hat Customer Content Services
Chapter 11. Installing a cluster on Azure using ARM templates In OpenShift Container Platform version 4.13, you can install a cluster on Microsoft Azure by using infrastructure that you provide. Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 11.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster. You downloaded the Azure CLI and installed it on your computer. See Install the Azure CLI in the Azure documentation. The following documentation was last tested using version 2.49.0 of the Azure CLI. Azure CLI commands might perform differently based on the version you use. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . Note Be sure to also review this site list if you are configuring a proxy. 11.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 11.3. Configuring your Azure project Before you can install OpenShift Container Platform, you must configure an Azure project to host it. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 11.3.1.
Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 40 20 per region A default cluster requires 40 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap machine uses Standard_D4s_v3 machines, which use 4 vCPUs, the control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the worker machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 40 vCPUs. The bootstrap node VM, which uses 4 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. OS Disk 7 Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. 
The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. Additional resources Optimizing storage 11.3.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. You can view Azure's DNS solution by visiting this example for creating DNS zones . 11.3.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click : Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click : Review + create and then click Create . 11.3.4. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. 
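One hedged sketch of such an approval mechanism uses standard oc commands; review each request against your own policy before approving, and note that the rest of this section explains why serving certificate requests need additional verification.
# List pending certificate signing requests.
$ oc get csr
# Approve a single request after you verify which node issued it.
$ oc adm certificate approve <csr_name>
# Or approve every request that is still pending.
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve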
The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 11.3.5. Required Azure roles OpenShift Container Platform needs a service principal so it can manage Microsoft Azure resources. Before you can create a service principal, review the following information: Your Azure account subscription must have the following roles: User Access Administrator Contributor Your Azure Active Directory (AD) must have the following permission: "microsoft.directory/servicePrincipals/createAsOwner" To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 11.3.6. Required Azure permissions for user-provisioned infrastructure When you assign Contributor and User Access Administrator roles to the service principal, you automatically grant all the required permissions. If your organization's security policies require a more restrictive set of permissions, you can create a custom role with the necessary permissions. The following permissions are required for creating an OpenShift Container Platform cluster on Microsoft Azure. Example 11.1. Required permissions for creating authorization resources Microsoft.Authorization/policies/audit/action Microsoft.Authorization/policies/auditIfNotExists/action Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/write Example 11.2. Required permissions for creating compute resources Microsoft.Compute/images/read Microsoft.Compute/images/write Microsoft.Compute/images/delete Microsoft.Compute/availabilitySets/read Microsoft.Compute/disks/beginGetAccess/action Microsoft.Compute/disks/delete Microsoft.Compute/disks/read Microsoft.Compute/disks/write Microsoft.Compute/galleries/images/read Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/galleries/images/versions/write Microsoft.Compute/galleries/images/write Microsoft.Compute/galleries/read Microsoft.Compute/galleries/write Microsoft.Compute/snapshots/read Microsoft.Compute/snapshots/write Microsoft.Compute/snapshots/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/powerOff/action Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/deallocate/action Example 11.3. Required permissions for creating identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Example 11.4. 
Required permissions for creating network resources Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/backendAddressPools/read Microsoft.Network/loadBalancers/backendAddressPools/write Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/securityRules/delete Microsoft.Network/networkSecurityGroups/securityRules/read Microsoft.Network/networkSecurityGroups/securityRules/write Microsoft.Network/networkSecurityGroups/write Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/SOA/read Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/write Microsoft.Network/privateDnsZones/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/virtualNetworks/join/action Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Network/virtualNetworks/write Example 11.5. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/InProgress/action Microsoft.Resourcehealth/healthevent/Pending/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 11.6. Required permissions for creating a resource group Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourcegroups/write Example 11.7. Required permissions for creating resource tags Microsoft.Resources/tags/write Example 11.8. Required permissions for creating storage resources Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Example 11.9. Required permissions for creating deployments Microsoft.Resources/deployments/read Microsoft.Resources/deployments/write Microsoft.Resources/deployments/validate/action Microsoft.Resources/deployments/operationstatuses/read Example 11.10. Optional permissions for creating compute resources Microsoft.Compute/availabilitySets/delete Microsoft.Compute/availabilitySets/write Example 11.11. 
Optional permissions for creating marketplace virtual machine resources Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/read Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/write Example 11.12. Optional permissions for enabling user-managed encryption Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/diskEncryptionSets/write Microsoft.Compute/diskEncryptionSets/delete Microsoft.KeyVault/vaults/read Microsoft.KeyVault/vaults/write Microsoft.KeyVault/vaults/delete Microsoft.KeyVault/vaults/deploy/action Microsoft.KeyVault/vaults/keys/read Microsoft.KeyVault/vaults/keys/write Microsoft.Features/providers/features/register/action The following permissions are required for deleting an OpenShift Container Platform cluster on Microsoft Azure. Example 11.13. Required permissions for deleting authorization resources Microsoft.Authorization/roleAssignments/delete Example 11.14. Required permissions for deleting compute resources Microsoft.Compute/disks/delete Microsoft.Compute/galleries/delete Microsoft.Compute/galleries/images/delete Microsoft.Compute/galleries/images/versions/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/images/delete Example 11.15. Required permissions for deleting identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/delete Example 11.16. Required permissions for deleting network resources Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Microsoft.Network/loadBalancers/delete Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkSecurityGroups/delete Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/delete Microsoft.Network/privateDnsZones/virtualNetworkLinks/delete Microsoft.Network/publicIPAddresses/delete Microsoft.Network/virtualNetworks/delete Example 11.17. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 11.18. Required permissions for deleting a resource group Microsoft.Resources/subscriptions/resourcegroups/delete Example 11.19. Required permissions for deleting storage resources Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Note To install OpenShift Container Platform on Azure, you must scope the permissions related to resource group creation to your subscription. After the resource group is created, you can scope the rest of the permissions to the created resource group. If the public DNS zone is present in a different resource group, then the network DNS zone related permissions must always be applied to your subscription. You can scope all the permissions to your subscription when deleting an OpenShift Container Platform cluster. 11.3.7. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. 
If you want to use a custom role, you have created a custom role with the required permissions listed in the Required Azure permissions for user-provisioned infrastructure section. Procedure Log in to the Azure CLI: $ az login If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: $ az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "you@example.com", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: $ az account show Example output { "environmentName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "you@example.com", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: $ az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: $ az account show Example output { "environmentName": "AzureCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "you@example.com", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: $ az ad sp create-for-rbac --role <role_name> \ 1 --name <service_principal> \ 2 --scopes /subscriptions/<subscription_id> 3 1 Defines the role name. You can use the Contributor role, or you can specify a custom role which contains the necessary permissions. 2 Defines the service principal name. 3 Specifies the subscription ID. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. If you applied the Contributor role to your service principal, assign the User Access Administrator role by running the following command: $ az role assignment create --role "User Access Administrator" \ --assignee-object-id $(az ad sp show --id <appId> --query id -o tsv) 1 1 Specify the appId parameter value for your service principal. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 11.3.8. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription.
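As a minimal sketch, assuming you are already logged in with az login, you can query the locations that are visible to your subscription and compare them with the supported region lists that follow:
# List the regions your subscription can deploy to.
$ az account list-locations --query '[].{Region:name, DisplayName:displayName}' --output table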
Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) israelcentral (Israel Central) italynorth (Italy North) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) mexicocentral (Mexico Central) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) polandcentral (Poland Central) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) spaincentral (Spain Central) swedencentral (Sweden Central) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) westus3 (West US 3) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 11.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 11.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 11.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 11.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 11.2. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 11.4.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 11.20. 
Machine types based on 64-bit x86 architecture standardBasv2Family standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSHighMemoryv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSHighMemoryv3Family standardMIDSMediumMemoryv2Family standardMISHighMemoryv3Family standardMISMediumMemoryv2Family standardMSFamily standardMSHighMemoryv3Family standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family StandardNGADSV620v1Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 11.4.4. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 11.21. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family StandardDpdsv6Family StandardDpldsv6Famil StandardDplsv6Family StandardDpsv6Family StandardEpdsv6Family StandardEpsv6Family 11.5. Selecting an Azure Marketplace image If you are deploying an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker nodes. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. 
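If you are unsure whether the instance type you plan to use is Hyper-V generation version 2 compatible, and therefore whether you need the rh-ocp-worker or the rh-ocp-worker-gen1 SKU, you can query the SKU capabilities with the Azure CLI. The following is a minimal sketch; the centralus region and the Standard_D4s_v3 size are placeholder values, so substitute your own:
USD az vm list-skus --location centralus --size Standard_D4s_v3 --resource-type virtualMachines --query "[0].capabilities" --output json
In the returned capabilities list, the HyperVGenerations entry shows the supported generations (for example, V1,V2 ), and the PremiumIO entry should report True , which the minimum resource requirements also call for.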
Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 Note Regardless of the version of OpenShift Container Platform that you install, the correct version of the Azure Marketplace image to use is 4.8. If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer. If you use the Azure Resource Manager (ARM) template to deploy your worker nodes: Update storageProfile.imageReference by deleting the id parameter and adding the offer , publisher , sku , and version parameters by using the values from your offer. Specify a plan for the virtual machines (VMs). Example 06_workers.json ARM template with an updated storageProfile.imageReference object and a specified plan ... "plan" : { "name": "rh-ocp-worker", "product": "rh-ocp-worker", "publisher": "redhat" }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { ... "storageProfile": { "imageReference": { "offer": "rh-ocp-worker", "publisher": "redhat", "sku": "rh-ocp-worker", "version": "4.8.2021122100" } ... } ... } 11.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. 
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both the installation program and the generated files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 11.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
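Before you add a key, you can check whether the agent already holds an identity for it; this quick check is safe to run at any point:
USD ssh-add -l
The command lists the fingerprints of any loaded identities, prints The agent has no identities. when nothing is loaded yet, and typically reports a connection error if no agent is reachable, in which case continue with the next step to start one.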
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 11.8. Creating the installation files for Azure To install OpenShift Container Platform on Microsoft Azure using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 11.8.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... 
INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 11.8.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. 
This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on Azure". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 11.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 11.8.4. Exporting common variables for ARM templates You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates used to assist in completing a user-provided infrastructure install on Microsoft Azure. Note Specific ARM templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. 
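The procedure that follows shows the variables to export. If you kept a backup of the install-config.yaml file before it was consumed, you can populate the variables from that backup instead of copying the values by hand. The following is a minimal sketch, and it assumes that the mikefarah yq v4 CLI is installed and that the backup is named install-config.yaml.bak ; neither assumption is part of the documented procedure:
USD export CLUSTER_NAME="USD(yq '.metadata.name' install-config.yaml.bak)"
USD export AZURE_REGION="USD(yq '.platform.azure.region' install-config.yaml.bak)"
USD export SSH_KEY="USD(yq '.sshKey' install-config.yaml.bak)"
USD export BASE_DOMAIN="USD(yq '.baseDomain' install-config.yaml.bak)"
USD export BASE_DOMAIN_RESOURCE_GROUP="USD(yq '.platform.azure.baseDomainResourceGroupName' install-config.yaml.bak)"
The variable names intentionally match the ones used in the procedure, so you can verify each value with echo before continuing.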
Procedure Export common variables found in the install-config.yaml to be used by the provided ARM templates: USD export CLUSTER_NAME=<cluster_name> 1 USD export AZURE_REGION=<azure_region> 2 USD export SSH_KEY=<ssh_key> 3 USD export BASE_DOMAIN=<base_domain> 4 USD export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5 1 The value of the .metadata.name attribute from the install-config.yaml file. 2 The region to deploy the cluster into, for example centralus . This is the value of the .platform.azure.region attribute from the install-config.yaml file. 3 The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file. 4 The base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file. 5 The resource group where the public DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file. For example: USD export CLUSTER_NAME=test-cluster USD export AZURE_REGION=centralus USD export SSH_KEY="ssh-rsa xxx/xxx/xxx= [email protected]" USD export BASE_DOMAIN=example.com USD export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 11.8.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. 
Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates: Export the infrastructure ID by using the following command: USD export INFRA_ID=<infra_id> 1 1 The OpenShift Container Platform cluster has been assigned an identifier ( INFRA_ID ) in the form of <cluster_name>-<random_string> . This will be used as the base name for most resources created using the provided ARM templates. This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file. Export the resource group by using the following command: USD export RESOURCE_GROUP=<resource_group> 1 1 All resources created in this Azure deployment exist as part of a resource group . The resource group name is also based on the INFRA_ID , in the form of <cluster_name>-<random_string>-rg . This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory.
The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 11.9. Creating the Azure resource group You must create a Microsoft Azure resource group and an identity for that resource group. These are both used during the installation of your OpenShift Container Platform cluster on Azure. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the resource group in a supported Azure region: USD az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION} Create an Azure identity for the resource group: USD az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity This is used to grant the required access to Operators in your cluster. For example, this allows the Ingress Operator to create a public IP and its load balancer. You must assign the Azure identity to a role. Grant the Contributor role to the Azure identity: Export the following variables required by the Azure role assignment: USD export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv` USD export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv` Assign the Contributor role to the identity: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role 'Contributor' --scope "USD{RESOURCE_GROUP_ID}" Note If you want to assign a custom role with all the required permissions to the identity, run the following command: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role <custom_role> \ 1 --scope "USD{RESOURCE_GROUP_ID}" 1 Specifies the custom role name. 11.10. Uploading the RHCOS cluster image and bootstrap Ignition config file The Azure client does not support deployments based on files existing locally. You must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create an Azure storage account to store the VHD cluster image: USD az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS Warning The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation. Export the storage account key as an environment variable: USD export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query "[0].value" -o tsv` Export the URL of the RHCOS VHD to an environment variable: USD export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.<architecture>."rhel-coreos-extensions"."azure-disk".url'` Important The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. 
Create the storage container for the VHD: USD az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} Copy the local VHD to a blob: USD az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob "rhcos.vhd" --destination-container vhd --source-uri "USD{VHD_URL}" Create a blob storage container and upload the generated bootstrap.ign file: USD az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign" 11.11. Example for creating DNS zones DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario. For this example, Azure's DNS solution is used, so you will create a new public DNS zone for external (internet) visibility and a private DNS zone for internal cluster resolution. Note The public DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the public DNS zone; be sure the installation config you generated earlier reflects that scenario. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the new public DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable: USD az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can skip this step if you are using a public DNS zone that already exists. Create the private DNS zone in the same resource group as the rest of this deployment: USD az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can learn more about configuring a public DNS zone in Azure by visiting that section. 11.12. Creating a VNet in Azure You must create a virtual network (VNet) in Microsoft Azure for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster's installation directory. This template describes the VNet that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/01_vnet.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Link the VNet template to the private DNS zone: USD az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v "USD{INFRA_ID}-vnet" -e false 11.12.1. 
ARM template for the VNet You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster: Example 11.22. 01_vnet.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "addressPrefix" : "10.0.0.0/16", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetPrefix" : "10.0.0.0/24", "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]", "nodeSubnetPrefix" : "10.0.1.0/24", "clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/virtualNetworks", "name" : "[variables('virtualNetworkName')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]" ], "properties" : { "addressSpace" : { "addressPrefixes" : [ "[variables('addressPrefix')]" ] }, "subnets" : [ { "name" : "[variables('masterSubnetName')]", "properties" : { "addressPrefix" : "[variables('masterSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } }, { "name" : "[variables('nodeSubnetName')]", "properties" : { "addressPrefix" : "[variables('nodeSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } } ] } }, { "type" : "Microsoft.Network/networkSecurityGroups", "name" : "[variables('clusterNsgName')]", "apiVersion" : "2018-10-01", "location" : "[variables('location')]", "properties" : { "securityRules" : [ { "name" : "apiserver_in", "properties" : { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "6443", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 101, "direction" : "Inbound" } } ] } } ] } 11.13. Deploying the RHCOS cluster image for the Azure infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure for your OpenShift Container Platform nodes. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container. Store the bootstrap Ignition config file in an Azure storage container. Procedure Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster's installation directory. This template describes the image storage that your cluster requires. 
Export the RHCOS VHD blob URL as a variable: USD export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv` Deploy the cluster image: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/02_storage.json" \ --parameters vhdBlobURL="USD{VHD_BLOB_URL}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters storageAccount="USD{CLUSTER_NAME}sa" \ 3 --parameters architecture="<architecture>" 4 1 The blob URL of the RHCOS VHD to be used to create master and worker machines. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of your Azure storage account. 4 Specify the system architecture. Valid values are x64 (default) or Arm64 . 11.13.1. ARM template for image storage You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster: Example 11.23. 02_storage.json ARM template { "USDschema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "architecture": { "type": "string", "metadata": { "description": "The architecture of the Virtual Machines" }, "defaultValue": "x64", "allowedValues": [ "Arm64", "x64" ] }, "baseName": { "type": "string", "minLength": 1, "metadata": { "description": "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "storageAccount": { "type": "string", "metadata": { "description": "The Storage Account name" } }, "vhdBlobURL": { "type": "string", "metadata": { "description": "URL pointing to the blob where the VHD to be used to create master and worker machines is located" } } }, "variables": { "location": "[resourceGroup().location]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName": "[parameters('baseName')]", "imageNameGen2": "[concat(parameters('baseName'), '-gen2')]", "imageRelease": "1.0.0" }, "resources": [ { "apiVersion": "2021-10-01", "type": "Microsoft.Compute/galleries", "name": "[variables('galleryName')]", "location": "[variables('location')]", "resources": [ { "apiVersion": "2021-10-01", "type": "images", "name": "[variables('imageName')]", "location": "[variables('location')]", "dependsOn": [ "[variables('galleryName')]" ], "properties": { "architecture": "[parameters('architecture')]", "hyperVGeneration": "V1", "identifier": { "offer": "rhcos", "publisher": "RedHat", "sku": "basic" }, "osState": "Generalized", "osType": "Linux" }, "resources": [ { "apiVersion": "2021-10-01", "type": "versions", "name": "[variables('imageRelease')]", "location": "[variables('location')]", "dependsOn": [ "[variables('imageName')]" ], "properties": { "publishingProfile": { "storageAccountType": "Standard_LRS", "targetRegions": [ { "name": "[variables('location')]", "regionalReplicaCount": "1" } ] }, "storageProfile": { "osDiskImage": { "source": { "id": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]", "uri": "[parameters('vhdBlobURL')]" } } } } } ] }, { "apiVersion": "2021-10-01", "type": "images", "name": "[variables('imageNameGen2')]", "location": "[variables('location')]", "dependsOn": [ "[variables('galleryName')]" ], "properties": { "architecture": "[parameters('architecture')]", "hyperVGeneration": "V2", "identifier": { "offer": "rhcos-gen2", "publisher": "RedHat-gen2", "sku": "gen2" }, 
"osState": "Generalized", "osType": "Linux" }, "resources": [ { "apiVersion": "2021-10-01", "type": "versions", "name": "[variables('imageRelease')]", "location": "[variables('location')]", "dependsOn": [ "[variables('imageNameGen2')]" ], "properties": { "publishingProfile": { "storageAccountType": "Standard_LRS", "targetRegions": [ { "name": "[variables('location')]", "regionalReplicaCount": "1" } ] }, "storageProfile": { "osDiskImage": { "source": { "id": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]", "uri": "[parameters('vhdBlobURL')]" } } } } } ] } ] } ] } 11.14. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 11.14.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 11.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 11.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 11.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 11.15. Creating networking and load balancing components in Azure You must configure networking and load balancing in Microsoft Azure for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Procedure Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster's installation directory. This template describes the networking and load balancing objects that your cluster requires. 
Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/03_infra.json" \ --parameters privateDNSZoneName="USD{CLUSTER_NAME}.USD{BASE_DOMAIN}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The name of the private DNS zone. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Create an api DNS record in the public zone for the API public load balancer. The USD{BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the public DNS zone exists. Export the following variable: USD export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query "[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv` Create the api DNS record in a new public zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60 If you are adding the cluster to an existing public zone, you can create the api DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60 11.15.1. ARM template for the network and load balancers You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster: Example 11.24. 03_infra.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "privateDNSZoneName" : { "type" : "string", "metadata" : { "description" : "Name of the private DNS zone" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterPublicIpAddressName" : "[concat(parameters('baseName'), '-master-pip')]", "masterPublicIpAddressID" : "[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "masterLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "internalLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]", "skuName": "Standard" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('masterPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : 
"[variables('masterPublicIpAddressName')]" } } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('masterLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "dependsOn" : [ "[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]" ], "properties" : { "frontendIPConfigurations" : [ { "name" : "public-lb-ip-v4", "properties" : { "publicIPAddress" : { "id" : "[variables('masterPublicIpAddressID')]" } } } ], "backendAddressPools" : [ { "name" : "[variables('masterLoadBalancerName')]" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" :"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip-v4')]" }, "backendAddressPool" : { "id" : "[concat(variables('masterLoadBalancerID'), '/backendAddressPools/', variables('masterLoadBalancerName'))]" }, "protocol" : "Tcp", "loadDistribution" : "Default", "idleTimeoutInMinutes" : 30, "frontendPort" : 6443, "backendPort" : 6443, "probe" : { "id" : "[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('internalLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "frontendIPConfigurations" : [ { "name" : "internal-lb-ip", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "privateIPAddressVersion" : "IPv4" } } ], "backendAddressPools" : [ { "name" : "internal-lb-backend" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 6443, "backendPort" : 6443, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]" } } }, { "name" : "sint", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 22623, "backendPort" : 22623, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } }, { "name" : "sint-probe", "properties" : { "protocol" : "Https", "port" : 22623, "requestPath": "/healthz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), 
'/api')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api-int')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } } ] } 11.16. Creating the bootstrap machine in Azure You must create the bootstrap machine in Microsoft Azure to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Procedure Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster's installation directory. This template describes the bootstrap machine that your cluster requires. Export the bootstrap URL variable: USD bootstrap_url_expiry=`date -u -d "10 hours" '+%Y-%m-%dT%H:%MZ'` USD export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv` Export the bootstrap ignition variable: USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/04_bootstrap.json" \ --parameters bootstrapIgnition="USD{BOOTSTRAP_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The bootstrap Ignition content for the bootstrap cluster. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 11.16.1. ARM template for the bootstrap machine You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 11.25. 
04_bootstrap.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "bootstrapIgnition" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Bootstrap ignition content for the bootstrap cluster" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "bootstrapVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "metadata" : { "description" : "The size of the Bootstrap Virtual Machine" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "vmName" : "[concat(parameters('baseName'), '-bootstrap')]", "nicName" : "[concat(variables('vmName'), '-nic')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "clusterNsgName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]", "sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('sshPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "Standard" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('sshPublicIpAddressName')]" } } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[variables('nicName')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" ], "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" }, "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), 
'/backendAddressPools/', variables('masterLoadBalancerName'))]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmName')]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('bootstrapVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmName')]", "adminUsername" : "core", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('bootstrapIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmName'),'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : 100 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" } ] } } }, { "apiVersion" : "2018-06-01", "type": "Microsoft.Network/networkSecurityGroups/securityRules", "name" : "[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]" ], "properties": { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "22", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 100, "direction" : "Inbound" } } ] } 11.17. Creating the control plane machines in Azure You must create the control plane machines in Microsoft Azure for your cluster to use. One way to create these machines is to modify the provided Azure Resource Manager (ARM) template. Note By default, Microsoft Azure places control plane machines and compute machines in a pre-set availability zone. You can manually set an availability zone for a compute node or control plane node. To do this, modify a vendor's Azure Resource Manager (ARM) template by specifying each of your availability zones in the zones parameter of the virtual machine resource. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster's installation directory. This template describes the control plane machines that your cluster requires. 
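Optionally, before you export the Ignition variable in the next step, you can confirm that the saved template parses as valid JSON and that the master.ign file from the Ignition config generation step is present. The following check is a minimal sketch; it assumes the jq utility, which is also used earlier in this procedure, is available on the host where you run the installation commands: USD jq empty "<installation_directory>/05_masters.json" && test -s "<installation_directory>/master.ign" && echo "05_masters.json and master.ign look valid" If either check fails, recopy the template or regenerate the Ignition config files before you continue.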
Export the following variable needed by the control plane machine deployment: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/05_masters.json" \ --parameters masterIgnition="USD{MASTER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The Ignition content for the control plane nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 11.17.1. ARM template for control plane machines You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 11.26. 05_masters.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "masterIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the master nodes" } }, "numberOfMasters" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift masters to deploy" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "privateDNSZoneName" : { "type" : "string", "defaultValue" : "", "metadata" : { "description" : "unused" } }, "masterVMSize" : { "type" : "string", "defaultValue" : "Standard_D8s_v3", "metadata" : { "description" : "The size of the Master Virtual Machines" } }, "diskSizeGB" : { "type" : "int", "defaultValue" : 1024, "metadata" : { "description" : "Size of the Master VM OS disk, in GB" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfMasters')]", "input" : "[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]" } ] }, "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "copy" : { "name" : "nicCopy", "count" : "[length(variables('vmNames'))]" 
}, "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', variables('masterLoadBalancerName'))]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "copy" : { "name" : "vmCopy", "count" : "[length(variables('vmNames'))]" }, "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('masterVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "core", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('masterIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()], '_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "caching": "ReadOnly", "writeAcceleratorEnabled": false, "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : "[parameters('diskSizeGB')]" } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": false } } ] } } } ] } 11.18. Wait for bootstrap completion and remove bootstrap resources in Azure After you create all of the required infrastructure in Microsoft Azure, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. 
Delete the bootstrap resources: USD az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in USD az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes USD az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes USD az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait USD az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign USD az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip Note If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. 11.19. Creating additional worker machines in Azure You can create worker machines in Microsoft Azure for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. Note If you are installing a three-node cluster, skip this step. A three-node cluster consists of three control plane machines, which also act as compute machines. In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. Additional instances can be launched by including additional resources of type 06_workers.json in the file. Note By default, Microsoft Azure places control plane machines and compute machines in a pre-set availability zone. You can manually set an availability zone for a compute node or control plane node. To do this, modify a vendor's ARM template by specifying each of your availability zones in the zones parameter of the virtual machine resource. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster's installation directory. This template describes the worker machines that your cluster requires. Export the following variable needed by the worker machine deployment: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/06_workers.json" \ --parameters workerIgnition="USD{WORKER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The Ignition content for the worker nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 11.19.1. 
ARM template for worker machines You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 11.27. 06_workers.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "workerIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the worker nodes" } }, "numberOfNodes" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift compute nodes to deploy" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "nodeVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "metadata" : { "description" : "The size of the each Node Virtual Machine" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "nodeSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]", "nodeSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]", "infraLoadBalancerName" : "[parameters('baseName')]", "sshKeyPath" : "/home/capi/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfNodes')]", "input" : "[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]" } ] }, "resources" : [ { "apiVersion" : "2019-05-01", "name" : "[concat('node', copyIndex())]", "type" : "Microsoft.Resources/deployments", "copy" : { "name" : "nodeCopy", "count" : "[length(variables('vmNames'))]" }, "properties" : { "mode" : "Incremental", "template" : { "USDschema" : "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('nodeSubnetRef')]" } } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "tags" : { "kubernetes.io-cluster-ffranzupi": "owned" }, "identity" : { "type" : "userAssigned", 
"userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('nodeVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "capi", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('workerIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()],'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB": 128 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": true } } ] } } } ] } } } ] } 11.20. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. 
To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 11.21. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 11.22. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. 
Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 11.23. Adding the Ingress DNS records If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites You deployed an OpenShift Container Platform cluster on Microsoft Azure by using infrastructure that you provisioned. Install the OpenShift CLI ( oc ). Install or update the Azure CLI . Procedure Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20 Export the Ingress router IP as a variable: USD export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add a *.apps record to the public DNS zone. 
If you are adding this cluster to a new public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you are adding this cluster to an already existing public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300 Add a *.apps record to the private DNS zone: Create a *.apps record by using the following command: USD az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300 Add the *.apps record to the private DNS zone by using the following command: USD az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com 11.24. Completing an Azure installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Microsoft Azure user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 11.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . 
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service.
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"\"plan\" : { \"name\": \"rh-ocp-worker\", \"product\": \"rh-ocp-worker\", \"publisher\": \"redhat\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"storageProfile\": { \"imageReference\": { \"offer\": \"rh-ocp-worker\", \"publisher\": \"redhat\", \"sku\": \"rh-ocp-worker\", \"version\": \"4.8.2021122100\" } } }",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5",
"export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"export INFRA_ID=<infra_id> 1",
"export RESOURCE_GROUP=<resource_group> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". βββ auth β βββ kubeadmin-password β βββ kubeconfig βββ bootstrap.ign βββ master.ign βββ metadata.json βββ worker.ign",
"az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}",
"az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity",
"export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv`",
"export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv`",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role 'Contributor' --scope \"USD{RESOURCE_GROUP_ID}\"",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role <custom_role> \\ 1 --scope \"USD{RESOURCE_GROUP_ID}\"",
"az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS",
"export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`",
"export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.<architecture>.\"rhel-coreos-extensions\".\"azure-disk\".url'`",
"az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob \"rhcos.vhd\" --destination-container vhd --source-uri \"USD{VHD_URL}\"",
"az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"",
"az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v \"USD{INFRA_ID}-vnet\" -e false",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"addressPrefix\" : \"10.0.0.0/16\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetPrefix\" : \"10.0.0.0/24\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetPrefix\" : \"10.0.1.0/24\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/virtualNetworks\", \"name\" : \"[variables('virtualNetworkName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]\" ], \"properties\" : { \"addressSpace\" : { \"addressPrefixes\" : [ \"[variables('addressPrefix')]\" ] }, \"subnets\" : [ { \"name\" : \"[variables('masterSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('masterSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } }, { \"name\" : \"[variables('nodeSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('nodeSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } } ] } }, { \"type\" : \"Microsoft.Network/networkSecurityGroups\", \"name\" : \"[variables('clusterNsgName')]\", \"apiVersion\" : \"2018-10-01\", \"location\" : \"[variables('location')]\", \"properties\" : { \"securityRules\" : [ { \"name\" : \"apiserver_in\", \"properties\" : { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"6443\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 101, \"direction\" : \"Inbound\" } } ] } } ] }",
"export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters storageAccount=\"USD{CLUSTER_NAME}sa\" \\ 3 --parameters architecture=\"<architecture>\" 4",
"{ \"USDschema\": \"https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\", \"contentVersion\": \"1.0.0.0\", \"parameters\": { \"architecture\": { \"type\": \"string\", \"metadata\": { \"description\": \"The architecture of the Virtual Machines\" }, \"defaultValue\": \"x64\", \"allowedValues\": [ \"Arm64\", \"x64\" ] }, \"baseName\": { \"type\": \"string\", \"minLength\": 1, \"metadata\": { \"description\": \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"storageAccount\": { \"type\": \"string\", \"metadata\": { \"description\": \"The Storage Account name\" } }, \"vhdBlobURL\": { \"type\": \"string\", \"metadata\": { \"description\": \"URL pointing to the blob where the VHD to be used to create master and worker machines is located\" } } }, \"variables\": { \"location\": \"[resourceGroup().location]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\": \"[parameters('baseName')]\", \"imageNameGen2\": \"[concat(parameters('baseName'), '-gen2')]\", \"imageRelease\": \"1.0.0\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"Microsoft.Compute/galleries\", \"name\": \"[variables('galleryName')]\", \"location\": \"[variables('location')]\", \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageName')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V1\", \"identifier\": { \"offer\": \"rhcos\", \"publisher\": \"RedHat\", \"sku\": \"basic\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageName')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] }, { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageNameGen2')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V2\", \"identifier\": { \"offer\": \"rhcos-gen2\", \"publisher\": \"RedHat-gen2\", \"sku\": \"gen2\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageNameGen2')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] } ] } ] }",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterPublicIpAddressName\" : \"[concat(parameters('baseName'), '-master-pip')]\", \"masterPublicIpAddressID\" : \"[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"masterLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"internalLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]\", \"skuName\": \"Standard\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('masterPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('masterPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('masterLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]\" ], \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"public-lb-ip-v4\", \"properties\" : { \"publicIPAddress\" : { \"id\" : \"[variables('masterPublicIpAddressID')]\" } } } ], \"backendAddressPools\" : [ { \"name\" : \"[variables('masterLoadBalancerName')]\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" :\"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip-v4')]\" }, \"backendAddressPool\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/backendAddressPools/', variables('masterLoadBalancerName'))]\" }, \"protocol\" : \"Tcp\", \"loadDistribution\" : \"Default\", \"idleTimeoutInMinutes\" : 30, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"probe\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" 
: \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('internalLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"internal-lb-ip\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"privateIPAddressVersion\" : \"IPv4\" } } ], \"backendAddressPools\" : [ { \"name\" : \"internal-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]\" } } }, { \"name\" : \"sint\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 22623, \"backendPort\" : 22623, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } }, { \"name\" : \"sint-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 22623, \"requestPath\": \"/healthz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api-int')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } } ] }",
"bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`",
"export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"bootstrapIgnition\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Bootstrap ignition content for the bootstrap cluster\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"bootstrapVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the Bootstrap Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"vmName\" : \"[concat(parameters('baseName'), '-bootstrap')]\", \"nicName\" : \"[concat(variables('vmName'), '-nic')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"clusterNsgName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]\", \"sshPublicIpAddressName\" : \"[concat(variables('vmName'), '-ssh-pip')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('sshPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"Standard\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('sshPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[variables('nicName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" ], \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"publicIPAddress\": { \"id\": \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" }, \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, 
\"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', variables('masterLoadBalancerName'))]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmName')]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('bootstrapVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmName')]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('bootstrapIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmName'),'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : 100 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]\" } ] } } }, { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Network/networkSecurityGroups/securityRules\", \"name\" : \"[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]\" ], \"properties\": { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"22\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 100, \"direction\" : \"Inbound\" } } ] }",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"masterIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the master nodes\" } }, \"numberOfMasters\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift masters to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"defaultValue\" : \"\", \"metadata\" : { \"description\" : \"unused\" } }, \"masterVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D8s_v3\", \"metadata\" : { \"description\" : \"The size of the Master Virtual Machines\" } }, \"diskSizeGB\" : { \"type\" : \"int\", \"defaultValue\" : 1024, \"metadata\" : { \"description\" : \"Size of the Master VM OS disk, in GB\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfMasters')]\", \"input\" : \"[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"copy\" : { \"name\" : \"nicCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', 
variables('masterLoadBalancerName'))]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"copy\" : { \"name\" : \"vmCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('masterVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('masterIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()], '_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"caching\": \"ReadOnly\", \"writeAcceleratorEnabled\": false, \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : \"[parameters('diskSizeGB')]\" } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": false } } ] } } } ] }",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"workerIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the worker nodes\" } }, \"numberOfNodes\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift compute nodes to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"nodeVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the each Node Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"nodeSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]\", \"nodeSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]\", \"infraLoadBalancerName\" : \"[parameters('baseName')]\", \"sshKeyPath\" : \"/home/capi/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfNodes')]\", \"input\" : \"[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2019-05-01\", \"name\" : \"[concat('node', copyIndex())]\", \"type\" : \"Microsoft.Resources/deployments\", \"copy\" : { \"name\" : \"nodeCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"properties\" : { \"mode\" : \"Incremental\", \"template\" : { \"USDschema\" : \"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('nodeSubnetRef')]\" } } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"tags\" : { \"kubernetes.io-cluster-ffranzupi\": \"owned\" }, 
\"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('nodeVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"capi\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('workerIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()],'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\": 128 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": true } } ] } } } ] } } } ] }",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20",
"export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300",
"az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_azure/installing-azure-user-infra |
5.288. rsyslog | 5.288. rsyslog 5.288.1. RHSA-2012:0796 - Moderate: rsyslog security, bug fix, and enhancement update Updated rsyslog packages that fix one security issue, multiple bugs, and add two enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The rsyslog packages provide an enhanced, multi-threaded syslog daemon. Security Fix CVE-2011-4623 A numeric truncation error, leading to a heap-based buffer overflow, was found in the way the rsyslog imfile module processed text files containing long lines. An attacker could use this flaw to crash the rsyslogd daemon or, possibly, execute arbitrary code with the privileges of rsyslogd, if they are able to cause a long line to be written to a log file that rsyslogd monitors with imfile. The imfile module is not enabled by default. Bug Fixes BZ# 727380 Several variables were incorrectly deinitialized with Transport Layer Security (TLS) transport and keys in PKCS#8 format. The rsyslogd daemon aborted with a segmentation fault when keys in this format were provided. Now, the variables are correctly deinitialized. BZ# 756664 Previously, the imgssapi plug-in initialization was incomplete. As a result, the rsyslogd daemon aborted when configured to provide a GSSAPI listener. Now, the plug-in is correctly initialized. BZ# 767527 The fully qualified domain name (FQDN) for the localhost used in messages was the first alias found. This did not always produce the expected result on multihomed hosts. With this update, the algorithm uses the alias that corresponds to the hostname. BZ# 803550 The gtls module leaked a file descriptor every time it was loaded due to an error in the GnuTLS library. No new files or network connections could be opened when the limit for the file descriptor count was reached. This update modifies the gtls module so that it is not unloaded during the process lifetime. BZ# 805424 rsyslog could not override the hostname to set an alternative hostname for locally generated messages. Now, the local hostname can be overridden. BZ# 807608 The rsyslogd init script did not pass the lock file path to the 'status' action. As a result, the lock file was ignored and a wrong exit code was returned. This update modifies the init script to pass the lock file to the 'status' action. Now, the correct exit code is returned. BZ# 813079 Data could be incorrectly deinitialized when rsyslogd was supplied with malformed spool files. The rsyslogd daemon could be aborted with a segmentation fault. This update modifies the underlying code to correctly deinitialize the data. BZ# 813084 Previously, deinitialization of non-existent data could, in certain error cases, occur. As a result, rsyslogd could abort with a segmentation fault when rsyslog was configured to use a disk assisted queue without specifying a spool file. With this update, the error cases are handled gracefully. BZ# 820311 The manual page wrongly stated that the '-d' option to turn on debugging caused the daemon to run in the foreground, which was misleading as the current behavior is to run in the background. Now, the manual page reflects the correct behavior. BZ# 820996 rsyslog attempted to write debugging messages to standard output even when run in the background. 
As a result, the debugging information could be written to unintended output. This was corrected, and debug messages are no longer written to standard output when rsyslogd runs in the background. BZ# 822118 The string buffer that holds the distinguished name (DN) of a certificate was too small, so DNs with more than 128 characters were not displayed. This update enlarges the buffer to process longer DNs. Enhancements BZ# 672182 Added support for rate limiting and multi-line message capability. rsyslogd can now limit the number of messages it accepts through a UNIX socket. BZ# 740420 Added the "/etc/rsyslog.d/" configuration directory for supplying additional syslog configuration files. All users of rsyslog are advised to upgrade to these updated packages, which upgrade rsyslog to version 5.8.10, correct these issues, and add these enhancements. After installing this update, the rsyslog daemon will be restarted automatically. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/rsyslog
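The rate-limiting and /etc/rsyslog.d/ enhancements described in the rsyslog erratum above are driven by configuration. The following is a minimal sketch of how they might be enabled in /etc/rsyslog.conf using the legacy rsyslog 5.x directive syntax; the interval and burst values are illustrative assumptions, not values required by the update.

# Load the local UNIX socket input module, which performs the rate limiting
$ModLoad imuxsock
# Accept at most 200 messages per 5-second window from a single process (illustrative values)
$SystemLogRateLimitInterval 5
$SystemLogRateLimitBurst 200
# Read additional configuration snippets from the /etc/rsyslog.d/ directory
$IncludeConfig /etc/rsyslog.d/*.conf

Messages that exceed the burst within the interval are discarded and a summary message is logged, which protects rsyslogd from being flooded by a single runaway process.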
Chapter 3. Understanding persistent storage | Chapter 3. Understanding persistent storage 3.1. Persistent storage overview Managing storage is a distinct problem from managing compute resources. OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure. PVCs are specific to a project, and are created and used by developers as a means to use a PV. PV resources on their own are not scoped to any single project; they can be shared across the entire OpenShift Container Platform cluster and claimed from any project. After a PV is bound to a PVC, that PV can not then be bound to additional PVCs. This has the effect of scoping a bound PV to a single namespace, that of the binding project. PVs are defined by a PersistentVolume API object, which represents a piece of existing storage in the cluster that was either statically provisioned by the cluster administrator or dynamically provisioned using a StorageClass object. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plug-ins like Volumes but have a lifecycle that is independent of any individual pod that uses the PV. PV objects capture the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. Important High availability of storage in the infrastructure is left to the underlying storage provider. PVCs are defined by a PersistentVolumeClaim API object, which represents a request for storage by a developer. It is similar to a pod in that pods consume node resources and PVCs consume PV resources. For example, pods can request specific levels of resources, such as CPU and memory, while PVCs can request specific storage capacity and access modes. For example, they can be mounted once read-write or many times read-only. 3.2. Lifecycle of a volume and claim PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs have the following lifecycle. 3.2.1. Provision storage In response to requests from a developer defined in a PVC, a cluster administrator configures one or more dynamic provisioners that provision storage and a matching PV. Alternatively, a cluster administrator can create a number of PVs in advance that carry the details of the real storage that is available for use. PVs exist in the API and are available for use. 3.2.2. Bind claims When you create a PVC, you request a specific amount of storage, specify the required access mode, and create a storage class to describe and classify the storage. The control loop in the master watches for new PVCs and binds the new PVC to an appropriate PV. If an appropriate PV does not exist, a provisioner for the storage class creates one. The size of all PVs might exceed your PVC size. This is especially true with manually provisioned PVs. To minimize the excess, OpenShift Container Platform binds to the smallest PV that matches all other criteria. Claims remain unbound indefinitely if a matching volume does not exist or can not be created with any available provisioner servicing a storage class. Claims are bound as matching volumes become available. For example, a cluster with many manually provisioned 50Gi volumes would not match a PVC requesting 100Gi. 
The PVC can be bound when a 100Gi PV is added to the cluster. 3.2.3. Use pods and claimed PVs Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a pod. For those volumes that support multiple access modes, you must specify which mode applies when you use the claim as a volume in a pod. Once you have a claim and that claim is bound, the bound PV belongs to you for as long as you need it. You can schedule pods and access claimed PVs by including persistentVolumeClaim in the pod's volumes block. Note If you attach persistent volumes that have high file counts to pods, those pods can fail or can take a long time to start. For more information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . 3.2.4. Storage Object in Use Protection The Storage Object in Use Protection feature ensures that PVCs in active use by a pod and PVs that are bound to PVCs are not removed from the system, as this can result in data loss. Storage Object in Use Protection is enabled by default. Note A PVC is in active use by a pod when a Pod object exists that uses the PVC. If a user deletes a PVC that is in active use by a pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any pods. Also, if a cluster admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC. 3.2.5. Release a persistent volume When you are finished with a volume, you can delete the PVC object from the API, which allows reclamation of the resource. The volume is considered released when the claim is deleted, but it is not yet available for another claim. The claimant's data remains on the volume and must be handled according to policy. 3.2.6. Reclaim policy for persistent volumes The reclaim policy of a persistent volume tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain , Recycle , or Delete . Retain reclaim policy allows manual reclamation of the resource for those volume plug-ins that support it. Recycle reclaim policy recycles the volume back into the pool of unbound persistent volumes once it is released from its claim. Important The Recycle reclaim policy is deprecated in OpenShift Container Platform 4. Dynamic provisioning is recommended for equivalent and better functionality. Delete reclaim policy deletes both the PersistentVolume object from OpenShift Container Platform and the associated storage asset in external infrastructure, such as AWS EBS or VMware vSphere. Note Dynamically provisioned volumes are always deleted. 3.2.7. Reclaiming a persistent volume manually When a persistent volume claim (PVC) is deleted, the persistent volume (PV) still exists and is considered "released". However, the PV is not yet available for another claim because the data of the claimant remains on the volume. Procedure To manually reclaim the PV as a cluster administrator: Delete the PV. USD oc delete pv <pv-name> The associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume, still exists after the PV is deleted. Clean up the data on the associated storage asset. Delete the associated storage asset. Alternately, to reuse the same storage asset, create a new PV with the storage asset definition. 
The reclaimed PV is now available for use by another PVC. 3.2.8. Changing the reclaim policy of a persistent volume To change the reclaim policy of a persistent volume: List the persistent volumes in your cluster: USD oc get pv Example output Choose one of your persistent volumes and change its reclaim policy: USD oc patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' Verify that your chosen persistent volume has the right policy: USD oc get pv Example output In the preceding output, the volume bound to claim default/claim3 now has a Retain reclaim policy. The volume will not be automatically deleted when a user deletes claim default/claim3 . 3.3. Persistent volumes Each PV contains a spec and status , which is the specification and status of the volume, for example: PersistentVolume object definition example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 ... status: ... 1 Name of the persistent volume. 2 The amount of storage available to the volume. 3 The access mode, defining the read-write and mount permissions. 4 The reclaim policy, indicating how the resource should be handled once it is released. 3.3.1. Types of PVs OpenShift Container Platform supports the following persistent volume plug-ins: AWS Elastic Block Store (EBS) Azure Disk Azure File Cinder Fibre Channel GCE Persistent Disk HostPath iSCSI Local volume NFS OpenStack Manila Red Hat OpenShift Container Storage VMware vSphere 3.3.2. Capacity Generally, a persistent volume (PV) has a specific storage capacity. This is set by using the capacity attribute of the PV. Currently, storage capacity is the only resource that can be set or requested. Future attributes may include IOPS, throughput, and so on. 3.3.3. Access modes A persistent volume can be mounted on a host in any way supported by the resource provider. Providers have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read-write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities. Claims are matched to volumes with similar access modes. The only two matching criteria are access modes and size. A claim's access modes represent a request. Therefore, you might be granted more, but never less. For example, if a claim requests RWO, but the only volume available is an NFS PV (RWO+ROX+RWX), the claim would then match NFS because it supports RWO. Direct matches are always attempted first. The volume's modes must match or contain more modes than you requested. The size must be greater than or equal to what is expected. If two types of volumes, such as NFS and iSCSI, have the same set of access modes, either of them can match a claim with those modes. There is no ordering between types of volumes and no way to choose one type over another. All volumes with the same modes are grouped, and then sorted by size, smallest to largest. The binder gets the group with matching modes and iterates over each, in size order, until one size matches. The following table lists the access modes: Table 3.1. Access modes Access Mode CLI abbreviation Description ReadWriteOnce RWO The volume can be mounted as read-write by a single node. ReadOnlyMany ROX The volume can be mounted as read-only by many nodes. 
ReadWriteMany RWX The volume can be mounted as read-write by many nodes. Important Volume access modes are descriptors of volume capabilities. They are not enforced constraints. The storage provider is responsible for runtime errors resulting from invalid use of the resource. For example, NFS offers ReadWriteOnce access mode. You must mark the claims as read-only if you want to use the volume's ROX capability. Errors in the provider show up at runtime as mount errors. iSCSI and Fibre Channel volumes do not currently have any fencing mechanisms. You must ensure the volumes are only used by one node at a time. In certain situations, such as draining a node, the volumes can be used simultaneously by two nodes. Before draining the node, first ensure the pods that use these volumes are deleted. Table 3.2. Supported access modes for PVs Volume plug-in ReadWriteOnce [1] ReadOnlyMany ReadWriteMany
AWS EBS [2] ✓ - -
Azure File ✓ ✓ ✓
Azure Disk ✓ - -
Cinder ✓ - -
Fibre Channel ✓ ✓ -
GCE Persistent Disk ✓ - -
HostPath ✓ - -
iSCSI ✓ ✓ -
Local volume ✓ - -
NFS ✓ ✓ ✓
OpenStack Manila - - ✓
Red Hat OpenShift Container Storage ✓ - ✓
VMware vSphere ✓
- - ReadWriteOnce (RWO) volumes cannot be mounted on multiple nodes. If a node fails, the system does not allow the attached RWO volume to be mounted on a new node because it is already assigned to the failed node. If you encounter a multi-attach error message as a result, force delete the pod on a shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached. Use a recreate deployment strategy for pods that rely on AWS EBS. 3.3.4. Phase Volumes can be found in one of the following phases: Table 3.3. Volume phases Phase Description Available A free resource not yet bound to a claim. Bound The volume is bound to a claim. Released The claim was deleted, but the resource is not yet reclaimed by the cluster. Failed The volume has failed its automatic reclamation. You can view the name of the PVC bound to the PV by running: USD oc get pv <pv-claim> 3.3.4.1. Mount options You can specify mount options while mounting a PV by using the attribute mountOptions . For example: Mount options example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default 1 Specified mount options are used while mounting the PV to the disk. The following PV types support mount options: AWS Elastic Block Store (EBS) Azure Disk Azure File Cinder GCE Persistent Disk iSCSI Local volume NFS Red Hat OpenShift Container Storage (Ceph RBD only) VMware vSphere Note Fibre Channel and HostPath PVs do not support mount options. 3.4. Persistent volume claims Each PersistentVolumeClaim object contains a spec and status , which is the specification and status of the persistent volume claim (PVC), for example: PersistentVolumeClaim object definition example kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status: ... 1 Name of the PVC 2 The access mode, defining the read-write and mount permissions 3 The amount of storage available to the PVC 4 Name of the StorageClass required by the claim 3.4.1. Storage classes Claims can optionally request a specific storage class by specifying the storage class's name in the storageClassName attribute. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC. The cluster administrator can configure dynamic provisioners to service one or more storage classes. The cluster administrator can create a PV on demand that matches the specifications in the PVC. Important The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. The cluster administrator can also set a default storage class for all PVCs. When a default storage class is configured, the PVC must explicitly ask for StorageClass or storageClassName annotations set to "" to be bound to a PV without a storage class. Note If more than one storage class is marked as default, a PVC can only be created if the storageClassName is explicitly specified. Therefore, only one storage class should be set as the default. 3.4.2. 
Access modes Claims use the same conventions as volumes when requesting storage with specific access modes. 3.4.3. Resources Claims, such as pods, can request specific quantities of a resource. In this case, the request is for storage. The same resource model applies to volumes and claims. 3.4.4. Claims as volumes Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the pod using the claim. The cluster finds the claim in the pod's namespace and uses it to get the PersistentVolume backing the claim. The volume is mounted to the host and into the pod, for example: Mount volume to the host and into the pod example kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: "/var/www/html" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3 1 Path to mount the volume inside the pod. 2 Name of the volume to mount. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 Name of the PVC, that exists in the same namespace, to use. 3.5. Block volume support OpenShift Container Platform can statically provision raw block volumes. These volumes do not have a file system, and can provide performance benefits for applications that either write to the disk directly or implement their own storage service. Raw block volumes are provisioned by specifying volumeMode: Block in the PV and PVC specification. Important Pods using raw block volumes must be configured to allow privileged containers. The following table displays which volume plug-ins support block volumes. Table 3.4. Block volume support Volume Plug-in Manually provisioned Dynamically provisioned Fully supported
AWS EBS ✓ ✓ ✓
Azure Disk ✓ ✓ ✓
Azure File - - -
Cinder ✓ ✓ -
Fibre Channel ✓ ✓ -
GCP ✓ ✓ ✓
HostPath - - -
iSCSI ✓ ✓ -
Local volume ✓ ✓ -
NFS - - -
Red Hat OpenShift Container Storage ✓ ✓ ✓
VMware vSphere ✓ ✓ ✓
Note Any of the block volumes that can be provisioned manually, but are not provided as fully supported, are included as a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 3.5.1. Block volume examples PV example apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: ["50060e801049cfd1"] lun: 0 readOnly: false 1 volumeMode must be set to Block to indicate that this PV is a raw block volume. PVC example apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi 1 volumeMode must be set to Block to indicate that a raw block PVC is requested. Pod specification example apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: ["/bin/sh", "-c"] args: [ "tail -f /dev/null" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3 1 volumeDevices , instead of volumeMounts , is used for block devices. Only PersistentVolumeClaim sources can be used with raw block volumes. 2 devicePath , instead of mountPath , represents the path to the physical device where the raw block is mapped to the system. 3 The volume source must be of type persistentVolumeClaim and must match the name of the PVC as expected. Table 3.5. Accepted values for volumeMode Value Default Filesystem Yes Block No Table 3.6. Binding scenarios for block volumes PV volumeMode PVC volumeMode Binding result Filesystem Filesystem Bind Unspecified Unspecified Bind Filesystem Unspecified Bind Unspecified Filesystem Bind Block Block Bind Unspecified Block No Bind Block Unspecified No Bind Filesystem Block No Bind Block Filesystem No Bind Important Unspecified values result in the default value of Filesystem . | [
"oc delete pv <pv-name>",
"oc get pv",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s",
"oc patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'",
"oc get pv",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 status:",
"oc get pv <pv-claim>",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status:",
"kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3",
"apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: [\"50060e801049cfd1\"] lun: 0 readOnly: false",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi",
"apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: [\"/bin/sh\", \"-c\"] args: [ \"tail -f /dev/null\" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/storage/understanding-persistent-storage |
9.2. REST Properties | The following properties can be specified on a JBoss Data Virtualization virtual procedure. Property Name Description Is Required Allowed Values METHOD HTTP method to use Yes GET | POST | PUT | DELETE URI URI of the procedure Yes ex:/procedure PRODUCES Type of content produced by the service No xml | json | plain | any text CHARSET When the procedure returns a BLOB and the content type is text based, this character set is used to convert the data No US-ASCII | UTF-8 The above properties must be defined with the NAMESPACE 'http://teiid.org/rest' in the metadata. Here is an example VDB that defines the REST-based service. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/rest_properties
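The example VDB referenced above is not included in this excerpt. The following is a minimal sketch of how the REST properties are typically attached to a virtual procedure through DDL metadata in a dynamic VDB; the VDB name, model name, procedure name, URI value, and procedure body are illustrative assumptions rather than content from the original example.

<vdb name="sample" version="1">
    <model name="restview" type="VIRTUAL">
        <metadata type="DDL"><![CDATA[
            SET NAMESPACE 'http://teiid.org/rest' AS REST;
            CREATE VIRTUAL PROCEDURE getCustomers() RETURNS (result xml)
                OPTIONS ("REST:METHOD" 'GET', "REST:URI" 'customers', "REST:PRODUCES" 'xml')
            AS
            BEGIN
                SELECT XMLELEMENT(NAME customers) AS result;
            END
        ]]></metadata>
    </model>
</vdb>

The SET NAMESPACE statement maps the 'http://teiid.org/rest' namespace to the REST prefix so that the METHOD, URI, and PRODUCES properties from the table above can be referenced in the OPTIONS clause.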
Chapter 2. Enabling members of a group to back up Directory Server and performing the backup as one of the group members | Chapter 2. Enabling members of a group to back up Directory Server and performing the backup as one of the group members You can configure that members of a group have permissions to back up an instance and perform the backup. This increases the security because you no longer need to set the credentials of cn=Directory Manager in your backup script or cron jobs. Additionally, you can easily grant and revoke the backup permissions by modifying the group. 2.1. Enabling a group to back up Directory Server Use this procedure to add the cn=backup_users,ou=groups,dc=example,dc=com group and enable members of this group to create backup tasks. Prerequisites The entry ou=groups,dc=example,dc=com exists in the database. Procedure Create the cn=backup_users,ou=groups,dc=example,dc=com group: # dsidm -D "cn=Directory manager" ldap://server.example.com -b " dc=example,dc=com " group create --cn backup_users Add an access control instruction (ACI) that allows members of the cn=backup_users,ou=groups,dc=example,dc=com group to create backup tasks: # ldapadd -D "cn=Directory Manager" -W -H ldap://server.example.com dn: cn=config changetype: modify add: aci aci: (target = " ldap:///cn=backup,cn=tasks,cn=config ")(targetattr="*") (version 3.0 ; acl " permission: Allow backup_users group to create backup tasks " ; allow (add, read, search) groupdn = " ldap:///cn=backup_users,ou=groups,dc=example,dc=com ";) - add: aci aci: (target = "ldap:///cn=config")(targetattr = "nsslapd-bakdir || objectClass") (version 3.0 ; acl " permission: Allow backup_users group to access bakdir attribute " ; allow (read,search) groupdn = " ldap:///cn=backup_users,ou=groups,dc=example,dc=com ";) Create a user: Create a user account: # dsidm -D "cn=Directory manager" ldap://server.example.com -b " dc=example,dc=com " user create --uid=" example " --cn=" example " --uidNumber=" 1000 " --gidNumber=" 1000 " --homeDirectory=" /home/example/ " --displayName=" Example User " Set a password on the user account: # dsidm -D "cn=Directory manager" ldap://server.example.com -b " dc=example,dc=com " account reset_password " uid=example,ou=People,dc=example,dc=com " " password " Add the uid=example,ou=People,dc=example,dc=com user to the cn=backup_users,ou=groups,dc=example,dc=com group: # dsidm -D "cn=Directory manager" ldap://server.example.com -b " dc=example,dc=com " group add_member backup_users uid=example,ou=People,dc=example,dc=com Verification Display the ACIs set on the cn=config entry: # ldapsearch -o ldif-wrap=no -LLLx -D "cn=directory manager" -W -H ldap://server.example.com -b cn=config aci=* aci -s base dn: cn=config aci: (target = "ldap:///cn=backup,cn=tasks,cn=config")(targetattr="*")(version 3.0 ; acl "permission: Allow backup_users group to create backup tasks" ; allow (add, read, search) groupdn = "ldap:///cn=backup_users,ou=groups,dc=example,dc=com";) aci: (target = "ldap:///cn=config")(targetattr = "nsslapd-bakdir || objectClass")(version 3.0 ; acl "permission: Allow backup_users group to access bakdir attribute" ; allow (read,search) groupdn = "ldap:///cn=backup_users,ou=groups,dc=example,dc=com";) ... 2.2. Performing a backup as a regular user You can perform backups as a regular user instead of cn=Directory Manager . Prerequisites You enabled members of the cn=backup_users,ou=groups,dc=example,dc=com group to perform backups. 
The user you use to perform the backup is a member of the cn=backup_users,ou=groups,dc=example,dc=com group. Procedure Create a backup task using one of the following methods: Using the dsconf backup create command: # dsconf -D " uid=example,ou=People,dc=example,dc=com " ldap://server.example.com backup create By manually creating the task: # ldapadd -D " uid=example,ou=People,dc=example,dc=com " -W -H ldap://server.example.com dn: cn= backup-2021_07_23_12:55_00 ,cn=backup,cn=tasks,cn=config changetype: add objectClass: extensibleObject nsarchivedir: /var/lib/dirsrv/slapd-instance_name/bak/backup-2021_07_23_12:55_00 nsdatabasetype: ldbm database cn: backup-2021_07_23_12:55_00 Verification Verify that the backup was created: # ls -l /var/lib/dirsrv/slapd-instance_name/bak/ total 0 drwx------. 3 dirsrv dirsrv 108 Jul 23 12:55 backup-2021_07_23_12_55_00 ... Additional resources Enabling a group to back up Directory Server | [
"dsidm -D \"cn=Directory manager\" ldap://server.example.com -b \" dc=example,dc=com \" group create --cn backup_users",
"ldapadd -D \"cn=Directory Manager\" -W -H ldap://server.example.com dn: cn=config changetype: modify add: aci aci: (target = \" ldap:///cn=backup,cn=tasks,cn=config \")(targetattr=\"*\") (version 3.0 ; acl \" permission: Allow backup_users group to create backup tasks \" ; allow (add, read, search) groupdn = \" ldap:///cn=backup_users,ou=groups,dc=example,dc=com \";) - add: aci aci: (target = \"ldap:///cn=config\")(targetattr = \"nsslapd-bakdir || objectClass\") (version 3.0 ; acl \" permission: Allow backup_users group to access bakdir attribute \" ; allow (read,search) groupdn = \" ldap:///cn=backup_users,ou=groups,dc=example,dc=com \";)",
"dsidm -D \"cn=Directory manager\" ldap://server.example.com -b \" dc=example,dc=com \" user create --uid=\" example \" --cn=\" example \" --uidNumber=\" 1000 \" --gidNumber=\" 1000 \" --homeDirectory=\" /home/example/ \" --displayName=\" Example User \"",
"dsidm -D \"cn=Directory manager\" ldap://server.example.com -b \" dc=example,dc=com \" account reset_password \" uid=example,ou=People,dc=example,dc=com \" \" password \"",
"dsidm -D \"cn=Directory manager\" ldap://server.example.com -b \" dc=example,dc=com \" group add_member backup_users uid=example,ou=People,dc=example,dc=com",
"ldapsearch -o ldif-wrap=no -LLLx -D \"cn=directory manager\" -W -H ldap://server.example.com -b cn=config aci=* aci -s base dn: cn=config aci: (target = \"ldap:///cn=backup,cn=tasks,cn=config\")(targetattr=\"*\")(version 3.0 ; acl \"permission: Allow backup_users group to create backup tasks\" ; allow (add, read, search) groupdn = \"ldap:///cn=backup_users,ou=groups,dc=example,dc=com\";) aci: (target = \"ldap:///cn=config\")(targetattr = \"nsslapd-bakdir || objectClass\")(version 3.0 ; acl \"permission: Allow backup_users group to access bakdir attribute\" ; allow (read,search) groupdn = \"ldap:///cn=backup_users,ou=groups,dc=example,dc=com\";)",
"dsconf -D \" uid=example,ou=People,dc=example,dc=com \" ldap://server.example.com backup create",
"ldapadd -D \" uid=example,ou=People,dc=example,dc=com \" -W -H ldap://server.example.com dn: cn= backup-2021_07_23_12:55_00 ,cn=backup,cn=tasks,cn=config changetype: add objectClass: extensibleObject nsarchivedir: /var/lib/dirsrv/slapd-instance_name/bak/backup-2021_07_23_12:55_00 nsdatabasetype: ldbm database cn: backup-2021_07_23_12:55_00",
"ls -l /var/lib/dirsrv/slapd-instance_name/bak/ total 0 drwx------. 3 dirsrv dirsrv 108 Jul 23 12:55 backup-2021_07_23_12_55_00"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/backing_up_and_restoring_red_hat_directory_server/assembly_enabling-members-of-a-group-to-back-up-directory-server-and-performing-the-backup-as-one-of-the-group-members_backing-up-and-restoring-rhds |
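The chapter above notes that granting backup permissions to a group means you no longer need cn=Directory Manager credentials in backup scripts or cron jobs. The following shell script is a minimal sketch of how such an automated backup could create the task as the regular user, following the manual task-creation method from section 2.2. It is not part of the Red Hat procedure: the password file path, script location, and cron schedule are assumptions, and the password file must be readable only by the account that runs the job.

#!/bin/sh
# Sketch: create a Directory Server backup task as a member of backup_users.
# Assumption: the bind password is stored without a trailing newline, for example:
#   printf '%s' 'password' > /root/.ds-backup-pw && chmod 600 /root/.ds-backup-pw
TS="$(date +%Y_%m_%d_%H_%M_%S)"
ldapadd -D "uid=example,ou=People,dc=example,dc=com" \
        -y /root/.ds-backup-pw \
        -H ldap://server.example.com <<EOF
dn: cn=backup-${TS},cn=backup,cn=tasks,cn=config
changetype: add
objectClass: extensibleObject
nsarchivedir: /var/lib/dirsrv/slapd-instance_name/bak/backup-${TS}
nsdatabasetype: ldbm database
cn: backup-${TS}
EOF

A matching cron entry, also an assumption, could be placed in /etc/cron.d/ds-backup, for example: 0 2 * * * root /usr/local/bin/ds-backup.sh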
Chapter 8. Certified SAP applications on RHEL 9 | Chapter 8. Certified SAP applications on RHEL 9 SAP Max DB 7.9.10.02 and later (See SAP Note 1444241 ) SAP ASE 16 (See SAP Note 2489781 ) SAP HANA 2.0 SPS05 and later (See SAP Note 2235581 ) SAP BI 4.3 and later (See SAP Note 1338845 ) SAP NetWeaver (See SAP Note 2772999 ) In general, SAP documents the support of its products for specific versions of Red Hat Enterprise Linux in its SAP Product Availability Matrix . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/9.x_release_notes/certified_sap_applications_9.x_release_notes
Chapter 7. Network considerations | Chapter 7. Network considerations Review the strategies for redirecting your application network traffic after migration. 7.1. DNS considerations The DNS domain of the target cluster is different from the domain of the source cluster. By default, applications get FQDNs of the target cluster after migration. To preserve the source DNS domain of migrated applications, select one of the two options described below. 7.1.1. Isolating the DNS domain of the target cluster from the clients You can route client requests sent to the DNS domain of the source cluster to the DNS domain of the target cluster without exposing the target cluster to the clients. Procedure Place an exterior network component, such as an application load balancer or a reverse proxy, between the clients and the target cluster. Update the application FQDN on the source cluster in the DNS server to return the IP address of the exterior network component. Configure the network component to send requests received for the application in the source domain to the load balancer in the target cluster domain. Create a wildcard DNS record for the *.apps.source.example.com domain that points to the IP address of the load balancer of the source cluster. Create a DNS record for each application that points to the IP address of the exterior network component in front of the target cluster. A specific DNS record has higher priority than a wildcard record, so no conflict arises when the application FQDN is resolved. Note The exterior network component must terminate all secure TLS connections. If the connections pass through to the target cluster load balancer, the FQDN of the target application is exposed to the client and certificate errors occur. The applications must not return links referencing the target cluster domain to the clients. Otherwise, parts of the application might not load or work properly. 7.1.2. Setting up the target cluster to accept the source DNS domain You can set up the target cluster to accept requests for a migrated application in the DNS domain of the source cluster. Procedure For both non-secure HTTP access and secure HTTPS access, perform the following steps: Create a route in the target cluster's project that is configured to accept requests addressed to the application's FQDN in the source cluster: $ oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> \ -n <app1-namespace> With this new route in place, the server accepts any request for that FQDN and sends it to the corresponding application pods. In addition, when you migrate the application, another route is created in the target cluster domain. Requests reach the migrated application using either of these hostnames. Create a DNS record with your DNS provider that points the application's FQDN in the source cluster to the IP address of the default load balancer of the target cluster. This will redirect traffic away from your source cluster to your target cluster. The FQDN of the application resolves to the load balancer of the target cluster. The default Ingress Controller router accepts requests for that FQDN because a route for that hostname is exposed. For secure HTTPS access, perform the following additional step: Replace the x509 certificate of the default Ingress Controller created during the installation process with a custom certificate. Configure this certificate to include the wildcard DNS domains for both the source and target clusters in the subjectAltName field.
The new certificate is valid for securing connections made using either DNS domain. Additional resources See Replacing the default ingress certificate for more information. 7.2. Network traffic redirection strategies After a successful migration, you must redirect network traffic of your stateless applications from the source cluster to the target cluster. The strategies for redirecting network traffic are based on the following assumptions: The application pods are running on both the source and target clusters. Each application has a route that contains the source cluster hostname. The route with the source cluster hostname contains a CA certificate. For HTTPS, the target router CA certificate contains a Subject Alternative Name for the wildcard DNS record of the source cluster. Consider the following strategies and select the one that meets your objectives. Redirecting all network traffic for all applications at the same time Change the wildcard DNS record of the source cluster to point to the target cluster router's virtual IP address (VIP). This strategy is suitable for simple applications or small migrations. Redirecting network traffic for individual applications Create a DNS record for each application with the source cluster hostname pointing to the target cluster router's VIP. This DNS record takes precedence over the source cluster wildcard DNS record. Redirecting network traffic gradually for individual applications Create a proxy that can direct traffic to both the source cluster router's VIP and the target cluster router's VIP, for each application. Create a DNS record for each application with the source cluster hostname pointing to the proxy. Configure the proxy entry for the application to route a percentage of the traffic to the target cluster router's VIP and the rest of the traffic to the source cluster router's VIP. Gradually increase the percentage of traffic that you route to the target cluster router's VIP until all the network traffic is redirected. User-based redirection of traffic for individual applications Using this strategy, you can filter TCP/IP headers of user requests to redirect network traffic for predefined groups of users. This allows you to test the redirection process on specific populations of users before redirecting the entire network traffic. Create a proxy that can direct traffic to both the source cluster router's VIP and the target cluster router's VIP, for each application. Create a DNS record for each application with the source cluster hostname pointing to the proxy. Configure the proxy entry for the application to route traffic matching a given header pattern, such as test customers , to the target cluster router's VIP and the rest of the traffic to the source cluster router's VIP. Redirect traffic to the target cluster router's VIP in stages until all the traffic is on the target cluster router's VIP. | [
"oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/migration_toolkit_for_containers/network-considerations-mtc |
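As a supplement to sections 7.1.2 and 7.2, the following shell sketch shows one way to verify, before changing public DNS, that the target cluster accepts the source-domain hostname and that the replacement ingress certificate covers both wildcard domains. The hostname app1.apps.source.example.com, the target router VIP 203.0.113.10, and the /healthz path are placeholder assumptions, not values defined in this document.

# Check which address the application FQDN currently resolves to:
dig +short app1.apps.source.example.com

# Pin the source-domain hostname to the target cluster router VIP, bypassing
# DNS, to confirm that the exposed route answers requests for that FQDN:
curl --resolve app1.apps.source.example.com:443:203.0.113.10 \
     https://app1.apps.source.example.com/healthz

# Confirm that the custom ingress certificate lists both wildcard domains
# in its subjectAltName field:
openssl s_client -connect 203.0.113.10:443 \
     -servername app1.apps.source.example.com </dev/null 2>/dev/null \
     | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"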
Chapter 43. implied | Chapter 43. implied This chapter describes the commands under the implied command. 43.1. implied role create Creates an association between prior and implied roles Usage: Table 43.1. Positional Arguments Value Summary <role> Role (name or id) that implies another role Table 43.2. Optional Arguments Value Summary -h, --help Show this help message and exit --implied-role <role> <role> (name or id) implied by another role Table 43.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 43.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 43.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 43.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 43.2. implied role delete Deletes an association between prior and implied roles Usage: Table 43.7. Positional Arguments Value Summary <role> Role (name or id) that implies another role Table 43.8. Optional Arguments Value Summary -h, --help Show this help message and exit --implied-role <role> <role> (name or id) implied by another role 43.3. implied role list List implied roles Usage: Table 43.9. Optional Arguments Value Summary -h, --help Show this help message and exit Table 43.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 43.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 43.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 43.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack implied role create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --implied-role <role> <role>",
"openstack implied role delete [-h] --implied-role <role> <role>",
"openstack implied role list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/implied |
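As a brief illustration of the synopses above, the following shell sketch chains the three subcommands. The role names admin and member are assumptions; any existing Keystone roles can be used, and the commands require credentials that are allowed to manage role inference rules.

# Make the "admin" role imply the "member" role, so assigning admin
# also grants member implicitly:
openstack implied role create admin --implied-role member

# List the resulting prior-role / implied-role associations:
openstack implied role list

# Remove the association again:
openstack implied role delete admin --implied-role member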