Chapter 2. Installing OpenShift on a single node
Chapter 2. Installing OpenShift on a single node You can install single-node OpenShift by using either the web-based Assisted Installer or the coreos-installer tool to generate a discovery ISO image. The discovery ISO image writes the Red Hat Enterprise Linux CoreOS (RHCOS) system configuration to the target installation disk, so that you can run a single-cluster node to meet your needs. Consider using single-node OpenShift when you want to run a cluster in a low-resource or an isolated environment for testing, troubleshooting, training, or small-scale project purposes. 2.1. Installing single-node OpenShift using the Assisted Installer To install OpenShift Container Platform on a single node, use the web-based Assisted Installer wizard to guide you through the process and manage the installation. See the Assisted Installer for OpenShift Container Platform documentation for details and configuration options. 2.1.1. Generating the discovery ISO with the Assisted Installer Installing OpenShift Container Platform on a single node requires a discovery ISO, which the Assisted Installer can generate. Procedure On the administration host, open a browser and navigate to Red Hat OpenShift Cluster Manager . Click Create New Cluster to create a new cluster. In the Cluster name field, enter a name for the cluster. In the Base domain field, enter a base domain. For example: All DNS records must be subdomains of this base domain and include the cluster name, for example: Note You cannot change the base domain or cluster name after cluster installation. Select Install single node OpenShift (SNO) and complete the rest of the wizard steps. Download the discovery ISO. Complete the remaining Assisted Installer wizard steps. Important Ensure that you take note of the discovery ISO URL for installing with virtual media. If you enable OpenShift Virtualization during this process, you must have a second local storage device of at least 50GiB for your virtual machines. Additional resources Persistent storage using logical volume manager storage What you can do with OpenShift Virtualization 2.1.2. Installing single-node OpenShift with the Assisted Installer Use the Assisted Installer to install the single-node cluster. Prerequisites Ensure that the boot drive order in the server BIOS settings defaults to booting the server from the target installation disk. Procedure Attach the discovery ISO image to the target host. Boot the server from the discovery ISO image. The discovery ISO image writes the system configuration to the target installation disk and automatically triggers a server restart. On the administration host, return to the browser. Wait for the host to appear in the list of discovered hosts. If necessary, reload the Assisted Clusters page and select the cluster name. Complete the install wizard steps. Add networking details, including a subnet from the available subnets. Add the SSH public key if necessary. Monitor the installation's progress. Watch the cluster events. After the installation process finishes writing the operating system image to the server's hard disk, the server restarts. Optional: Remove the discovery ISO image. The server restarts several times automatically, deploying the control plane. Additional resources Creating a bootable ISO image on a USB drive Booting from an HTTP-hosted ISO image using the Redfish API Adding worker nodes to single-node OpenShift clusters 2.2. 
Installing single-node OpenShift manually To install OpenShift Container Platform on a single node, first generate the installation ISO, and then boot the server from the ISO. You can monitor the installation using the openshift-install installation program. Additional resources Networking requirements for user-provisioned infrastructure User-provisioned DNS requirements Configuring DHCP or static IP addresses 2.2.1. Generating the installation ISO with coreos-installer Installing OpenShift Container Platform on a single node requires an installation ISO, which you can generate with the following procedure. Prerequisites Install podman . Note See "Requirements for installing OpenShift on a single node" for networking requirements, including DNS records. Procedure Set the OpenShift Container Platform version: USD export OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version, for example, latest-4.14 Set the host architecture: USD export ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture, for example, aarch64 or x86_64 . Download the OpenShift Container Platform client ( oc ) and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz USD tar zxf oc.tar.gz USD chmod +x oc Download the OpenShift Container Platform installer and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Retrieve the RHCOS ISO URL by running the following command: USD export ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\" -f4) Download the RHCOS ISO: USD curl -L USDISO_URL -o rhcos-live.iso Prepare the install-config.yaml file: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . 8 Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 9 Add the public SSH key from the administration host so that you can log in to the cluster after installation. 
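Note: The installationDisk value in callout 7 must be a persistent device path that does not change across reboots. The following is a minimal sketch, not part of the official procedure, for finding a stable /dev/disk/by-id/ path on the target host, for example from a RHCOS live shell; the grep pattern and the lsblk column list are illustrative choices:
$ ls -l /dev/disk/by-id/ | grep -v -E 'part[0-9]+'
$ lsblk -d -o NAME,SIZE,MODEL,SERIAL,WWN
Pick the wwn- or scsi- style entry that points at the intended target disk and use that path for bootstrapInPlace.installationDisk.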
Generate OpenShift Container Platform assets by running the following commands: USD mkdir ocp USD cp install-config.yaml ocp USD ./openshift-install --dir=ocp create single-node-ignition-config Embed the ignition data into the RHCOS ISO by running the following commands: USD alias coreos-installer='podman run --privileged --pull always --rm \ -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data \ -w /data quay.io/coreos/coreos-installer:release' USD coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso Additional resources See Requirements for installing OpenShift on a single node for more information about installing OpenShift Container Platform on a single node. See Enabling cluster capabilities for more information about enabling cluster capabilities that were disabled prior to installation. See Optional cluster capabilities in OpenShift Container Platform 4.14 for more information about the features provided by each capability. 2.2.2. Monitoring the cluster installation using openshift-install Use openshift-install to monitor the progress of the single-node cluster installation. Prerequisites Ensure that the boot drive order in the server BIOS settings defaults to booting the server from the target installation disk. Procedure Attach the discovery ISO image to the target host. Boot the server from the discovery ISO image. The discovery ISO image writes the system configuration to the target installation disk and automatically triggers a server restart. On the administration host, monitor the installation by running the following command: USD ./openshift-install --dir=ocp wait-for install-complete Optional: Remove the discovery ISO image. The server restarts several times while deploying the control plane. Verification After the installation is complete, check the environment by running the following command: USD export KUBECONFIG=ocp/auth/kubeconfig USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane.example.com Ready master,worker 10m v1.27.3 Additional resources Creating a bootable ISO image on a USB drive Booting from an HTTP-hosted ISO image using the Redfish API Adding worker nodes to single-node OpenShift clusters 2.3. Installing single-node OpenShift on cloud providers 2.3.1. Additional requirements for installing single-node OpenShift on a cloud provider The documentation for installer-provisioned installation on cloud providers is based on a high availability cluster consisting of three control plane nodes. When referring to the documentation, consider the differences between the requirements for a single-node OpenShift cluster and a high availability cluster. A high availability cluster requires a temporary bootstrap machine, three control plane machines, and at least two compute machines. For a single-node OpenShift cluster, you need only a temporary bootstrap machine and one cloud instance for the control plane node and no compute nodes. The minimum resource requirements for high availability cluster installation include a control plane node with 4 vCPUs and 100GB of storage. For a single-node OpenShift cluster, you must have a minimum of 8 vCPUs and 120GB of storage. The controlPlane.replicas setting in the install-config.yaml file should be set to 1 . The compute.replicas setting in the install-config.yaml file should be set to 0 . This makes the control plane node schedulable. 2.3.2. 
Supported cloud providers for single-node OpenShift The following table contains a list of supported cloud providers and CPU architectures. Table 2.1. Supported cloud providers Cloud provider CPU architecture Amazon Web Service (AWS) x86_64 and AArch64 Microsoft Azure x86_64 Google Cloud Platform (GCP) x86_64 and AArch64 2.3.3. Installing single-node OpenShift on AWS Installing a single-node cluster on AWS requires installer-provisioned installation using the "Installing a cluster on AWS with customizations" procedure. Additional resources Installing a cluster on AWS with customizations 2.3.4. Installing single-node OpenShift on Azure Installing a single node cluster on Azure requires installer-provisioned installation using the "Installing a cluster on Azure with customizations" procedure. Additional resources Installing a cluster on Azure with customizations 2.3.5. Installing single-node OpenShift on GCP Installing a single node cluster on GCP requires installer-provisioned installation using the "Installing a cluster on GCP with customizations" procedure. Additional resources Installing a cluster on GCP with customizations 2.4. Creating a bootable ISO image on a USB drive You can install software using a bootable USB drive that contains an ISO image. Booting the server with the USB drive prepares the server for the software installation. Procedure On the administration host, insert a USB drive into a USB port. Create a bootable USB drive, for example: # dd if=<path_to_iso> of=<path_to_usb> status=progress where: <path_to_iso> is the relative path to the downloaded ISO file, for example, rhcos-live.iso . <path_to_usb> is the location of the connected USB drive, for example, /dev/sdb . After the ISO is copied to the USB drive, you can use the USB drive to install software on the server. 2.5. Booting from an HTTP-hosted ISO image using the Redfish API You can provision hosts in your network using ISOs that you install using the Redfish Baseboard Management Controller (BMC) API. Note This example procedure demonstrates the steps on a Dell server. Important Ensure that you have the latest firmware version of iDRAC that is compatible with your hardware. If you have any issues with the hardware or firmware, you must contact the provider. Prerequisites Download the installation Red Hat Enterprise Linux CoreOS (RHCOS) ISO. Use a Dell PowerEdge server that is compatible with iDRAC9. Procedure Copy the ISO file to an HTTP server accessible in your network. Boot the host from the hosted ISO file, for example: Call the Redfish API to set the hosted ISO as the VirtualMedia boot media by running the following command: USD curl -k -u <bmc_username>:<bmc_password> -d '{"Image":"<hosted_iso_file>", "Inserted": true}' -H "Content-Type: application/json" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia Where: <bmc_username>:<bmc_password> Is the username and password for the target host BMC. <hosted_iso_file> Is the URL for the hosted installation ISO, for example: http://webserver.example.com/rhcos-live-minimal.iso . The ISO must be accessible from the target host machine. <host_bmc_address> Is the BMC IP address of the target host machine. 
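Optional: Before you set the boot device in the next step, you can confirm that the BMC reports the ISO as inserted. This is a minimal sketch that reads the same iDRAC VirtualMedia resource used above; the jq filter is an illustrative assumption and is only used to trim the output:
$ curl -k -s -u <bmc_username>:<bmc_password> <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD | jq '{Image, Inserted}'
The response should show the hosted ISO URL in Image and Inserted set to true.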
Set the host to boot from the VirtualMedia device by running the following command: USD curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Once"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1 Reboot the host: USD curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "ForceRestart"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset Optional: If the host is powered off, you can boot it using the {"ResetType": "On"} switch. Run the following command: USD curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "On"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset 2.6. Creating a custom live RHCOS ISO for remote server access In some cases, you cannot attach an external disk drive to a server, however, you need to access the server remotely to provision a node. It is recommended to enable SSH access to the server. You can create a live RHCOS ISO with SSHd enabled and with predefined credentials so that you can access the server after it boots. Prerequisites You installed the butane utility. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Download the latest live RHCOS ISO from mirror.openshift.com . Create the embedded.yaml file that the butane utility uses to create the Ignition file: variant: openshift version: 4.14.0 metadata: name: sshd labels: machineconfiguration.openshift.io/role: worker passwd: users: - name: core 1 ssh_authorized_keys: - '<ssh_key>' 1 The core user has sudo privileges. Run the butane utility to create the Ignition file using the following command: USD butane -pr embedded.yaml -o embedded.ign After the Ignition file is created, you can include the configuration in a new live RHCOS ISO, which is named rhcos-sshd-4.14.0-x86_64-live.x86_64.iso , with the coreos-installer utility: USD coreos-installer iso ignition embed -i embedded.ign rhcos-4.14.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.14.0-x86_64-live.x86_64.iso Verification Check that the custom live ISO can be used to boot the server by running the following command: # coreos-installer iso ignition show rhcos-sshd-4.14.0-x86_64-live.x86_64.iso Example output { "ignition": { "version": "3.2.0" }, "passwd": { "users": [ { "name": "core", "sshAuthorizedKeys": [ "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCZnG8AIzlDAhpyENpK2qKiTT8EbRWOrz7NXjRzopbPu215mocaJgjjwJjh1cYhgPhpAp6M/ttTk7I4OI7g4588Apx4bwJep6oWTU35LkY8ZxkGVPAJL8kVlTdKQviDv3XX12l4QfnDom4tm4gVbRH0gNT1wzhnLP+LKYm2Ohr9D7p9NBnAdro6k++XWgkDeijLRUTwdEyWunIdW1f8G0Mg8Y1Xzr13BUo3+8aey7HLKJMDtobkz/C8ESYA/f7HJc5FxF0XbapWWovSSDJrr9OmlL9f4TfE+cQk3s+eoKiz2bgNPRgEEwihVbGsCN4grA+RzLCAOpec+2dTJrQvFqsD [email protected]" ] } ] } } 2.7. Installing single-node OpenShift with IBM Z and IBM LinuxONE Installing a single-node cluster on IBM Z(R) and IBM(R) LinuxONE requires user-provisioned installation using either the "Installing a cluster with RHEL KVM on IBM Z(R) and IBM(R) LinuxONE" or the "Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE" procedure. Note Installing a single-node cluster on IBM Z(R) simplifies installation for development and test environments and requires less resource requirements at entry level. 
Hardware requirements The equivalent of two Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Additional resources Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE Installing a cluster with RHEL KVM on IBM Z(R) andIBM(R) LinuxONE 2.7.1. Installing single-node OpenShift with z/VM on IBM Z and IBM LinuxONE Prerequisites You have installed podman . Procedure Set the OpenShift Container Platform version by running the following command: USD OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version, for example, latest-4.14 Set the host architecture by running the following command: USD ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture s390x . Download the OpenShift Container Platform client ( oc ) and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz USD tar zxf oc.tar.gz USD chmod +x oc Download the OpenShift Container Platform installer and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Prepare the install-config.yaml file: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . 8 Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 9 Add the public SSH key from the administration host so that you can log in to the cluster after installation. Generate OpenShift Container Platform assets by running the following commands: USD mkdir ocp USD cp install-config.yaml ocp USD ./openshift-install --dir=ocp create single-node-ignition-config Obtain the RHEL kernel , initramfs , and rootfs artifacts from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. 
Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel rhcos-<version>-live-kernel-<architecture> initramfs rhcos-<version>-live-initramfs.<architecture>.img rootfs rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Move the following artifacts and files to an HTTP or HTTPS server: Downloaded RHEL live kernel , initramfs , and rootfs artifacts Ignition files Create parameter files for a particular virtual machine: Example parameter file rd.neednet=1 \ console=ttysclp0 \ coreos.live.rootfs_url=<rhcos_liveos>:8080/rootfs.img \ 1 ignition.firstboot ignition.platform.id=metal \ ignition.config.url=<rhcos_ign>:8080/ignition/bootstrap-in-place-for-live-iso.ign \ 2 ip=encbdd0:dhcp::02:00:00:02:34:02 3 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.dasd=0.0.4411 \ 4 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ 5 zfcp.allow_lun_scan=0 \ rd.luks.options=discard 1 For the coreos.live.rootfs_url= artifact, specify the matching rootfs artifact for the kernel`and `initramfs you are booting. Only HTTP and HTTPS protocols are supported. 2 For the ignition.config.url= parameter, specify the Ignition file for the machine role. Only HTTP and HTTPS protocols are supported. 3 For the ip= parameter, assign the IP address automatically using DHCP or manually as described in "Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE". 4 For installations on DASD-type disks, use rd.dasd= to specify the DASD where RHCOS is to be installed. Omit this entry for FCP-type disks. 5 For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. Omit this entry for DASD-type disks. Leave all other parameters unchanged. Transfer the following artifacts, files, and images to z/VM. For example by using FTP: kernel and initramfs artifacts Parameter files RHCOS images For details about how to transfer the files with FTP and boot from the virtual reader, see Installing under Z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node. Log in to CMS on the bootstrap machine. IPL the bootstrap machine from the reader by running the following command: After the first reboot of the virtual machine, run the following commands directly after one another: To boot a DASD device after first reboot, run the following commands: USD cp i <devno> clear loadparm prompt where: <devno> Specifies the device number of the boot device as seen by the guest. USD cp vi vmsg 0 <kernel_parameters> where: <kernel_parameters> Specifies a set of kernel parameters to be stored as system control program data (SCPDATA). When booting Linux, these kernel parameters are concatenated to the end of the existing kernel parameters that are used by your boot configuration. The combined parameter string must not exceed 896 characters. To boot an FCP device after first reboot, run the following commands: USD cp set loaddev portname <wwpn> lun <lun> where: <wwpn> Specifies the target port and <lun> the logical unit in hexadecimal format. 
USD cp set loaddev bootprog <n> where: <n> Specifies the kernel to be booted. USD cp set loaddev scpdata {APPEND|NEW} '<kernel_parameters>' where: <kernel_parameters> Specifies a set of kernel parameters to be stored as system control program data (SCPDATA). When booting Linux, these kernel parameters are concatenated to the end of the existing kernel parameters that are used by your boot configuration. The combined parameter string must not exceed 896 characters. <APPEND|NEW> Optional: Specify APPEND to append kernel parameters to existing SCPDATA. This is the default. Specify NEW to replace existing SCPDATA. Example USD cp set loaddev scpdata 'rd.zfcp=0.0.8001,0x500507630a0350a4,0x4000409D00000000 ip=encbdd0:dhcp::02:00:00:02:34:02 rd.neednet=1' To start the IPL and boot process, run the following command: USD cp i <devno> where: <devno> Specifies the device number of the boot device as seen by the guest. 2.7.2. Installing single-node OpenShift with RHEL KVM on IBM Z and IBM LinuxONE Prerequisites You have installed podman . Procedure Set the OpenShift Container Platform version by running the following command: USD OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version, for example, latest-4.14 Set the host architecture by running the following command: USD ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture s390x . Download the OpenShift Container Platform client ( oc ) and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz USD tar zxf oc.tar.gz USD chmod +x oc Download the OpenShift Container Platform installer and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Prepare the install-config.yaml file: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . 8 Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 9 Add the public SSH key from the administration host so that you can log in to the cluster after installation. 
Generate OpenShift Container Platform assets by running the following commands: USD mkdir ocp USD cp install-config.yaml ocp USD ./openshift-install --dir=ocp create single-node-ignition-config Obtain the RHEL kernel , initramfs , and rootfs artifacts from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel rhcos-<version>-live-kernel-<architecture> initramfs rhcos-<version>-live-initramfs.<architecture>.img rootfs rhcos-<version>-live-rootfs.<architecture>.img Before you launch virt-install , move the following files and artifacts to an HTTP or HTTPS server: Downloaded RHEL live kernel , initramfs , and rootfs artifacts Ignition files Create the KVM guest nodes by using the following components: RHEL kernel and initramfs artifacts Ignition files The new disk image Adjusted parm line arguments USD virt-install \ --name <vm_name> \ --autostart \ --memory=<memory_mb> \ --cpu host \ --vcpus <vcpus> \ --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \ 1 --disk size=100 \ --network network=<virt_network_parm> \ --graphics none \ --noautoconsole \ --extra-args "ip=<ip>::<gateway>:<mask>:<hostname>::none" \ --extra-args "nameserver=<name_server>" \ --extra-args "ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot" \ --extra-args "coreos.live.rootfs_url=<rhcos_liveos>" \ 2 --extra-args "ignition.config.url=<rhcos_ign>" \ 3 --extra-args "random.trust_cpu=on rd.luks.options=discard" \ --extra-args "console=ttysclp0" \ --wait 1 For the --location parameter, specify the location of the kernel/initrd on the HTTP or HTTPS server. 2 For the coreos.live.rootfs_url= artifact, specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 3 For the ignition.config.url= parameter, specify the Ignition file for the machine role. Only HTTP and HTTPS protocols are supported. 2.8. Installing single-node OpenShift with IBM Power Installing a single-node cluster on IBM Power(R) requires user-provisioned installation using the "Installing a cluster with IBM Power(R)" procedure. Note Installing a single-node cluster on IBM Power(R) simplifies installation for development and test environments and requires less resource requirements at entry level. Hardware requirements The equivalent of two Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to connect to the LoadBalancer service and to serve data for traffic outside of the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Power(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Additional resources Installing a cluster on IBM Power(R) 2.8.1. Setting up basion for single-node OpenShift with IBM Power Prior to installing single-node OpenShift on IBM Power(R), you must set up bastion. 
Setting up a bastion server for single-node OpenShift on IBM Power(R) requires the configuration of the following services: PXE is used for the single-node OpenShift cluster installation. PXE requires the following services to be configured and run: DNS to define api, api-int, and *.apps DHCP service to enable PXE and assign an IP address to the single-node OpenShift node HTTP to provide the Ignition file and the RHCOS rootfs image TFTP to enable PXE You must install dnsmasq to support DNS, DHCP, and PXE, and httpd for HTTP. Use the following procedure to configure a bastion server that meets these requirements. Procedure Use the following command to install grub2, which is required to enable PXE for PowerVM: grub2-mknetdir --net-directory=/var/lib/tftpboot Example /var/lib/tftpboot/boot/grub2/grub.cfg file default=0 fallback=1 timeout=1 if [ ${net_default_mac} == fa:b0:45:27:43:20 ]; then menuentry "CoreOS (BIOS)" { echo "Loading kernel" linux "/rhcos/kernel" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://192.168.10.5:8000/install/rootfs.img ignition.config.url=http://192.168.10.5:8000/ignition/sno.ign echo "Loading initrd" initrd "/rhcos/initramfs.img" } fi Use the following commands to download the RHCOS image files from the mirror repository for PXE. Enter the following command to assign the RHCOS_URL variable the following 4.12 URL: $ export RHCOS_URL=https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/4.12/latest/ Enter the following command to navigate to the /var/lib/tftpboot/rhcos directory: $ cd /var/lib/tftpboot/rhcos Enter the following command to download the specified RHCOS kernel file from the URL stored in the RHCOS_URL variable: $ wget ${RHCOS_URL}/rhcos-live-kernel-ppc64le -O kernel Enter the following command to download the RHCOS initramfs file from the URL stored in the RHCOS_URL variable: $ wget ${RHCOS_URL}/rhcos-live-initramfs.ppc64le.img -O initramfs.img Enter the following command to navigate to the /var/www/html/install/ directory: $ cd /var/www/html/install/ Enter the following command to download and save the RHCOS root file system image from the URL stored in the RHCOS_URL variable: $ wget ${RHCOS_URL}/rhcos-live-rootfs.ppc64le.img -O rootfs.img To create the Ignition file for a single-node OpenShift cluster, you must create the install-config.yaml file. Enter the following command to create the work directory that holds the file: $ mkdir -p ~/sno-work Enter the following command to navigate to the ~/sno-work directory: $ cd ~/sno-work Use the following sample file to create the required install-config.yaml in the ~/sno-work directory: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures that the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 
6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . 8 Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 9 Add the public SSH key from the administration host so that you can log in to the cluster after installation. Download the openshift-install image to create the Ignition file and copy it to the http directory. Enter the following command to download the openshift-install-linux-4.12.0.tar.gz file: $ wget https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.12.0/openshift-install-linux-4.12.0.tar.gz Enter the following command to unpack the openshift-install-linux-4.12.0.tar.gz archive: $ tar xzvf openshift-install-linux-4.12.0.tar.gz Enter the following command to generate the single-node Ignition configuration: $ ./openshift-install --dir=~/sno-work create single-node-ignition-config Enter the following command to copy the Ignition file to the http directory: $ cp ~/sno-work/single-node-ignition-config.ign /var/www/html/ignition/sno.ign Enter the following command to restore the SELinux file contexts for the /var/www/html directory: $ restorecon -vR /var/www/html || true The bastion server now has all the required files and is properly configured to install single-node OpenShift. 2.8.2. Installing single-node OpenShift with IBM Power Prerequisites You have set up the bastion server. Procedure The single-node OpenShift cluster installation has two steps: first, the single-node OpenShift logical partition (LPAR) needs to boot up with PXE, and then you need to monitor the installation progress. Use the following command to boot the PowerVM LPAR with netboot: $ lpar_netboot -i -D -f -t ent -m <sno_mac> -s auto -d auto -S <server_ip> -C <sno_ip> -G <gateway> <lpar_name> default_profile <cec_name> where: sno_mac Specifies the MAC address of the single-node OpenShift cluster. sno_ip Specifies the IP address of the single-node OpenShift cluster. server_ip Specifies the IP address of the bastion (PXE server). gateway Specifies the network gateway IP address. lpar_name Specifies the single-node OpenShift LPAR name in the HMC. cec_name Specifies the name of the system where the single-node OpenShift LPAR resides. After the single-node OpenShift LPAR boots up with PXE, use the openshift-install command to monitor the progress of the installation: Run the following command to wait for the bootstrap process to complete: ./openshift-install wait-for bootstrap-complete After the previous command returns successfully, run the following command to monitor the rest of the installation: ./openshift-install wait-for install-complete
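Verification After ./openshift-install wait-for install-complete returns, you can check the environment. The following is a minimal sketch, assuming the default asset locations used in this procedure, that points oc at the kubeconfig generated in the ~/sno-work directory and confirms that the single node and the cluster Operators are healthy:
$ export KUBECONFIG=~/sno-work/auth/kubeconfig
$ oc get nodes
$ oc get clusteroperators
The single control plane node reports the master,worker roles, and all cluster Operators should eventually report Available.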
[ "example.com", "<cluster_name>.example.com", "export OCP_VERSION=<ocp_version> 1", "export ARCH=<architecture> 1", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz", "tar zxf oc.tar.gz", "chmod +x oc", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz", "tar zxvf openshift-install-linux.tar.gz", "chmod +x openshift-install", "export ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)", "curl -L USDISO_URL -o rhcos-live.iso", "apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9", "mkdir ocp", "cp install-config.yaml ocp", "./openshift-install --dir=ocp create single-node-ignition-config", "alias coreos-installer='podman run --privileged --pull always --rm -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data -w /data quay.io/coreos/coreos-installer:release'", "coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso", "./openshift-install --dir=ocp wait-for install-complete", "export KUBECONFIG=ocp/auth/kubeconfig", "oc get nodes", "NAME STATUS ROLES AGE VERSION control-plane.example.com Ready master,worker 10m v1.27.3", "dd if=<path_to_iso> of=<path_to_usb> status=progress", "curl -k -u <bmc_username>:<bmc_password> -d '{\"Image\":\"<hosted_iso_file>\", \"Inserted\": true}' -H \"Content-Type: application/json\" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia", "curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\", \"BootSourceOverrideEnabled\": \"Once\"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1", "curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"ForceRestart\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset", "curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"On\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset", "variant: openshift version: 4.14.0 metadata: name: sshd labels: machineconfiguration.openshift.io/role: worker passwd: users: - name: core 1 ssh_authorized_keys: - '<ssh_key>'", "butane -pr embedded.yaml -o embedded.ign", "coreos-installer iso ignition embed -i embedded.ign rhcos-4.14.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.14.0-x86_64-live.x86_64.iso", "coreos-installer iso ignition show rhcos-sshd-4.14.0-x86_64-live.x86_64.iso", "{ \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCZnG8AIzlDAhpyENpK2qKiTT8EbRWOrz7NXjRzopbPu215mocaJgjjwJjh1cYhgPhpAp6M/ttTk7I4OI7g4588Apx4bwJep6oWTU35LkY8ZxkGVPAJL8kVlTdKQviDv3XX12l4QfnDom4tm4gVbRH0gNT1wzhnLP+LKYm2Ohr9D7p9NBnAdro6k++XWgkDeijLRUTwdEyWunIdW1f8G0Mg8Y1Xzr13BUo3+8aey7HLKJMDtobkz/C8ESYA/f7HJc5FxF0XbapWWovSSDJrr9OmlL9f4TfE+cQk3s+eoKiz2bgNPRgEEwihVbGsCN4grA+RzLCAOpec+2dTJrQvFqsD [email protected]\" ] } ] } }", "OCP_VERSION=<ocp_version> 1", "ARCH=<architecture> 1", "curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz", "tar zxf oc.tar.gz", "chmod +x oc", "curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz", "tar zxvf openshift-install-linux.tar.gz", "chmod +x openshift-install", "apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9", "mkdir ocp", "cp install-config.yaml ocp", "./openshift-install --dir=ocp create single-node-ignition-config", "rd.neednet=1 console=ttysclp0 coreos.live.rootfs_url=<rhcos_liveos>:8080/rootfs.img \\ 1 ignition.firstboot ignition.platform.id=metal ignition.config.url=<rhcos_ign>:8080/ignition/bootstrap-in-place-for-live-iso.ign \\ 2 ip=encbdd0:dhcp::02:00:00:02:34:02 3 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.dasd=0.0.4411 \\ 4 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \\ 5 zfcp.allow_lun_scan=0 rd.luks.options=discard", "cp ipl c", "cp i <devno> clear loadparm prompt", "cp vi vmsg 0 <kernel_parameters>", "cp set loaddev portname <wwpn> lun <lun>", "cp set loaddev bootprog <n>", "cp set loaddev scpdata {APPEND|NEW} '<kernel_parameters>'", "cp set loaddev scpdata 'rd.zfcp=0.0.8001,0x500507630a0350a4,0x4000409D00000000 ip=encbdd0:dhcp::02:00:00:02:34:02 rd.neednet=1'", "cp i <devno>", "OCP_VERSION=<ocp_version> 1", "ARCH=<architecture> 1", "curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz", "tar zxf oc.tar.gz", "chmod +x oc", "curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz", "tar zxvf openshift-install-linux.tar.gz", "chmod +x openshift-install", "apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9", "mkdir ocp", "cp install-config.yaml ocp", "./openshift-install --dir=ocp create single-node-ignition-config", "virt-install --name <vm_name> --autostart --memory=<memory_mb> --cpu host --vcpus <vcpus> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ 1 --disk size=100 --network network=<virt_network_parm> --graphics none --noautoconsole --extra-args 
\"ip=<ip>::<gateway>:<mask>:<hostname>::none\" --extra-args \"nameserver=<name_server>\" --extra-args \"ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot\" --extra-args \"coreos.live.rootfs_url=<rhcos_liveos>\" \\ 2 --extra-args \"ignition.config.url=<rhcos_ign>\" \\ 3 --extra-args \"random.trust_cpu=on rd.luks.options=discard\" --extra-args \"console=ttysclp0\" --wait", "grub2-mknetdir --net-directory=/var/lib/tftpboot", "default=0 fallback=1 timeout=1 if [ USD{net_default_mac} == fa:b0:45:27:43:20 ]; then menuentry \"CoreOS (BIOS)\" { echo \"Loading kernel\" linux \"/rhcos/kernel\" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://192.168.10.5:8000/install/rootfs.img ignition.config.url=http://192.168.10.5:8000/ignition/sno.ign echo \"Loading initrd\" initrd \"/rhcos/initramfs.img\" } fi", "export RHCOS_URL=https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/4.12/latest/", "cd /var/lib/tftpboot/rhcos", "wget USD{RHCOS_URL}/rhcos-live-kernel-ppc64le -o kernel", "wget USD{RHCOS_URL}/rhcos-live-initramfs.ppc64le.img -o initramfs.img", "cd /var//var/www/html/install/", "wget USD{RHCOS_URL}/rhcos-live-rootfs.ppc64le.img -o rootfs.img", "mkdir -p ~/sno-work", "cd ~/sno-work", "apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9", "wget https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.12.0/openshift-install-linux-4.12.0.tar.gz", "tar xzvf openshift-install-linux-4.12.0.tar.gz", "./openshift-install --dir=~/sno-work create create single-node-ignition-config", "cp ~/sno-work/single-node-ignition-config.ign /var/www/html/ignition/sno.ign", "restorecon -vR /var/www/html || true", "lpar_netboot -i -D -f -t ent -m <sno_mac> -s auto -d auto -S <server_ip> -C <sno_ip> -G <gateway> <lpar_name> default_profile <cec_name>", "./openshift-install wait-for bootstrap-complete", "./openshift-install wait-for install-complete" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_a_single_node/install-sno-installing-sno
Chapter 8. Multicloud Object Gateway bucket replication
Chapter 8. Multicloud Object Gateway bucket replication Data replication from one Multicloud Object Gateway (MCG) bucket to another MCG bucket provides higher resiliency and better collaboration options. These buckets can be either data buckets or namespace buckets backed by any supported storage solution (S3, Azure, and so on). A replication policy is composed of a list of replication rules. Each rule defines the destination bucket, and can specify a filter based on an object key prefix. Configuring a complementing replication policy on the second bucket results in bidirectional replication. Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway. See Accessing the Multicloud Object Gateway with your applications. Download the Multicloud Object Gateway (MCG) command-line interface: Important Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, for IBM Power, use the following command: Alternatively, you can install the mcg package from the OpenShift Data Foundation RPMs found at https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Important Choose the correct Product Variant according to your architecture. Note Certain MCG features are only available in certain MCG versions, and the appropriate MCG CLI tool version must be used to fully utilize MCG's features. To replicate a bucket, see Replicating a bucket to another bucket . To set a bucket class replication policy, see Setting a bucket class replication policy . 8.1. Replicating a bucket to another bucket You can set the bucket replication policy in two ways: Replicating a bucket to another bucket using the MCG command-line interface . Replicating a bucket to another bucket using a YAML . 8.1.1. Replicating a bucket to another bucket using the MCG command-line interface Applications that require a Multicloud Object Gateway (MCG) bucket to have a specific replication policy can create an Object Bucket Claim (OBC) and define the replication policy parameter in a JSON file. Procedure From the MCG command-line interface, run the following command to create an OBC with a specific replication policy: <bucket-claim-name> Specify the name of the bucket claim. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . For example: 8.1.2. Replicating a bucket to another bucket using a YAML Applications that require a Multicloud Object Gateway (MCG) data bucket to have a specific replication policy can create an Object Bucket Claim (OBC) and add the spec.additionalConfig.replicationPolicy parameter to the OBC. Procedure Apply the following YAML: <desired-bucket-claim> Specify the name of the bucket claim. <desired-namespace> Specify the namespace. <desired-bucket-name> Specify the prefix of the bucket name. "rule_id" Specify the ID number of the rule, for example, {"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . Additional information For more information about OBCs, see Object Bucket Claim . 8.2. 
Setting a bucket class replication policy It is possible to set up a replication policy that automatically applies to all the buckets created under a certain bucket class. You can do this in two ways: Setting a bucket class replication policy using the MCG command-line interface . Setting a bucket class replication policy using a YAML . 8.2.1. Setting a bucket class replication policy using the MCG command-line interface Applications that require a Multicloud Object Gateway (MCG) bucket class to have a specific replication policy can create a bucketclass and define the replication-policy parameter in a JSON file. It is possible to set a bucket class replication policy for two types of bucket classes: Placement Namespace Procedure From the MCG command-line interface, run the following command: <bucketclass-name> Specify the name of the bucket class. <backingstores> Specify the name of a backingstore. It is possible to pass several backingstores separated by commas. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . For example: This example creates a placement bucket class with a specific replication policy defined in the JSON file. 8.2.2. Setting a bucket class replication policy using a YAML Applications that require a Multicloud Object Gateway (MCG) bucket class to have a specific replication policy can create a bucket class using the spec.replicationPolicy field. Procedure Apply the following YAML: This YAML is an example that creates a placement bucket class. Each Object bucket claim (OBC) object that is uploaded to the bucket is filtered based on the prefix and is replicated to first.bucket . <desired-app-label> Specify a label for the app. <desired-bucketclass-name> Specify the bucket class name. <desired-namespace> Specify the namespace in which the bucket class gets created. <backingstore> Specify the name of a backingstore. It is possible to pass several backingstores. "rule_id" Specify the ID number of the rule, for example, `{"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . 8.2.3. Enabling bucket replication deletion When creating a bucket replication policy, you may want to enable deletion so that when data is deleted from one bucket, the data is deleted from the destination bucket as well. This ensures that when data is deleted in one location, the other location has the same dataset. Important This feature requires logs-based replication, which is currently only supported using AWS. For more information about setting up AWS logs, see Enabling Amazon S3 server access logging . The AWS logs bucket needs to be created in the same region as the source NamespaceStore AWS bucket. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, navigate to Storage Object Bucket Claims . Click Create new Object bucket claim . In the Replication policy section, select the checkbox Sync deletion . Enter the name of the bucket that will contain the logs under Event log Bucket . Enter the prefix for the location of the logs in the logs bucket under Prefix . 
If the logs are stored in the root of the bucket, you can leave Prefix empty.
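The following is a minimal, hypothetical sketch of one way to exercise a prefix-based replication rule after the OBC is bound, using the AWS CLI against the MCG S3 endpoint. It assumes the my-bucket-claim OBC from the earlier example, the usual OBC-generated secret and config map of the same name in the openshift-storage namespace, and the s3 route exposed by MCG; the file and object names are placeholders:
$ AWS_ACCESS_KEY_ID=$(oc get secret my-bucket-claim -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
$ AWS_SECRET_ACCESS_KEY=$(oc get secret my-bucket-claim -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
$ BUCKET=$(oc get configmap my-bucket-claim -n openshift-storage -o jsonpath='{.data.BUCKET_NAME}')
$ S3_ENDPOINT=https://$(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}')
$ export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
$ aws --endpoint-url "$S3_ENDPOINT" --no-verify-ssl s3 cp ./example.txt "s3://$BUCKET/repl/example.txt"
After a short delay, the object should appear under the same repl/ prefix in the destination bucket, first.bucket, when that bucket is listed with credentials that have access to it.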
[ "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "noobaa obc create <bucket-claim-name> -n openshift-storage --replication-policy /path/to/json-file.json", "[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]", "noobaa obc create my-bucket-claim -n openshift-storage --replication-policy /path/to/json-file.json", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <desired-bucket-claim> namespace: <desired-namespace> spec: generateBucketName: <desired-bucket-name> storageClassName: openshift-storage.noobaa.io additionalConfig: replicationPolicy: |+ { \"rules\": [ {\"rule_id\":\"rule-1\", \"destination_bucket\":\"first.bucket\" } ] }", "noobaa -n openshift-storage bucketclass create placement-bucketclass <bucketclass-name> --backingstores <backingstores> --replication-policy=/path/to/json-file.json", "[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]", "noobaa -n openshift-storage bucketclass create placement-bucketclass bc --backingstores azure-blob-ns --replication-policy=/path/to/json-file.json", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: <desired-app-label> name: <desired-bucketclass-name> namespace: <desired-namespace> spec: placementPolicy: tiers: - backingstores: - <backingstore> placement: Spread replicationPolicy: [{ \"rule_id\": \" <rule id> \", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \" <object name prefix> \"}}]" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_hybrid_and_multicloud_resources/multicloud_object_gateway_bucket_replication
Preface
Preface This document is intended for use with Red Hat 3scale API Management 2.15 and related patch releases.
null
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/release_notes_for_red_hat_3scale_api_management_2.15_on-premises/pr01
Chapter 17. Flow Control APIs
Chapter 17. Flow Control APIs 17.1. Flow Control APIs 17.1.1. FlowSchema [flowcontrol.apiserver.k8s.io/v1beta3] Description FlowSchema defines the schema of a group of flows. Note that a flow is made up of a set of inbound API requests with similar attributes and is identified by a pair of strings: the name of the FlowSchema and a "flow distinguisher". Type object 17.1.2. PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1beta3] Description PriorityLevelConfiguration represents the configuration of a priority level. Type object 17.2. FlowSchema [flowcontrol.apiserver.k8s.io/v1beta3] Description FlowSchema defines the schema of a group of flows. Note that a flow is made up of a set of inbound API requests with similar attributes and is identified by a pair of strings: the name of the FlowSchema and a "flow distinguisher". Type object 17.2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object FlowSchemaSpec describes how the FlowSchema's specification looks like. status object FlowSchemaStatus represents the current state of a FlowSchema. 17.2.1.1. .spec Description FlowSchemaSpec describes how the FlowSchema's specification looks like. Type object Required priorityLevelConfiguration Property Type Description distinguisherMethod object FlowDistinguisherMethod specifies the method of a flow distinguisher. matchingPrecedence integer matchingPrecedence is used to choose among the FlowSchemas that match a given request. The chosen FlowSchema is among those with the numerically lowest (which we take to be logically highest) MatchingPrecedence. Each MatchingPrecedence value must be ranged in [1,10000]. Note that if the precedence is not specified, it will be set to 1000 as default. priorityLevelConfiguration object PriorityLevelConfigurationReference contains information that points to the "request-priority" being used. rules array rules describes which requests will match this flow schema. This FlowSchema matches a request if and only if at least one member of rules matches the request. if it is an empty slice, there will be no requests matching the FlowSchema. rules[] object PolicyRulesWithSubjects prescribes a test that applies to a request to an apiserver. The test considers the subject making the request, the verb being requested, and the resource to be acted upon. This PolicyRulesWithSubjects matches a request if and only if both (a) at least one member of subjects matches the request and (b) at least one member of resourceRules or nonResourceRules matches the request. 17.2.1.2. .spec.distinguisherMethod Description FlowDistinguisherMethod specifies the method of a flow distinguisher. 
Type object Required type Property Type Description type string type is the type of flow distinguisher method The supported types are "ByUser" and "ByNamespace". Required. 17.2.1.3. .spec.priorityLevelConfiguration Description PriorityLevelConfigurationReference contains information that points to the "request-priority" being used. Type object Required name Property Type Description name string name is the name of the priority level configuration being referenced Required. 17.2.1.4. .spec.rules Description rules describes which requests will match this flow schema. This FlowSchema matches a request if and only if at least one member of rules matches the request. if it is an empty slice, there will be no requests matching the FlowSchema. Type array 17.2.1.5. .spec.rules[] Description PolicyRulesWithSubjects prescribes a test that applies to a request to an apiserver. The test considers the subject making the request, the verb being requested, and the resource to be acted upon. This PolicyRulesWithSubjects matches a request if and only if both (a) at least one member of subjects matches the request and (b) at least one member of resourceRules or nonResourceRules matches the request. Type object Required subjects Property Type Description nonResourceRules array nonResourceRules is a list of NonResourcePolicyRules that identify matching requests according to their verb and the target non-resource URL. nonResourceRules[] object NonResourcePolicyRule is a predicate that matches non-resource requests according to their verb and the target non-resource URL. A NonResourcePolicyRule matches a request if and only if both (a) at least one member of verbs matches the request and (b) at least one member of nonResourceURLs matches the request. resourceRules array resourceRules is a slice of ResourcePolicyRules that identify matching requests according to their verb and the target resource. At least one of resourceRules and nonResourceRules has to be non-empty. resourceRules[] object ResourcePolicyRule is a predicate that matches some resource requests, testing the request's verb and the target resource. A ResourcePolicyRule matches a resource request if and only if: (a) at least one member of verbs matches the request, (b) at least one member of apiGroups matches the request, (c) at least one member of resources matches the request, and (d) either (d1) the request does not specify a namespace (i.e., Namespace=="" ) and clusterScope is true or (d2) the request specifies a namespace and least one member of namespaces matches the request's namespace. subjects array subjects is the list of normal user, serviceaccount, or group that this rule cares about. There must be at least one member in this slice. A slice that includes both the system:authenticated and system:unauthenticated user groups matches every request. Required. subjects[] object Subject matches the originator of a request, as identified by the request authentication system. There are three ways of matching an originator; by user, group, or service account. 17.2.1.6. .spec.rules[].nonResourceRules Description nonResourceRules is a list of NonResourcePolicyRules that identify matching requests according to their verb and the target non-resource URL. Type array 17.2.1.7. .spec.rules[].nonResourceRules[] Description NonResourcePolicyRule is a predicate that matches non-resource requests according to their verb and the target non-resource URL. 
A NonResourcePolicyRule matches a request if and only if both (a) at least one member of verbs matches the request and (b) at least one member of nonResourceURLs matches the request. Type object Required verbs nonResourceURLs Property Type Description nonResourceURLs array (string) nonResourceURLs is a set of url prefixes that a user should have access to and may not be empty. For example: - "/healthz" is legal - "/hea*" is illegal - "/hea" is legal but matches nothing - "/hea/*" also matches nothing - "/healthz/*" matches all per-component health checks. "*" matches all non-resource urls. If it is present, it must be the only entry. Required. verbs array (string) verbs is a list of matching verbs and may not be empty. "*" matches all verbs. If it is present, it must be the only entry. Required. 17.2.1.8. .spec.rules[].resourceRules Description resourceRules is a slice of ResourcePolicyRules that identify matching requests according to their verb and the target resource. At least one of resourceRules and nonResourceRules has to be non-empty. Type array 17.2.1.9. .spec.rules[].resourceRules[] Description ResourcePolicyRule is a predicate that matches some resource requests, testing the request's verb and the target resource. A ResourcePolicyRule matches a resource request if and only if: (a) at least one member of verbs matches the request, (b) at least one member of apiGroups matches the request, (c) at least one member of resources matches the request, and (d) either (d1) the request does not specify a namespace (i.e., Namespace=="") and clusterScope is true or (d2) the request specifies a namespace and at least one member of namespaces matches the request's namespace. Type object Required verbs apiGroups resources Property Type Description apiGroups array (string) apiGroups is a list of matching API groups and may not be empty. "*" matches all API groups and, if present, must be the only entry. Required. clusterScope boolean clusterScope indicates whether to match requests that do not specify a namespace (which happens either because the resource is not namespaced or the request targets all namespaces). If this field is omitted or false then the namespaces field must contain a non-empty list. namespaces array (string) namespaces is a list of target namespaces that restricts matches. A request that specifies a target namespace matches only if either (a) this list contains that target namespace or (b) this list contains "*". Note that "*" matches any specified namespace but does not match a request that does not specify a namespace (see the clusterScope field for that). This list may be empty, but only if clusterScope is true. resources array (string) resources is a list of matching resources (i.e., lowercase and plural) with, if desired, subresource. For example, [ "services", "nodes/status" ]. This list may not be empty. "*" matches all resources and, if present, must be the only entry. Required. verbs array (string) verbs is a list of matching verbs and may not be empty. "*" matches all verbs and, if present, must be the only entry. Required. 17.2.1.10. .spec.rules[].subjects Description subjects is the list of normal user, serviceaccount, or group that this rule cares about. There must be at least one member in this slice. A slice that includes both the system:authenticated and system:unauthenticated user groups matches every request. Required. Type array 17.2.1.11.
.spec.rules[].subjects[] Description Subject matches the originator of a request, as identified by the request authentication system. There are three ways of matching an originator; by user, group, or service account. Type object Required kind Property Type Description group object GroupSubject holds detailed information for group-kind subject. kind string kind indicates which one of the other fields is non-empty. Required serviceAccount object ServiceAccountSubject holds detailed information for service-account-kind subject. user object UserSubject holds detailed information for user-kind subject. 17.2.1.12. .spec.rules[].subjects[].group Description GroupSubject holds detailed information for group-kind subject. Type object Required name Property Type Description name string name is the user group that matches, or "*" to match all user groups. See https://github.com/kubernetes/apiserver/blob/master/pkg/authentication/user/user.go for some well-known group names. Required. 17.2.1.13. .spec.rules[].subjects[].serviceAccount Description ServiceAccountSubject holds detailed information for service-account-kind subject. Type object Required namespace name Property Type Description name string name is the name of matching ServiceAccount objects, or "*" to match regardless of name. Required. namespace string namespace is the namespace of matching ServiceAccount objects. Required. 17.2.1.14. .spec.rules[].subjects[].user Description UserSubject holds detailed information for user-kind subject. Type object Required name Property Type Description name string name is the username that matches, or "*" to match all usernames. Required. 17.2.1.15. .status Description FlowSchemaStatus represents the current state of a FlowSchema. Type object Property Type Description conditions array conditions is a list of the current states of FlowSchema. conditions[] object FlowSchemaCondition describes conditions for a FlowSchema. 17.2.1.16. .status.conditions Description conditions is a list of the current states of FlowSchema. Type array 17.2.1.17. .status.conditions[] Description FlowSchemaCondition describes conditions for a FlowSchema. Type object Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another. message string message is a human-readable message indicating details about last transition. reason string reason is a unique, one-word, CamelCase reason for the condition's last transition. status string status is the status of the condition. Can be True, False, Unknown. Required. type string type is the type of the condition. Required. 17.2.2. API endpoints The following API endpoints are available: /apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas DELETE : delete collection of FlowSchema GET : list or watch objects of kind FlowSchema POST : create a FlowSchema /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/flowschemas GET : watch individual changes to a list of FlowSchema. deprecated: use the 'watch' parameter with a list operation instead. /apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas/{name} DELETE : delete a FlowSchema GET : read the specified FlowSchema PATCH : partially update the specified FlowSchema PUT : replace the specified FlowSchema /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/flowschemas/{name} GET : watch changes to an object of kind FlowSchema. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 
/apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas/{name}/status GET : read status of the specified FlowSchema PATCH : partially update status of the specified FlowSchema PUT : replace status of the specified FlowSchema 17.2.2.1. /apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas Table 17.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of FlowSchema Table 17.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 17.3. 
Body parameters Parameter Type Description body DeleteOptions schema Table 17.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind FlowSchema Table 17.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 17.6. HTTP responses HTTP code Reponse body 200 - OK FlowSchemaList schema 401 - Unauthorized Empty HTTP method POST Description create a FlowSchema Table 17.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.8. Body parameters Parameter Type Description body FlowSchema schema Table 17.9. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 202 - Accepted FlowSchema schema 401 - Unauthorized Empty 17.2.2.2. /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/flowschemas Table 17.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of FlowSchema. deprecated: use the 'watch' parameter with a list operation instead. Table 17.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.2.2.3. /apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas/{name} Table 17.12. Global path parameters Parameter Type Description name string name of the FlowSchema Table 17.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a FlowSchema Table 17.14. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 17.15. Body parameters Parameter Type Description body DeleteOptions schema Table 17.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified FlowSchema Table 17.17. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified FlowSchema Table 17.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 17.19. Body parameters Parameter Type Description body Patch schema Table 17.20. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified FlowSchema Table 17.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.22. Body parameters Parameter Type Description body FlowSchema schema Table 17.23. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty 17.2.2.4. /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/flowschemas/{name} Table 17.24. Global path parameters Parameter Type Description name string name of the FlowSchema Table 17.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind FlowSchema. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 17.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.2.2.5. /apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas/{name}/status Table 17.27. Global path parameters Parameter Type Description name string name of the FlowSchema Table 17.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified FlowSchema Table 17.29. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified FlowSchema Table 17.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. 
force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 17.31. Body parameters Parameter Type Description body Patch schema Table 17.32. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified FlowSchema Table 17.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.34. Body parameters Parameter Type Description body FlowSchema schema Table 17.35. HTTP responses HTTP code Reponse body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty 17.3. PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1beta3] Description PriorityLevelConfiguration represents the configuration of a priority level. Type object 17.3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PriorityLevelConfigurationSpec specifies the configuration of a priority level. status object PriorityLevelConfigurationStatus represents the current state of a "request-priority". 17.3.1.1. .spec Description PriorityLevelConfigurationSpec specifies the configuration of a priority level. 
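Before the field-by-field breakdown of the spec, the following minimal PriorityLevelConfiguration sketch shows a Limited priority level; the object name and every numeric value are illustrative assumptions, not defaults mandated by this reference.

```yaml
# Hypothetical PriorityLevelConfiguration sketch for a Limited priority level.
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: PriorityLevelConfiguration
metadata:
  name: example-priority              # referenced by FlowSchema spec.priorityLevelConfiguration.name
spec:
  type: Limited                       # "Exempt" levels are never queued or limited
  limited:
    nominalConcurrencyShares: 30      # contributes to NominalCL in proportion to the sum of all shares
    lendablePercent: 50               # up to half of this level's seats may be lent to other levels
    borrowingLimitPercent: 100        # this level may borrow up to its own nominal limit
    limitResponse:
      type: Queue                     # "Reject" would drop excess requests instead of queuing them
      queuing:
        queues: 64
        handSize: 8
        queueLengthLimit: 50
```

Applying the formulas defined under .spec.limited to assumed numbers: with a server concurrency limit of 600 and a sum of 100 nominalConcurrencyShares across all Limited levels, this level's NominalCL would be ceil(600 * 30 / 100) = 180, its LendableCL round(180 * 50 / 100) = 90, and its BorrowingCL round(180 * 100 / 100) = 180. The subsections that follow define each of these fields, continuing with the .spec properties.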
Type object Required type Property Type Description limited object LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues: - How are requests for this priority level limited? - What should be done with requests that exceed the limit? type string type indicates whether this priority level is subject to limitation on request execution. A value of "Exempt" means that requests of this priority level are not subject to a limit (and thus are never queued) and do not detract from the capacity made available to other priority levels. A value of "Limited" means that (a) requests of this priority level are subject to limits and (b) some of the server's limited capacity is made available exclusively to this priority level. Required. 17.3.1.2. .spec.limited Description LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues: - How are requests for this priority level limited? - What should be done with requests that exceed the limit? Type object Property Type Description borrowingLimitPercent integer borrowingLimitPercent , if present, configures a limit on how many seats this priority level can borrow from other priority levels. The limit is known as this level's BorrowingConcurrencyLimit (BorrowingCL) and is a limit on the total number of seats that this level may borrow at any one time. This field holds the ratio of that limit to the level's nominal concurrency limit. When this field is non-nil, it must hold a non-negative integer and the limit is calculated as follows. BorrowingCL(i) = round( NominalCL(i) * borrowingLimitPercent(i)/100.0 ) The value of this field can be more than 100, implying that this priority level can borrow a number of seats that is greater than its own nominal concurrency limit (NominalCL). When this field is left nil , the limit is effectively infinite. lendablePercent integer lendablePercent prescribes the fraction of the level's NominalCL that can be borrowed by other priority levels. The value of this field must be between 0 and 100, inclusive, and it defaults to 0. The number of seats that other levels can borrow from this level, known as this level's LendableConcurrencyLimit (LendableCL), is defined as follows. LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 ) limitResponse object LimitResponse defines how to handle requests that can not be executed right now. nominalConcurrencyShares integer nominalConcurrencyShares (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats available at this priority level. This is used both for requests dispatched from this priority level as well as requests dispatched from other priority levels borrowing seats from this level. The server's concurrency limit (ServerCL) is divided among the Limited priority levels in proportion to their NCS values: NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[limited priority level k] NCS(k) Bigger numbers mean a larger nominal concurrency limit, at the expense of every other Limited priority level. This field has a default value of 30. 17.3.1.3. .spec.limited.limitResponse Description LimitResponse defines how to handle requests that can not be executed right now. Type object Required type Property Type Description queuing object QueuingConfiguration holds the configuration parameters for queuing type string type is "Queue" or "Reject". 
"Queue" means that requests that can not be executed upon arrival are held in a queue until they can be executed or a queuing limit is reached. "Reject" means that requests that can not be executed upon arrival are rejected. Required. 17.3.1.4. .spec.limited.limitResponse.queuing Description QueuingConfiguration holds the configuration parameters for queuing Type object Property Type Description handSize integer handSize is a small positive number that configures the shuffle sharding of requests into queues. When enqueuing a request at this priority level the request's flow identifier (a string pair) is hashed and the hash value is used to shuffle the list of queues and deal a hand of the size specified here. The request is put into one of the shortest queues in that hand. handSize must be no larger than queues , and should be significantly smaller (so that a few heavy flows do not saturate most of the queues). See the user-facing documentation for more extensive guidance on setting this field. This field has a default value of 8. queueLengthLimit integer queueLengthLimit is the maximum number of requests allowed to be waiting in a given queue of this priority level at a time; excess requests are rejected. This value must be positive. If not specified, it will be defaulted to 50. queues integer queues is the number of queues for this priority level. The queues exist independently at each apiserver. The value must be positive. Setting it to 1 effectively precludes shufflesharding and thus makes the distinguisher method of associated flow schemas irrelevant. This field has a default value of 64. 17.3.1.5. .status Description PriorityLevelConfigurationStatus represents the current state of a "request-priority". Type object Property Type Description conditions array conditions is the current state of "request-priority". conditions[] object PriorityLevelConfigurationCondition defines the condition of priority level. 17.3.1.6. .status.conditions Description conditions is the current state of "request-priority". Type array 17.3.1.7. .status.conditions[] Description PriorityLevelConfigurationCondition defines the condition of priority level. Type object Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another. message string message is a human-readable message indicating details about last transition. reason string reason is a unique, one-word, CamelCase reason for the condition's last transition. status string status is the status of the condition. Can be True, False, Unknown. Required. type string type is the type of the condition. Required. 17.3.2. API endpoints The following API endpoints are available: /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations DELETE : delete collection of PriorityLevelConfiguration GET : list or watch objects of kind PriorityLevelConfiguration POST : create a PriorityLevelConfiguration /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/prioritylevelconfigurations GET : watch individual changes to a list of PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{name} DELETE : delete a PriorityLevelConfiguration GET : read the specified PriorityLevelConfiguration PATCH : partially update the specified PriorityLevelConfiguration PUT : replace the specified PriorityLevelConfiguration /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/prioritylevelconfigurations/{name} GET : watch changes to an object of kind PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{name}/status GET : read status of the specified PriorityLevelConfiguration PATCH : partially update status of the specified PriorityLevelConfiguration PUT : replace status of the specified PriorityLevelConfiguration 17.3.2.1. /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations Table 17.36. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PriorityLevelConfiguration Table 17.37. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. 
If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 17.38. Body parameters Parameter Type Description body DeleteOptions schema Table 17.39. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PriorityLevelConfiguration Table 17.40. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 17.41. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfigurationList schema 401 - Unauthorized Empty HTTP method POST Description create a PriorityLevelConfiguration Table 17.42. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.43. Body parameters Parameter Type Description body PriorityLevelConfiguration schema Table 17.44. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 202 - Accepted PriorityLevelConfiguration schema 401 - Unauthorized Empty 17.3.2.2. /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/prioritylevelconfigurations Table 17.45. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead. Table 17.46. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.3.2.3. /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{name} Table 17.47. Global path parameters Parameter Type Description name string name of the PriorityLevelConfiguration Table 17.48. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PriorityLevelConfiguration Table 17.49. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 17.50. Body parameters Parameter Type Description body DeleteOptions schema Table 17.51. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PriorityLevelConfiguration Table 17.52. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PriorityLevelConfiguration Table 17.53. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 17.54. Body parameters Parameter Type Description body Patch schema Table 17.55. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PriorityLevelConfiguration Table 17.56. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.57. Body parameters Parameter Type Description body PriorityLevelConfiguration schema Table 17.58. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty 17.3.2.4. /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/prioritylevelconfigurations/{name} Table 17.59. Global path parameters Parameter Type Description name string name of the PriorityLevelConfiguration Table 17.60. 
Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 17.61. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 17.3.2.5. /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{name}/status Table 17.62. Global path parameters Parameter Type Description name string name of the PriorityLevelConfiguration Table 17.63. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified PriorityLevelConfiguration Table 17.64. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PriorityLevelConfiguration Table 17.65. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 17.66. Body parameters Parameter Type Description body Patch schema Table 17.67. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PriorityLevelConfiguration Table 17.68. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.69. Body parameters Parameter Type Description body PriorityLevelConfiguration schema Table 17.70. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty
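For illustration only, the following minimal manifest sketches how the queuing fields documented in this section fit together in a PriorityLevelConfiguration whose limit response is of type Queue. The object name is a hypothetical example, the numeric values simply mirror the documented defaults, and other .spec.limited fields are omitted for brevity; you would typically create the object with a standard client, for example oc apply -f priority-level.yaml, which issues a POST against the collection endpoint listed above.
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level
spec:
  type: Limited
  limited:
    limitResponse:
      type: Queue
      queuing:
        queues: 64
        handSize: 8
        queueLengthLimit: 50
After the object is created, oc get prioritylevelconfigurations.flowcontrol.apiserver.k8s.io lists it alongside the built-in priority levels, and its observed state is reported in the .status.conditions fields described above.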
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/flow-control-apis-1
Chapter 93. OtherArtifact schema reference
Chapter 93. OtherArtifact schema reference Used in: Plugin Property Description url URL of the artifact which will be downloaded. AMQ Streams does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required for jar , zip , tgz and other artifacts. Not applicable to the maven artifact type. string sha512sum SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. Not applicable to the maven artifact type. string fileName Name under which the artifact will be stored. string insecure By default, connections using TLS are verified to check they are secure. The server certificate used must be valid, trusted, and contain the server name. By setting this option to true , all TLS verification is disabled and the artifact will be downloaded, even when the server is considered insecure. boolean type Must be other . string
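As a sketch only, the properties above might appear as follows inside a plugin's artifacts list. The plugin name, download URL, checksum placeholder, and file name are hypothetical, and the surrounding plugins / artifacts nesting is assumed from the Plugin schema that uses this type rather than shown in this chapter.
plugins:
  - name: my-connector
    artifacts:
      - type: other
        url: https://example.com/artifacts/my-connector.zip
        sha512sum: <sha512-checksum-of-the-artifact>
        fileName: my-connector.zip
Supplying sha512sum is optional but recommended, because without it the downloaded artifact is not verified during the container build.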
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-OtherArtifact-reference
Chapter 4. Configuring a password-based account lockout policy
Chapter 4. Configuring a password-based account lockout policy A password-based account lockout policy prevents attackers from repeatedly trying to guess a user's password. You can configure the account lockout policy to lock a user account after a specified number of failed attempts to bind. If a password-based account lockout policy is configured, Directory Server maintains the lockout information in the following attributes of the user entries: passwordRetryCount : Stores the number of failed bind attempts. Directory Server resets the value if the user successfully binds to the directory later than the time in retryCountResetTime . This attribute is present after a user fails to bind for the first time. retryCountResetTime : Stores the time after which the passwordRetryCount attribute is reset. This attribute is present after a user fails to bind for the first time. accountUnlockTime : Stores the time after which the user account is unlocked. This attribute is present after the account was locked for the first time. 4.1. Configuring whether to lock accounts when reaching or exceeding the configured maximum attempts Administrators can configure one of the following behaviors when Directory Server locks accounts on failed login attempts: The server locks accounts if the limit has been exceeded. For example, if the limit is set to 3 attempts, the lockout happens after the fourth failed attempt ( n+1 ). This also means that, if the fourth attempt succeeds, Directory Server does not lock the account. By default, Directory Server uses this legacy password policy that is often expected by traditional LDAP clients. The server locks accounts if the limit has been reached. For example, if the limit is set to 3 attempts, the server locks the account after the third failed attempt ( n ). Modern LDAP clients often expect this behavior. This procedure describes how to disable the legacy password policy. After changing the policy, Directory Server blocks login attempts for a user that reached the configured limit. Prerequisites You configured an account lockout policy. Procedure To disable the legacy password policy and lock accounts if the limit has been reached, enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com config replace passwordLegacyPolicy=off Verification Display the value of the passwordmaxfailure setting: # dsconf -D "cn=Directory Manager" ldap://server.example.com pwpolicy get passwordmaxfailure passwordmaxfailure: 2 Attempt to bind using an invalid password one more time than the value set in passwordmaxfailure : # ldapsearch -H ldap://server.example.com -D " uid=example,ou=People,dc=example,dc=com " -w invalid-password -b " dc=example,dc=com " -x ldap_bind: Invalid credentials (49) # ldapsearch -H ldap://server.example.com -D " uid=example,ou=People,dc=example,dc=com " -w invalid-password -b " dc=example,dc=com " -x ldap_bind: Invalid credentials (49) # ldapsearch -H ldap://server.example.com -D " uid=example,ou=People,dc=example,dc=com " -w invalid-password -b " dc=example,dc=com " -x ldap_bind: Constraint violation (19) additional info: Exceed password retry limit. Please try later. With legacy passwords disabled, Directory Server locked the account after the second attempt, and further tries are blocked with an ldap_bind: Constraint violation (19) error. Additional resources Configuring a password-based account lockout policy using the command line 4.2. 
Configuring a password-based account lockout policy using the command line To block recurring bind attempts with invalid passwords, configure a password-based account lockout policy. Important Whether Directory Server locks accounts when the configured maximum number of attempts is reached or only when it is exceeded depends on the legacy password policy setting. Procedure Optional: Identify whether the legacy password policy is enabled or disabled: # dsconf -D " cn=Directory Manager " ldap://server.example.com config get passwordLegacyPolicy passwordLegacyPolicy: on Enable the password lockout policy and set the maximum number of failures to 2 : # dsconf -D " cn=Directory Manager " ldap://server.example.com pwpolicy set --pwdlockout on --pwdmaxfailures= 2 With the legacy password policy enabled, Directory Server will lock accounts after the third failed attempt to bind (value of the --pwdmaxfailures parameter + 1). The dsconf pwpolicy set command supports the following parameters: --pwdlockout : Enables or disables the account lockout feature. Default: off . --pwdmaxfailures : Sets the maximum number of allowed failed bind attempts before Directory Server locks the account. Note that this lockout happens one attempt later if the legacy password policy setting is enabled. Default: 3 . --pwdresetfailcount : Sets the time in seconds before Directory Server resets the passwordRetryCount attribute in the user's entry. Default: 600 seconds (10 minutes). --pwdlockoutduration : Sets the time in seconds that accounts remain locked. This parameter is ignored if you set the --pwdunlock parameter to off . Default: 3600 seconds (1 hour). --pwdunlock : Enables or disables whether locked accounts are unlocked automatically after a certain amount of time or stay locked until an administrator manually unlocks them. Default: on . Verification Attempt to bind using an invalid password two more times than the value you set in the --pwdmaxfailures parameter: # ldapsearch -H ldap://server.example.com -D " uid=example,ou=People,dc=example,dc=com " -w invalid-password -b " dc=example,dc=com " -x ldap_bind: Invalid credentials (49) # ldapsearch -H ldap://server.example.com -D " uid=example,ou=People,dc=example,dc=com " -w invalid-password -b " dc=example,dc=com " -x ldap_bind: Invalid credentials (49) # ldapsearch -H ldap://server.example.com -D " uid=example,ou=People,dc=example,dc=com " -w invalid-password -b " dc=example,dc=com " -x ldap_bind: Invalid credentials (49) # ldapsearch -H ldap://server.example.com -D " uid=example,ou=People,dc=example,dc=com " -w invalid-password -b " dc=example,dc=com " -x ldap_bind: Constraint violation (19) additional info: Exceed password retry limit. Please try later. With the legacy password policy enabled, Directory Server locked the account after the limit was exceeded, and further attempts are blocked with an ldap_bind: Constraint violation (19) error. Additional resources Configuring the legacy password policy 4.3. Configuring a password-based account lockout policy using the web console To block recurring bind attempts with invalid passwords, configure a password-based account lockout policy. Important Whether Directory Server locks accounts when the configured maximum number of attempts is reached or only when it is exceeded depends on the legacy password policy setting. Prerequisites You are logged in to the instance in the web console.
Procedure Optional: Identify whether the legacy password policy is enabled or disabled: # dsconf -D " cn=Directory Manager " ldap://server.example.com config get passwordLegacyPolicy passwordLegacyPolicy: on This setting is not available in the web console. Navigate to Database Password Policies Global Policy Account Lockout . Select Enable Account Lockout . Configure the lockout settings: Number of Failed Logins That Locks out Account : Sets the maximum number of allowed failed bind attempts before Directory Server locks the account. Time Until Failure Count Resets : Sets the time in seconds before Directory Server resets the passwordRetryCount attribute in the user's entry. Time Until Account Unlocked : Sets the time in seconds that accounts remain locked. This parameter is ignored if you disable Do Not Lockout Account Forever . Do Not Lockout Account Forever : Enables or disables whether locked accounts are unlocked automatically after a certain amount of time or stay locked until an administrator manually unlocks them. Click Save . Verification Attempt to bind using an invalid password two more times than the value you set in Number of Failed Logins That Locks out Account : # ldapsearch -H ldap://server.example.com -D " uid=example,ou=People,dc=example,dc=com " -w invalid-password -b " dc=example,dc=com " -x ldap_bind: Invalid credentials (49) # ldapsearch -H ldap://server.example.com -D " uid=example,ou=People,dc=example,dc=com " -w invalid-password -b " dc=example,dc=com " -x ldap_bind: Invalid credentials (49) # ldapsearch -H ldap://server.example.com -D " uid=example,ou=People,dc=example,dc=com " -w invalid-password -b " dc=example,dc=com " -x ldap_bind: Invalid credentials (49) # ldapsearch -H ldap://server.example.com -D " uid=example,ou=People,dc=example,dc=com " -w invalid-password -b " dc=example,dc=com " -x ldap_bind: Constraint violation (19) additional info: Exceed password retry limit. Please try later. With the legacy password policy enabled, Directory Server locked the account after the limit was exceeded, and further attempts are blocked with an ldap_bind: Constraint violation (19) error. Additional resources Configuring the legacy password policy
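To inspect the lockout state that either procedure produces, you can read the lockout attributes directly from the affected user entry. The following ldapsearch call is an illustrative sketch, not part of the procedures above: it binds as cn=Directory Manager (prompting for the password because of -W) and requests only the attributes described at the beginning of this chapter; adjust the host name and DNs to match your environment.
# ldapsearch -H ldap://server.example.com -D "cn=Directory Manager" -W -x -s base -b "uid=example,ou=People,dc=example,dc=com" "(objectClass=*)" passwordRetryCount retryCountResetTime accountUnlockTime
The output shows passwordRetryCount and retryCountResetTime after a failed bind, and accountUnlockTime once the account has been locked.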
[ "dsconf -D \" cn=Directory Manager \" ldap://server.example.com config replace passwordLegacyPolicy=off", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com pwpolicy get passwordmaxfailure passwordmaxfailure: 2", "ldapsearch -H ldap://server.example.com -D \" uid=example,ou=People,dc=example,dc=com \" -w invalid-password -b \" dc=example,dc=com \" -x ldap_bind: Invalid credentials (49) ldapsearch -H ldap://server.example.com -D \" uid=example,ou=People,dc=example,dc=com \" -w invalid-password -b \" dc=example,dc=com \" -x ldap_bind: Invalid credentials (49) ldapsearch -H ldap://server.example.com -D \" uid=example,ou=People,dc=example,dc=com \" -w invalid-password -b \" dc=example,dc=com \" -x ldap_bind: Constraint violation (19) additional info: Exceed password retry limit. Please try later.", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com config get passwordLegacyPolicy passwordLegacyPolicy: on", "[command]`dsconf -D \" cn=Directory Manager \" ldap://server.example.com pwpolicy set --pwdlockout on --pwdmaxfailures= 2", "ldapsearch -H ldap://server.example.com -D \" uid=example,ou=People,dc=example,dc=com \" -w invalid-password -b \" dc=example,dc=com \" -x ldap_bind: Invalid credentials (49) ldapsearch -H ldap://server.example.com -D \" uid=example,ou=People,dc=example,dc=com \" -w invalid-password -b \" dc=example,dc=com \" -x ldap_bind: Invalid credentials (49) ldapsearch -H ldap://server.example.com -D \" uid=example,ou=People,dc=example,dc=com \" -w invalid-password -b \" dc=example,dc=com \" -x ldap_bind: Invalid credentials (49) ldapsearch -H ldap://server.example.com -D \" uid=example,ou=People,dc=example,dc=com \" -w invalid-password -b \" dc=example,dc=com \" -x ldap_bind: Constraint violation (19) additional info: Exceed password retry limit. Please try later.", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com config get passwordLegacyPolicy passwordLegacyPolicy: on", "ldapsearch -H ldap://server.example.com -D \" uid=example,ou=People,dc=example,dc=com \" -w invalid-password -b \" dc=example,dc=com \" -x ldap_bind: Invalid credentials (49) ldapsearch -H ldap://server.example.com -D \" uid=example,ou=People,dc=example,dc=com \" -w invalid-password -b \" dc=example,dc=com \" -x ldap_bind: Invalid credentials (49) ldapsearch -H ldap://server.example.com -D \" uid=example,ou=People,dc=example,dc=com \" -w invalid-password -b \" dc=example,dc=com \" -x ldap_bind: Invalid credentials (49) ldapsearch -H ldap://server.example.com -D \" uid=example,ou=People,dc=example,dc=com \" -w invalid-password -b \" dc=example,dc=com \" -x ldap_bind: Constraint violation (19) additional info: Exceed password retry limit. Please try later." ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/managing_access_control/assembly_configuring-a-password-based-account-lockout-policy_managing-access-control
Chapter 7. System Auditing
Chapter 7. System Auditing The Linux Audit system provides a way to track security-relevant information on your system. Based on pre-configured rules, Audit generates log entries to record as much information about the events that are happening on your system as possible. This information is crucial for mission-critical environments to determine the violator of the security policy and the actions they performed. Audit does not provide additional security to your system; rather, it can be used to discover violations of security policies used on your system. These violations can further be prevented by additional security measures such as SELinux. The following list summarizes some of the information that Audit is capable of recording in its log files: Date and time, type, and outcome of an event. Sensitivity labels of subjects and objects. Association of an event with the identity of the user who triggered the event. All modifications to Audit configuration and attempts to access Audit log files. All uses of authentication mechanisms, such as SSH, Kerberos, and others. Changes to any trusted database, such as /etc/passwd . Attempts to import or export information into or from the system. Include or exclude events based on user identity, subject and object labels, and other attributes. The use of the Audit system is also a requirement for a number of security-related certifications. Audit is designed to meet or exceed the requirements of the following certifications or compliance guides: Controlled Access Protection Profile (CAPP) Labeled Security Protection Profile (LSPP) Rule Set Base Access Control (RSBAC) National Industrial Security Program Operating Manual (NISPOM) Federal Information Security Management Act (FISMA) Payment Card Industry - Data Security Standard (PCI-DSS) Security Technical Implementation Guides (STIG) Audit has also been: Evaluated by National Information Assurance Partnership (NIAP) and Best Security Industries (BSI). Certified to LSPP/CAPP/RSBAC/EAL4+ on Red Hat Enterprise Linux 5. Certified to Operating System Protection Profile / Evaluation Assurance Level 4+ (OSPP/EAL4+) on Red Hat Enterprise Linux 6. Use Cases Watching file access Audit can track whether a file or a directory has been accessed, modified, executed, or the file's attributes have been changed. This is useful, for example, to detect access to important files and have an Audit trail available in case one of these files is corrupted. Monitoring system calls Audit can be configured to generate a log entry every time a particular system call is used. This can be used, for example, to track changes to the system time by monitoring the settimeofday , clock_adjtime , and other time-related system calls. Recording commands run by a user Because Audit can track whether a file has been executed, a number of rules can be defined to record every execution of a particular command. For example, a rule can be defined for every executable in the /bin directory. The resulting log entries can then be searched by user ID to generate an audit trail of executed commands per user. Recording security events The pam_faillock authentication module is capable of recording failed login attempts. Audit can be set up to record failed login attempts as well, and provides additional information about the user who attempted to log in. Searching for events Audit provides the ausearch utility, which can be used to filter the log entries and provide a complete audit trail based on a number of conditions. 
Running summary reports The aureport utility can be used to generate, among other things, daily reports of recorded events. A system administrator can then analyze these reports and investigate suspicious activity further. Monitoring network access The iptables and ebtables utilities can be configured to trigger Audit events, allowing system administrators to monitor network access. Note System performance may be affected depending on the amount of information that is collected by Audit. 7.1. Audit System Architecture The Audit system consists of two main parts: the user-space applications and utilities, and the kernel-side system call processing. The kernel component receives system calls from user-space applications and filters them through one of the three filters: user , task , or exit . Once a system call passes through one of these filters, it is sent through the exclude filter, which, based on the Audit rule configuration, sends it to the Audit daemon for further processing. Figure 7.1, "Audit system architecture" illustrates this process. Figure 7.1. Audit system architecture The user-space Audit daemon collects the information from the kernel and creates entries in a log file. Other Audit user-space utilities interact with the Audit daemon, the kernel Audit component, or the Audit log files: audisp - the Audit dispatcher daemon interacts with the Audit daemon and sends events to other applications for further processing. The purpose of this daemon is to provide a plug-in mechanism so that real-time analytical programs can interact with Audit events. auditctl - the Audit control utility interacts with the kernel Audit component to control a number of settings and parameters of the event generation process. The remaining Audit utilities take the contents of the Audit log files as input and generate output based on the user's requirements. For example, the aureport utility generates a report of all recorded events.
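As an illustrative sketch of the use cases listed in this chapter (the watched path and key name are arbitrary examples, not recommended settings), a file-watch rule, a search for the resulting events, and a summary report could look like this:
# auditctl -w /etc/passwd -p wa -k passwd_changes
adds a rule that records write and attribute-change access to /etc/passwd under the key passwd_changes; the rule does not persist across reboots unless it is also added to the Audit rules file.
# ausearch -k passwd_changes
filters the Audit log for the events recorded with that key.
# aureport --summary
prints a summary report of the recorded events.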
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/chap-system_auditing
Chapter 26. CXF
Chapter 26. CXF Both producer and consumer are supported The CXF component provides integration with Apache CXF for connecting to JAX-WS services hosted in CXF. Tip When using CXF in streaming modes (see DataFormat option), then also read about Stream caching. 26.1. Dependencies When using cxf with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cxf-soap-starter</artifactId> </dependency> 26.2. URI format There are two URI formats for this endpoint: cxfEndpoint and someAddress . Where cxfEndpoint represents a bean ID that references a bean in the Spring bean registry. With this URI format, most of the endpoint details are specified in the bean definition. Where someAddress specifies the CXF endpoint's address. With this URI format, most of the endpoint details are specified using options. For either style above, you can append options to the URI as follows: 26.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 26.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 26.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 26.4. Component Options The CXF component supports 6 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean allowStreaming (advanced) This option controls whether the CXF component, when running in PAYLOAD mode, will DOM parse the incoming messages into DOM Elements or keep the payload as a javax.xml.transform.Source object that would allow streaming in some cases. Boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean 26.5. Endpoint Options The CXF endpoint is configured using URI syntax: with the following path and query parameters: 26.5.1. Path Parameters (2 parameters) Name Description Default Type beanId (common) To lookup an existing configured CxfEndpoint. Must used bean: as prefix. String address (service) The service publish address. String 26.5.2. Query Parameters (35 parameters) Name Description Default Type dataFormat (common) The data type messages supported by the CXF endpoint. Enum values: PAYLOAD RAW MESSAGE CXF_MESSAGE POJO POJO DataFormat wrappedStyle (common) The WSDL style that describes how parameters are represented in the SOAP body. If the value is false, CXF will chose the document-literal unwrapped style, If the value is true, CXF will chose the document-literal wrapped style. Boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern cookieHandler (producer) Configure a cookie handler to maintain a HTTP session. CookieHandler defaultOperationName (producer) This option will set the default operationName that will be used by the CxfProducer which invokes the remote service. String defaultOperationNamespace (producer) This option will set the default operationNamespace that will be used by the CxfProducer which invokes the remote service. String hostnameVerifier (producer) The hostname verifier to be used. Use the # notation to reference a HostnameVerifier from the registry. HostnameVerifier lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean sslContextParameters (producer) The Camel SSL setting reference. Use the # notation to reference the SSL Context. SSLContextParameters wrapped (producer) Which kind of operation that CXF endpoint producer will invoke. false boolean synchronous (producer (advanced)) Sets whether synchronous processing should be strictly used. false boolean allowStreaming (advanced) This option controls whether the CXF component, when running in PAYLOAD mode, will DOM parse the incoming messages into DOM Elements or keep the payload as a javax.xml.transform.Source object that would allow streaming in some cases. Boolean bus (advanced) To use a custom configured CXF Bus. Bus continuationTimeout (advanced) This option is used to set the CXF continuation timeout which could be used in CxfConsumer by default when the CXF server is using Jetty or Servlet transport. 30000 long cxfBinding (advanced) To use a custom CxfBinding to control the binding between Camel Message and CXF Message. CxfBinding cxfConfigurer (advanced) This option could apply the implementation of org.apache.camel.component.cxf.CxfEndpointConfigurer which supports to configure the CXF endpoint in programmatic way. User can configure the CXF server and client by implementing configure{ServerClient} method of CxfEndpointConfigurer. CxfConfigurer defaultBus (advanced) Will set the default bus when CXF endpoint create a bus by itself. false boolean headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy mergeProtocolHeaders (advanced) Whether to merge protocol headers. If enabled then propagating headers between Camel and CXF becomes more consistent and similar. For more details see CAMEL-6393. false boolean mtomEnabled (advanced) To enable MTOM (attachments). This requires to use POJO or PAYLOAD data format mode. false boolean properties (advanced) To set additional CXF options using the key/value pairs from the Map. For example to turn on stacktraces in SOAP faults, properties.faultStackTraceEnabled=true. Map skipPayloadMessagePartCheck (advanced) Sets whether SOAP message validation should be disabled. false boolean loggingFeatureEnabled (logging) This option enables CXF Logging Feature which writes inbound and outbound SOAP messages to log. false boolean loggingSizeLimit (logging) To limit the total size of number of bytes the logger will output when logging feature has been enabled and -1 for no limit. 49152 int skipFaultLogging (logging) This option controls whether the PhaseInterceptorChain skips logging the Fault that it catches. false boolean password (security) This option is used to set the basic authentication information of password for the CXF client. String username (security) This option is used to set the basic authentication information of username for the CXF client. String bindingId (service) The bindingId for the service model to use. String portName (service) The endpoint name this service is implementing, it maps to the wsdl:portname. 
In the format of ns:PORT_NAME where ns is a namespace prefix valid at this scope. String publishedEndpointUrl (service) This option can override the endpointUrl that is published from the WSDL, which can be accessed with the service address URL plus ?wsdl. String serviceClass (service) The class name of the SEI (Service Endpoint Interface) class, which may or may not have JSR181 annotations. Class serviceName (service) The service name this service is implementing, it maps to the wsdl:servicename. String wsdlURL (service) The location of the WSDL. Can be on the classpath, file system, or be hosted remotely. String The serviceName and portName are QNames, so if you provide them, be sure to prefix them with their {namespace} as shown in the examples above. 26.5.3. Descriptions of the dataformats In Apache Camel, the Camel CXF component is the key to integrating routes with Web services. You can use the Camel CXF component to create a CXF endpoint, which can be used in either of the following ways: Consumer - (at the start of a route) represents a Web service instance, which integrates with the route. The type of payload injected into the route depends on the value of the endpoint's dataFormat option. Producer - (at other points in the route) represents a WS client proxy, which converts the current exchange object into an operation invocation on a remote Web service. The format of the current exchange must match the endpoint's dataFormat setting. DataFormat Description POJO POJOs (Plain old Java objects) are the Java parameters to the method being invoked on the target server. Both Protocol and Logical JAX-WS handlers are supported. PAYLOAD PAYLOAD is the message payload (the contents of the soap:body ) after message configuration in the CXF endpoint is applied. Only the Protocol JAX-WS handler is supported; the Logical JAX-WS handler is not supported. RAW RAW mode provides the raw message stream that is received from the transport layer. It is not possible to touch or change the stream. Some of the CXF interceptors are removed when you use this data format, so you cannot see any SOAP headers after the camel-cxf consumer. JAX-WS handlers are not supported. CXF_MESSAGE CXF_MESSAGE allows for invoking the full capabilities of CXF interceptors by converting the message from the transport layer into a raw SOAP message. You can determine the data format mode of an exchange by retrieving the exchange property, CamelCXFDataFormat . The exchange key constant is defined in org.apache.camel.component.cxf.common.message.CxfConstants.DATA_FORMAT_PROPERTY . 26.5.4. How to enable CXF's LoggingOutInterceptor in RAW mode CXF's LoggingOutInterceptor writes the outbound message that goes on the wire to the logging system (Java Util Logging). Because the LoggingOutInterceptor is in the PRE_STREAM phase (and the PRE_STREAM phase is removed in RAW mode), you have to configure the LoggingOutInterceptor to run during the WRITE phase. The following is an example.
@Bean public CxfEndpoint serviceEndpoint(LoggingOutInterceptor loggingOutInterceptor) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setAddress("http://localhost:" + port + "/services" + SERVICE_ADDRESS); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.HelloService.class); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "RAW"); cxfEndpoint.setProperties(properties); cxfEndpoint.getOutInterceptors().add(loggingOutInterceptor); return cxfEndpoint; } @Bean public LoggingOutInterceptor loggingOutInterceptor() { LoggingOutInterceptor logger = new LoggingOutInterceptor("write"); return logger; } 26.5.5. Description of relayHeaders option There are in-band and out-of-band on-the-wire headers from the perspective of a JAXWS WSDL-first developer. The in-band headers are headers that are explicitly defined as part of the WSDL binding contract for an endpoint such as SOAP headers. The out-of-band headers are headers that are serialized over the wire, but are not explicitly part of the WSDL binding contract. Headers relaying/filtering is bi-directional. When a route has a CXF endpoint and the developer needs to have on-the-wire headers, such as SOAP headers, be relayed along the route to be consumed say by another JAXWS endpoint, then relayHeaders should be set to true , which is the default value. 26.5.6. Available only in POJO mode The relayHeaders=true expresses an intent to relay the headers. The actual decision on whether a given header is relayed is delegated to a pluggable instance that implements the MessageHeadersRelay interface. A concrete implementation of MessageHeadersRelay will be consulted to decide if a header needs to be relayed or not. There is already an implementation of SoapMessageHeadersRelay which binds itself to well-known SOAP name spaces. Currently only out-of-band headers are filtered, and in-band headers will always be relayed when relayHeaders=true . If there is a header on the wire whose name space is unknown to the runtime, then a fall back DefaultMessageHeadersRelay will be used, which simply allows all headers to be relayed. The relayHeaders=false setting specifies that all headers in-band and out-of-band should be dropped. You can plugin your own MessageHeadersRelay implementations overriding or adding additional ones to the list of relays. In order to override a preloaded relay instance just make sure that your MessageHeadersRelay implementation services the same name spaces as the one you looking to override. Also note, that the overriding relay has to service all of the name spaces as the one you looking to override, or else a runtime exception on route start up will be thrown as this would introduce an ambiguity in name spaces to relay instance mappings. <cxf:cxfEndpoint ...> <cxf:properties> <entry key="org.apache.camel.cxf.message.headers.relays"> <list> <ref bean="customHeadersRelay"/> </list> </entry> </cxf:properties> </cxf:cxfEndpoint> <bean id="customHeadersRelay" class="org.apache.camel.component.cxf.soap.headers.CustomHeadersRelay"/> Take a look at the tests that show how you'd be able to relay/drop headers here: https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-spring-soap/src/test/java/org/apache/camel/component/cxf/soap/headers/CxfMessageHeadersRelayTest.java POJO and PAYLOAD modes are supported. In POJO mode, only out-of-band message headers are available for filtering as the in-band headers have been processed and removed from header list by CXF. 
The in-band headers are incorporated into the MessageContentList in POJO mode. The camel-cxf component does not make any attempt to remove the in-band headers from the MessageContentList . If filtering of in-band headers is required, please use PAYLOAD mode or plug in a (pretty straightforward) CXF interceptor/JAXWS Handler to the CXF endpoint. The Message Header Relay mechanism has been merged into CxfHeaderFilterStrategy . The relayHeaders option, its semantics, and default value remain the same, but it is a property of CxfHeaderFilterStrategy . Here is an example of configuring it. @Bean public HeaderFilterStrategy dropAllMessageHeadersStrategy() { CxfHeaderFilterStrategy headerFilterStrategy = new CxfHeaderFilterStrategy(); headerFilterStrategy.setRelayHeaders(false); return headerFilterStrategy; } Then, your endpoint can reference the CxfHeaderFilterStrategy . @Bean public CxfEndpoint routerNoRelayEndpoint(HeaderFilterStrategy dropAllMessageHeadersStrategy) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("/CxfMessageHeadersRelayTest/HeaderService/routerNoRelayEndpoint"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortNoRelay")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "PAYLOAD"); cxfEndpoint.setProperties(properties); cxfEndpoint.setHeaderFilterStrategy(dropAllMessageHeadersStrategy); return cxfEndpoint; } @Bean public CxfEndpoint serviceNoRelayEndpoint(HeaderFilterStrategy dropAllMessageHeadersStrategy) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("http://localhost:" + port + "/services/CxfMessageHeadersRelayTest/HeaderService/routerNoRelayEndpointBackend"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortNoRelay")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "PAYLOAD"); cxfEndpoint.setProperties(properties); cxfEndpoint.setHeaderFilterStrategy(dropAllMessageHeadersStrategy); return cxfEndpoint; } Then configure the route as follows: from("cxf:bean:routerNoRelayEndpoint") .to("cxf:bean:serviceNoRelayEndpoint"); The MessageHeadersRelay interface has changed slightly and has been renamed to MessageHeaderFilter . It is a property of CxfHeaderFilterStrategy . Here is an example of configuring user defined Message Header Filters: @Bean public HeaderFilterStrategy customMessageFilterStrategy() { CxfHeaderFilterStrategy headerFilterStrategy = new CxfHeaderFilterStrategy(); List<MessageHeaderFilter> headerFilterList = new ArrayList<MessageHeaderFilter>(); headerFilterList.add(new SoapMessageHeaderFilter()); headerFilterList.add(new CustomHeaderFilter()); headerFilterStrategy.setMessageHeaderFilters(headerFilterList); return headerFilterStrategy; } In addition to relayHeaders , the following properties can be configured in CxfHeaderFilterStrategy .
Name Required Description relayHeaders No All message headers will be processed by Message Header Filters Type : boolean Default : true relayAllMessageHeaders No All message headers will be propagated (without processing by Message Header Filters) Type : boolean Default : false allowFilterNamespaceClash No If two filters overlap in activation namespace, the property control how it should be handled. If the value is true , last one wins. If the value is false , it will throw an exception Type : boolean Default : false 26.6. Configure the CXF endpoints with Spring You can configure the CXF endpoint with the Spring configuration file shown below, and you can also embed the endpoint into the camelContext tags. When you are invoking the service endpoint, you can set the operationName and operationNamespace headers to explicitly state which operation you are calling. <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:cxf="http://camel.apache.org/schema/cxf/jaxws" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/cxf/jaxws http://camel.apache.org/schema/cxf/jaxws/camel-cxf.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <cxf:cxfEndpoint id="routerEndpoint" address="http://localhost:9003/CamelContext/RouterPort" serviceClass="org.apache.hello_world_soap_http.GreeterImpl"/> <cxf:cxfEndpoint id="serviceEndpoint" address="http://localhost:9000/SoapContext/SoapPort" wsdlURL="testutils/hello_world.wsdl" serviceClass="org.apache.hello_world_soap_http.Greeter" endpointName="s:SoapPort" serviceName="s:SOAPService" xmlns:s="http://apache.org/hello_world_soap_http" /> <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="cxf:bean:routerEndpoint" /> <to uri="cxf:bean:serviceEndpoint" /> </route> </camelContext> </beans> Be sure to include the JAX-WS schemaLocation attribute specified on the root beans element. This allows CXF to validate the file and is required. Also note the namespace declarations at the end of the <cxf:cxfEndpoint/> tag. These declarations are required because the combined {namespace}localName syntax is presently not supported for this tag's attribute values. The cxf:cxfEndpoint element supports many additional attributes: Name Value PortName The endpoint name this service is implementing, it maps to the wsdl:port@name . In the format of ns:PORT_NAME where ns is a namespace prefix valid at this scope. serviceName The service name this service is implementing, it maps to the wsdl:service@name . In the format of ns:SERVICE_NAME where ns is a namespace prefix valid at this scope. wsdlURL The location of the WSDL. Can be on the classpath, file system, or be hosted remotely. bindingId The bindingId for the service model to use. address The service publish address. bus The bus name that will be used in the JAX-WS endpoint. serviceClass The class name of the SEI (Service Endpoint Interface) class which could have JSR181 annotation or not. It also supports many child elements: Name Value cxf:inInterceptors The incoming interceptors for this endpoint. A list of <bean> or <ref> . cxf:inFaultInterceptors The incoming fault interceptors for this endpoint. A list of <bean> or <ref> . cxf:outInterceptors The outgoing interceptors for this endpoint. A list of <bean> or <ref> . 
cxf:outFaultInterceptors The outgoing fault interceptors for this endpoint. A list of <bean> or <ref> . cxf:properties A properties map which should be supplied to the JAX-WS endpoint. See below. cxf:handlers A JAX-WS handler list which should be supplied to the JAX-WS endpoint. See below. cxf:dataBinding You can specify which DataBinding will be used in the endpoint. This can be supplied using the Spring <bean class="MyDataBinding"/> syntax. cxf:binding You can specify the BindingFactory for this endpoint to use. This can be supplied using the Spring <bean class="MyBindingFactory"/> syntax. cxf:features The features that hold the interceptors for this endpoint. A list of beans or refs. cxf:schemaLocations The schema locations for the endpoint to use. A list of schemaLocations. cxf:serviceFactory The service factory for this endpoint to use. This can be supplied using the Spring <bean class="MyServiceFactory"/> syntax. You can find more advanced examples that show how to provide interceptors, properties and handlers on the CXF JAX-WS Configuration page . Note You can use cxf:properties to set the camel-cxf endpoint's dataFormat and setDefaultBus properties from the Spring configuration file. <cxf:cxfEndpoint id="testEndpoint" address="http://localhost:9000/router" serviceClass="org.apache.camel.component.cxf.HelloService" endpointName="s:PortName" serviceName="s:ServiceName" xmlns:s="http://www.example.com/test"> <cxf:properties> <entry key="dataFormat" value="RAW"/> <entry key="setDefaultBus" value="true"/> </cxf:properties> </cxf:cxfEndpoint> Note In SpringBoot, you can use Spring XML files to configure camel-cxf and use code similar to the following example to create XML configured beans: @ImportResource({ "classpath:spring-configuration.xml" }) However, the use of Java code configured beans (as shown in other examples) is best practice in SpringBoot. 26.7. How to make the camel-cxf component use log4j instead of java.util.logging CXF's default logger is java.util.logging . If you want to change it to log4j, proceed as follows. Create a file, in the classpath, named META-INF/cxf/org.apache.cxf.logger . This file should contain the fully-qualified name of the class, org.apache.cxf.common.logging.Log4jLogger , with no comments, on a single line. 26.8. How to let the camel-cxf response start with an XML processing instruction If you are using a SOAP client such as PHP, you will get this kind of error, because CXF doesn't add the XML processing instruction <?xml version="1.0" encoding="utf-8"?> : To resolve this issue, you just need to tell StaxOutInterceptor to write the XML start document for you, as in the WriteXmlDeclarationInterceptor below: public class WriteXmlDeclarationInterceptor extends AbstractPhaseInterceptor<SoapMessage> { public WriteXmlDeclarationInterceptor() { super(Phase.PRE_STREAM); addBefore(StaxOutInterceptor.class.getName()); } public void handleMessage(SoapMessage message) throws Fault { message.put("org.apache.cxf.stax.force-start-document", Boolean.TRUE); } } As an alternative you can add a message header for it as demonstrated in CxfConsumerTest : // set up the response context which force start document Map<String, Object> map = new HashMap<String, Object>(); map.put("org.apache.cxf.stax.force-start-document", Boolean.TRUE); exchange.getOut().setHeader(Client.RESPONSE_CONTEXT, map); 26.9. How to override the CXF producer address from message header The camel-cxf producer supports overriding the target service address by setting the message header CamelDestinationOverrideUrl .
// set up the service address from the message header to override the setting of CXF endpoint exchange.getIn().setHeader(Exchange.DESTINATION_OVERRIDE_URL, constant(getServiceAddress())); 26.10. How to consume a message from a camel-cxf endpoint in POJO data format The camel-cxf endpoint consumer POJO data format is based on the CXF invoker , so the message header has a property with the name of CxfConstants.OPERATION_NAME and the message body is a list of the SEI method parameters. Consider the PersonProcessor example code: public class PersonProcessor implements Processor { private static final Logger LOG = LoggerFactory.getLogger(PersonProcessor.class); @Override @SuppressWarnings("unchecked") public void process(Exchange exchange) throws Exception { LOG.info("processing exchange in camel"); BindingOperationInfo boi = (BindingOperationInfo) exchange.getProperty(BindingOperationInfo.class.getName()); if (boi != null) { LOG.info("boi.isUnwrapped" + boi.isUnwrapped()); } // Get the parameters list which element is the holder. MessageContentsList msgList = (MessageContentsList) exchange.getIn().getBody(); Holder<String> personId = (Holder<String>) msgList.get(0); Holder<String> ssn = (Holder<String>) msgList.get(1); Holder<String> name = (Holder<String>) msgList.get(2); if (personId.value == null || personId.value.length() == 0) { LOG.info("person id 123, so throwing exception"); // Try to throw out the soap fault message org.apache.camel.wsdl_first.types.UnknownPersonFault personFault = new org.apache.camel.wsdl_first.types.UnknownPersonFault(); personFault.setPersonId(""); org.apache.camel.wsdl_first.UnknownPersonFault fault = new org.apache.camel.wsdl_first.UnknownPersonFault("Get the null value of person name", personFault); exchange.getMessage().setBody(fault); return; } name.value = "Bonjour"; ssn.value = "123"; LOG.info("setting Bonjour as the response"); // Set the response message, first element is the return value of the operation, // the others are the holders of method parameters exchange.getMessage().setBody(new Object[] { null, personId, ssn, name }); } } 26.11. How to prepare the message for the camel-cxf endpoint in POJO data format The camel-cxf endpoint producer is based on the CXF client API . First you need to specify the operation name in the message header, then add the method parameters to a list, and initialize the message with this parameter list. The response message's body is a messageContentsList, you can get the result from that list. If you don't specify the operation name in the message header, CxfProducer will try to use the defaultOperationName from CxfEndpoint , if there is no defaultOperationName set on CxfEndpoint , it will pick up the first operationName from the Operation list. 
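As a minimal illustration of this pattern inside a route (rather than in a test), the following sketch prepares the parameter list and operation name before calling a CXF producer endpoint; the endpoint bean name calculatorEndpoint and the operation name add are hypothetical placeholders, not endpoints defined in this chapter:
from("direct:invokeAdd")
    .process(exchange -> {
        // In POJO mode the request body is the list of SEI method parameters
        List<Object> params = new ArrayList<>();
        params.add(3);
        params.add(4);
        exchange.getIn().setBody(params);
        // Tell the CxfProducer which operation to invoke on the remote service
        exchange.getIn().setHeader(CxfConstants.OPERATION_NAME, "add");
    })
    .to("cxf:bean:calculatorEndpoint");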
If you want to get the object array from the message body, you can get the body using message.getBody(Object[].class) , as shown in CxfProducerRouterTest.testInvokingSimpleServerWithParams : Exchange senderExchange = new DefaultExchange(context, ExchangePattern.InOut); final List<String> params = new ArrayList<>(); // Prepare the request message for the camel-cxf procedure params.add(TEST_MESSAGE); senderExchange.getIn().setBody(params); senderExchange.getIn().setHeader(CxfConstants.OPERATION_NAME, ECHO_OPERATION); Exchange exchange = template.send("direct:EndpointA", senderExchange); org.apache.camel.Message out = exchange.getMessage(); // The response message's body is an MessageContentsList which first element is the return value of the operation, // If there are some holder parameters, the holder parameter will be filled in the reset of List. // The result will be extract from the MessageContentsList with the String class type MessageContentsList result = (MessageContentsList) out.getBody(); LOG.info("Received output text: " + result.get(0)); Map<String, Object> responseContext = CastUtils.cast((Map<?, ?>) out.getHeader(Client.RESPONSE_CONTEXT)); assertNotNull(responseContext); assertEquals("UTF-8", responseContext.get(org.apache.cxf.message.Message.ENCODING), "We should get the response context here"); assertEquals("echo " + TEST_MESSAGE, result.get(0), "Reply body on Camel is wrong"); 26.12. How to deal with the message for a camel-cxf endpoint in PAYLOAD data format PAYLOAD means that you process the payload from the SOAP envelope as a native CxfPayload. Message.getBody() will return a org.apache.camel.component.cxf.CxfPayload object, with getters for SOAP message headers and the SOAP body. See CxfConsumerPayloadTest : protected RouteBuilder createRouteBuilder() { return new RouteBuilder() { public void configure() { from(simpleEndpointURI + "&dataFormat=PAYLOAD").to("log:info").process(new Processor() { @SuppressWarnings("unchecked") public void process(final Exchange exchange) throws Exception { CxfPayload<SoapHeader> requestPayload = exchange.getIn().getBody(CxfPayload.class); List<Source> inElements = requestPayload.getBodySources(); List<Source> outElements = new ArrayList<>(); // You can use a customer toStringConverter to turn a CxfPayLoad message into String as you want String request = exchange.getIn().getBody(String.class); XmlConverter converter = new XmlConverter(); String documentString = ECHO_RESPONSE; Element in = new XmlConverter().toDOMElement(inElements.get(0)); // Just check the element namespace if (!in.getNamespaceURI().equals(ELEMENT_NAMESPACE)) { throw new IllegalArgumentException("Wrong element namespace"); } if (in.getLocalName().equals("echoBoolean")) { documentString = ECHO_BOOLEAN_RESPONSE; checkRequest("ECHO_BOOLEAN_REQUEST", request); } else { documentString = ECHO_RESPONSE; checkRequest("ECHO_REQUEST", request); } Document outDocument = converter.toDOMDocument(documentString, exchange); outElements.add(new DOMSource(outDocument.getDocumentElement())); // set the payload header with null CxfPayload<SoapHeader> responsePayload = new CxfPayload<>(null, outElements, null); exchange.getMessage().setBody(responsePayload); } }); } }; } 26.13. How to get and set SOAP headers in POJO mode POJO means that the data format is a "list of Java objects" when the camel-cxf endpoint produces or consumes Camel exchanges. Even though Camel exposes the message body as POJOs in this mode, camel-cxf still provides access to read and write SOAP headers. 
However, since CXF interceptors remove in-band SOAP headers from the header list after they have been processed, only out-of-band SOAP headers are available to camel-cxf in POJO mode. The following example illustrates how to get/set SOAP headers. Suppose we have a route that forwards from one Camel-cxf endpoint to another. That is, SOAP Client -> Camel -> CXF service. We can attach two processors to obtain/insert SOAP headers at (1) before a request goes out to the CXF service and (2) before the response comes back to the SOAP Client. Processors (1) and (2) in this example are InsertRequestOutHeaderProcessor and InsertResponseOutHeaderProcessor. Our route looks like this: from("cxf:bean:routerRelayEndpointWithInsertion") .process(new InsertRequestOutHeaderProcessor()) .to("cxf:bean:serviceRelayEndpointWithInsertion") .process(new InsertResponseOutHeaderProcessor()); The beans routerRelayEndpointWithInsertion and serviceRelayEndpointWithInsertion are defined as follows: @Bean public CxfEndpoint routerRelayEndpointWithInsertion() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("/CxfMessageHeadersRelayTest/HeaderService/routerRelayEndpointWithInsertion"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortRelayWithInsertion")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); cxfEndpoint.getFeatures().add(new LoggingFeature()); return cxfEndpoint; } @Bean public CxfEndpoint serviceRelayEndpointWithInsertion() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("http://localhost:" + port + "/services/CxfMessageHeadersRelayTest/HeaderService/routerRelayEndpointWithInsertionBackend"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortRelayWithInsertion")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); cxfEndpoint.getFeatures().add(new LoggingFeature()); return cxfEndpoint; } SOAP headers are propagated to and from Camel Message headers. The Camel message header name is "org.apache.cxf.headers.Header.list", which is a constant defined in CXF (org.apache.cxf.headers.Header.HEADER_LIST). The header value is a List of CXF SoapHeader objects (org.apache.cxf.binding.soap.SoapHeader). The following snippet is the InsertResponseOutHeaderProcessor (which inserts a new SOAP header in the response message). The way to access SOAP headers in both InsertResponseOutHeaderProcessor and InsertRequestOutHeaderProcessor is actually the same. The only difference between the two processors is setting the direction of the inserted SOAP header.
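Getting SOAP headers works the same way in both processors: read the list from the Camel header and inspect the DOM content of each entry. The following minimal sketch shows the reading side; the processor name and the logging call are illustrative and not taken from the test code, and it assumes the header content is a DOM element as in the examples in this chapter:
public static class ReadSoapHeadersProcessor implements Processor {
    private static final Logger LOG = LoggerFactory.getLogger(ReadSoapHeadersProcessor.class);
    @Override
    public void process(Exchange exchange) throws Exception {
        // Out-of-band SOAP headers are carried in this Camel message header
        List<SoapHeader> soapHeaders = CastUtils.cast(
                (List<?>) exchange.getIn().getHeader(Header.HEADER_LIST));
        if (soapHeaders != null) {
            for (SoapHeader header : soapHeaders) {
                // Each entry wraps a DOM element holding the header content
                Element content = (Element) header.getObject();
                LOG.info("SOAP header " + header.getName() + " = " + content.getTextContent());
            }
        }
    }
}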
You can find the InsertResponseOutHeaderProcessor example in CxfMessageHeadersRelayTest : public static class InsertResponseOutHeaderProcessor implements Processor { public void process(Exchange exchange) throws Exception { List<SoapHeader> soapHeaders = CastUtils.cast((List<?>)exchange.getIn().getHeader(Header.HEADER_LIST)); // Insert a new header String xml = "<?xml version=\"1.0\" encoding=\"utf-8\"?><outofbandHeader " + "xmlns=\"http://cxf.apache.org/outofband/Header\" hdrAttribute=\"testHdrAttribute\" " + "xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\" soap:mustUnderstand=\"1\">" + "<name>New_testOobHeader</name><value>New_testOobHeaderValue</value></outofbandHeader>"; SoapHeader newHeader = new SoapHeader(soapHeaders.get(0).getName(), DOMUtils.readXml(new StringReader(xml)).getDocumentElement()); // make sure direction is OUT since it is a response message. newHeader.setDirection(Direction.DIRECTION_OUT); //newHeader.setMustUnderstand(false); soapHeaders.add(newHeader); } } 26.14. How to get and set SOAP headers in PAYLOAD mode We've already shown how to access the SOAP message as a CxfPayload object in PAYLOAD mode in the section How to deal with the message for a camel-cxf endpoint in PAYLOAD data format . Once you obtain a CxfPayload object, you can invoke the CxfPayload.getHeaders() method that returns a List of DOM Elements (SOAP headers). For an example see CxfPayLoadSoapHeaderTest : from(getRouterEndpointURI()).process(new Processor() { @SuppressWarnings("unchecked") public void process(Exchange exchange) throws Exception { CxfPayload<SoapHeader> payload = exchange.getIn().getBody(CxfPayload.class); List<Source> elements = payload.getBodySources(); assertNotNull(elements, "We should get the elements here"); assertEquals(1, elements.size(), "Get the wrong elements size"); Element el = new XmlConverter().toDOMElement(elements.get(0)); elements.set(0, new DOMSource(el)); assertEquals("http://camel.apache.org/pizza/types", el.getNamespaceURI(), "Get the wrong namespace URI"); List<SoapHeader> headers = payload.getHeaders(); assertNotNull(headers, "We should get the headers here"); assertEquals(1, headers.size(), "Get the wrong headers size"); assertEquals("http://camel.apache.org/pizza/types", ((Element) (headers.get(0).getObject())).getNamespaceURI(), "Get the wrong namespace URI"); // alternatively you can also get the SOAP header via the camel header: headers = exchange.getIn().getHeader(Header.HEADER_LIST, List.class); assertNotNull(headers, "We should get the headers here"); assertEquals(1, headers.size(), "Get the wrong headers size"); assertEquals("http://camel.apache.org/pizza/types", ((Element) (headers.get(0).getObject())).getNamespaceURI(), "Get the wrong namespace URI"); } }) .to(getServiceEndpointURI()); You can also use the same approach as described in the section "How to get and set SOAP headers in POJO mode" to get or set the SOAP headers. That is, you can use the header "org.apache.cxf.headers.Header.list" to get and set a list of SOAP headers. This also means that if you have a route that forwards from one Camel-cxf endpoint to another (SOAP Client -> Camel -> CXF service), the SOAP headers sent by the SOAP client are also forwarded to the CXF service. If you do not want these headers to be forwarded, remove them from the Camel header "org.apache.cxf.headers.Header.list". 26.15. SOAP headers are not available in RAW mode SOAP headers are not available in RAW mode as SOAP processing is skipped. 26.16.
How to throw a SOAP Fault from Camel If you are using a camel-cxf endpoint to consume the SOAP request, you may need to throw the SOAP Fault from the camel context. Basically, you can use the throwFault DSL to do that; it works for POJO , PAYLOAD and MESSAGE data format. You can define the soap fault as shown in CxfCustomizedExceptionTest : SOAP_FAULT = new SoapFault(EXCEPTION_MESSAGE, SoapFault.FAULT_CODE_CLIENT); Element detail = SOAP_FAULT.getOrCreateDetail(); Document doc = detail.getOwnerDocument(); Text tn = doc.createTextNode(DETAIL_TEXT); detail.appendChild(tn); Then throw it as you like from(routerEndpointURI).setFaultBody(constant(SOAP_FAULT)); If your CXF endpoint is working in the MESSAGE data format, you could set the SOAP Fault message in the message body and set the response code in the message header as demonstrated by CxfMessageStreamExceptionTest from(routerEndpointURI).process(new Processor() { public void process(Exchange exchange) throws Exception { Message out = exchange.getOut(); // Set the message body with the out.setBody(this.getClass().getResourceAsStream("SoapFaultMessage.xml")); // Set the response code here out.setHeader(org.apache.cxf.message.Message.RESPONSE_CODE, new Integer(500)); } }); Same for using POJO data format. You can set the SOAPFault on the out body. 26.17. How to propagate a camel-cxf endpoint's request and response context CXF client API provides a way to invoke the operation with request and response context. If you are using a camel-cxf endpoint producer to invoke the outside web service, you can set the request context and get response context with the following code: CxfExchange exchange = (CxfExchange)template.send(getJaxwsEndpointUri(), new Processor() { public void process(final Exchange exchange) { final List<String> params = new ArrayList<String>(); params.add(TEST_MESSAGE); // Set the request context to the inMessage Map<String, Object> requestContext = new HashMap<String, Object>(); requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, JAXWS_SERVER_ADDRESS); exchange.getIn().setBody(params); exchange.getIn().setHeader(Client.REQUEST_CONTEXT , requestContext); exchange.getIn().setHeader(CxfConstants.OPERATION_NAME, GREET_ME_OPERATION); } }); org.apache.camel.Message out = exchange.getOut(); // The output is an object array, the first element of the array is the return value Object\[\] output = out.getBody(Object\[\].class); LOG.info("Received output text: " + output\[0\]); // Get the response context form outMessage Map<String, Object> responseContext = CastUtils.cast((Map)out.getHeader(Client.RESPONSE_CONTEXT)); assertNotNull(responseContext); assertEquals("Get the wrong wsdl operation name", "{http://apache.org/hello_world_soap_http}greetMe", responseContext.get("javax.xml.ws.wsdl.operation").toString()); 26.18. Attachment Support POJO Mode: Both SOAP with Attachment and MTOM are supported (see example in Payload Mode for enabling MTOM). However, SOAP with Attachment is not tested. Since attachments are marshalled and unmarshalled into POJOs, users typically do not need to deal with the attachment themself. Attachments are propagated to Camel message's attachments if the MTOM is not enabled. So, it is possible to retrieve attachments by Camel Message API DataHandler Message.getAttachment(String id) Payload Mode: MTOM is supported by the component. Attachments can be retrieved by Camel Message APIs mentioned above. SOAP with Attachment (SwA) is supported and attachments can be retrieved. 
SwA is the default (same as setting the CXF endpoint property "mtom-enabled" to false). To enable MTOM, set the CXF endpoint property "mtom-enabled" to true . @Bean public CxfEndpoint routerEndpoint() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceNameAsQName(SERVICE_QNAME); cxfEndpoint.setEndpointNameAsQName(PORT_QNAME); cxfEndpoint.setAddress("/" + getClass().getSimpleName()+ "/jaxws-mtom/hello"); cxfEndpoint.setWsdlURL("mtom.wsdl"); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "PAYLOAD"); properties.put("mtom-enabled", true); cxfEndpoint.setProperties(properties); return cxfEndpoint; } You can produce a Camel message with attachment to send to a CXF endpoint in Payload mode. Exchange exchange = context.createProducerTemplate().send("direct:testEndpoint", new Processor() { public void process(Exchange exchange) throws Exception { exchange.setPattern(ExchangePattern.InOut); List<Source> elements = new ArrayList<Source>(); elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.REQ_MESSAGE)).getDocumentElement())); CxfPayload<SoapHeader> body = new CxfPayload<SoapHeader>(new ArrayList<SoapHeader>(), elements, null); exchange.getIn().setBody(body); exchange.getIn().addAttachment(MtomTestHelper.REQ_PHOTO_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.REQ_PHOTO_DATA, "application/octet-stream"))); exchange.getIn().addAttachment(MtomTestHelper.REQ_IMAGE_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.requestJpeg, "image/jpeg"))); } }); // process response CxfPayload<SoapHeader> out = exchange.getOut().getBody(CxfPayload.class); Assert.assertEquals(1, out.getBody().size()); Map<String, String> ns = new HashMap<String, String>(); ns.put("ns", MtomTestHelper.SERVICE_TYPES_NS); ns.put("xop", MtomTestHelper.XOP_NS); XPathUtils xu = new XPathUtils(ns); Element oute = new XmlConverter().toDOMElement(out.getBody().get(0)); Element ele = (Element)xu.getValue("//ns:DetailResponse/ns:photo/xop:Include", oute, XPathConstants.NODE); String photoId = ele.getAttribute("href").substring(4); // skip "cid:" ele = (Element)xu.getValue("//ns:DetailResponse/ns:image/xop:Include", oute, XPathConstants.NODE); String imageId = ele.getAttribute("href").substring(4); // skip "cid:" DataHandler dr = exchange.getOut().getAttachment(photoId); Assert.assertEquals("application/octet-stream", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.RESP_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream())); dr = exchange.getOut().getAttachment(imageId); Assert.assertEquals("image/jpeg", dr.getContentType()); BufferedImage image = ImageIO.read(dr.getInputStream()); Assert.assertEquals(560, image.getWidth()); Assert.assertEquals(300, image.getHeight()); You can also consume a Camel message received from a CXF endpoint in Payload mode. 
The CxfMtomConsumerPayloadModeTest illustrates how this works: public static class MyProcessor implements Processor { @SuppressWarnings("unchecked") public void process(Exchange exchange) throws Exception { CxfPayload<SoapHeader> in = exchange.getIn().getBody(CxfPayload.class); // verify request Assert.assertEquals(1, in.getBody().size()); Map<String, String> ns = new HashMap<String, String>(); ns.put("ns", MtomTestHelper.SERVICE_TYPES_NS); ns.put("xop", MtomTestHelper.XOP_NS); XPathUtils xu = new XPathUtils(ns); Element body = new XmlConverter().toDOMElement(in.getBody().get(0)); Element ele = (Element)xu.getValue("//ns:Detail/ns:photo/xop:Include", body, XPathConstants.NODE); String photoId = ele.getAttribute("href").substring(4); // skip "cid:" Assert.assertEquals(MtomTestHelper.REQ_PHOTO_CID, photoId); ele = (Element)xu.getValue("//ns:Detail/ns:image/xop:Include", body, XPathConstants.NODE); String imageId = ele.getAttribute("href").substring(4); // skip "cid:" Assert.assertEquals(MtomTestHelper.REQ_IMAGE_CID, imageId); DataHandler dr = exchange.getIn().getAttachment(photoId); Assert.assertEquals("application/octet-stream", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.REQ_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream())); dr = exchange.getIn().getAttachment(imageId); Assert.assertEquals("image/jpeg", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.requestJpeg, IOUtils.readBytesFromStream(dr.getInputStream())); // create response List<Source> elements = new ArrayList<Source>(); elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.RESP_MESSAGE)).getDocumentElement())); CxfPayload<SoapHeader> sbody = new CxfPayload<SoapHeader>(new ArrayList<SoapHeader>(), elements, null); exchange.getOut().setBody(sbody); exchange.getOut().addAttachment(MtomTestHelper.RESP_PHOTO_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.RESP_PHOTO_DATA, "application/octet-stream"))); exchange.getOut().addAttachment(MtomTestHelper.RESP_IMAGE_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.responseJpeg, "image/jpeg"))); } } Raw Mode: Attachments are not supported as it does not process the message at all. CXF_RAW Mode : MTOM is supported, and Attachments can be retrieved by Camel Message APIs mentioned above. Note that when receiving a multipart (i.e. MTOM) message the default SOAPMessage to String converter will provide the complete multipart payload on the body. If you require just the SOAP XML as a String, you can set the message body with message.getSOAPPart(), and Camel convert can do the rest of work for you. 26.19. Streaming Support in PAYLOAD mode The camel-cxf component now supports streaming of incoming messages when using PAYLOAD mode. Previously, the incoming messages would have been completely DOM parsed. For large messages, this is time consuming and uses a significant amount of memory. The incoming messages can remain as a javax.xml.transform.Source while being routed and, if nothing modifies the payload, can then be directly streamed out to the target destination. For common "simple proxy" use cases (example: from("cxf:... ").to("cxf:... ")), this can provide very significant performance increases as well as significantly lowered memory requirements. However, there are cases where streaming may not be appropriate or desired. Due to the streaming nature, invalid incoming XML may not be caught until later in the processing chain. 
Also, certain actions may require the message to be DOM parsed anyway (like WS-Security or message tracing and such), in which case the advantages of streaming are limited. At this point, there are three ways to control the streaming: Endpoint property: you can add "allowStreaming=false" as an endpoint property to turn the streaming on/off. Component property: the CxfComponent object also has an allowStreaming property that can set the default for endpoints created from that component. Global system property: you can set the system property "org.apache.camel.component.cxf.streaming" to "false" to turn it off. That sets the global default, but setting the endpoint property above will override this value for that endpoint. 26.20. Using the generic CXF Dispatch mode The camel-cxf component supports the generic CXF dispatch mode that can transport messages of arbitrary structures (i.e., not bound to a specific XML schema). To use this mode, you simply omit specifying the wsdlURL and serviceClass attributes of the CXF endpoint. <cxf:cxfEndpoint id="testEndpoint" address="http://localhost:9000/SoapContext/SoapAnyPort"> <cxf:properties> <entry key="dataFormat" value="PAYLOAD"/> </cxf:properties> </cxf:cxfEndpoint> Note that the default CXF dispatch client does not send a specific SOAPAction header. Therefore, when the target service requires a specific SOAPAction value, it is supplied in the Camel header using the key SOAPAction (case-insensitive). 26.21. Spring Boot Auto-Configuration The component supports 13 options, which are listed below. Name Description Default Type camel.component.cxf.allow-streaming This option controls whether the CXF component, when running in PAYLOAD mode, will DOM parse the incoming messages into DOM Elements or keep the payload as a javax.xml.transform.Source object that would allow streaming in some cases. Boolean camel.component.cxf.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.cxf.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.cxf.enabled Whether to enable auto configuration of the cxf component. This is enabled by default. Boolean camel.component.cxf.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.cxf.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.cxf.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.cxfrs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.cxfrs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.cxfrs.enabled Whether to enable auto configuration of the cxfrs component. This is enabled by default. Boolean camel.component.cxfrs.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.cxfrs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.cxfrs.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean
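If you prefer configuring the component in Java rather than through these properties, a bean of the component type can be registered for the cxf scheme. The following is a minimal sketch, assuming Spring Boot picks up a bean named cxf for the scheme and that the allowStreaming component option listed above maps to a standard setter; it is intended to have the same effect as setting camel.component.cxf.allow-streaming=false:
@Bean("cxf")
public CxfComponent cxfComponent() {
    CxfComponent cxf = new CxfComponent();
    // Make DOM parsing the default for endpoints created from this component
    // (same effect as the camel.component.cxf.allow-streaming property)
    cxf.setAllowStreaming(false);
    return cxf;
}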
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cxf-soap-starter</artifactId> </dependency>", "cxf:bean:cxfEndpoint[?options]", "cxf://someAddress[?options]", "cxf:bean:cxfEndpoint?wsdlURL=wsdl/hello_world.wsdl&dataFormat=PAYLOAD", "cxf:beanId:address", "@Bean public CxfEndpoint serviceEndpoint(LoggingOutInterceptor loggingOutInterceptor) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setAddress(\"http://localhost:\" + port + \"/services\" + SERVICE_ADDRESS); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.HelloService.class); Map<String, Object> properties = new HashMap<String, Object>(); properties.put(\"dataFormat\", \"RAW\"); cxfEndpoint.setProperties(properties); cxfEndpoint.getOutInterceptors().add(loggingOutInterceptor); return cxfEndpoint; } @Bean public LoggingOutInterceptor loggingOutInterceptor() { LoggingOutInterceptor logger = new LoggingOutInterceptor(\"write\"); return logger; }", "<cxf:cxfEndpoint ...> <cxf:properties> <entry key=\"org.apache.camel.cxf.message.headers.relays\"> <list> <ref bean=\"customHeadersRelay\"/> </list> </entry> </cxf:properties> </cxf:cxfEndpoint> <bean id=\"customHeadersRelay\" class=\"org.apache.camel.component.cxf.soap.headers.CustomHeadersRelay\"/>", "@Bean public HeaderFilterStrategy dropAllMessageHeadersStrategy() { CxfHeaderFilterStrategy headerFilterStrategy = new CxfHeaderFilterStrategy(); headerFilterStrategy.setRelayHeaders(false); return headerFilterStrategy; }", "@Bean public CxfEndpoint routerNoRelayEndpoint(HeaderFilterStrategy dropAllMessageHeadersStrategy) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress(\"/CxfMessageHeadersRelayTest/HeaderService/routerNoRelayEndpoint\"); cxfEndpoint.setWsdlURL(\"soap_header.wsdl\"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf(\"{http://apache.org/camel/component/cxf/soap/headers}SoapPortNoRelay\")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); Map<String, Object> properties = new HashMap<String, Object>(); properties.put(\"dataFormat\", \"PAYLOAD\"); cxfEndpoint.setProperties(properties); cxfEndpoint.setHeaderFilterStrategy(dropAllMessageHeadersStrategy); return cxfEndpoint; } @Bean public CxfEndpoint serviceNoRelayEndpoint(HeaderFilterStrategy dropAllMessageHeadersStrategy) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress(\"http://localhost:\" + port + \"/services/CxfMessageHeadersRelayTest/HeaderService/routerNoRelayEndpointBackend\"); cxfEndpoint.setWsdlURL(\"soap_header.wsdl\"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf(\"{http://apache.org/camel/component/cxf/soap/headers}SoapPortNoRelay\")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); Map<String, Object> properties = new HashMap<String, Object>(); properties.put(\"dataFormat\", \"PAYLOAD\"); cxfEndpoint.setProperties(properties); cxfEndpoint.setHeaderFilterStrategy(dropAllMessageHeadersStrategy); return cxfEndpoint; }", "rom(\"cxf:bean:routerNoRelayEndpoint\") .to(\"cxf:bean:serviceNoRelayEndpoint\");", "@Bean public HeaderFilterStrategy customMessageFilterStrategy() { CxfHeaderFilterStrategy headerFilterStrategy = new CxfHeaderFilterStrategy(); List<MessageHeaderFilter> headerFilterList = new ArrayList<MessageHeaderFilter>(); headerFilterList.add(new SoapMessageHeaderFilter()); headerFilterList.add(new 
CustomHeaderFilter()); headerFilterStrategy.setMessageHeaderFilters(headerFilterList); return headerFilterStrategy; }", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:cxf=\"http://camel.apache.org/schema/cxf/jaxws\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/cxf/jaxws http://camel.apache.org/schema/cxf/jaxws/camel-cxf.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <cxf:cxfEndpoint id=\"routerEndpoint\" address=\"http://localhost:9003/CamelContext/RouterPort\" serviceClass=\"org.apache.hello_world_soap_http.GreeterImpl\"/> <cxf:cxfEndpoint id=\"serviceEndpoint\" address=\"http://localhost:9000/SoapContext/SoapPort\" wsdlURL=\"testutils/hello_world.wsdl\" serviceClass=\"org.apache.hello_world_soap_http.Greeter\" endpointName=\"s:SoapPort\" serviceName=\"s:SOAPService\" xmlns:s=\"http://apache.org/hello_world_soap_http\" /> <camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"cxf:bean:routerEndpoint\" /> <to uri=\"cxf:bean:serviceEndpoint\" /> </route> </camelContext> </beans>", "<cxf:cxfEndpoint id=\"testEndpoint\" address=\"http://localhost:9000/router\" serviceClass=\"org.apache.camel.component.cxf.HelloService\" endpointName=\"s:PortName\" serviceName=\"s:ServiceName\" xmlns:s=\"http://www.example.com/test\"> <cxf:properties> <entry key=\"dataFormat\" value=\"RAW\"/> <entry key=\"setDefaultBus\" value=\"true\"/> </cxf:properties> </cxf:cxfEndpoint>", "@ImportResource({ \"classpath:spring-configuration.xml\" })", "Error:sendSms: SoapFault exception: [Client] looks like we got no XML document in [...]", "public class WriteXmlDeclarationInterceptor extends AbstractPhaseInterceptor<SoapMessage> { public WriteXmlDeclarationInterceptor() { super(Phase.PRE_STREAM); addBefore(StaxOutInterceptor.class.getName()); } public void handleMessage(SoapMessage message) throws Fault { message.put(\"org.apache.cxf.stax.force-start-document\", Boolean.TRUE); } }", "// set up the response context which force start document Map<String, Object> map = new HashMap<String, Object>(); map.put(\"org.apache.cxf.stax.force-start-document\", Boolean.TRUE); exchange.getOut().setHeader(Client.RESPONSE_CONTEXT, map);", "// set up the service address from the message header to override the setting of CXF endpoint exchange.getIn().setHeader(Exchange.DESTINATION_OVERRIDE_URL, constant(getServiceAddress()));", "public class PersonProcessor implements Processor { private static final Logger LOG = LoggerFactory.getLogger(PersonProcessor.class); @Override @SuppressWarnings(\"unchecked\") public void process(Exchange exchange) throws Exception { LOG.info(\"processing exchange in camel\"); BindingOperationInfo boi = (BindingOperationInfo) exchange.getProperty(BindingOperationInfo.class.getName()); if (boi != null) { LOG.info(\"boi.isUnwrapped\" + boi.isUnwrapped()); } // Get the parameters list which element is the holder. 
MessageContentsList msgList = (MessageContentsList) exchange.getIn().getBody(); Holder<String> personId = (Holder<String>) msgList.get(0); Holder<String> ssn = (Holder<String>) msgList.get(1); Holder<String> name = (Holder<String>) msgList.get(2); if (personId.value == null || personId.value.length() == 0) { LOG.info(\"person id 123, so throwing exception\"); // Try to throw out the soap fault message org.apache.camel.wsdl_first.types.UnknownPersonFault personFault = new org.apache.camel.wsdl_first.types.UnknownPersonFault(); personFault.setPersonId(\"\"); org.apache.camel.wsdl_first.UnknownPersonFault fault = new org.apache.camel.wsdl_first.UnknownPersonFault(\"Get the null value of person name\", personFault); exchange.getMessage().setBody(fault); return; } name.value = \"Bonjour\"; ssn.value = \"123\"; LOG.info(\"setting Bonjour as the response\"); // Set the response message, first element is the return value of the operation, // the others are the holders of method parameters exchange.getMessage().setBody(new Object[] { null, personId, ssn, name }); } }", "Exchange senderExchange = new DefaultExchange(context, ExchangePattern.InOut); final List<String> params = new ArrayList<>(); // Prepare the request message for the camel-cxf procedure params.add(TEST_MESSAGE); senderExchange.getIn().setBody(params); senderExchange.getIn().setHeader(CxfConstants.OPERATION_NAME, ECHO_OPERATION); Exchange exchange = template.send(\"direct:EndpointA\", senderExchange); org.apache.camel.Message out = exchange.getMessage(); // The response message's body is an MessageContentsList which first element is the return value of the operation, // If there are some holder parameters, the holder parameter will be filled in the reset of List. // The result will be extract from the MessageContentsList with the String class type MessageContentsList result = (MessageContentsList) out.getBody(); LOG.info(\"Received output text: \" + result.get(0)); Map<String, Object> responseContext = CastUtils.cast((Map<?, ?>) out.getHeader(Client.RESPONSE_CONTEXT)); assertNotNull(responseContext); assertEquals(\"UTF-8\", responseContext.get(org.apache.cxf.message.Message.ENCODING), \"We should get the response context here\"); assertEquals(\"echo \" + TEST_MESSAGE, result.get(0), \"Reply body on Camel is wrong\");", "protected RouteBuilder createRouteBuilder() { return new RouteBuilder() { public void configure() { from(simpleEndpointURI + \"&dataFormat=PAYLOAD\").to(\"log:info\").process(new Processor() { @SuppressWarnings(\"unchecked\") public void process(final Exchange exchange) throws Exception { CxfPayload<SoapHeader> requestPayload = exchange.getIn().getBody(CxfPayload.class); List<Source> inElements = requestPayload.getBodySources(); List<Source> outElements = new ArrayList<>(); // You can use a customer toStringConverter to turn a CxfPayLoad message into String as you want String request = exchange.getIn().getBody(String.class); XmlConverter converter = new XmlConverter(); String documentString = ECHO_RESPONSE; Element in = new XmlConverter().toDOMElement(inElements.get(0)); // Just check the element namespace if (!in.getNamespaceURI().equals(ELEMENT_NAMESPACE)) { throw new IllegalArgumentException(\"Wrong element namespace\"); } if (in.getLocalName().equals(\"echoBoolean\")) { documentString = ECHO_BOOLEAN_RESPONSE; checkRequest(\"ECHO_BOOLEAN_REQUEST\", request); } else { documentString = ECHO_RESPONSE; checkRequest(\"ECHO_REQUEST\", request); } Document outDocument = converter.toDOMDocument(documentString, exchange); 
outElements.add(new DOMSource(outDocument.getDocumentElement())); // set the payload header with null CxfPayload<SoapHeader> responsePayload = new CxfPayload<>(null, outElements, null); exchange.getMessage().setBody(responsePayload); } }); } }; }", "from(\"cxf:bean:routerRelayEndpointWithInsertion\") .process(new InsertRequestOutHeaderProcessor()) .to(\"cxf:bean:serviceRelayEndpointWithInsertion\") .process(new InsertResponseOutHeaderProcessor());", "@Bean public CxfEndpoint routerRelayEndpointWithInsertion() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress(\"/CxfMessageHeadersRelayTest/HeaderService/routerRelayEndpointWithInsertion\"); cxfEndpoint.setWsdlURL(\"soap_header.wsdl\"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf(\"{http://apache.org/camel/component/cxf/soap/headers}SoapPortRelayWithInsertion\")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); cxfEndpoint.getFeatures().add(new LoggingFeature()); return cxfEndpoint; } @Bean public CxfEndpoint serviceRelayEndpointWithInsertion() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress(\"http://localhost:\" + port + \"/services/CxfMessageHeadersRelayTest/HeaderService/routerRelayEndpointWithInsertionBackend\"); cxfEndpoint.setWsdlURL(\"soap_header.wsdl\"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf(\"{http://apache.org/camel/component/cxf/soap/headers}SoapPortRelayWithInsertion\")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); cxfEndpoint.getFeatures().add(new LoggingFeature()); return cxfEndpoint; }", "public static class InsertResponseOutHeaderProcessor implements Processor { public void process(Exchange exchange) throws Exception { List<SoapHeader> soapHeaders = CastUtils.cast((List<?>)exchange.getIn().getHeader(Header.HEADER_LIST)); // Insert a new header String xml = \"<?xml version=\\\"1.0\\\" encoding=\\\"utf-8\\\"?><outofbandHeader \" + \"xmlns=\\\"http://cxf.apache.org/outofband/Header\\\" hdrAttribute=\\\"testHdrAttribute\\\" \" + \"xmlns:soap=\\\"http://schemas.xmlsoap.org/soap/envelope/\\\" soap:mustUnderstand=\\\"1\\\">\" + \"<name>New_testOobHeader</name><value>New_testOobHeaderValue</value></outofbandHeader>\"; SoapHeader newHeader = new SoapHeader(soapHeaders.get(0).getName(), DOMUtils.readXml(new StringReader(xml)).getDocumentElement()); // make sure direction is OUT since it is a response message. 
newHeader.setDirection(Direction.DIRECTION_OUT); //newHeader.setMustUnderstand(false); soapHeaders.add(newHeader); } }", "from(getRouterEndpointURI()).process(new Processor() { @SuppressWarnings(\"unchecked\") public void process(Exchange exchange) throws Exception { CxfPayload<SoapHeader> payload = exchange.getIn().getBody(CxfPayload.class); List<Source> elements = payload.getBodySources(); assertNotNull(elements, \"We should get the elements here\"); assertEquals(1, elements.size(), \"Get the wrong elements size\"); Element el = new XmlConverter().toDOMElement(elements.get(0)); elements.set(0, new DOMSource(el)); assertEquals(\"http://camel.apache.org/pizza/types\", el.getNamespaceURI(), \"Get the wrong namespace URI\"); List<SoapHeader> headers = payload.getHeaders(); assertNotNull(headers, \"We should get the headers here\"); assertEquals(1, headers.size(), \"Get the wrong headers size\"); assertEquals(\"http://camel.apache.org/pizza/types\", ((Element) (headers.get(0).getObject())).getNamespaceURI(), \"Get the wrong namespace URI\"); // alternatively you can also get the SOAP header via the camel header: headers = exchange.getIn().getHeader(Header.HEADER_LIST, List.class); assertNotNull(headers, \"We should get the headers here\"); assertEquals(1, headers.size(), \"Get the wrong headers size\"); assertEquals(\"http://camel.apache.org/pizza/types\", ((Element) (headers.get(0).getObject())).getNamespaceURI(), \"Get the wrong namespace URI\"); } }) .to(getServiceEndpointURI());", "SOAP_FAULT = new SoapFault(EXCEPTION_MESSAGE, SoapFault.FAULT_CODE_CLIENT); Element detail = SOAP_FAULT.getOrCreateDetail(); Document doc = detail.getOwnerDocument(); Text tn = doc.createTextNode(DETAIL_TEXT); detail.appendChild(tn);", "from(routerEndpointURI).setFaultBody(constant(SOAP_FAULT));", "from(routerEndpointURI).process(new Processor() { public void process(Exchange exchange) throws Exception { Message out = exchange.getOut(); // Set the message body with the out.setBody(this.getClass().getResourceAsStream(\"SoapFaultMessage.xml\")); // Set the response code here out.setHeader(org.apache.cxf.message.Message.RESPONSE_CODE, new Integer(500)); } });", "CxfExchange exchange = (CxfExchange)template.send(getJaxwsEndpointUri(), new Processor() { public void process(final Exchange exchange) { final List<String> params = new ArrayList<String>(); params.add(TEST_MESSAGE); // Set the request context to the inMessage Map<String, Object> requestContext = new HashMap<String, Object>(); requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, JAXWS_SERVER_ADDRESS); exchange.getIn().setBody(params); exchange.getIn().setHeader(Client.REQUEST_CONTEXT , requestContext); exchange.getIn().setHeader(CxfConstants.OPERATION_NAME, GREET_ME_OPERATION); } }); org.apache.camel.Message out = exchange.getOut(); // The output is an object array, the first element of the array is the return value Object\\[\\] output = out.getBody(Object\\[\\].class); LOG.info(\"Received output text: \" + output\\[0\\]); // Get the response context form outMessage Map<String, Object> responseContext = CastUtils.cast((Map)out.getHeader(Client.RESPONSE_CONTEXT)); assertNotNull(responseContext); assertEquals(\"Get the wrong wsdl operation name\", \"{http://apache.org/hello_world_soap_http}greetMe\", responseContext.get(\"javax.xml.ws.wsdl.operation\").toString());", "DataHandler Message.getAttachment(String id)", "@Bean public CxfEndpoint routerEndpoint() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); 
cxfEndpoint.setServiceNameAsQName(SERVICE_QNAME); cxfEndpoint.setEndpointNameAsQName(PORT_QNAME); cxfEndpoint.setAddress(\"/\" + getClass().getSimpleName()+ \"/jaxws-mtom/hello\"); cxfEndpoint.setWsdlURL(\"mtom.wsdl\"); Map<String, Object> properties = new HashMap<String, Object>(); properties.put(\"dataFormat\", \"PAYLOAD\"); properties.put(\"mtom-enabled\", true); cxfEndpoint.setProperties(properties); return cxfEndpoint; }", "Exchange exchange = context.createProducerTemplate().send(\"direct:testEndpoint\", new Processor() { public void process(Exchange exchange) throws Exception { exchange.setPattern(ExchangePattern.InOut); List<Source> elements = new ArrayList<Source>(); elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.REQ_MESSAGE)).getDocumentElement())); CxfPayload<SoapHeader> body = new CxfPayload<SoapHeader>(new ArrayList<SoapHeader>(), elements, null); exchange.getIn().setBody(body); exchange.getIn().addAttachment(MtomTestHelper.REQ_PHOTO_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.REQ_PHOTO_DATA, \"application/octet-stream\"))); exchange.getIn().addAttachment(MtomTestHelper.REQ_IMAGE_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.requestJpeg, \"image/jpeg\"))); } }); // process response CxfPayload<SoapHeader> out = exchange.getOut().getBody(CxfPayload.class); Assert.assertEquals(1, out.getBody().size()); Map<String, String> ns = new HashMap<String, String>(); ns.put(\"ns\", MtomTestHelper.SERVICE_TYPES_NS); ns.put(\"xop\", MtomTestHelper.XOP_NS); XPathUtils xu = new XPathUtils(ns); Element oute = new XmlConverter().toDOMElement(out.getBody().get(0)); Element ele = (Element)xu.getValue(\"//ns:DetailResponse/ns:photo/xop:Include\", oute, XPathConstants.NODE); String photoId = ele.getAttribute(\"href\").substring(4); // skip \"cid:\" ele = (Element)xu.getValue(\"//ns:DetailResponse/ns:image/xop:Include\", oute, XPathConstants.NODE); String imageId = ele.getAttribute(\"href\").substring(4); // skip \"cid:\" DataHandler dr = exchange.getOut().getAttachment(photoId); Assert.assertEquals(\"application/octet-stream\", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.RESP_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream())); dr = exchange.getOut().getAttachment(imageId); Assert.assertEquals(\"image/jpeg\", dr.getContentType()); BufferedImage image = ImageIO.read(dr.getInputStream()); Assert.assertEquals(560, image.getWidth()); Assert.assertEquals(300, image.getHeight());", "public static class MyProcessor implements Processor { @SuppressWarnings(\"unchecked\") public void process(Exchange exchange) throws Exception { CxfPayload<SoapHeader> in = exchange.getIn().getBody(CxfPayload.class); // verify request Assert.assertEquals(1, in.getBody().size()); Map<String, String> ns = new HashMap<String, String>(); ns.put(\"ns\", MtomTestHelper.SERVICE_TYPES_NS); ns.put(\"xop\", MtomTestHelper.XOP_NS); XPathUtils xu = new XPathUtils(ns); Element body = new XmlConverter().toDOMElement(in.getBody().get(0)); Element ele = (Element)xu.getValue(\"//ns:Detail/ns:photo/xop:Include\", body, XPathConstants.NODE); String photoId = ele.getAttribute(\"href\").substring(4); // skip \"cid:\" Assert.assertEquals(MtomTestHelper.REQ_PHOTO_CID, photoId); ele = (Element)xu.getValue(\"//ns:Detail/ns:image/xop:Include\", body, XPathConstants.NODE); String imageId = ele.getAttribute(\"href\").substring(4); // skip \"cid:\" Assert.assertEquals(MtomTestHelper.REQ_IMAGE_CID, imageId); DataHandler dr = exchange.getIn().getAttachment(photoId); 
Assert.assertEquals(\"application/octet-stream\", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.REQ_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream())); dr = exchange.getIn().getAttachment(imageId); Assert.assertEquals(\"image/jpeg\", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.requestJpeg, IOUtils.readBytesFromStream(dr.getInputStream())); // create response List<Source> elements = new ArrayList<Source>(); elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.RESP_MESSAGE)).getDocumentElement())); CxfPayload<SoapHeader> sbody = new CxfPayload<SoapHeader>(new ArrayList<SoapHeader>(), elements, null); exchange.getOut().setBody(sbody); exchange.getOut().addAttachment(MtomTestHelper.RESP_PHOTO_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.RESP_PHOTO_DATA, \"application/octet-stream\"))); exchange.getOut().addAttachment(MtomTestHelper.RESP_IMAGE_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.responseJpeg, \"image/jpeg\"))); } }", "<cxf:cxfEndpoint id=\"testEndpoint\" address=\"http://localhost:9000/SoapContext/SoapAnyPort\"> <cxf:properties> <entry key=\"dataFormat\" value=\"PAYLOAD\"/> </cxf:properties> </cxf:cxfEndpoint>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-cxf-component-starter
Appendix B. Revision history
Appendix B. Revision history `0.2-3 Mon February 24 2025, Gabriela Fialova ( [email protected] ) Updated an Enhancement in RHEL-14942 (Dynamic programming languages) 0.2-2 Thu Jan 30 2024, Gabriela Fialova ( [email protected] ) Added a Known Issue RHELDOCS-19603 (IdM SSSD) 0.2-1 Tue Jan 28 2025, Marc Muehlfeld ( [email protected] ) Add an Enhancement RHEL-35991 (Dynamic programming languages, web and database servers) 0.2-0 Wed Jan 22 2025, Gabriela Fialova ( [email protected] ) Add a Known Issue RHELDOCS-18863 (Virtualization) 0.1-10 Mon Jan 13 2025, Marc Muehlfeld ( [email protected] ) Add a Bug Fix RHEL-73052 (Networking) 0.1-9 Wed Dec 4 2024, Gabriela Fialova ( [email protected] ) Updated the Customer Portal labs section Updated the Installation section 0.1-8 Tue Nov 05 2024, Lenka Spackova ( [email protected] ) Added multiple new features to Compilers and development tools , namely: new GCC Toolset 14, GCC Toolset 13 GCC update, rebases of LLVM, Rust, and Go Toolsets. 0.1-7 Thu Oct 23 2024, Gabriela Fialova ( [email protected] ) Added a Known Issue RHELDOCS-18777 (Identity Management) 0.1-6 Thu Oct 17 2024, Brian Angelica ( [email protected] ) Added a Deprecated Functionality RHELDOCS-19027 (File systems and storage) 0.1-5 Wed Oct 09 2024, Brian Angelica ( [email protected] ) Added a Deprecated Functionality RHEL-18958 (Containers) 0.1-4 Wed Oct 09 2024, Brian Angelica ( [email protected] ) Added a Bug Fix RHEL-45908 (Identity Management) 0.1-3 Tue Sep 24 2024, Lenka Spackova ( [email protected] ) Added an Enhancement RHEL-49614 (Dynamic programming languages, web and database servers) 0.1-2 Tue Aug 27 2024, Lenka Spackova ( [email protected] ) Added a Bug Fix RHEL-39994 (Compilers and development tools) 0.1-1 Wed Aug 14 2024, Brian Angelica ( [email protected] ) Added a Known Issue RHELDOCS-18748 (Networking) 0.1-0 Wed Aug 14 2024, Brian Angelica ( [email protected] ) Added an Enhancement RHEL-47595 (Networking) 0.0-9 Fri Aug 09 2024, Brian Angelica ( [email protected] ) Added a Known Issue RHEL-11397 (Installer and image creation) 0.0-8 Thu Jul 18 2024, Gabriela Fialova ( [email protected] ) Updated a Deprecated Functionality in Jira:RHELDOCS-17573 (Identity Management) 0.0-7 Thu Jul 11 2024, Lenka Spackova ( [email protected] ) Added a Known Issue RHEL-45711 (System Roles) 0.0-6 Mon Jul 08 2024, Lenka Spackova ( [email protected] ) Fixed formatting and reference in RHEL-25405 (Compilers and development tools) 0.0-5 Wed Jul 03 2024, Lenka Spackova ( [email protected] ) Added a Known Issue RHEL-34075 (Identity Management) 0.0-4 Tue Jun 25 2024, Lenka Spackova ( [email protected] ) Added a Known Issue RHELDOCS-18435 (Dynamic programming languages, web and database servers) 0.0-3 Wed June 12 2024, Brian Angelica ( [email protected] ) Updated an Enhancement in Jira:RHELPLAN-123140 (Identity Management) 0.0-2 Fri June 7 2024, Brian Angelica ( [email protected] ) Updated a Known Issue in Jira:RHELDOCS-18326 (Red Hat Enterprise Linux System Roles) 0.0-1 Thu May 23 2024, Brian Angelica ( [email protected] ) Release of the Red Hat Enterprise Linux 8.10 Release Notes. 0.0-0 Wed March 27 2024, Lucie Varakova ( [email protected] ) Release of the Red Hat Enterprise Linux 8.10 Beta Release Notes.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.10_release_notes/revision_history
Configuring and managing virtualization
Configuring and managing virtualization Red Hat Enterprise Linux 8 Setting up your host, creating and administering virtual machines, and understanding virtualization features in Red Hat Enterprise Linux 8 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_virtualization/index
Chapter 5. Building with Maven
Chapter 5. Building with Maven The standard approach to developing applications for Spring Boot in Fuse is to use the Apache Maven build tool and to structure your source code as a Maven project. Fuse provides Maven quickstarts to get you started quickly and many of the Fuse build tools are provided as Maven plug-ins. For this reason, it is highly recommended that you adopt Maven as the build tool for Spring Boot projects in Fuse. 5.1. Generating a Maven project Fuse provides a selection of quickstarts, based on Maven archetypes, which you can use to generate an initial Maven project for a Spring Boot application. To prevent you from having to remember the location information and versions for various Maven archetypes, Fuse provides tooling to help you generate Maven projects for standalone Spring Boot projects. 5.1.1. Project generator at developers.redhat.com/launch The quickest way to get started with Spring Boot standalone in Fuse is to navigate to developers.redhat.com/launch and follow the instructions for the Spring Boot standalone runtime, to generate a new Maven project. After following the on-screen instructions, you will be prompted to download an archive file, which contains a complete Maven project that you can build and run locally. 5.1.2. Fuse tooling wizard in Developer Studio Alternatively, you can download and install Red Hat JBoss Developer Studio (which includes Fuse Tooling). Using the Fuse New Integration Project wizard, you can generate a new Spring Boot standalone project and continue to develop inside the Eclipse-based IDE. 5.2. Using Spring Boot BOM After creating and building your first Spring Boot project, you will soon want to add more components. But how do you know which versions of the Maven dependencies to add to your project? The simplest (and recommended) approach is to use the relevant Bill of Materials (BOM) file, which automatically defines all of the version dependencies for you. 5.2.1. BOM file for Spring Boot The purpose of a Maven Bill of Materials (BOM) file is to provide a curated set of Maven dependency versions that work well together, preventing you from having to define versions individually for every Maven artifact. Important Ensure you are using the correct Fuse BOM based on the version of Spring Boot you are using. The Fuse BOM for Spring Boot offers the following advantages: Defines versions for Maven dependencies, so that you do not need to specify the version when you add a dependency to your POM. Defines a set of curated dependencies that are fully tested and supported for a specific version of Fuse. Simplifies upgrades of Fuse. Important Only the set of dependencies defined by a Fuse BOM are supported by Red Hat. 5.2.2. Incorporate the BOM file To incorporate a BOM file into your Maven project, specify a dependencyManagement element in your project's pom.xml file (or, possibly, in a parent POM file), as shown in the following example for Spring Boot 2: Spring Boot 2 BOM <?xml version="1.0" encoding="UTF-8" standalone="no"?> <project ...> ... <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <!-- configure the versions you want to use here --> <fuse.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>fuse-springboot-bom</artifactId> <version>${fuse.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> ...
</project> After specifying the BOM using the dependency management mechanism, it is possible to add Maven dependencies to your POM without specifying the version of the artifact. For example, to add a dependency for the camel-hystrix component, you would add the following XML fragment to the dependencies element in your POM: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-hystrix-starter</artifactId> </dependency> Note how the Camel artifact ID is specified with the -starter suffix - that is, you specify the Camel Hystrix component as camel-hystrix-starter , not as camel-hystrix . The Camel starter components are packaged in a way that is optimized for the Spring Boot environment. 5.2.3. Spring Boot Maven plugin The Spring Boot Maven plugin is provided by Spring Boot and it is a developer utility for building and running a Spring Boot project: Building - create an executable Jar package for your Spring Boot application by entering the command mvn package in the project directory. The output of the build is placed in the target/ subdirectory of your Maven project. Running - for convenience, you can run the newly-built application with the command, mvn spring-boot:start . To incorporate the Spring Boot Maven plugin into your project POM file, add the plugin configuration to the project/build/plugins section of your pom.xml file, as shown in the following example. Example <?xml version="1.0" encoding="UTF-8" standalone="no"?> <project ...> ... <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <!-- configure the versions you want to use here --> <fuse.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.version> </properties> ... <build> <plugins> <plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>spring-boot-maven-plugin</artifactId> <version>${fuse.version}</version> <executions> <execution> <goals> <goal>repackage</goal> </goals> </execution> </executions> </plugin> </plugins> </build> ... </project>
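For reference, a minimal build-and-run sequence for a project that uses the plugin configuration above might look like the following sketch. The JAR file name is a placeholder only; the actual name depends on your project's artifactId and version.

  # Build the executable JAR; the repackage goal places it under target/:
  mvn clean package

  # Launch the repackaged application (substitute your own artifact name):
  java -jar target/my-fuse-springboot-app-1.0-SNAPSHOT.jar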
[ "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?> <project ...> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <!-- configure the versions you want to use here --> <fuse.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>fuse-springboot-bom</artifactId> <version>${fuse.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> </project>", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-hystrix-starter</artifactId> </dependency>", "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?> <project ...> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <!-- configure the versions you want to use here --> <fuse.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.version> </properties> <build> <plugins> <plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>spring-boot-maven-plugin</artifactId> <version>${fuse.version}</version> <executions> <execution> <goals> <goal>repackage</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_spring_boot/building-with-maven
Appendix C. Journaler configuration reference
Appendix C. Journaler configuration reference Reference of the list commands that can be used for journaler configuration. journaler_write_head_interval Description How frequently to update the journal head object. Type Integer Required No Default 15 journaler_prefetch_periods Description How many stripe periods to read ahead on journal replay. Type Integer Required No Default 10 journaler_prezero_periods Description How many stripe periods to zero ahead of write position. Type Integer Required No Default 10 journaler_batch_interval Description Maximum additional latency in seconds to incur artificially. Type Double Required No Default .001 journaler_batch_max Description Maximum bytes that will be delayed flushing. Type 64-bit Unsigned Integer Required No Default 0
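As an illustration only (not part of the original reference), these options can be set centrally with the ceph config command or in the Ceph configuration file. The values below simply restate the defaults listed above, and the assumption that they apply to the MDS daemons should be verified for your deployment.

  # Set journaler options centrally from an admin node:
  ceph config set mds journaler_write_head_interval 15
  ceph config set mds journaler_prefetch_periods 10

  # Equivalent ceph.conf snippet:
  [mds]
  journaler_write_head_interval = 15
  journaler_batch_interval = 0.001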
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/file_system_guide/journaler-configuration-reference_fs
9.6. Designing a Password Policy
9.6. Designing a Password Policy A password policy is a set of rules that govern how passwords are used in a given system. The Directory Server's password policy specifies the criteria that a password must satisfy to be considered valid, like the age, length, and whether users can reuse passwords. The following sections provide more information on designing a sound password policy: Section 9.6.1, "How Password Policy Works" Section 9.6.2, "Password Policy Attributes" Section 9.6.3, "Designing a Password Policy in a Replicated Environment" 9.6.1. How Password Policy Works Directory Server supports fine-grained password policy, which means password policies can be defined at the subtree and user level. This allows the flexibility of defining a password policy at any point in the directory tree: The entire directory. Such a policy is known as the global password policy. When configured and enabled, the policy is applied to all users within the directory except for the Directory Manager entry and those user entries that have local password policies enabled. This can define a common, single password policy for all directory users. A particular subtree of the directory. Such a policy is known as the subtree level or local password policy. When configured and enabled, the policy is applied to all users under the specified subtree. This is good in a hosting environment to support different password policies for each hosted company rather than enforcing a single policy for all the hosted companies. A particular user of the directory. Such a policy is known as the user level or local password policy. When configured and enabled, the policy is applied to the specified user only. This can define different password policies for different directory users. For example, specify that some users change their passwords daily, some users change it monthly, and all other users change it every six months. By default, Directory Server includes entries and attributes that are relevant to the global password policy, meaning the same policy is applied to all users. To set up a password policy for a subtree or user, add additional entries at the subtree or user level and enable the nsslapd-pwpolicy-local attribute of the cn=config entry. This attribute acts as a switch, turning fine-grained password policy on and off. You can change password policies by using the command line or the web console. Use the dsconf pwpolicy command to change global policies and the dsconf localpwp command to change local policies. For more information about setting password policies, see the Administration Guide . Note The ns-newpwpolicy.pl script that previously managed local password policies has been deprecated. However, this script is still available in the 389-ds-base-legacy-tools package. After password policy entries are added to the directory, they determine the type (global or local) of the password policy the Directory Server should enforce. When a user attempts to bind to the directory, Directory Server determines whether a local policy has been defined and enabled for the user's entry. To determine whether the fine-grained password policy is enabled, the server checks the value ( on or off ) assigned to the nsslapd-pwpolicy-local attribute of the cn=config entry. If the value is off , the server ignores the policies defined at the subtree and user levels and enforces the global password policy. 
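For example, the fine-grained password policy switch described above can be turned on with ldapmodify, as in the following sketch; the bind DN and server URL are placeholders, and dsconf can be used instead, as noted above.

  # policy-switch.ldif -- enable subtree-level and user-level (local) password policies;
  # set the value back to "off" to enforce only the global policy.
  dn: cn=config
  changetype: modify
  replace: nsslapd-pwpolicy-local
  nsslapd-pwpolicy-local: on

  # Apply the change (bind DN and server URL are placeholders):
  ldapmodify -x -D "cn=Directory Manager" -W -H ldap://server.example.com -f policy-switch.ldif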
To determine whether a local policy is defined for a subtree or user, the server checks for the pwdPolicysubentry attribute in the corresponding user entry. If the attribute is present, the server enforces the local password policy configured for the user. If the attribute is absent, the server logs an error message and enforces the global password policy. The server then compares the user-supplied password with the value specified in the user's directory entry to make sure they match. The server also uses the rules defined by the password policy to ensure that the password is valid before allowing the user to bind to the directory. Figure 9.3. Password Policy Checking Process In addition to bind requests, password policy checking also occurs during add and modify operations if the userPassword attribute (explained in the following section) is present in the request. Modifying the value of userPassword checks two password policy settings: The password minimum age policy is activated. If the minimum age requirement has not been satisfied, the server returns a constraintViolation error. The password update operation fails. The password history policy is activated. If the new value of userPassword is in the password history, or if it is the same as the current password, the server returns a constraintViolation error. The password update operation fails. Both adding and modifying the value of userPassword checks password policies set for the password syntax: The password minimum length policy is activated. If the new value of userPassword is less than the required minimum length, the server returns a constraintViolation error. The password update operation fails. The password syntax checking policy is activated. If the new value of userPassword is the same as another attribute of the entry, the server returns a constraintViolation error. The password update operation fails. 9.6.2. Password Policy Attributes The following sections describe the attributes to create a password policy for the server: Section 9.6.2.1, "Maximum Number of Failures" Section 9.6.2.2, "Password Change After Reset" Section 9.6.2.3, "User-Defined Passwords" Section 9.6.2.4, "Password Expiration" Section 9.6.2.5, "Expiration Warning" Section 9.6.2.6, "Grace Login Limit" Section 9.6.2.7, "Password Syntax Checking" Section 9.6.2.8, "Password Length" Section 9.6.2.9, "Password Minimum Age" Section 9.6.2.10, "Password History" Section 9.6.2.11, "Password Storage Schemes" Section 9.6.2.12, "Password Last Change Time" See the Red Hat Directory Server Administration Guide for instructions on how to set these attributes. 9.6.2.1. Maximum Number of Failures This is a setting in the password policy which enables password-based account lockouts. If a user attempts to log in a certain number of times and fails, then that account is locked until an administrator unlocks it or, optionally, a certain amount of time passes. This is set in the passwordMaxFailure parameter. There are two different ways to count login attempts when evaluating when the maximum number of failed attempts is reached. It can be a hard limit which locks the account when the number is hit ( n ) or which locks the account only when the count is exceeded ( n+1 ). For example, if the failure limit is three attempts, then the account could be locked at the third failed attempt ( n ) or at the fourth failed attempt ( n+1 ). The n+1 behavior is the historical behavior for LDAP servers, so it is considered legacy behavior. 
Newer LDAP clients expect the stricter hard limit. By default, the Directory Server uses the strict limit ( n ), but the legacy behavior can be enabled in the passwordLegacyPolicy parameter. 9.6.2.2. Password Change After Reset The Directory Server password policy can specify whether users must change their passwords after the first login or after the password has been reset by the administrator. The default passwords set by the administrator typically follow a company convention, such as the user's initials, user ID, or the company name. If this convention is discovered, it is usually the first value that a cracker uses in an attempt to break into the system. It is therefore recommended that users be required to change their password after it has been reset by an administrator. If this option is configured for the password policy, users are required to change their password even if user-defined passwords are disabled. If users are not required or allowed change their own passwords, administrator-assigned passwords should not follow any obvious convention and should be difficult to discover. The default configuration does not require that users change their password after it has been reset. See Section 9.6.2.3, "User-Defined Passwords" for more information. 9.6.2.3. User-Defined Passwords The password policy can be set either to allow or not to allow users to change their own passwords. A good password is the key to a strong password policy. Good passwords do not use trivial words; any word that can be found in a dictionary, names of pets or children, birthdays, user IDs, or any other information about the user that can be easily discovered (or stored in the directory itself), is a poor choice for a password. A good password should contain a combination of letters, numbers, and special characters. For the sake of convenience, however, users often use passwords that are easy to remember. Consequently, some enterprises choose to set passwords for users that meet the criteria of a strong password, and do not allow users to change their passwords. There are two disadvantages to having administrators set passwords for users: It requires a substantial amount of an administrator's time. Because administrator-specified passwords are typically more difficult to remember, users are more likely to write their password down, increasing the risk of discovery. By default, user-defined passwords are allowed. 9.6.2.4. Password Expiration The password policy can allow users to use the same passwords indefinitely or specify that passwords expire after a given time. In general, the longer a password is in use, the more likely it is to be discovered. If passwords expire too often, however, users may have trouble remembering them and resort to writing their passwords down. A common policy is to have passwords expire every 30 to 90 days. The server remembers the password expiration specification even if password expiration is disabled. If the password expiration is re-enabled, passwords are valid only for the duration set before it was last disabled. For example, if the password policy is set for passwords to expire every 90 days, and then password expiration is disabled and re-enabled, the default password expiration duration is 90 days. By default, user passwords never expire. 9.6.2.5. Expiration Warning If a password expiration period is set, it is a good idea to send users a warning before their passwords expire. The Directory Server displays the warning when the user binds to the server. 
If password expiration is enabled, by default, a warning is sent (using an LDAP message) to the user one day before the user's password expires, provided the user's client application supports this feature. The valid range for a password expiration warning to be sent is from one to 24,855 days. Note The password never expires until the expiration warning has been sent. 9.6.2.6. Grace Login Limit A grace period for expired passwords means that users can still log in to the system, even if their password has expired. To allow some users to log in using an expired password, specify the number of grace login attempts that are allowed to a user after the password has expired. By default, grace logins are not permitted. 9.6.2.7. Password Syntax Checking Password syntax checking enforces rules for password strings, so that any password has to meet or exceed certain criteria. All password syntax checking can be applied globally, per subtree, or per user. Password syntax checking is set in the passwordCheckSyntax attribute. The default password syntax requires a minimum password length of eight characters and that no trivial words are used in the password. A trivial word is any value stored in the uid , cn , sn , givenName , ou , or mail attributes of the user's entry. Additionally, other forms of password syntax enforcement are possible, providing different optional categories for the password syntax: Minimum required number of characters in the password ( passwordMinLength ) Minimum number of digit characters, meaning numbers between zero and nine ( passwordMinDigits ) Minimum number of ASCII alphabetic characters, both upper- and lower-case ( passwordMinAlphas ) Minimum number of uppercase ASCII alphabetic characters ( passwordMinUppers ) Minimum number of lowercase ASCII alphabetic characters ( passwordMinLowers ) Minimum number of special ASCII characters, such as !@#USD ( passwordMinSpecials ) Minimum number of 8-bit characters ( passwordMin8bit ) Maximum number of times that the same character can be immediately repeated, such as aaabbb ( passwordMaxRepeats ) Minimum number of character categories required per password; a category can be upper- or lower-case letters, special characters, digits, or 8-bit characters ( passwordMinCategories ) Directory Server checks the password against the CrackLib dictionary ( passwordDictCheck ) Directory Server checks if the password contains a palindrome ( passwordPalindrome ) Directory Server prevents setting a password that has more consecutive characters from the same category ( passwordMaxClassChars ) Directory Server prevents setting a password that contains certain strings ( passwordBadWords ) Directory Server prevents setting a password that contains strings set in administrator-defined attributes ( passwordUserAttributes ) The more categories of syntax required, the stronger the password. By default, password syntax checking is disabled. 9.6.2.8. Password Length The password policy can require a minimum length for user passwords. In general, shorter passwords are easier to crack. A good length for passwords is eight characters. This is long enough to be difficult to crack but short enough that users can remember the password without writing it down. The valid range of values for this attribute is from two to 512 characters. By default, no minimum password length is set. 9.6.2.9. Password Minimum Age The password policy can prevent users from changing their passwords for a specified time. 
When used in conjunction with the passwordHistory attribute, users are discouraged from reusing old passwords. For example, if the password minimum age ( passwordMinAge ) attribute is two days, users cannot repeatedly change their passwords during a single session. This prevents them from cycling through the password history so that they can reuse an old password. The valid range of values for this attribute is from zero to 24,855 days. A value of zero (0) indicates that the user can change the password immediately. 9.6.2.10. Password History The Directory Server can store from two to 24 passwords in the password history ; if a password is in the history, a user cannot reset his password to that old password. This prevents users from reusing a couple of passwords that are easy to remember. Alternatively, the password history can be disabled, thus allowing users to reuse passwords. The passwords remain in history even if the password history is off so that if the password history is turned back on, users cannot reuse the passwords that were in the history before the password history was disabled. The server does not maintain a password history by default. 9.6.2.11. Password Storage Schemes The password storage scheme specifies the type of encryption used to store Directory Server passwords within the directory. The Directory Server supports several different password storage schemes: Salted Secure Hash Algorithm (SSHA, SSHA-256, SSHA-384, and SSHA-512). This is the most secure password storage scheme and is the default. The recommended SSHA scheme is SSHA-256 or stronger. CLEAR , meaning no encryption. This is the only option which can be used with SASL Digest-MD5, so using SASL requires the CLEAR password storage scheme. Although passwords stored in the directory can be protected through the use of access control information (ACI) instructions, it is still not a good idea to store plain text passwords in the directory. Secure Hash Algorithm (SHA, SHA-256, SHA-384, and SHA-512). This is less secure than SSHA. UNIX CRYPT algorithm . This algorithm provides compatibility with UNIX passwords. MD5 . This storage scheme is less secure than SSHA, but it is included for legacy applications which require MD5. Salted MD5 . This storage scheme is more secure than plain MD5 hash, but still less secure than SSHA. This storage scheme is not included for use with new passwords but to help with migrating user accounts from directories which support salted MD5. 9.6.2.12. Password Last Change Time The passwordTrackUpdateTime attribute tells the server to record a timestamp for the last time that the password was updated for an entry. The password change time itself is stored as an operational attribute on the user entry, pwdUpdateTime (which is separate from the modifyTimestamp or lastModified operational attributes). By default, the password change time is not recorded. 9.6.3. Designing a Password Policy in a Replicated Environment Password and account lockout policies are enforced in a replicated environment as follows: Password policies are enforced on the data supplier. Account lockout is enforced on all servers in the replication setup. The password policy information in the directory, such as password age; the account lockout counter; and the expiration warning counter are all replicated. The configuration information, however, is stored locally and is not replicated. This information includes the password syntax and the history of password modifications. 
When configuring a password policy in a replicated environment, consider the following points: All replicas issue warnings of an impending password expiration. This information is kept locally on each server, so if a user binds to several replicas in turn, the user receives the same warning several times. In addition, if the user changes the password, it may take time for this information to filter through to the replicas. If a user changes a password and then immediately rebinds, the bind may fail until the replica registers the changes. The same bind behavior should occur on all servers, including suppliers and replicas. Always create the same password policy configuration information on each server. Account lockout counters may not work as expected in a multi-supplier environment.
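To make the attribute descriptions above concrete, the following illustrative LDIF (not taken from this guide; the values are examples only) defines a simple global policy on cn=config: lockout after three failures, syntax checking with a minimum length of eight characters, a minimum age of 172800 seconds (two days), password history, and recording of the last change time. The passwordLockout attribute is not described above; it is included here, as an assumption to verify against the Administration Guide, because it is the switch that turns account lockout on.

  # global-pwpolicy.ldif -- apply with ldapmodify as Directory Manager
  dn: cn=config
  changetype: modify
  replace: passwordLockout
  passwordLockout: on
  -
  replace: passwordMaxFailure
  passwordMaxFailure: 3
  -
  replace: passwordCheckSyntax
  passwordCheckSyntax: on
  -
  replace: passwordMinLength
  passwordMinLength: 8
  -
  replace: passwordMinAge
  passwordMinAge: 172800
  -
  replace: passwordHistory
  passwordHistory: on
  -
  replace: passwordTrackUpdateTime
  passwordTrackUpdateTime: on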
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_a_Secure_Directory-Designing_a_Password_Policy
3.5. Listing Hosts
3.5. Listing Hosts This Ruby example lists the hosts. # Get the reference to the root of the services tree: system_service = connection.system_service # Get the reference to the service that manages the # collection of hosts: host_service = system_service.hosts_service # Retrieve the list of hosts and for each one # print its name: hosts = host_service.list hosts.each do |host| puts host.name end In an environment with only one attached host ( Atlantic ) the example outputs: For more information, see https://www.rubydoc.info/gems/ovirt-engine-sdk/OvirtSDK4/HostsService#list-instance_method .
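As a small extension of this example (a sketch; verify the search parameter against the linked HostsService documentation), the list call can also be given a search expression so that only matching hosts are returned:

  # Retrieve only the hosts whose name matches the search expression,
  # then print the name and status of each one:
  hosts = host_service.list(search: 'name=Atlantic')
  hosts.each do |host|
    puts "#{host.name}: #{host.status}"
  end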
[ "# Get the reference to the root of the services tree: system_service = connection.system_service # Get the reference to the service that manages the # collection of hosts: host_service = system_service.hosts_service # Retrieve the list of hosts and for each one # print its name: hosts = host_service.list hosts.each do |host| puts host.name end", "Atlantic" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/ruby_sdk_guide/listing_hosts
Chapter 2. Features that are available in this release
Chapter 2. Features that are available in this release This release of the JBoss Web Server collection includes the following features. 2.1. New or changed features in the latest release The latest release of the JBoss Web Server collection provides the following new or changed features. 2.1.1. Support for ansible-core package version 2.16 or later The 2.1 release of the JBoss Web Server collection requires that you have installed the ansible-core package version 2.16 or later on a control node in your system. You can install the ansible-core package by installing Red Hat Ansible Automation Platform 2. x . For more information, see the Red Hat Ansible Automation Platform Installation Guide . 2.1.2. Automated installation of native archive file enabled by default From the 2.1 release onward, the JBoss Web Server collection is also configured to install the native archive file for the specified product version by default. The jws_native variable is now set to True by default. This supersedes the behavior in earlier releases where the jws_native variable was set to False by default. In this situation, unless you explicitly changed the jws_native variable setting to True , the JBoss Web Server collection did not install the native archive file. Note If you set the jws_native variable to False , the JBoss Web Server collection cannot install the native archive, which causes issues for features such as SELinux policies that require the installation of a native archive file. 2.1.3. Preconfigured become: true directives in redhat.jws.jws role From the 2.1 release onward, the redhat.jws.jws role is already preconfigured with become: true directives, which activate user privilege escalation for performing any automated tasks that require root privileges on your target hosts. 2.1.4. Requirement for become: true directive in playbook removed From the 2.1 release onward, because the redhat.jws.jws role is preconfigured with become: true directives, you no longer need to specify a become: true directive in your playbook. This supersedes the behavior in earlier releases where the JBoss Web Server collection required that you specify a become: true directive in your playbook to activate user privilege escalation at the play level. 2.2. New or changed features in earlier releases The JBoss Web Server collection includes the following features that were introduced in earlier releases. 2.2.1. Full Red Hat support From the 2.0 release onward, the JBoss Web Server collection is a fully supported feature from Red Hat. Before the 2.0 release, the JBoss Web Server collection was a Technology Preview feature only. 2.2.2. Support for automated installations of JBoss Web Server on RHEL 8 or RHEL 9 The JBoss Web Server collection supports the automated installation of Red Hat JBoss Web Server on target hosts that are running on Red Hat Enterprise Linux (RHEL) version 8 or 9. 2.2.3. Predefined set of variables for enabling automation tasks The JBoss Web Server collection provides a comprehensive set of predefined variables and default values that you can manually update to match your setup requirements. These variable settings provide all the information that the JBoss Web Server collection requires to complete an automated and customized installation of Red Hat JBoss Web Server on your target hosts. For a full list of variables that the JBoss Web Server collection provides, see the redhat.jws.jws role in Ansible automation hub . 
The information page for the redhat.jws.jws role lists the names, descriptions, and default values for all the variables that you can define. 2.2.4. Automated installation of a Red Hat JBoss Web Server base release from archive files By default, the JBoss Web Server collection supports the automated installation of Red Hat JBoss Web Server from product archive files. You can enable the JBoss Web Server collection to install the base release of a specified JBoss Web Server version from archive files. A base release is the initial release of a specific product version (for example, 6.0.0 is the base release of version 6.0 ). The JBoss Web Server collection requires that local copies of the appropriate archive files are available on your Ansible control node. If copies of the archive files are not already on your system, you can set variables to permit automatic file downloads from the Red Hat Customer Portal. For more information, see Support for automatic download of archive files . Alternatively, you can download the archive files manually. This feature also includes variables to support the following automation setup tasks: You can specify the base release of the product version that you want to install. If you have changed the names of the archive files on your Ansible control node, you can specify the appropriate file names. After you set the appropriate variables, the JBoss Web Server collection automatically extracts the archive files and installs the product on your target hosts when you subsequently run the playbook. For more information, see Enabling the automated installation of a JBoss Web Server base release . 2.2.5. Automated installation of Red Hat JBoss Web Server patch updates from archive files If product patch updates are available for the JBoss Web Server version that is being installed, you can also enable the JBoss Web Server collection to install these patch updates from archive files. This feature is disabled by default. You can use the same steps to enable the automated installation of patch updates regardless of whether you want to install these updates at the same time as the base release or later. The JBoss Web Server collection requires that local copies of the appropriate archive files are available on your Ansible control node. If copies of the archive files are not already on your system, you can set variables to permit automatic file downloads from the Red Hat Customer Portal. For more information, see Support for automatic download of archive files . Alternatively, you can download the archive files manually. This feature also includes variables to support the following automation setup tasks: You can enable the automated installation of patch updates. If you want to install a specified patch release rather than the latest available patch update, you can specify the appropriate patch release. If you want to prevent the JBoss Web Server collection from contacting the Red Hat Customer Portal for file downloads, you can enable a fully offline installation. For more information, see Support for fully offline installations from archive files . After you set the appropriate variables, the JBoss Web Server collection automatically extracts the archive files and installs the patch updates on your target hosts when you subsequently run the playbook. For more information, see Enabling the automated installation of JBoss Web Server patch updates . 2.2.6. 
Support for automatic download of archive files The JBoss Web Server collection is configured to support the automatic download of archive files by default. However, this feature also requires that you set variables to specify the client identifier (ID) and secret that are associated with your Red Hat service account. Note Service accounts enable you to securely and automatically connect and authenticate services or applications without requiring end-user credentials or direct interaction. To create a service account, you can log in to the Service Accounts page in the Red Hat Hybrid Cloud Console, and click Create service account . For more information, see Enabling the automated installation of a JBoss Web Server base release and Enabling the automated installation of JBoss Web Server patch updates . 2.2.7. Support for fully offline archive file installations By default, the JBoss Web Server collection is configured to contact the Red Hat Customer Portal to check if new patch updates are available. However, you can optionally set a variable to enforce a fully offline installation and prevent the collection from contacting the Red Hat Customer Portal, This feature is useful if your Ansible control node does not have internet access and you want the collection to avoid contacting the Red Hat Customer Portal for file downloads. Note If you enable this feature, you must also set a variable to specify the patch release that you want to install. You must also ensure that copies of the appropriate archive files already exist on your Ansible control node. For more information, see Enabling the automated installation of JBoss Web Server patch updates . 2.2.8. Automated installation of Red Hat JBoss Web Server from RPM packages You can enable the JBoss Web Server collection to install Red Hat JBoss Web Server from RPM packages. This feature is disabled by default. When you enable the RPM installation method, the JBoss Web Server collection installs the latest RPM packages for a specified major version of the product, including any minor version and patch updates. The collection obtains the RPM packages directly from Red Hat. This feature includes variables to support the following automation setup tasks: You can specify the product version that you want to install. You can enable the RPM installation method. After you set the appropriate variables, the JBoss Web Server collection automatically obtains the latest RPM packages and installs these packages on your target hosts when you subsequently run the playbook. For more information, see Enabling the automated installation of JBoss Web Server from RPM packages . 2.2.9. Automated installation of Red Hat build of OpenJDK By default, the JBoss Web Server collection does not install a JDK automatically on your target hosts, based on the assumption that you have already installed a supported JDK on these hosts. However, for the sake of convenience, you can optionally set a variable to enable the automated installation of a supported version of Red Hat build of OpenJDK. In this situation, the JBoss Web Server collection automatically installs the specified OpenJDK version on each target host when you subsequently run the playbook. Note The JBoss Web Server collection supports the automated installation of Red Hat build of OpenJDK only. If you want to use a supported version of IBM JDK or Oracle JDK, you must install the JDK manually on each target host or you can automate this process by using your playbook. 
For more information about manually installing a version of IBM JDK or Oracle JDK, see the Red Hat JBoss Web Server Installation Guide . For more information, see Ensuring that a JDK is installed on the target hosts . 2.2.10. Automated creation of product user account and group By default, the JBoss Web Server collection creates a tomcat user account and a tomcat group automatically on each target host. However, if you want the JBoss Web Server collection to create a different user account and group, you can set variables to modify the behavior of the JBoss Web Server collection to match your setup requirements. In this situation, the JBoss Web Server collection automatically creates the specified user account and group name on each target host when you subsequently run the playbook. For more information, see Ensuring that a product user and group are created on the target hosts . 2.2.11. Automated integration of Red Hat JBoss Web Server with systemd By default, the JBoss Web Server collection is not configured to set up Red Hat JBoss Web Server as a service that a system daemon can manage. However, if you want the JBoss Web Server collection to integrate Red Hat JBoss Web Server with a system daemon, you can set a variable to modify the behavior of the JBoss Web Server collection to match your setup requirements. If you enable this feature, the JBoss Web Server collection sets up Red Hat JBoss Web Server as a jws6โ€tomcat service automatically on each target host. However, if you want to use a different service name, you can also set a variable to instruct the JBoss Web Server collection to create a different service name. Note The JBoss Web Server service is managed by systemd . If you have not enabled an automated installation of Red Hat build of OpenJDK, you must also set a variable to specify the location of the JDK that is installed on your target hosts. This step is required to ensure successful integration with systemd . For more information, see Enabling the automated integration of JBoss Web Server with systemd . 2.2.12. Automated configuration of Red Hat JBoss Web Server product features The JBoss Web Server collection provides a comprehensive set of variables to enable the automated configuration of a Red Hat JBoss Web Server installation. By default, the JBoss Web Server collection configures Red Hat JBoss Web Server to listen for nonsecure HTTP connections on port 8080 . Other product features such as the following are disabled by default: Support for secure HTTPS connections Mod_cluster support for load-balancing HTTP server requests to the JBoss Web Server back end The password vault for storing sensitive data in an encrypted Java keystore Support for Apache JServ Protocol (AJP) traffic between JBoss Web Server and the Apache HTTP Server To enable a wider set of product features, you can set variables to modify the behavior of the JBoss Web Server collection to match your setup requirements. For more information, see Enablement of automated JBoss Web Server configuration tasks . 2.2.13. Automated deployment of JBoss Web Server applications You can also automate the deployment of web applications on your target hosts by adding customized tasks to the playbook. If you want to deploy a new or updated application when Red Hat JBoss Web Server is already running, the JBoss Web Server collection provides a handler to restart the web server when the application is deployed. 
For more information, see Enabling the automated deployment of JBoss Web Server applications on your target hosts .
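To tie the preceding options together, the following inventory sketch shows how such variables are typically grouped for the target hosts. The variable names below are illustrative placeholders only, not the collection's defined variable names; consult the linked procedures for the exact variables that control each feature.
# group_vars/jws_hosts.yml (illustrative placeholders only)
jws_service_account_client_id: "{{ vault_client_id }}"          # client ID of your Red Hat service account (assumed name)
jws_service_account_client_secret: "{{ vault_client_secret }}"  # client secret of your Red Hat service account (assumed name)
jws_offline_install: false       # set to true to enforce a fully offline archive installation (assumed name)
jws_install_method: archive      # or rpm to install from RPM packages (assumed name)
jws_install_openjdk: true        # let the collection install Red Hat build of OpenJDK (assumed name)
jws_user: tomcat                 # product user created on each target host
jws_group: tomcat                # product group created on each target host
jws_systemd_enabled: true        # integrate JBoss Web Server with systemd
jws_systemd_service_name: jws6-tomcat
jws_http_port: 8080              # default nonsecure HTTP connector port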
null
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_ansible_certified_content_collection_for_red_hat_jboss_web_server_release_notes/features_that_are_available_in_this_release
Chapter 2. Considerations for implementing the Load-balancing service
Chapter 2. Considerations for implementing the Load-balancing service You must make several decisions when you plan to deploy the Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) such as choosing which provider to use or whether to implement a highly available environment: Section 2.1, "Load-balancing service provider drivers" Section 2.2, "Load-balancing service (octavia) feature support matrix" Section 2.3, "Load-balancing service software requirements" Section 2.4, "Load-balancing service prerequisites for the undercloud" Section 2.5, "Basics of active-standby topology for Load-balancing service instances" Section 2.6, "Post-deployment steps for the Load-balancing service" 2.1. Load-balancing service provider drivers The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) supports enabling multiple provider drivers by using the Octavia v2 API. You can choose to use one provider driver, or multiple provider drivers simultaneously. RHOSP provides two load-balancing providers, amphora and Open Virtual Network (OVN). Amphora, the default, is a highly available load balancer with a feature set that scales with your compute environment. Because of this, amphora is suited for large-scale deployments. The OVN load-balancing provider is a lightweight load balancer with a basic feature set. OVN is typical for east-west, layer 4 network traffic. OVN provisions quickly and consumes fewer resources than a full-featured load-balancing provider such as amphora. On RHOSP deployments that use the neutron Modular Layer 2 plug-in with the OVN mechanism driver (ML2/OVN), RHOSP director automatically enables the OVN provider driver in the Load-balancing service without the need for additional installation or configuration. Important The information in this section applies only to the amphora load-balancing provider, unless indicated otherwise. Additional resources Section 2.2, "Load-balancing service (octavia) feature support matrix" 2.2. Load-balancing service (octavia) feature support matrix The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) provides two load-balancing providers, amphora and Open Virtual Network (OVN). Amphora is a full-featured load-balancing provider that requires a separate haproxy VM and an extra latency hop. OVN runs on every node and does not require a separate VM nor an extra hop. However, OVN has far fewer load-balancing features than amphora. The following table lists features in the Load-balancing service that Red Hat OpenStack Platform (RHOSP) 17.0 supports and in which maintenance release support for the feature began. Note If the feature is not listed, then RHOSP 17.0 does not support the feature. Table 2.1. 
Load-balancing service (octavia) feature support matrix Feature Support level in RHOSP 17.0 Amphora Provider OVN Provider ML2/OVS L3 HA Full support No support ML2/OVS DVR Full support No support ML2/OVS L3 HA + composable network node [1] Full support No support ML2/OVS DVR + composable network node [1] Full support No support ML2/OVN L3 HA Full support Full support ML2/OVN DVR Full support Full support DPDK No support No support SR-IOV No support No support Health monitors Full support No support Amphora active-standby Full support No support Terminated HTTPS load balancers (with barbican) Full support No support Amphora spare pool Technology Preview only No support UDP Full support Full support Backup members Technology Preview only No support Provider framework Technology Preview only No support TLS client authentication Technology Preview only No support TLS back end encryption Technology Preview only No support Octavia flavors Full support No support Object tags Full support No support Listener API timeouts Full support No support Log offloading Full support No support VIP access control list Full support No support Volume-based amphora No support No support [1] Network node with OVS, metadata, DHCP, L3, and Octavia (worker, health monitor, and housekeeping). Additional resources Section 2.1, "Load-balancing service provider drivers" 2.3. Load-balancing service software requirements The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) requires that you configure the following core OpenStack components: Compute (nova) OpenStack Networking (neutron) Image (glance) Identity (keystone) RabbitMQ MySQL 2.4. Load-balancing service prerequisites for the undercloud The Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia) has the following requirements for the RHOSP undercloud: A successful undercloud installation. The Load-balancing service present on the undercloud. A container-based overcloud deployment plan. Load-balancing service components configured on your Controller nodes. Important If you want to enable the Load-balancing service on an existing overcloud deployment, you must prepare the undercloud. Failure to do so results in the overcloud installation being reported as successful yet without the Load-balancing service running. To prepare the undercloud, see the Transitioning to Containerized Services guide. 2.5. Basics of active-standby topology for Load-balancing service instances When you deploy the Red Hat OpenStack Platform (RHOSP) Load-balancing service (octavia), you can decide whether, by default, load balancers are highly available when users create them. If you want to give users a choice, then after RHOSP deployment, create a Load-balancing service flavor for creating highly available load balancers and a flavor for creating standalone load balancers. By default, the amphora provider driver is configured for a single Load-balancing service (amphora) instance topology with limited support for high availability (HA). However, you can make Load-balancing service instances highly available when you implement an active-standby topology. In this topology, the Load-balancing service boots an active and standby instance for each load balancer, and maintains session persistence between each. If the active instance becomes unhealthy, the instance automatically fails over to the standby instance, making it active. The Load-balancing service health manager automatically rebuilds an instance that fails. 
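For reference, enabling the active-standby topology at deployment time typically amounts to setting a single parameter in a heat environment file that you include with the overcloud deploy command. The following snippet is a sketch only; the parameter name is an assumption based on common RHOSP director usage, so verify it for your RHOSP version before you use it.
# octavia-active-standby.yaml (illustrative environment file)
parameter_defaults:
  OctaviaLoadBalancerTopology: "ACTIVE_STANDBY"   # assumed parameter name; boots an active and a standby amphora for each load balancer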
Additional resources Section 4.2, "Enabling active-standby topology for Load-balancing service instances" 2.6. Post-deployment steps for the Load-balancing service Red Hat OpenStack Platform (RHOSP) provides a workflow task to simplify the post-deployment steps for the Load-balancing service (octavia). This workflow runs a set of Ansible playbooks to provide the following post-deployment steps as the last phase of the overcloud deployment: Configure certificates and keys. Configure the load-balancing management network between the amphorae and the Load-balancing service Controller worker and health manager. Amphora image On pre-provisioned servers, you must install the amphora image on the undercloud before you deploy the Load-balancing service: On servers that are not pre-provisioned, RHOSP director automatically downloads the default amphora image, uploads it to the overcloud Image service (glance), and then configures the Load-balancing service to use this amphora image. During a stack update or upgrade, director updates this image to the latest amphora image. Note Custom amphora images are not supported. Additional resources Section 4.1, "Deploying the Load-balancing service"
[ "sudo dnf install octavia-amphora-image-x86_64.noarch" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/using_octavia_for_load_balancing-as-a-service/plan-lb-service_rhosp-lbaas
Chapter 9. Advanced migration options
Chapter 9. Advanced migration options You can automate your migrations and modify the MigPlan and MigrationController custom resources in order to perform large-scale migrations and to improve performance. 9.1. Terminology Table 9.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 9.2. Migrating applications by using the command line You can migrate applications with the MTC API by using the command line interface (CLI) in order to automate the migration. 9.2.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. 
The PVs must be located in the same geographic region. The PVs must have the same storage class. 9.2.2. Creating a registry route for direct image migration For direct image migration, you must create a route to the exposed OpenShift image registry on all remote clusters. Prerequisites The OpenShift image registry must be exposed to external traffic on all remote clusters. The OpenShift Container Platform 4 registry is exposed by default. Procedure To create a route to an OpenShift Container Platform 4 registry, run the following command: USD oc create route passthrough --service=image-registry -n openshift-image-registry 9.2.3. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.11, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 9.2.3.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 9.2.3.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 9.2.3.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 9.2.3.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. 
If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 9.2.3.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 9.2.3.2.1. NetworkPolicy configuration 9.2.3.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 9.2.3.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 9.2.3.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 9.2.3.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. 
For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 9.2.3.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 9.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 9.2.3.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration 9.2.4. Migrating an application by using the MTC API You can migrate an application from the command line by using the Migration Toolkit for Containers (MTC) API. Procedure Create a MigCluster CR manifest for the host cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF Create a Secret object manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF 1 Specify the base64-encoded migration-controller service account (SA) token of the remote cluster. 
You can obtain the token by running the following command: USD oc sa get-token migration-controller -n openshift-migration | base64 -w 0 Create a MigCluster CR manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF 1 Specify the Cluster CR of the remote cluster. 2 Optional: For direct image migration, specify the exposed registry route. 3 SSL verification is enabled if false . CA certificates are not required or checked if true . 4 Specify the Secret object of the remote cluster. 5 Specify the URL of the remote cluster. Verify that all clusters are in a Ready state: USD oc describe cluster <cluster> Create a Secret object manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF 1 Specify the key ID in base64 format. 2 Specify the secret key in base64 format. AWS credentials are base64-encoded by default. For other storage providers, you must encode your credentials by running the following command with each key: USD echo -n "<key>" | base64 -w 0 1 1 Specify the key ID or the secret key. Both keys must be base64-encoded. Create a MigStorage CR manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF 1 Specify the bucket name. 2 Specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 3 Specify the storage provider. 4 Optional: If you are copying data by using snapshots, specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 5 Optional: If you are copying data by using snapshots, specify the storage provider. Verify that the MigStorage CR is in a Ready state: USD oc describe migstorage <migstorage> Create a MigPlan CR manifest: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF 1 Direct image migration is enabled if false . 2 Direct volume migration is enabled if false . 3 Specify the name of the MigStorage CR instance. 4 Specify one or more source namespaces. By default, the destination namespace has the same name. 
5 Specify a destination namespace if it is different from the source namespace. 6 Specify the name of the source cluster MigCluster instance. Verify that the MigPlan instance is in a Ready state: USD oc describe migplan <migplan> -n openshift-migration Create a MigMigration CR manifest to start the migration defined in the MigPlan instance: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF 1 Specify the MigPlan CR name. 2 The pods on the source cluster are stopped before migration if true . 3 A stage migration, which copies most of the data without stopping the application, is performed if true . 4 A completed migration is rolled back if true . Verify the migration by watching the MigMigration CR progress: USD oc watch migmigration <migmigration> -n openshift-migration The output resembles the following: Example output Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration ... Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47 9.2.5. State migration You can perform repeatable, state-only migrations by using Migration Toolkit for Containers (MTC) to migrate persistent volume claims (PVCs) that constitute an application's state. 
You migrate specified PVCs by excluding other PVCs from the migration plan. You can map the PVCs to ensure that the source and the target PVCs are synchronized. Persistent volume (PV) data is copied to the target cluster. The PV references are not moved, and the application pods continue to run on the source cluster. State migration is specifically designed to be used in conjunction with external CD mechanisms, such as OpenShift Gitops. You can migrate application manifests using GitOps while migrating the state using MTC. If you have a CI/CD pipeline, you can migrate stateless components by deploying them on the target cluster. Then you can migrate stateful components by using MTC. You can perform a state migration between clusters or within the same cluster. Important State migration migrates only the components that constitute an application's state. If you want to migrate an entire namespace, use stage or cutover migration. Prerequisites The state of the application on the source cluster is persisted in PersistentVolumes provisioned through PersistentVolumeClaims . The manifests of the application are available in a central repository that is accessible from both the source and the target clusters. Procedure Migrate persistent volume data from the source to the target cluster. You can perform this step as many times as needed. The source application continues running. Quiesce the source application. You can do this by setting the replicas of workload resources to 0 , either directly on the source cluster or by updating the manifests in GitHub and re-syncing the Argo CD application. Clone application manifests to the target cluster. You can use Argo CD to clone the application manifests to the target cluster. Migrate the remaining volume data from the source to the target cluster. Migrate any new data created by the application during the state migration process by performing a final data migration. If the cloned application is in a quiesced state, unquiesce it. Switch the DNS record to the target cluster to re-direct user traffic to the migrated application. Note MTC 1.6 cannot quiesce applications automatically when performing state migration. It can only migrate PV data. Therefore, you must use your CD mechanisms for quiescing or unquiescing applications. MTC 1.7 introduces explicit Stage and Cutover flows. You can use staging to perform initial data transfers as many times as needed. Then you can perform a cutover, in which the source applications are quiesced automatically. Additional resources See Excluding PVCs from migration to select PVCs for state migration. See Mapping PVCs to migrate source PV data to provisioned PVCs on the destination cluster. See Migrating Kubernetes objects to migrate the Kubernetes objects that constitute an application's state. 9.3. Migration hooks You can add up to four migration hooks to a single migration plan, with each hook running at a different phase of the migration. Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration. A migration hook runs on a source or a target cluster at one of the following migration steps: PreBackup : Before resources are backed up on the source cluster. PostBackup : After resources are backed up on the source cluster. PreRestore : Before resources are restored on the target cluster. PostRestore : After resources are restored on the target cluster. 
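For example, a hook that runs at the PreBackup step can quiesce an application by scaling its workload to zero replicas on the source cluster. The following playbook is a minimal sketch that assumes a Deployment named frontend in the first migrated namespace; it follows the same shell-module pattern as the examples later in this section.
- hosts: localhost
  gather_facts: false
  tasks:
  - name: Quiesce the frontend application before backup
    shell: >
      oc scale deployment frontend --replicas=0
      -n "{{ (lookup('env', 'MIGRATION_NAMESPACES')).split(',')[0] }}"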
You can create a hook by creating an Ansible playbook that runs with the default Ansible image or with a custom hook container. Ansible playbook The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan custom resource. The job continues to run until it reaches the default limit of 6 retries or a successful completion. This continues even if the initial pod is evicted or killed. The default Ansible runtime image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:1.8 . This image is based on the Ansible Runner image and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. Custom hook container You can use a custom hook container instead of the default Ansible image. 9.3.1. Writing an Ansible playbook for a migration hook You can write an Ansible playbook to use as a migration hook. The hook is added to a migration plan by using the MTC web console or by specifying values for the spec.hooks parameters in the MigPlan custom resource (CR) manifest. The Ansible playbook is mounted onto a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan CR. The hook container uses a specified service account token so that the tasks do not require authentication before they run in the cluster. 9.3.1.1. Ansible modules You can use the Ansible shell module to run oc commands. Example shell module - hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces You can use kubernetes.core modules, such as k8s_info , to interact with Kubernetes resources. Example k8s_facts module - hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: "{{ lookup( 'env', 'HOSTNAME') }}" register: pods - name: Print pod name debug: msg: "{{ pods.resources[0].metadata.name }}" You can use the fail module to produce a non-zero exit status in cases where a non-zero exit status would not normally be produced, ensuring that the success or failure of a hook is detected. Hooks run as jobs and the success or failure status of a hook is based on the exit status of the job container. Example fail module - hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: "fail" fail: msg: "Cause a failure" when: do_fail 9.3.1.2. Environment variables The MigPlan CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the lookup plugin. Example environment variables - hosts: localhost gather_facts: false tasks: - set_fact: namespaces: "{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}" - debug: msg: "{{ item }}" with_items: "{{ namespaces }}" - debug: msg: "{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}" 9.4. Migration plan options You can exclude, edit, and map components in the MigPlan custom resource (CR). 9.4.1. Excluding resources You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan to reduce the resource load for migration or to migrate images or PVs with a different tool. By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. 
These resources are parts of the service catalog API group and the OLM API group, neither of which is supported for migration at this time. Procedure Edit the MigrationController custom resource manifest: USD oc edit migrationcontroller <migration_controller> -n openshift-migration Update the spec section by adding parameters to exclude specific resources. For those resources that do not have their own exclusion parameters, add the additional_excluded_resources parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2 ... 1 Add disable_image_migration: true to exclude image streams from the migration. imagestreams is added to the excluded_resources list in main.yml when the MigrationController pod restarts. 2 Add disable_pv_migration: true to exclude PVs from the migration plan. persistentvolumes and persistentvolumeclaims are added to the excluded_resources list in main.yml when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan. 3 You can add OpenShift Container Platform resources that you want to exclude to the additional_excluded_resources list. Wait two minutes for the MigrationController pod to restart so that the changes are applied. Verify that the resource is excluded: USD oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1 The output contains the excluded resources: Example output name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims 9.4.2. Mapping namespaces If you map namespaces in the MigPlan custom resource (CR), you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges of the namespaces are copied during migration. Two source namespaces mapped to the same destination namespace spec: namespaces: - namespace_2 - namespace_1:namespace_2 If you want the source namespace to be mapped to a namespace of the same name, you do not need to create a mapping. By default, a source namespace and a target namespace have the same name. Incorrect namespace mapping spec: namespaces: - namespace_1:namespace_1 Correct namespace reference spec: namespaces: - namespace_1 9.4.3. Excluding persistent volume claims You select persistent volume claims (PVCs) for state migration by excluding the PVCs that you do not want to migrate. You exclude PVCs by setting the spec.persistentVolumes.pvc.selection.action parameter of the MigPlan custom resource (CR) after the persistent volumes (PVs) have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Add the spec.persistentVolumes.pvc.selection.action parameter to the MigPlan CR and set it to skip : apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: ... selection: action: skip 9.4.4. 
Mapping persistent volume claims You can migrate persistent volume (PV) data from the source cluster to persistent volume claims (PVCs) that are already provisioned in the destination cluster in the MigPlan CR by mapping the PVCs. This mapping ensures that the destination PVCs of migrated applications are synchronized with the source PVCs. You map PVCs by updating the spec.persistentVolumes.pvc.name parameter in the MigPlan custom resource (CR) after the PVs have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Update the spec.persistentVolumes.pvc.name parameter in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1 1 Specify the PVC on the source cluster and the PVC on the destination cluster. If the destination PVC does not exist, it will be created. You can use this mapping to change the PVC name during migration. 9.4.5. Editing persistent volume attributes After you create a MigPlan custom resource (CR), the MigrationController CR discovers the persistent volumes (PVs). The spec.persistentVolumes block and the status.destStorageClasses block are added to the MigPlan CR. You can edit the values in the spec.persistentVolumes.selection block. If you change values outside the spec.persistentVolumes.selection block, the values are overwritten when the MigPlan CR is reconciled by the MigrationController CR. Note The default value for the spec.persistentVolumes.selection.storageClass parameter is determined by the following logic: If the source cluster PV is Gluster or NFS, the default is either cephfs , for accessMode: ReadWriteMany , or cephrbd , for accessMode: ReadWriteOnce . If the PV is neither Gluster nor NFS or if cephfs or cephrbd are not available, the default is a storage class for the same provisioner. If a storage class for the same provisioner is not available, the default is the default storage class of the destination cluster. You can change the storageClass value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If the storageClass value is empty, the PV will have no storage class after migration. This option is appropriate if, for example, you want to move the PV to an NFS volume on the destination cluster. Prerequisites MigPlan CR is in a Ready state. Procedure Edit the spec.persistentVolumes.selection values in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs 1 Allowed values are move , copy , and skip . If only one action is supported, the default value is the supported action. If multiple actions are supported, the default value is copy . 2 Allowed values are snapshot and filesystem . Default value is filesystem . 3 The verify parameter is displayed if you select the verification option for file system copy in the MTC web console. You can set it to false . 4 You can change the default value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. 
If no value is specified, the PV will have no storage class after migration. 5 Allowed values are ReadWriteOnce and ReadWriteMany . If this value is not specified, the default is the access mode of the source cluster PVC. You can only edit the access mode in the MigPlan CR. You cannot edit it by using the MTC web console. 9.4.6. Converting storage classes in the MTC web console You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. To do so, you must create and run a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on the cluster on which MTC is running. You must add the cluster to the MTC web console. Procedure In the left-side navigation pane of the OpenShift Container Platform web console, click Projects . In the list of projects, click your project. The Project details page opens. Click the DeploymentConfig name. Note the name of its running pod. Open the YAML tab of the project. Find the PVs and note the names of their corresponding persistent volume claims (PVCs). In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must contain 3 to 63 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). From the Migration type menu, select Storage class conversion . From the Source cluster list, select the desired cluster for storage class conversion. Click . The Namespaces page opens. Select the required project. Click . The Persistent volumes page opens. The page displays the PVs in the project, all selected by default. For each PV, select the desired target storage class. Click . The wizard validates the new migration plan and shows that it is ready. Click Close . The new plan appears on the Migration plans page. To start the conversion, click the options menu of the new plan. Under Migrations , two options are displayed, Stage and Cutover . Note Cutover migration updates PVC references in the applications. Stage migration does not update PVC references in the applications. Select the desired option. Depending on which option you selected, the Stage migration or Cutover migration notification appears. Click Migrate . Depending on which option you selected, the Stage started or Cutover started message appears. To see the status of the current migration, click the number in the Migrations column. The Migrations page opens. To see more details on the current migration and monitor its progress, select the migration from the Type column. The Migration details page opens. When the migration progresses to the DirectVolume step and the status of the step becomes Running Rsync Pods to migrate Persistent Volume data , you can click View details and see the detailed status of the copies. In the breadcrumb bar, click Stage or Cutover and wait for all steps to complete. Open the PersistentVolumeClaims tab of the OpenShift Container Platform web console. You can see new PVCs with the names of the initial PVCs but ending in new , which are using the target storage class. In the left-side navigation pane, click Pods . See that the pod of your project is running again. Additional resources For details about the move and copy actions, see MTC workflow . For details about the skip action, see Excluding PVCs from migration . For details about the file system and snapshot copy methods, see About data copy methods . 9.4.7. 
Performing a state migration of Kubernetes objects by using the MTC API After you migrate all the PV data, you can use the Migration Toolkit for Containers (MTC) API to perform a one-time state migration of Kubernetes objects that constitute an application. You do this by configuring MigPlan custom resource (CR) fields to provide a list of Kubernetes resources with an additional label selector to further filter those resources, and then performing a migration by creating a MigMigration CR. The MigPlan resource is closed after the migration. Note Selecting Kubernetes resources is an API-only feature. You must update the MigPlan CR and create a MigMigration CR for it by using the CLI. The MTC web console does not support migrating Kubernetes objects. Note After migration, the closed parameter of the MigPlan CR is set to true . You cannot create another MigMigration CR for this MigPlan CR. You add Kubernetes objects to the MigPlan CR by using one of the following options: Adding the Kubernetes objects to the includedResources section. When the includedResources field is specified in the MigPlan CR, the plan takes a list of group-kind as input. Only resources present in the list are included in the migration. Adding the optional labelSelector parameter to filter the includedResources in the MigPlan . When this field is specified, only resources matching the label selector are included in the migration. For example, you can filter a list of Secret and ConfigMap resources by using the label app: frontend as a filter. Procedure Update the MigPlan CR to include Kubernetes resources and, optionally, to filter the included resources by adding the labelSelector parameter: To update the MigPlan CR to include Kubernetes resources: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" 1 Specify the Kubernetes object, for example, Secret or ConfigMap . Optional: To filter the included resources by adding the labelSelector parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" ... labelSelector: matchLabels: <label> 2 1 Specify the Kubernetes object, for example, Secret or ConfigMap . 2 Specify the label of the resources to migrate, for example, app: frontend . Create a MigMigration CR to migrate the selected Kubernetes resources. Verify that the correct MigPlan is referenced in migPlanRef : apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false 9.5. Migration controller options You can edit migration plan limits, enable persistent volume resizing, or enable cached Kubernetes clients in the MigrationController custom resource (CR) for large migrations and improved performance. 9.5.1. Increasing limits for large migrations You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC). Important You must test these changes before you perform a migration in a production environment. Procedure Edit the MigrationController custom resource (CR) manifest: USD oc edit migrationcontroller -n openshift-migration Update the following parameters: ... 
mig_controller_limits_cpu: "1" 1 mig_controller_limits_memory: "10Gi" 2 ... mig_controller_requests_cpu: "100m" 3 mig_controller_requests_memory: "350Mi" 4 ... mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7 ... 1 Specifies the number of CPUs available to the MigrationController CR. 2 Specifies the amount of memory available to the MigrationController CR. 3 Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3). 4 Specifies the amount of memory available for MigrationController CR requests. 5 Specifies the number of persistent volumes that can be migrated. 6 Specifies the number of pods that can be migrated. 7 Specifies the number of namespaces that can be migrated. Create a migration plan that uses the updated parameters to verify the changes. If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan. 9.5.2. Enabling persistent volume resizing for direct volume migration You can enable persistent volume (PV) resizing for direct volume migration to avoid running out of disk space on the destination cluster. When the disk usage of a PV reaches a configured level, the MigrationController custom resource (CR) compares the requested storage capacity of a persistent volume claim (PVC) to its actual provisioned capacity. Then, it calculates the space required on the destination cluster. A pv_resizing_threshold parameter determines when PV resizing is used. The default threshold is 3% . This means that PV resizing occurs when the disk usage of a PV is more than 97% . You can increase this threshold so that PV resizing occurs at a lower disk usage level. PVC capacity is calculated according to the following criteria: If the requested storage capacity ( spec.resources.requests.storage ) of the PVC is not equal to its actual provisioned capacity ( status.capacity.storage ), the greater value is used. If a PV is provisioned through a PVC and then subsequently changed so that its PV and PVC capacities no longer match, the greater value is used. Prerequisites The PVCs must be attached to one or more running pods so that the MigrationController CR can execute commands. Procedure Log in to the host cluster. Enable PV resizing by patching the MigrationController CR: USD oc patch migrationcontroller migration-controller -p '{"spec":{"enable_dvm_pv_resizing":true}}' \ 1 --type='merge' -n openshift-migration 1 Set the value to false to disable PV resizing. Optional: Update the pv_resizing_threshold parameter to increase the threshold: USD oc patch migrationcontroller migration-controller -p '{"spec":{"pv_resizing_threshold":41}}' \ 1 --type='merge' -n openshift-migration 1 The default value is 3 . When the threshold is exceeded, the following status message is displayed in the MigPlan CR status: status: conditions: ... - category: Warn durable: true lastTransitionTime: "2021-06-17T08:57:01Z" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: "False" type: PvCapacityAdjustmentRequired Note For AWS gp2 storage, this message does not appear unless the pv_resizing_threshold is 42% or greater because of the way gp2 calculates volume usage and size. ( BZ#1973148 ) 9.5.3. 
Enabling cached Kubernetes clients You can enable cached Kubernetes clients in the MigrationController custom resource (CR) for improved performance during migration. The greatest performance benefit is displayed when migrating between clusters in different regions or with significant network latency. Note Delegated tasks, for example, Rsync backup for direct volume migration or Velero backup and restore, however, do not show improved performance with cached clients. Cached clients require extra memory because the MigrationController CR caches all API resources that are required for interacting with MigCluster CRs. Requests that are normally sent to the API server are directed to the cache instead. The cache watches the API server for updates. You can increase the memory limits and requests of the MigrationController CR if OOMKilled errors occur after you enable cached clients. Procedure Enable cached clients by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_enable_cache", "value": true}]' Optional: Increase the MigrationController CR memory limits by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_limits_memory", "value": <10Gi>}]' Optional: Increase the MigrationController CR memory requests by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_requests_memory", "value": <350Mi>}]'
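As a convenience, the performance-related parameters described in this section can be reviewed together in a single MigrationController spec. The following manifest is a consolidated sketch, assuming that these fields can be combined in one custom resource; the values are examples only and you must test them before using them in a production migration.
apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  # Limits for large migrations
  mig_controller_limits_cpu: "1"
  mig_controller_limits_memory: "10Gi"
  mig_pv_limit: 100
  mig_pod_limit: 100
  mig_namespace_limit: 10
  # Persistent volume resizing for direct volume migration
  enable_dvm_pv_resizing: true
  pv_resizing_threshold: 10
  # Cached Kubernetes clients
  mig_controller_enable_cache: true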
[ "oc create route passthrough --service=image-registry -n openshift-image-registry", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF", "oc sa get-token migration-controller -n openshift-migration | base64 -w 0", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF", "oc describe cluster <cluster>", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF", "echo -n \"<key>\" | base64 -w 0 1", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF", "oc describe migstorage <migstorage>", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 
namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF", "oc describe migplan <migplan> -n openshift-migration", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF", "oc watch migmigration <migmigration> -n openshift-migration", "Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. 
Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47", "- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces", "- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"", "- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail", "- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"", "oc edit migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2", "oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1", "name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims", "spec: namespaces: - namespace_2 - namespace_1:namespace_2", "spec: namespaces: - namespace_1:namespace_1", "spec: namespaces: - namespace_1", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false", "oc edit migrationcontroller -n openshift-migration", "mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 
mig_namespace_limit: 10 7", "oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration", "oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/migration_toolkit_for_containers/advanced-migration-options-mtc
15.9. Displaying CPU Usage for Hosts
15.9. Displaying CPU Usage for Hosts To view the CPU usage for all hosts on your system: From the View menu, select Graph , then the Host CPU Usage check box. Figure 15.22. Enabling host CPU usage statistics graphing The Virtual Machine Manager shows a graph of host CPU usage on your system. Figure 15.23. Host CPU usage graph
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-managing_guests_with_the_virtual_machine_manager_virt_manager-displaying_host-cpu_usage
Chapter 22. OpenShiftAPIServer [operator.openshift.io/v1]
Chapter 22. OpenShiftAPIServer [operator.openshift.io/v1] Description OpenShiftAPIServer provides information to configure an operator to manage openshift-apiserver. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 22.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the OpenShift API Server. status object status defines the observed status of the OpenShift API Server. 22.1.1. .spec Description spec is the specification of the desired behavior of the OpenShift API Server. Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 22.1.2. .status Description status defines the observed status of the OpenShift API Server. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the latest revision used as suffix of revisioned secrets like encryption-config. A new revision causes a new deployment of pods. 
observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 22.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 22.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 22.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 22.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 22.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/openshiftapiservers DELETE : delete collection of OpenShiftAPIServer GET : list objects of kind OpenShiftAPIServer POST : create an OpenShiftAPIServer /apis/operator.openshift.io/v1/openshiftapiservers/{name} DELETE : delete an OpenShiftAPIServer GET : read the specified OpenShiftAPIServer PATCH : partially update the specified OpenShiftAPIServer PUT : replace the specified OpenShiftAPIServer /apis/operator.openshift.io/v1/openshiftapiservers/{name}/status GET : read status of the specified OpenShiftAPIServer PATCH : partially update status of the specified OpenShiftAPIServer PUT : replace status of the specified OpenShiftAPIServer 22.2.1. /apis/operator.openshift.io/v1/openshiftapiservers Table 22.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OpenShiftAPIServer Table 22.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 22.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OpenShiftAPIServer Table 22.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 22.5. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServerList schema 401 - Unauthorized Empty HTTP method POST Description create an OpenShiftAPIServer Table 22.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.7. Body parameters Parameter Type Description body OpenShiftAPIServer schema Table 22.8. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 201 - Created OpenShiftAPIServer schema 202 - Accepted OpenShiftAPIServer schema 401 - Unauthorized Empty 22.2.2. /apis/operator.openshift.io/v1/openshiftapiservers/{name} Table 22.9. Global path parameters Parameter Type Description name string name of the OpenShiftAPIServer Table 22.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OpenShiftAPIServer Table 22.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 22.12. Body parameters Parameter Type Description body DeleteOptions schema Table 22.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OpenShiftAPIServer Table 22.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 22.15. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OpenShiftAPIServer Table 22.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.17. Body parameters Parameter Type Description body Patch schema Table 22.18. 
HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OpenShiftAPIServer Table 22.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.20. Body parameters Parameter Type Description body OpenShiftAPIServer schema Table 22.21. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 201 - Created OpenShiftAPIServer schema 401 - Unauthorized Empty 22.2.3. /apis/operator.openshift.io/v1/openshiftapiservers/{name}/status Table 22.22. Global path parameters Parameter Type Description name string name of the OpenShiftAPIServer Table 22.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified OpenShiftAPIServer Table 22.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 22.25. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OpenShiftAPIServer Table 22.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.27. Body parameters Parameter Type Description body Patch schema Table 22.28. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OpenShiftAPIServer Table 22.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.30. Body parameters Parameter Type Description body OpenShiftAPIServer schema Table 22.31. 
HTTP responses HTTP code Response body 200 - OK OpenShiftAPIServer schema 201 - Created OpenShiftAPIServer schema 401 - Unauthorized Empty
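As a quick illustration of the PATCH endpoint and the spec.logLevel field described above, the following sketch raises the operator log level with the oc client and then reads back the ready replica count from status; it assumes the cluster-scoped resource instance is named cluster, which is the usual singleton name for this operator CR.
$ oc patch openshiftapiserver cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'
$ oc get openshiftapiserver cluster -o jsonpath='{.status.readyReplicas}'
Setting logLevel back to "Normal" with the same patch syntax restores the default verbosity.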
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operator_apis/openshiftapiserver-operator-openshift-io-v1
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/ipv6_networking_for_the_overcloud/making-open-source-more-inclusive
Chapter 14. Using qemu-img
Chapter 14. Using qemu-img The qemu-img command-line tool is used for formatting, modifying, and verifying various file systems used by KVM. qemu-img options and usages are highlighted in the sections that follow. Warning Never use qemu-img to modify images in use by a running virtual machine or any other process. This may destroy the image. Also, be aware that querying an image that is being modified by another process may encounter an inconsistent state. 14.1. Checking the Disk Image To perform a consistency check on a disk image with the file name imgname , run: qemu-img check [-f format ] imgname Note Only a selected group of formats support consistency checks. These include qcow2 , vdi , vhdx , vmdk , and qed .
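For example, a consistency check on a qcow2 image might look like the following; the image path is hypothetical, and the guest that uses the image must be shut down first, as the warning above explains.
$ qemu-img check -f qcow2 /var/lib/libvirt/images/guest.qcow2
The command reports how many clusters were checked and lists any leaked or corrupted clusters it finds.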
[ "qemu-img check [-f format ] imgname" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/chap-Using_qemu_img
DM Multipath
DM Multipath Red Hat Enterprise Linux 7 Configuring and managing Device Mapper Multipath Steven Levine Red Hat Customer Content Services [email protected]
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/index
Virtualization
Virtualization OpenShift Container Platform 4.18 OpenShift Virtualization installation, usage, and release notes Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/virtualization/index
4.4. Virtual Directory Information Tree Views
4.4. Virtual Directory Information Tree Views Directory Server supports a concept for hierarchical navigation and organization of directory information called virtual directory information tree views or virtual DIT views . Note Virtual views are not entirely compatible with multiple back ends in that the entries to be returned by the views must reside in the same back end; the search is limited to one back end. 4.4.1. About Virtual DIT Views There are two ways to configure the directory namespace: A hierarchical directory information tree. A flat directory information tree. The hierarchical DIT is useful for navigating the directory but is cumbersome and time-consuming to change. A major organizational change to a hierarchical DIT can be an expensive and time-consuming operation, because it usually involves considerable service disruption. This can usually only be minimized by performing changes after hours and during periods of low traffic. The flat DIT, while requiring little to no change, does not provide a convenient way to navigate or manage the entries in the directory service. A flat DIT also presents many management challenges as administration becomes more complex without any natural hierarchical groupings. Figure 4.14. Examples of a Flat and an Organizationally-Based DIT Using a hierarchical DIT, a deployment must then determine the subject domain of the hierarchy. Only one choice can be made; the natural tendency is to choose the organizational hierarchy. This view of the organization serves well in many cases, but having only a single view can be very limiting for directory navigation and management. For example, an organizational hierarchy is fine for looking for entries that belong to people in the Accounts department. However, this view is much less useful for finding entries that belong to people in a geographical location, such as Mountain View, California. The second query is as valid as the first, yet it requires knowledge of the attributes contained in the entries and additional search tools. For such a case, navigation using the DIT is not an option. Similarly, management of the directory is much easier when the DIT matches the requirements of the management function. The organization of the DIT may also be affected by other factors, such as replication and migration considerations, that cause the DIT to have functional utility for those applications but very little practical utility in other cases. Clearly, hierarchies are a useful mechanism for navigation and management. To avoid the burden of making changes to an existing DIT, however, a deployment may elect to forgo a hierarchy altogether in favor of a flat DIT. It would be advantageous for deployments if the directory provided a way to create an arbitrary number of hierarchies that get mapped to entries without having to move the target entries in question. The virtual DIT views feature of Directory Server resolves the quandary of deciding the type of DIT to use for the directory deployment. Virtual DIT views provide a way to hierarchically navigate entries without the requirement that those entries physically exist in any particular place. The virtual DIT view uses information about the entries to place them in the view hierarchy. To client applications, virtual DIT views appear as ordinary container hierarchies. In a sense, virtual DIT views superimpose a DIT hierarchy over a set of entries, irrespective of whether those entries are in a flat namespace or in another hierarchy of their own. 
Create a virtual DIT view hierarchy in the same way as a normal DIT hierarchy. Create the same entries (for example, organizational unit entries) but with an additional object class ( nsview ) and a filter attribute ( nsviewfilter ) that describes the view. After adding the additional attribute, the entries that match the view filter instantly populate the view. The target entries only appear to exist in the view; their true location never changes. Virtual DIT views behave like normal DITs in that a subtree or a one-level search can be performed with the expected results being returned. For information about adding and modifying entries, see "Creating Directory Entries" in the Red Hat Directory Server Administration Guide . Figure 4.15. A Combined DIT Using Views The DIT in Figure 4.15, "A Combined DIT Using Views" illustrates what happens when the two DITs shown in Figure 4.14, "Examples of a Flat and an Organizationally-Based DIT" are combined using views. Because views inherently allow entries to appear in more than one place in a view hierarchy, this feature has been used to expand the ou=Sales entry to enable viewing the Sales entries either by location or by product. Given a set of virtual DIT view hierarchies, a directory user can use the view that makes the most sense to navigate to the required entries. For example, if the target entries were those who live in Mountain View, a view which begins by navigating using location-based information is most appropriate. If it were an organizational question, the organization view would be a better choice. Both of these views exist in the Directory Server at the same time and operate on the same entries; the different views just have different objectives when displaying their version of the directory structure. The entries in the views-enabled directory in Figure 4.15, "A Combined DIT Using Views" are contained in a flat namespace just below the parent of the top-most view in the hierarchy. This is not required. The entries can exist in a hierarchy of their own. The only concern that a view has about the placement of an entry is that it must be a descendant of the parent of the view hierarchy. Figure 4.16. A DIT with a Virtual DIT View Hierarchy The sub-tree ou=People contains the real Entry A and Entry B entries. The sub-tree ou=Location Views is a view hierarchy. The leaf nodes ou=Sunnyvale and ou=Mountain View each contain an attribute, nsviewfilter , which describes the view. These are leaf nodes because they do not contain the real entries. However, when a client application searches these views, it finds Entry A under ou=Sunnyvale and Entry B under ou=Mountain View . This virtual search space is described by the nsviewfilter attributes of all ancestor views. A search made from a view returns both entries from the virtual search space and those from the actual search space. This enables the view hierarchies to function as a conventional DIT or change a conventional DIT into a view hierarchy. 4.4.2. Advantages of Using Virtual DIT Views The deployment decisions become easier with virtual DIT views because: Views facilitate the use of a flat namespace for entries, because virtual DIT views provide navigational and managerial support similar to those provided by traditional hierarchies. In addition, whenever there is a change to the DIT, the entries never need to be moved; only the virtual DIT view hierarchies change. Because these hierarchies contain no real entries, they are simple and quick to modify.
Oversights during deployment planning are less catastrophic with virtual DIT views. If the hierarchy is not developed correctly in the first instance, it can be changed easily and quickly without disrupting the service. View hierarchies can be completely revised in minutes and the results instantly realized, significantly reducing the cost of directory maintenance. Changes to a virtual DIT hierarchy are instantly realized. When an organizational change occurs, a new virtual DIT view can be created quickly. The new virtual DIT view can exist at the same time as the old view, thereby facilitating a more gradual changeover for the entries themselves and for the applications that use them. Because an organizational change in the directory is not an all-or-nothing operation, it can be performed over a period of time and without service disruption. Using multiple virtual DIT views for navigation and management allows for more flexible use of the directory service. With the functionality provided by virtual DIT views, an organization can use both the old and new methods to organize directory data without any requirement to place entries at certain points in the DIT. Virtual DIT view hierarchies can be created as a kind of ready-made query to facilitate the retrieval of commonly-required information. Views promote flexibility in working practices and reduce the requirement that directory users create complex search filters, using attribute names and values that they would otherwise have no need to know. The flexibility of having more than one way to view and query directory information allows end users and applications to find what they need intuitively through hierarchical navigation. 4.4.3. Example of Virtual DIT Views The LDIF entries below show a virtual DIT view hierarchy that is based on location. Any entry that resides below dc=example,dc=com and fits the view description appears in this view, organized by location. A subtree search based at ou=Location Views,dc=example,dc=com would return all entries below dc=example,dc=com which match the filters (l=Sunnyvale) , (l=Santa Clara) , or (l=Cupertino) . Conversely, a one-level search would return no entries other than the child view entries because all qualifying entries reside in the three descendant views. The ou=Location Views,dc=example,dc=com view entry itself does not contain a filter. This feature facilitates hierarchical organization without the requirement to further restrict the entries contained in the view. Any view may omit the filter. Although the example filters are very simple, the filter used can be as complex as necessary. It may be desirable to limit the type of entry that the view should contain. For example, to limit this hierarchy to contain only people entries, add an nsfilter attribute to ou=Location Views,dc=example,dc=com with the filter value (objectclass=organizationalperson) . Each view with a filter restricts the content of all descendant views, while descendant views with filters also restrict their ancestor's contents. For example, creating the top view ou=Location Views first together with the new filter mentioned above would create a view with all entries with the organization object class. When the descendant views are added that further restrict entries, the entries that now appear in the descendant views are removed from the ancestor views. This demonstrates how virtual DIT views mimic the behavior of traditional DITs. 
Although virtual DIT views mimic the behavior of traditional DITs, views can do something that traditional DITs cannot: entries can appear in more than one location. For example, to associate Entry B with both Mountain View and Sunnyvale (see Figure 4.16, "A DIT with a Virtual DIT View Hierarchy" ), add the Sunnyvale value to the location attribute, and the entry appears in both views. 4.4.4. Views and Other Directory Features Both class of service and roles in Directory Server support views; see Section 4.3, "Grouping Directory Entries" . When adding a class of service or a role under a view hierarchy, the entries that are both logically and actually contained in the view are considered within scope. This means that roles and class of service can be applied using a virtual DIT view, but the effects of that application can be seen even when querying the flat namespace. For information on using these features, see "Advanced Entry Management," in the Red Hat Directory Server Administration Guide . The use of views requires a slightly different approach to access control. Because there is currently no explicit support for ACLs in views, create role-based ACLs at the view parent and add the roles to the appropriate parts of the view hierarchy. In this way, take advantage of the organizational property of the hierarchy. If the base of a search is a view and the scope of the search is not a base, then the search is a views-based search. Otherwise, it is a conventional search. For example, performing a search with a base of dc=example,dc=com does not return any entries from the virtual search space; in fact, no virtual-search-space search is performed. Views processing occurs only if the search base is ou=Location Views . This way, views ensure that the search does not result in entries from both locations. (If it were a conventional DIT, entries from both locations would be returned.) 4.4.5. Effects of Virtual Views on Performance The performance of views-based hierarchies depends on the construction of the hierarchy itself and the number of entries in the DIT. In general, there may be a marginal change in performance (within a few percentage points of equivalent searches on a conventional DIT) if virtual DIT views are enabled in the directory service. If a search does not invoke a view, then there is no performance impact. Test the virtual DIT views against expected search patterns and loads before deployment. We also recommend that the attributes used in view filters be indexed if the views are to be used as general-purpose navigation tools in the organization. Further, when a sub-filter used by views matches a configured virtual list view index, that index is used in views evaluation. There is no need to tune any other part of the directory specifically for views. 4.4.6. Compatibility with Existing Applications Virtual DIT views are designed to mimic conventional DITs to a high degree. The existence of views should be transparent to most applications; there should be no indication that they are working with views. Except for a few specialized cases, there is no need for directory users to know that views are being used in a Directory Server instance; views appear and behave like conventional DITs. Certain types of applications may have problems working with a views-enabled directory service. For example: Applications that use the DN of a target entry to navigate up the DIT. 
This type of application would find that it is navigating up the hierarchy in which the entry physically exists instead of the view hierarchy in which the entry was found. The reason for this is that views make no attempt to disguise the true location of an entry by changing the DN of the entry to conform to the view's hierarchy. This is by design - many applications would not function if the true location of an entry were disguised, such as those applications that rely on the DN to identify a unique entry. This upward navigation by deconstructing a DN is an unusual technique for a client application, but, nonetheless, those clients that do this may not function as intended. Applications that use the numSubordinates operational attribute to determine how many entries exist beneath a node. For the nodes in a view, this is currently a count of only those entries that exist in the real search space, ignoring the virtual search space. Consequently, applications may not evaluate the view with a search.
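As a sketch of the views-based search rule described in Section 4.4.4, the following ldapsearch targets one of the example view entries from Section 4.4.3; the server URL and bind DN are placeholders, not values defined by the views feature.
$ ldapsearch -x -H ldap://ldap.example.com -D "cn=Directory Manager" -W -b "ou=Sunnyvale,ou=Location Views,dc=example,dc=com" -s sub "(objectClass=*)"
Because the search base is a view and the scope is not base, the server evaluates the view filter (l=Sunnyvale) and returns matching entries from below dc=example,dc=com even though none of them physically reside under the view entry.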
[ "dn: ou=Location Views,dc=example,dc=com objectclass: top objectclass: organizationalUnit objectclass: nsView ou: Location Views description: views categorized by location dn: ou=Sunnyvale,ou=Location Views,dc=example,dc=com objectclass: top objectclass: organizationalUnit objectclass: nsView ou: Sunnyvale nsViewFilter: (l=Sunnyvale) description: views categorized by location dn: ou=Santa Clara,ou=Location Views,dc=example,dc=com objectclass: top objectclass: organizationalUnit objectclass: nsView ou: Santa Clara nsViewFilter: (l=Santa Clara) description: views categorized by location dn: ou=Cupertino,ou=Location Views,dc=example,dc=com objectclass: top objectclass: organizationalUnit objectclass: nsView ou: Cupertino nsViewFilter: (l=Cupertino) description: views categorized by location" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_the_Directory_Tree-Virtual_Directory_Information_Tree_Views
14.10. Re-sizing the Disk Image
14.10. Re-sizing the Disk Image Change the disk image filename as if it had been created with size size . Only images in raw format can be resized in both directions, whereas qcow2 images can be grown but cannot be shrunk. Use the following to set the size of the disk image filename to size bytes: You can also resize relative to the current size of the disk image. To give a size relative to the current size, prefix the number of bytes with + to grow, or - to reduce the size of the disk image by that number of bytes. Adding a unit suffix allows you to set the image size in kilobytes (K), megabytes (M), gigabytes (G) or terabytes (T). Warning Before using this command to shrink a disk image, you must use file system and partitioning tools inside the VM itself to reduce allocated file systems and partition sizes accordingly. Failure to do so will result in data loss. After using this command to grow a disk image, you must use file system and partitioning tools inside the VM to actually begin using the new space on the device.
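For example, growing a hypothetical raw image by 10 gigabytes and then confirming the new virtual size might look like this; the path is illustrative only.
$ qemu-img resize /var/lib/libvirt/images/guest.img +10G
$ qemu-img info /var/lib/libvirt/images/guest.img
After growing the image, extend the partition table and file systems inside the guest so that the new space becomes usable.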
[ "qemu-img resize filename size", "qemu-img resize filename [+|-] size [K|M|G|T]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-using_qemu_img-re_sizing_the_disk_image
function::caller_addr
function::caller_addr Name function::caller_addr - Return caller address Synopsis Arguments None General Syntax caller_addr: long Description This function returns the address of the calling function. Works only for return probes at this time.
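A minimal command-line sketch that exercises this function; the probed kernel function (vfs_read) is an arbitrary illustrative choice, and symname() is used only to turn the returned address into a readable symbol.
$ stap -e 'probe kernel.function("vfs_read").return { printf("caller of vfs_read: %s\n", symname(caller_addr())); exit() }'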
[ "function caller_addr:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-caller-addr
3.7. Software Collection MANPATH Support
3.7. Software Collection MANPATH Support To allow the man command on the system to display man pages from the enabled Software Collection, update the MANPATH environment variable with the paths to the man pages that are associated with the Software Collection. To update the MANPATH environment variable, add the following to the %install section of the Software Collection spec file: %install cat >> %{buildroot}%{_scl_scripts}/enable << EOF export MANPATH="%{_mandir}:\USD{MANPATH:-}" EOF This configures the enable scriptlet to update the MANPATH environment variable. The man pages associated with the Software Collection are then not visible as long as the Software Collection is not enabled. The Software Collection can provide a wrapper script that is visible to the system to enable the Software Collection, for example in the /usr/bin/ directory. In this case, ensure that the man pages are visible to the system even if the Software Collection is disabled. To allow the man command on the system to display man pages from the disabled Software Collection, update the MANPATH environment variable with the paths to the man pages associated with the Software Collection. Procedure 3.7. Updating the MANPATH environment variable for the disabled Software Collection To update the MANPATH environment variable, create a custom script /etc/profile.d/ name.sh . The script is preloaded when a shell is started on the system. For example, create the following file: Use the manpage.sh short script that modifies the MANPATH variable to refer to your man path directory: Add the file to your Software Collection package's spec file: SOURCE2: %{?scl_prefix}manpage.sh Install this file into the system /etc/profile.d/ directory by adjusting the %install section of the Software Collection package's spec file: %install install -p -c -m 644 %{SOURCE2} USDRPM_BUILD_ROOT%{?scl:%_root_sysconfdir}%{!?scl:%_sysconfdir}/profile.d/
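Putting the pieces together, a hypothetical Software Collection named myorg-ruby193 might ship a manpage.sh along the following lines; the provider and collection names, and therefore the path, are placeholders rather than values mandated by the Software Collection tooling.
export MANPATH="/opt/myorg/ruby193/root/usr/share/man:${MANPATH}"
Installed into the /etc/profile.d/ directory as shown in the %install section above, this keeps the collection's man pages visible to the man command even while the collection is disabled.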
[ "%install cat >> %{buildroot}%{_scl_scripts}/enable << EOF export MANPATH=\"%{_mandir}:\\USD{MANPATH:-}\" EOF", "%{?scl_prefix}manpage.sh", "export MANPATH=\"/opt/ provider / software_collection/path/to/your/man_pages :USD{MANPATH}\"", "SOURCE2: %{?scl_prefix}manpage.sh", "%install install -p -c -m 644 %{SOURCE2} USDRPM_BUILD_ROOT%{?scl:%_root_sysconfdir}%{!?scl:%_sysconfdir}/profile.d/" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-Software_Collection_manpath_Support
2.44. RHEA-2011:0532 - new package: svrcore
2.44. RHEA-2011:0532 - new package: svrcore A new svrcore package is now available for Red Hat Enterprise Linux 6. The svrcore package contains an API library which provides various methods of handling and managing secure Personal Identification Number (PIN) storage. The svrcore library uses the Mozilla NSS cryptographic library. An example of an application which would use svrcore is one that must be restarted without user intervention, but which requires a PIN to unlock a private key and other cryptographic objects. This enhancement update adds a new svrcore package to Red Hat Enterprise Linux 6. (BZ#643539) All users requiring svrcore should install this newly-released package.
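For instance, on a Red Hat Enterprise Linux 6 system subscribed to the appropriate channel, the package can be installed with yum:
yum install svrcore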
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/svrcore_new
39.3. Migrating an LDAP Server to Identity Management
39.3. Migrating an LDAP Server to Identity Management Important This is a general migration procedure, but it may not work in every environment. It is strongly recommended that you set up a test LDAP environment and test the migration process before attempting to migrate the real LDAP environment. To verify that the migration has been completed correctly: Create a test user on IdM using the ipa user-add command and compare the output of migrated users to the test user. Make sure that the migrated users contain the minimal set of attributes and object classes present on the test user. Compare the output of migrated users (seen on IdM) to the source users (seen on the original LDAP server). Make sure that imported attributes are not doubled and have the expected values. Install the IdM server, including any custom LDAP directory schema, on a different machine from the existing LDAP directory. Note Custom user or group schemas have limited support in IdM. They can cause problems during the migration because of incompatible object definitions. Disable the compat plug-in. This step is not necessary if the data provided by the compat tree is required during the migration. Restart the IdM Directory Server instance. Configure the IdM server to allow migration: Run the IdM migration script, ipa migrate-ds. At its most basic, this requires only the LDAP URL of the LDAP directory instance to migrate: Simply passing the LDAP URL migrates all of the directory data using common default settings. The user and group data can be selectively migrated by specifying other options, as covered in Section 39.2, "Examples for Using ipa migrate-ds". If the compat plug-in was not disabled in the previous step, pass the --with-compat option to ipa migrate-ds. Once the information is exported, the script adds all required IdM object classes and attributes and converts DNs in attributes to match the IdM directory tree, if the naming context differs. For example: uid=user,ou=people,dc=ldap,dc=example,dc=com is migrated to uid=user,ou=people,dc=idm,dc=example,dc=com. Re-enable the compat plug-in, if it was disabled before the migration. Restart the IdM Directory Server instance. Disable the migration mode: Optional. Reconfigure non-SSSD clients to use Kerberos authentication (pam_krb5) instead of LDAP authentication (pam_ldap). Use PAM_LDAP modules until all of the users have been migrated; then it is possible to use PAM_KRB5. For further information, see Configuring a Kerberos Client in the System-Level Authentication Guide. There are two ways for users to generate their hashed Kerberos password. Both migrate the users' passwords without additional user interaction, as described in Section 39.1.2, "Planning Password Migration". Using SSSD: Move clients that have SSSD installed from the LDAP back end to the IdM back end, and enroll them as clients with IdM. This downloads the required keys and certificates. On Red Hat Enterprise Linux clients, this can be done using the ipa-client-install command. For example: Using the IdM migration web page: Instruct users to log into IdM using the migration web page: To monitor the user migration process, query the existing LDAP directory to see which user accounts have a password but do not yet have a Kerberos principal key. Note Include the single quotes around the filter so that it is not interpreted by the shell. When the migration of all clients and users is complete, decommission the LDAP directory.
[ "ipa user-add TEST_USER", "ipa user-show --all TEST_USER", "ipa-compat-manage disable", "systemctl restart dirsrv.target", "ipa config-mod --enable-migration=TRUE", "ipa migrate-ds ldap://ldap.example.com:389", "ipa-compat-manage enable", "systemctl restart dirsrv.target", "ipa config-mod --enable-migration=FALSE", "ipa-client-install --enable-dns-update", "https:// ipaserver.example.com /ipa/migration", "[user@server ~]USD ldapsearch -LL -x -D 'cn=Directory Manager' -w secret -b 'cn=users,cn=accounts,dc=example,dc=com' '(&(!(krbprincipalkey=*))(userpassword=*))' uid" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/mig-ldap-to-idm
Chapter 2. Installing Kaoto
Chapter 2. Installing Kaoto Important It is recommended to install the Extension Pack for Apache Camel by Red Hat. It provides a set of tools to manage, edit, run, and debug Camel integrations in various contexts. The following procedure explains how to install VS Code and the other essential extensions required to get started with Kaoto. If you do not have Visual Studio Code installed, install it from here. Launch VS Code. Install the Extension Pack for Apache Camel by Red Hat into your Visual Studio Code instance. To run and debug Camel integrations using the VS Code UI, install the JBang CLI. Optionally, to run Camel integrations from the command line, install the Camel CLI. Important Create and select a workspace folder where all the integrations will be stored. Selecting a workspace is important to ensure that all required commands are available and behave correctly.
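For example, once the Camel CLI is installed, an integration can be started from a terminal roughly as follows (hello.camel.yaml is only a placeholder file name):
camel run hello.camel.yaml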
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/kaoto/installing-kaoto
probe::ioscheduler_trace.unplug_timer
probe::ioscheduler_trace.unplug_timer Name probe::ioscheduler_trace.unplug_timer - Fires when the unplug timer associated with a request queue expires Synopsis ioscheduler_trace.unplug_timer Values rq_queue - request queue name - name of the probe point Description Fires when the unplug timer associated with a request queue expires.
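A minimal command-line sketch of using this probe point (the output format is chosen only for illustration):
stap -e 'probe ioscheduler_trace.unplug_timer { printf("%s fired for request queue %s\n", name, rq_queue) }'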
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ioscheduler-trace-unplug-timer
Chapter 3. Creating policies
Chapter 3. Creating policies The following workflow examples explain how to create several types of policies that detect system configuration changes and send notification of the changes by email. Note When creating a policy, if you see a warning message that you have not opted in for email alerts, set your User preferences to receive email from your policies. 3.1. Creating a policy to ensure public cloud providers are not over-provisioned Create a policy using the following procedure. Procedure In Red Hat Hybrid Cloud Console, go to Operations > Policies. Click Create policy. On the Create a policy page, click From scratch or As a copy of existing Policy as required. Note that the As a copy of existing Policy option will prompt you to select a policy from the list of existing policies to use as a starting point. Click Next. Enter Condition. In this case, enter: facts.cloud_provider in ['alibaba', 'aws', 'azure', 'google'] and (facts.number_of_cpus >= 8 or facts.number_of_sockets >=2). This condition detects whether an instance running on one of the specified public cloud providers has more CPU hardware than the allowed limit. Note You can expand What condition can I define? and/or Review available system facts to view an explanation of conditions you can use, and to see the available system facts, respectively. These sections include examples of the syntax you can use. Click Validate condition. Once the condition is validated, click Next. On the Trigger actions page, click Add trigger actions. If notifications are greyed out, select Notification settings in the notifications box. Here you can customize notifications and their behaviors. Click Next. Note On the Trigger actions page, you can also enable email alerts and set other available email preferences. On the Review and enable page, click the toggle switch to activate the policy and review its details. Click Finish. Your new policy is created. When the policy is evaluated on a system check-in, if the condition in the policy is met, Policies automatically sends an email to all users on the account with access to Policies, depending on their email preferences. 3.2. Creating a policy to detect if systems are running an outdated version of RHEL You can create a policy that detects if systems are running outdated versions of RHEL and notifies you by email about what it finds. Procedure In Red Hat Hybrid Cloud Console, go to Operations > Policies. Click Create policy. On the Create policy page, click From scratch or As a copy of existing Policy as required. Note that the As a copy of existing Policy option prompts you to select a policy from the list of existing policies to use as a starting point. Click Next. Enter a Name and Description for the policy. Click Next. Enter Condition. In this case, enter facts.os_release < 8.1. This condition detects systems that are still running a version of RHEL older than 8.1. Click Validate condition, then click Next. On the Trigger actions page, click Add trigger actions and select Email. Click Next. On the Review and activate page, click the toggle switch to activate the policy and review its details. Click Finish. Your new policy is created. When the policy is evaluated on a system check-in, if the condition in the policy is triggered, the policies service automatically sends an email to all users on the account with access to Policies, depending on their email preferences. 3.3.
Creating a policy to detect a vulnerable package version based on a recent CVE You can create a policy that detects vulnerable package versions based on a recent CVE and notifies you by email about what it finds. Procedure In Red Hat Hybrid Cloud Console, go to Operations > Policies. Click Create policy. On the Create Policy page, click From scratch or As a copy of existing Policy as required. Note that the As a copy of existing Policy option will prompt you to select a policy from the list of existing policies to use as a starting point. Click Next. Enter a Name and Description for the policy. Click Next. Enter Condition. In this case, enter facts.installed_packages contains ['openssh-4.5']. This condition detects systems that still run a vulnerable version of the openssh package flagged by a recent CVE. Click Validate condition, then click Next. On the Trigger actions page, click Add trigger actions and select Email. Click Next. On the Review and activate page, click the toggle switch to activate the policy and review its details. Click Finish. Your new policy is created. When the policy is evaluated on a system check-in, if the condition in the policy is met, Policies automatically sends an email to all users on the account with access to Policies, depending on their email preferences.
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/monitoring_and_reacting_to_configuration_changes_using_policies/creating-policies_intro-policies
Chapter 4. heat
Chapter 4. heat The following chapter contains information about the configuration options in the heat service. 4.1. heat.conf This section contains options for the /etc/heat/heat.conf file. 4.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/heat/heat.conf file. . Configuration option = Default value Type Description action_retry_limit = 5 integer value Number of times to retry to bring a resource to a non-error state. Set to 0 to disable retries. allow_trusts_redelegation = False boolean value Create trusts with redelegation enabled. This option is only used when reauthentication_auth_method is set to "trusts". Note that enabling this option does have security implications as all trusts created by Heat will use both impersonation and redelegation enabled. Enable it only when there are other services that need to create trusts from tokens Heat uses to access them, examples are Aodh and Heat in another region when configured to use trusts too. auth_encryption_key = notgood but just long enough i t string value Key used to encrypt authentication info in the database. Length of this key must be 32 characters. backdoor_port = None string value Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file. backdoor_socket = None string value Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with backdoor_port in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process. client_retry_limit = 2 integer value Number of times to retry when a client encounters an expected intermittent error. Set to 0 to disable retries. cloud_backend = heat.engine.clients.OpenStackClients string value Fully qualified class name to use as a client backend. conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = openstack string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. convergence_engine = True boolean value Enables engine with convergence architecture. All stacks with this option will be created using convergence engine. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_deployment_signal_transport = CFN_SIGNAL string value Template default for how the server should signal to heat with the deployment output values. CFN_SIGNAL will allow an HTTP POST to a CFN keypair signed URL (requires enabled heat-api-cfn). TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT (requires object-store endpoint which supports TempURL). HEAT_SIGNAL will allow calls to the Heat API resource-signal using the provided keystone credentials. ZAQAR_SIGNAL will create a dedicated zaqar queue to be signaled using the provided keystone credentials. 
default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. default_notification_level = INFO string value Default notification level for outgoing notifications. default_publisher_id = None string value Default publisher_id for outgoing notifications. default_software_config_transport = POLL_SERVER_CFN string value Template default for how the server should receive the metadata required for software configuration. POLL_SERVER_CFN will allow calls to the cfn API action DescribeStackResource authenticated with the provided keypair (requires enabled heat-api-cfn). POLL_SERVER_HEAT will allow calls to the Heat API resource-show using the provided keystone credentials (requires keystone v3 API, and configured stack_user_* config options). POLL_TEMP_URL will create and populate a Swift TempURL with metadata for polling (requires object-store endpoint which supports TempURL).ZAQAR_MESSAGE will create a dedicated zaqar queue and post the metadata for polling. default_user_data_format = HEAT_CFNTOOLS string value Template default for how the user_data should be formatted for the server. For HEAT_CFNTOOLS, the user_data is bundled as part of the heat-cfntools cloud-init boot configuration data. For RAW the user_data is passed to Nova unmodified. For SOFTWARE_CONFIG user_data is bundled as part of the software config data, and metadata is derived from any associated SoftwareDeployment resources. deferred_auth_method = trusts string value Select deferred auth method, stored password or trusts. Deprecated since: 9.0.0 *Reason:*Stored password based deferred auth is broken when used with keystone v3 and is not supported. enable_cloud_watch_lite = False boolean value Enable the legacy OS::Heat::CWLiteAlarm resource. Deprecated since: 10.0.0 *Reason:*Heat CloudWatch Service has been removed. enable_stack_abandon = False boolean value Enable the preview Stack Abandon feature. enable_stack_adopt = False boolean value Enable the preview Stack Adopt feature. encrypt_parameters_and_properties = False boolean value Encrypt template parameters that were marked as hidden and also all the resource properties before storing them in database. engine_life_check_timeout = 2 integer value RPC timeout for the engine liveness check that is used for stack locking. environment_dir = /etc/heat/environment.d string value The directory to search for environment files. error_wait_time = 240 integer value The amount of time in seconds after an error has occurred that tasks may continue to run before being cancelled. event_purge_batch_size = 200 integer value Controls how many events will be pruned whenever a stack's events are purged. Set this lower to keep more events at the expense of more frequent purges. executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. 
graceful_shutdown_timeout = 60 integer value Specify a timeout after which a gracefully shutdown server will exit. Zero value means endless wait. heat_metadata_server_url = None string value URL of the Heat metadata server. NOTE: Setting this is only needed if you require instances to use a different endpoint than in the keystone catalog heat_stack_user_role = heat_stack_user string value Keystone role for heat template-defined users. heat_waitcondition_server_url = None string value URL of the Heat waitcondition server. `heat_watch_server_url = ` string value URL of the Heat CloudWatch server. Deprecated since: 10.0.0 *Reason:*Heat CloudWatch Service has been removed. hidden_stack_tags = ['data-processing-cluster'] list value Stacks containing these tag names will be hidden. Multiple tags should be given in a comma-delimited list (eg. hidden_stack_tags=hide_me,me_too). host = <based on operating system> string value Name of the engine node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. instance_connection_https_validate_certificates = 1 string value Instance connection to CFN/CW API validate certs if SSL is used. instance_connection_is_secure = 0 string value Instance connection to CFN/CW API via https. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. keystone_backend = heat.engine.clients.os.keystone.heat_keystoneclient.KsClientWrapper string value Fully qualified class name to use as a keystone backend. loadbalancer_template = None string value Custom template for the built-in loadbalancer nested stack. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_options = True boolean value Enables or disables logging values of all registered options when starting a service (at DEBUG level). log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. 
Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_events_per_stack = 1000 integer value Rough number of maximum events that will be available per stack. Actual number of events can be a bit higher since purge checks take place randomly 200/event_purge_batch_size percent of the time. Older events are deleted when events are purged. Set to 0 for unlimited events per stack. max_interface_check_attempts = 10 integer value Number of times to check whether an interface has been attached or detached. max_json_body_size = 1048576 integer value Maximum raw byte size of JSON request body. Should be larger than max_template_size. max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_nested_stack_depth = 5 integer value Maximum depth allowed when using nested stacks. max_nova_api_microversion = None floating point value Maximum nova API version for client plugin. With this limitation, any nova feature supported with microversion number above max_nova_api_microversion will not be available. max_resources_per_stack = 1000 integer value Maximum resources allowed per top-level stack. -1 stands for unlimited. max_server_name_length = 53 integer value Maximum length of a server name to be used in nova. max_stacks_per_tenant = 100 integer value Maximum number of stacks any one tenant may have active at one time. max_template_size = 524288 integer value Maximum raw byte size of any template. num_engine_workers = None integer value Number of heat-engine processes to fork and run. Will default to either to 4 or number of CPUs on the host, whichever is greater. observe_on_update = False boolean value On update, enables heat to collect existing resource properties from reality and converge to updated template. onready = None string value Deprecated. periodic_interval = 60 integer value Seconds between running periodic tasks. plugin_dirs = ['/usr/lib64/heat', '/usr/lib/heat', '/usr/local/lib/heat', '/usr/local/lib64/heat'] list value List of directories to search for plug-ins. publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. 
rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. `reauthentication_auth_method = ` string value Allow reauthentication on token expiry, such that long-running tasks may complete. Note this defeats the expiry of any provided user tokens. region_name_for_services = None string value Default region name used to get services endpoints. rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. run_external_periodic_tasks = True boolean value Some periodic tasks can be run in a separate process. Should we run them here? `server_keystone_endpoint_type = ` string value If set, is used to control which authentication endpoint is used by user-controlled servers to make calls back to Heat. If unset www_authenticate_uri is used. stack_action_timeout = 3600 integer value Timeout in seconds for stack action (ie. create or update). stack_domain_admin = None string value Keystone username, a user with roles sufficient to manage users and projects in the stack_user_domain. stack_domain_admin_password = None string value Keystone password for stack_domain_admin user. stack_scheduler_hints = False boolean value When this feature is enabled, scheduler hints identifying the heat stack context of a server or volume resource are passed to the configured schedulers in nova and cinder, for creates done using heat resource types OS::Cinder::Volume, OS::Nova::Server, and AWS::EC2::Instance. heat_root_stack_id will be set to the id of the root stack of the resource, heat_stack_id will be set to the id of the resource's parent stack, heat_stack_name will be set to the name of the resource's parent stack, heat_path_in_stack will be set to a list of comma delimited strings of stackresourcename and stackname with list[0] being rootstackname , heat_resource_name will be set to the resource's name, and heat_resource_uuid will be set to the resource's orchestration id. stack_user_domain_id = None string value Keystone domain ID which contains heat template-defined users. If this option is set, stack_user_domain_name option will be ignored. stack_user_domain_name = None string value Keystone domain name which contains heat template-defined users. If stack_user_domain_id option is set, this option is ignored. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. template_dir = /etc/heat/templates string value The directory to search for template files. transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html trusts_delegated_roles = [] list value Subset of trustor roles to be delegated to heat. If left unset, all roles of a user will be delegated to heat when creating a stack. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. 
use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 4.1.2. auth_password The following table outlines the options available under the [auth_password] group in the /etc/heat/heat.conf file. Table 4.1. auth_password Configuration option = Default value Type Description allowed_auth_uris = [] list value Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At least one endpoint needs to be specified. multi_cloud = False boolean value Allow orchestration of multiple clouds. 4.1.3. clients The following table outlines the options available under the [clients] group in the /etc/heat/heat.conf file. Table 4.2. clients Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = publicURL string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = False boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.4. clients_aodh The following table outlines the options available under the [clients_aodh] group in the /etc/heat/heat.conf file. Table 4.3. clients_aodh Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.5. clients_barbican The following table outlines the options available under the [clients_barbican] group in the /etc/heat/heat.conf file. Table 4.4. clients_barbican Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.6. clients_cinder The following table outlines the options available under the [clients_cinder] group in the /etc/heat/heat.conf file. Table 4.5. 
clients_cinder Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. http_log_debug = False boolean value Allow client's debug log output. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.7. clients_designate The following table outlines the options available under the [clients_designate] group in the /etc/heat/heat.conf file. Table 4.6. clients_designate Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.8. clients_glance The following table outlines the options available under the [clients_glance] group in the /etc/heat/heat.conf file. Table 4.7. clients_glance Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.9. clients_heat The following table outlines the options available under the [clients_heat] group in the /etc/heat/heat.conf file. Table 4.8. clients_heat Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. `url = ` string value Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s . 4.1.10. clients_keystone The following table outlines the options available under the [clients_keystone] group in the /etc/heat/heat.conf file. Table 4.9. clients_keystone Configuration option = Default value Type Description `auth_uri = ` string value Unversioned keystone url in format like http://0.0.0.0:5000 . ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.11. 
clients_magnum The following table outlines the options available under the [clients_magnum] group in the /etc/heat/heat.conf file. Table 4.10. clients_magnum Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.12. clients_manila The following table outlines the options available under the [clients_manila] group in the /etc/heat/heat.conf file. Table 4.11. clients_manila Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.13. clients_mistral The following table outlines the options available under the [clients_mistral] group in the /etc/heat/heat.conf file. Table 4.12. clients_mistral Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.14. clients_monasca The following table outlines the options available under the [clients_monasca] group in the /etc/heat/heat.conf file. Table 4.13. clients_monasca Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.15. clients_neutron The following table outlines the options available under the [clients_neutron] group in the /etc/heat/heat.conf file. Table 4.14. clients_neutron Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.16. 
clients_nova The following table outlines the options available under the [clients_nova] group in the /etc/heat/heat.conf file. Table 4.15. clients_nova Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. http_log_debug = False boolean value Allow client's debug log output. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.17. clients_octavia The following table outlines the options available under the [clients_octavia] group in the /etc/heat/heat.conf file. Table 4.16. clients_octavia Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.18. clients_sahara The following table outlines the options available under the [clients_sahara] group in the /etc/heat/heat.conf file. Table 4.17. clients_sahara Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.19. clients_senlin The following table outlines the options available under the [clients_senlin] group in the /etc/heat/heat.conf file. Table 4.18. clients_senlin Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.20. clients_swift The following table outlines the options available under the [clients_swift] group in the /etc/heat/heat.conf file. Table 4.19. clients_swift Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.21. 
clients_trove The following table outlines the options available under the [clients_trove] group in the /etc/heat/heat.conf file. Table 4.20. clients_trove Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.22. clients_zaqar The following table outlines the options available under the [clients_zaqar] group in the /etc/heat/heat.conf file. Table 4.21. clients_zaqar Configuration option = Default value Type Description ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. endpoint_type = None string value Type of endpoint in Identity service catalog to use for communication with the OpenStack service. insecure = None boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. 4.1.23. cors The following table outlines the options available under the [cors] group in the /etc/heat/heat.conf file. Table 4.22. cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 4.1.24. database The following table outlines the options available under the [database] group in the /etc/heat/heat.conf file. Table 4.23. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. 
db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. 4.1.25. ec2authtoken The following table outlines the options available under the [ec2authtoken] group in the /etc/heat/heat.conf file. Table 4.24. ec2authtoken Configuration option = Default value Type Description allowed_auth_uris = [] list value Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At least one endpoint needs to be specified. auth_uri = None string value Authentication Endpoint URI. ca_file = None string value Optional CA cert file to use in SSL connections. cert_file = None string value Optional PEM-formatted certificate chain file. insecure = False boolean value If set, then the server's certificate will not be verified. key_file = None string value Optional PEM-formatted file that contains the private key. multi_cloud = False boolean value Allow orchestration of multiple clouds. 4.1.26. eventlet_opts The following table outlines the options available under the [eventlet_opts] group in the /etc/heat/heat.conf file. Table 4.25. eventlet_opts Configuration option = Default value Type Description client_socket_timeout = 900 integer value Timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of 0 means wait forever. wsgi_keep_alive = True boolean value If False, closes the client socket connection explicitly. 4.1.27. healthcheck The following table outlines the options available under the [healthcheck] group in the /etc/heat/heat.conf file. Table 4.26. healthcheck Configuration option = Default value Type Description backends = [] list value Additional backends that can perform health checks and report that information back as part of a request. detailed = False boolean value Show more detailed information as part of the response. 
Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies. disable_by_file_path = None string value Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. disable_by_file_paths = [] list value Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. path = /healthcheck string value The path to respond to healtcheck requests on. 4.1.28. heat_api The following table outlines the options available under the [heat_api] group in the /etc/heat/heat.conf file. Table 4.27. heat_api Configuration option = Default value Type Description backlog = 4096 integer value Number of backlog requests to configure the socket with. bind_host = 0.0.0.0 IP address value Address to bind the server. Useful when selecting a particular network interface. bind_port = 8004 port value The port on which the server will listen. cert_file = None string value Location of the SSL certificate file to use for SSL mode. key_file = None string value Location of the SSL key file to use for enabling SSL mode. max_header_line = 16384 integer value Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). tcp_keepidle = 600 integer value The value for the socket option TCP_KEEPIDLE. This is the time in seconds that the connection must be idle before TCP starts sending keepalive probes. workers = 0 integer value Number of workers for Heat service. Default value 0 means, that service will start number of workers equal number of cores on server. 4.1.29. heat_api_cfn The following table outlines the options available under the [heat_api_cfn] group in the /etc/heat/heat.conf file. Table 4.28. heat_api_cfn Configuration option = Default value Type Description backlog = 4096 integer value Number of backlog requests to configure the socket with. bind_host = 0.0.0.0 IP address value Address to bind the server. Useful when selecting a particular network interface. bind_port = 8000 port value The port on which the server will listen. cert_file = None string value Location of the SSL certificate file to use for SSL mode. key_file = None string value Location of the SSL key file to use for enabling SSL mode. max_header_line = 16384 integer value Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). tcp_keepidle = 600 integer value The value for the socket option TCP_KEEPIDLE. This is the time in seconds that the connection must be idle before TCP starts sending keepalive probes. workers = 1 integer value Number of workers for Heat service. 4.1.30. heat_api_cloudwatch The following table outlines the options available under the [heat_api_cloudwatch] group in the /etc/heat/heat.conf file. Table 4.29. heat_api_cloudwatch Configuration option = Default value Type Description backlog = 4096 integer value Number of backlog requests to configure the socket with. Deprecated since: 10.0.0 *Reason:*Heat CloudWatch API has been removed. bind_host = 0.0.0.0 IP address value Address to bind the server. Useful when selecting a particular network interface. 
Deprecated since: 10.0.0 *Reason:*Heat CloudWatch API has been removed. bind_port = 8003 port value The port on which the server will listen. Deprecated since: 10.0.0 *Reason:*Heat CloudWatch API has been removed. cert_file = None string value Location of the SSL certificate file to use for SSL mode. Deprecated since: 10.0.0 *Reason:*Heat CloudWatch API has been Removed. key_file = None string value Location of the SSL key file to use for enabling SSL mode. Deprecated since: 10.0.0 *Reason:*Heat CloudWatch API has been Removed. max_header_line = 16384 integer value Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs.) Deprecated since: 10.0.0 *Reason:*Heat CloudWatch API has been Removed. tcp_keepidle = 600 integer value The value for the socket option TCP_KEEPIDLE. This is the time in seconds that the connection must be idle before TCP starts sending keepalive probes. Deprecated since: 10.0.0 *Reason:*Heat CloudWatch API has been Removed. workers = 1 integer value Number of workers for Heat service. Deprecated since: 10.0.0 *Reason:*Heat CloudWatch API has been Removed. 4.1.31. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/heat/heat.conf file. Table 4.30. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens *Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. 
Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = admin string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" or "admin"(default). keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). 
Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 4.1.32. noauth The following table outlines the options available under the [noauth] group in the /etc/heat/heat.conf file. Table 4.31. noauth Configuration option = Default value Type Description `token_response = ` string value JSON file containing the content returned by the noauth middleware. 4.1.33. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/heat/heat.conf file. Table 4.32. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. 
group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 4.1.34. 
oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/heat/heat.conf file. Table 4.33. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate 4.1.35. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/heat/heat.conf file. Table 4.34. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 4.1.36. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/heat/heat.conf file. Table 4.35. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as a reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception is used to loop for a timeout to give the sender a chance to recover. This flag is deprecated and it will not be possible to deactivate this functionality anymore enable_cancel_on_failover = False boolean value Enable the x-cancel-on-ha-failover flag so that the rabbitmq server will cancel and notify consumers when a queue is down heartbeat_in_pthread = False boolean value EXPERIMENTAL: Run the health check heartbeat thread through a native python thread.
By default, if this option isn't provided the health check heartbeat will inherit the execution model from the parent process. For example, if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold we check the heartbeat. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait for a missing client before abandoning the attempt to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to backoff for between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 4.1.37. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/heat/heat.conf file. Table 4.36. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. max_request_body_size = 114688 integer value The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto string value The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. 4.1.38. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/heat/heat.conf file. Table 4.37. oslo_policy Configuration option = Default value Type Description enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 4.1.39. paste_deploy The following table outlines the options available under the [paste_deploy] group in the /etc/heat/heat.conf file. Table 4.38. paste_deploy Configuration option = Default value Type Description api_paste_config = api-paste.ini string value The API paste config file to use. flavor = None string value The flavor to use. 4.1.40. profiler The following table outlines the options available under the [profiler] group in the /etc/heat/heat.conf file. Table 4.39. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty. 
es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines the maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating how long the nodes that participate in the search will maintain the relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filtering of traces that contain an error/exception into a separate place. Default value is set to False. Possible values: True: Enable filtering of traces that contain an error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both the "enabled" flag and the "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from the client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redis Sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster ). socket_timeout = 0.1 floating point value Redis Sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed to see how much time was spent on it. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 4.1.41. revision The following table outlines the options available under the [revision] group in the /etc/heat/heat.conf file. Table 4.40. revision Configuration option = Default value Type Description heat_revision = unknown string value Heat build revision. If you would prefer to manage your build revision separately, you can move this section to a different file and add it as another config option. 4.1.42. ssl The following table outlines the options available under the [ssl] group in the /etc/heat/heat.conf file. Table 4.41. ssl Configuration option = Default value Type Description ca_file = None string value CA certificate file to use to verify connecting clients. cert_file = None string value Certificate file to use when starting the server securely. ciphers = None string value Sets the list of available ciphers. The value should be a string in the OpenSSL cipher list format. key_file = None string value Private key file to use when starting the server securely. version = None string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 4.1.43.
trustee The following table outlines the options available under the [trustee] group in the /etc/heat/heat.conf file. Table 4.42. trustee Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to system-scope = None string value Scope for system operations trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username 4.1.44. volumes The following table outlines the options available under the [volumes] group in the /etc/heat/heat.conf file. Table 4.43. volumes Configuration option = Default value Type Description backups_enabled = True boolean value Indicate if cinder-backup service is enabled. This is a temporary workaround until cinder-backup service becomes discoverable, see LP#1334856.
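Taken together, the groups documented above are plain INI sections in /etc/heat/heat.conf. The fragment below is an illustrative sketch only: the option names are taken from the tables in this chapter, but every host name, port, and file path shown is a placeholder, not a recommended or default value.
[heat_api]
bind_host = 0.0.0.0
bind_port = 8004
workers = 4

[keystone_authtoken]
auth_type = password
www_authenticate_uri = http://keystone.example.com:5000
memcached_servers = memcache1.example.com:11211

[oslo_messaging_rabbit]
ssl = true
ssl_ca_file = /etc/pki/tls/certs/rabbitmq-ca.pem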
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuration_reference/heat
Chapter 4. KVM Live Migration
Chapter 4. KVM Live Migration This chapter covers migrating guest virtual machines running on one host physical machine to another. In both instances, the host physical machines are running the KVM hypervisor. Migration describes the process of moving a guest virtual machine from one host physical machine to another. This is possible because guest virtual machines are running in a virtualized environment instead of directly on the hardware. Migration is useful for: Load balancing - guest virtual machines can be moved to host physical machines with lower usage when their host physical machine becomes overloaded, or another host physical machine is under-utilized. Hardware independence - when we need to upgrade, add, or remove hardware devices on the host physical machine, we can safely relocate guest virtual machines to other host physical machines. This means that guest virtual machines do not experience any downtime for hardware improvements. Energy saving - guest virtual machines can be redistributed to other host physical machines and can thus be powered off to save energy and cut costs in low usage periods. Geographic migration - guest virtual machines can be moved to another location for lower latency or in serious circumstances. Migration works by sending the state of the guest virtual machine's memory and any virtualized devices to a destination host physical machine. It is recommended to use shared, networked storage to store the guest virtual machine's images to be migrated. It is also recommended to use libvirt-managed storage pools for shared storage when migrating virtual machines. Migrations can be performed live or not. In a live migration, the guest virtual machine continues to run on the source host physical machine while its memory pages are transferred, in order, to the destination host physical machine. During migration, KVM monitors the source for any changes in pages it has already transferred, and begins to transfer these changes when all of the initial pages have been transferred. KVM also estimates transfer speed during migration, so when the remaining amount of data to transfer will take a certain configurable period of time (10 milliseconds by default), KVM suspends the original guest virtual machine, transfers the remaining data, and resumes the same guest virtual machine on the destination host physical machine. A migration that is not performed live, suspends the guest virtual machine, then moves an image of the guest virtual machine's memory to the destination host physical machine. The guest virtual machine is then resumed on the destination host physical machine and the memory the guest virtual machine used on the source host physical machine is freed. The time it takes to complete such a migration depends on network bandwidth and latency. If the network is experiencing heavy use or low bandwidth, the migration will take much longer. If the original guest virtual machine modifies pages faster than KVM can transfer them to the destination host physical machine, offline migration must be used, as live migration would never complete. 4.1. 
Live Migration Requirements Migrating guest virtual machines requires the following: Migration requirements A guest virtual machine installed on shared storage using one of the following protocols: Fibre Channel-based LUNs iSCSI FCoE NFS GFS2 SCSI RDMA protocols (SCSI RCP): the block export protocol used in Infiniband and 10GbE iWARP adapters The migration platforms and versions should be checked against table Table 4.1, "Live Migration Compatibility" . It should also be noted that Red Hat Enterprise Linux 6 supports live migration of guest virtual machines using raw and qcow2 images on shared storage. Both systems must have the appropriate TCP/IP ports open. In cases where a firewall is used, refer to the Red Hat Enterprise Linux Virtualization Security Guide which can be found at https://access.redhat.com/site/documentation/ for detailed port information. A separate system exporting the shared storage medium. Storage should not reside on either of the two host physical machines being used for migration. Shared storage must mount at the same location on source and destination systems. The mounted directory names must be identical. Although it is possible to keep the images using different paths, it is not recommended. Note that, if you are intending to use virt-manager to perform the migration, the path names must be identical. If however you intend to use virsh to perform the migration, different network configurations and mount directories can be used with the help of --xml option or pre-hooks when doing migrations. Even without shared storage, migration can still succeed with the option --copy-storage-all (deprecated). For more information on prehooks , refer to libvirt.org , and for more information on the XML option, refer to Chapter 20, Manipulating the Domain XML . When migration is attempted on an existing guest virtual machine in a public bridge+tap network, the source and destination host physical machines must be located in the same network. Otherwise, the guest virtual machine network will not operate after migration. In Red Hat Enterprise Linux 5 and 6, the default cache mode of KVM guest virtual machines is set to none , which prevents inconsistent disk states. Setting the cache option to none (using virsh attach-disk cache none , for example), causes all of the guest virtual machine's files to be opened using the O_DIRECT flag (when calling the open syscall), thus bypassing the host physical machine's cache, and only providing caching on the guest virtual machine. Setting the cache mode to none prevents any potential inconsistency problems, and when used makes it possible to live-migrate virtual machines. For information on setting cache to none , refer to Section 13.3, "Adding Storage Devices to Guests" . Make sure that the libvirtd service is enabled ( # chkconfig libvirtd on ) and running ( # service libvirtd start ). It is also important to note that the ability to migrate effectively is dependent on the parameter settings in the /etc/libvirt/libvirtd.conf configuration file. Procedure 4.1. Configuring libvirtd.conf Opening the libvirtd.conf requires running the command as root: Change the parameters as needed and save the file. Restart the libvirtd service:
[ "vim /etc/libvirt/libvirtd.conf", "service libvirtd restart" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-Virtualization_Administration_Guide-KVM_live_migration
16.3.4. Module Arguments
16.3.4. Module Arguments PAM uses arguments to pass information to a pluggable module during authentication for some modules. For example, the pam_userdb.so module uses secrets stored in a Berkeley DB file to authenticate the user. Berkeley DB is an open source database system embedded in many applications. The module takes a db argument so that Berkeley DB knows which database to use for the requested service. A typical pam_userdb.so line within a PAM configuration file looks like this: In the example, replace <path-to-file> with the full path to the Berkeley DB database file. Invalid arguments are ignored and do not otherwise affect the success or failure of the PAM module. However, most modules report errors to the /var/log/messages file.
[ "auth required pam_userdb.so db= <path-to-file>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-pam-arguments
Chapter 11. Network Interfaces
Chapter 11. Network Interfaces Under Red Hat Enterprise Linux, all network communications occur between configured software interfaces and physical networking devices connected to the system. The configuration files for network interfaces are located in the /etc/sysconfig/network-scripts/ directory. The scripts used to activate and deactivate these network interfaces are also located here. Although the number and type of interface files can differ from system to system, there are three categories of files that exist in this directory: Interface configuration files Interface control scripts Network function files The files in each of these categories work together to enable various network devices. This chapter explores the relationship between these files and how they are used. 11.1. Network Configuration Files Before delving into the interface configuration files, let us first itemize the primary configuration files used in network configuration. Understanding the role these files play in setting up the network stack can be helpful when customizing a Red Hat Enterprise Linux system. The primary network configuration files are as follows: /etc/hosts The main purpose of this file is to resolve host names that cannot be resolved any other way. It can also be used to resolve host names on small networks with no DNS server. Regardless of the type of network the computer is on, this file should contain a line specifying the IP address of the loopback device ( 127.0.0.1 ) as localhost.localdomain . For more information, see the hosts(5) manual page. /etc/resolv.conf This file specifies the IP addresses of DNS servers and the search domain. Unless configured to do otherwise, the network initialization scripts populate this file. For more information about this file, see the resolv.conf(5) manual page. /etc/sysconfig/network This file specifies routing and host information for all network interfaces. It is used to contain directives which are to have global effect and not to be interface specific. For more information about this file and the directives it accepts, see Section D.1.14, "/etc/sysconfig/network" . /etc/sysconfig/network-scripts/ifcfg- interface-name For each network interface, there is a corresponding interface configuration script. Each of these files provide information specific to a particular network interface. See Section 11.2, "Interface Configuration Files" for more information on this type of file and the directives it accepts. Important Network interface names may be different on different hardware types. See Appendix A, Consistent Network Device Naming for more information. Warning The /etc/sysconfig/networking/ directory is used by the now deprecated Network Administration Tool ( system-config-network ). Its contents should not be edited manually. Using only one method for network configuration is strongly encouraged, due to the risk of configuration deletion. For more information about configuring network interfaces using graphical configuration tools, see Chapter 10, NetworkManager . 11.1.1. Setting the Host Name To permanently change the static host name, change the HOSTNAME directive in the /etc/sysconfig/network file. For example: Red Hat recommends the static host name matches the fully qualified domain name (FQDN) used for the machine in DNS, such as host.example.com. 
It is also recommended that the static host name consists only of 7 bit ASCII lower-case characters, no spaces or dots, and limits itself to the format allowed for DNS domain name labels, even though this is not a strict requirement. Older specifications do not permit the underscore, and so their use is not recommended. Changes will only take effect when the networking service, or the system, is restarted. Note that the FQDN of the host can be supplied by a DNS resolver, by settings in /etc/sysconfig/network , or by the /etc/hosts file. The default setting of hosts: files dns in /etc/nsswitch.conf causes the configuration files to be checked before a resolver. The default setting of multi on in the /etc/host.conf file means that all valid values in the /etc/hosts file are returned, not just the first. Sometimes you may need to use the host table in the /etc/hosts file instead of the HOSTNAME directive in /etc/sysconfig/network , for example, when DNS is not running during system bootup. To change the host name using the /etc/hosts file, add lines to it in the following format: 192.168.1.2 penguin.example.com penguin
[ "HOSTNAME=penguin.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-network_interfaces
4.5.4. Synchronizing Quotas with the quotasync Command
4.5.4. Synchronizing Quotas with the quotasync Command GFS2 stores all quota information in its own internal file on disk. A GFS2 node does not update this quota file for every file system write; rather, by default it updates the quota file once every 60 seconds. This is necessary to avoid contention among nodes writing to the quota file, which would cause a slowdown in performance. As a user or group approaches their quota limit, GFS2 dynamically reduces the time between its quota-file updates to prevent the limit from being exceeded. The normal time period between quota synchronizations is a tunable parameter, quota_quantum . You can change this from its default value of 60 seconds using the quota_quantum= mount option, as described in Table 4.2, "GFS2-Specific Mount Options" . The quota_quantum parameter must be set on each node and each time the file system is mounted. Changes to the quota_quantum parameter are not persistent across unmounts. You can update the quota_quantum value with the mount -o remount . You can use the quotasync command to synchronize the quota information from a node to the on-disk quota file between the automatic updates performed by GFS2. Usage Synchronizing Quota Information u Sync the user quota files. g Sync the group quota files a Sync all file systems that are currently quota-enabled and support sync. When -a is absent, a file system mountpoint should be specified. mntpnt Specifies the GFS2 file system to which the actions apply. Tuning the Time Between Synchronizations MountPoint Specifies the GFS2 file system to which the actions apply. secs Specifies the new time period between regular quota-file synchronizations by GFS2. Smaller values may increase contention and slow down performance. Examples This example synchronizes all the cached dirty quotas from the node it is run on to the ondisk quota file for the file system /mnt/mygfs2 . This example changes the default time period between regular quota-file updates to one hour (3600 seconds) for file system /mnt/mygfs2 when remounting that file system on logical volume /dev/volgroup/logical_volume .
[ "quotasync [-ug] -a| mntpnt", "mount -o quota_quantum= secs ,remount BlockDevice MountPoint", "quotasync -ug /mnt/mygfs2", "mount -o quota_quantum=3600,remount /dev/volgroup/logical_volume /mnt/mygfs2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s3-quotasync
Chapter 2. Requirements
Chapter 2. Requirements 2.1. Subscription and repositories It's important to keep the subscription, kernel, and patch level identical on all cluster nodes. To be able to use this HA solution, either the RHEL for SAP Solutions (for on-premise or BYOS setups in public cloud environments) or RHEL for SAP with High Availability and Update Services (when using PAYG in public cloud environments) subscriptions are required for all cluster nodes. And the SAP NetWeaver, SAP Solutions and High Availability repos must be enabled on each cluster node. Follow this kbase article to enable the repos on both nodes which are required for this environment. 2.2. Pacemaker Resource Agents For a pacemaker-based HA cluster to manage both SAP HANA System Replication and also ENSA2 the following resource agents are required. 2.2.1. SAPInstance The SAPInstance resource agent will be used for managing the ASCS and ERS resources in this example. All operations of the SAPInstance resource agent are done by using the SAP start-up service framework sapstartsrv . 2.2.2. SAPHanaTopology (Cloned Resource) This resource agent is is gathering status and configuration of SAP HANA System Replication on each cluster node. It is essential to have the data from this agent present in cluster node attributes for SAPHana resource agent to work properly. 2.2.3. SAPHana (Promotable Cloned resource) This resource is responsible for starting, stopping, and relocating (failover) of SAP HANA database. This resource agent takes information gathered by SAPHanaTopology and based on that it is interacting with SAP HANA database to do things. It also adds additional information to cluster node attributes about SAP HANA status on cluster nodes. 2.2.4. filesystem Pacemaker cluster resource-agent for Filesystem. It manages a Filesystem on a shared storage medium exported by NFS or iSCSI etc. 2.2.5. IPaddr2 (or other RAs for managing VIPs on CCSPs) Manages virtual IPv4 and IPv6 addresses and aliases. 2.3. Two node cluster environment Since this is a Cost-Optimized scenario, we will focus only on a 2-node cluster environment. ENSA1 can only be configured in a 2-node cluster where the ASCS can failover to the other node where the ERS is running. ENSA2, on the other hand, supports running more than 2 nodes in a cluster; however, SAP HANA scale-up instances are limited to 2-node clusters only, therefore, this Cost-Optimized document keeps everything simple by using only 2 nodes in the cluster. 2.4. Storage requirements Directories created for S/4HANA installation should be put on shared storage, following the below-mentioned rules: 2.4.1. Instance Specific Directory There must be a separate SAN LUN or NFS export for the ASCS and ERS instances that can be mounted by the cluster on each node. For example, as shown below, 'ASCS' and 'ERS' instances, respectively, the instance specific directory must be present on the corresponding node. ASCS node: /usr/sap/SID/ASCS<Ins#> ERS node: /usr/sap/SID/ERS<Ins#> Both nodes: /hana/ Note : As there will be System Replication, the /hana/ directory is local that is non-shared on each node. Note : For the Application Servers, the following directory must be made available on the nodes where the Application Server instances will run: App Server Node(s) (D<Ins#>): /usr/sap/SID/D<Ins#> When using SAN LUNs for the instance directories, customers must use HA-LVM to ensure that the instance directories can only be mounted on one node at a time. 
When using NFS exports, if the directories are created on the same directory tree on an NFS file server, such as Azure NetApp Files or Amazon EFS, the option force_unmount=safe must be used when configuring the Filesystem resource. This option will ensure that the cluster only stops the processes running on the specific NFS export instead of stopping all processes running on the directory tree where the exports are created. 2.4.2. Shared Directories The following mount points must be available on ASCS, ERS, HANA, and Application Servers nodes. /sapmnt /usr/sap/trans /usr/sap/SID/SYS Shared storage can be achieved by: using the external NFS server (NFS server cannot run on any of the nodes inside the cluster in which the shares would be mounted. More details about this limitation can be found in the article Hangs occur if a Red Hat Enterprise Linux system is used as both NFS server and NFS client for the same mount ) using the GFS2 filesystem (this requires all nodes to have Resilient Storage Add-on ) using the glusterfs filesystem (check the additional notes in the article Can glusterfs be used for the SAP NetWeaver shared filesystems? ) These mount points must be either managed by the cluster or mounted before the cluster is started.
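For example, a cluster-managed Filesystem resource for the NFS-backed ASCS instance directory described in Section 2.4.1 might be created as in the following sketch. The resource name, export path, mount point, and group name are hypothetical, and force_unmount=safe is included as discussed above for exports that share a directory tree:
# pcs resource create s4h_fs_ascs20 Filesystem \
    device='nfs.example.com:/export/S4H/ASCS20' directory='/usr/sap/S4H/ASCS20' \
    fstype='nfs' force_unmount=safe --group s4h_ascs20_group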
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_a_cost-optimized_sap_s4hana_ha_cluster_hana_system_replication_ensa2_using_the_rhel_ha_add-on/asmb_cco_requirements_configuring-cost-optimized-sap
Preface
Preface Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port .
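A hedged sketch of that workflow with the openstack CLI follows; the network, security group, image, flavor, and resource names are placeholders, not values taken from this guide:
openstack port create --network my-network --security-group rbac-shared-sg my-port
openstack server create --image rhel9 --flavor m1.small --nic port-id=my-port my-instance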
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/networking_guide/pr01
5.137. keyutils
5.137. keyutils 5.137.1. RHEA-2012:0963 - keyutils enhancement update Updated keyutils packages that add one enhancement are now available for Red Hat Enterprise Linux 6. The keyutils package provides utilities to control the Linux kernel key management facility and to provide a mechanism by which the kernel calls up to user space to get a key instantiated. Enhancement BZ# 772497 With this update, the request-key utility allows multiple configuration files to be provided. The request-key configuration file and its associated key-type specific variants are used by the request-key utility to determine which program should be run to instantiate a key. All users of keyutils are advised to upgrade to these updated packages, which add this enhancement.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/keyutils
Chapter 2. Getting started with hosted control planes
Chapter 2. Getting started with hosted control planes To get started with hosted control planes for OpenShift Container Platform, you first configure your hosted cluster on the provider that you want to use. Then, you complete a few management tasks. You can view the procedures by selecting from one of the following providers: 2.1. Bare metal Hosted control plane sizing guidance Installing the hosted control plane command line interface Distributing hosted cluster workloads Bare metal firewall and port requirements Bare metal infrastructure requirements : Review the infrastructure requirements to create a hosted cluster on bare metal. Configuring hosted control plane clusters on bare metal : Configure DNS Create a hosted cluster and verify cluster creation Scale the NodePool object for the hosted cluster Handle ingress traffic for the hosted cluster Enable node auto-scaling for the hosted cluster Configuring hosted control planes in a disconnected environment To destroy a hosted cluster on bare metal, follow the instructions in Destroying a hosted cluster on bare metal . If you want to disable the hosted control plane feature, see Disabling the hosted control plane feature . 2.2. OpenShift Virtualization Hosted control plane sizing guidance Installing the hosted control plane command line interface Distributing hosted cluster workloads Managing hosted control plane clusters on OpenShift Virtualization : Create OpenShift Container Platform clusters with worker nodes that are hosted by KubeVirt virtual machines. Configuring hosted control planes in a disconnected environment To destroy a hosted cluster is on OpenShift Virtualization, follow the instructions in Destroying a hosted cluster on OpenShift Virtualization . If you want to disable the hosted control plane feature, see Disabling the hosted control plane feature . 2.3. Amazon Web Services (AWS) AWS infrastructure requirements : Review the infrastructure requirements to create a hosted cluster on AWS. Configuring hosted control plane clusters on AWS : The tasks to configure hosted control plane clusters on AWS include creating the AWS S3 OIDC secret, creating a routable public zone, enabling external DNS, enabling AWS PrivateLink, and deploying a hosted cluster. Deploying the SR-IOV Operator for hosted control planes : After you configure and deploy your hosting service cluster, you can create a subscription to the Single Root I/O Virtualization (SR-IOV) Operator on a hosted cluster. The SR-IOV pod runs on worker machines rather than the control plane. To destroy a hosted cluster on AWS, follow the instructions in Destroying a hosted cluster on AWS . If you want to disable the hosted control plane feature, see Disabling the hosted control plane feature . 2.4. IBM Z Important Hosted control planes on the IBM Z platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Installing the hosted control plane command line interface Configuring the hosting cluster on x86 bare metal for IBM Z compute nodes (Technology Preview) 2.5. 
IBM Power Important Hosted control planes on the IBM Power platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Installing the hosted control plane command line interface Configuring the hosting cluster on a 64-bit x86 OpenShift Container Platform cluster to create hosted control planes for IBM Power compute nodes (Technology Preview) 2.6. Non bare metal agent machines Important Hosted control planes clusters using non bare metal agent machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Installing the hosted control plane command line interface Configuring hosted control plane clusters using non bare metal agent machines (Technology Preview) To destroy a hosted cluster on non bare metal agent machines, follow the instructions in Destroying a hosted cluster on non bare metal agent machines If you want to disable the hosted control plane feature, see Disabling the hosted control plane feature .
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/hosted_control_planes/getting-started-with-hosted-control-planes
Chapter 49. JAX-RS 2.0 Client API
Chapter 49. JAX-RS 2.0 Client API Abstract JAX-RS 2.0 defines a full-featured client API which can be used for making REST invocations or any HTTP client invocations. This includes a fluent API (to simplify building up requests), a framework for parsing messages (based on a type of plug-in known as an entity provider ), and support for asynchronous invocations on the client side. 49.1. Introduction to the JAX-RS 2.0 Client API Overview JAX-RS 2.0 defines a fluent API for JAX-RS clients, which enables you to build up an HTTP request step-by-step and then invoke the request using the appropriate HTTP verb (GET, POST, PUT, or DELETE). Note It is also possible to define a JAX-RS client in Blueprint XML or Spring XML (using the jaxrs:client element). For details of this approach, see Section 18.2, "Configuring JAX-RS Client Endpoints" . Dependencies To use the JAX-RS 2.0 client API in your application, you must add the following Maven dependency to your project's pom.xml file: If you plan to use the asynchronous invocation feature (see Section 49.6, "Asynchronous Processing on the Client" ), you also need the following Maven dependency: Client API package The JAX-RS 2.0 client interfaces and classes are located in the following Java package: When developing JAX-RS 2.0 Java clients, you also typically need to access classes from the core package: Example of a simple client request The following code fragment shows a simple example, where the JAX-RS 2.0 client API is used to make an invocation on the http://example.org/bookstore JAX-RS service, invoking with the GET HTTP method: Fluent API The JAX-RS 2.0 client API is designed as a fluent API (sometimes called a Domain Specific Language). In the fluent API, a chain of Java methods is invoked in a single statement, in such a way that the Java methods look like the commands from a simple language. In JAX-RS 2.0, the fluent API is used to build and invoke a REST request. Steps to make a REST invocation Using the JAX-RS 2.0 client API, a client invocation is built and invoked in a series of steps, as follows: Bootstrap the client. Configure the target. Build and make the invocation. Parse the response. Bootstrap the client The first step is to bootstrap the client, by creating a javax.ws.rs.client.Client object. This Client instance is a relatively heavyweight object, which represents the stack of technologies required to support a JAX-RS client (possibly including interceptors and additional CXF features). Ideally, you should re-use client objects when you can, instead of creating new ones. To create a new Client object, invoke a static method on the ClientBuilder class, as follows: Configure the target By configuring the target, you effectively define the URI that will be used for the REST invocation. The following example shows how you can define a base URI, base , and then add additional path segments to the base URI, using the path(String) method: Build and make the invocation This is really two steps rolled up into one: firstly, you build up the HTTP request (including headers, accepted media types, and so on); and secondly, you invoke the relevant HTTP method (optionally providing a request message body, if one is required). For example, to create and invoke a request that accepts the application/xml media type: Parse the response Finally, you need to parse the response, resp , obtained in the previous step; a minimal end-to-end sketch is shown below.
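The following minimal sketch (not taken verbatim from this guide) strings the four steps together against the hypothetical bookstore URI used above:
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.Response;

public class BookstoreClient {
    public static void main(String[] args) {
        // 1. Bootstrap the client
        Client client = ClientBuilder.newClient();
        // 2. Configure the target
        WebTarget target = client.target("http://example.org/bookstore");
        // 3. Build and make the invocation (accept application/xml, invoke GET)
        Response resp = target.request("application/xml").get();
        // 4. Parse the response
        String body = resp.readEntity(String.class);
        System.out.println("HTTP " + resp.getStatus() + ": " + body);
        client.close();
    }
}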
Usually, the response is returned in the form of a javax.ws.rs.core.Response object, which encapsulates HTTP headers, along with other HTTP metadata, and the HTTP message body (if any). If you want to access the returned HTTP message in String format, you can easily do so by invoking the readEntity method with a String.class argument, as follows: You can always access the message body of a response as a String , by specifying String.class as the argument to readEntity . For more general transformations or conversions of the message body, you can provide an entity provider to perform the conversion. For more details, see Section 49.4, "Parsing Requests and Responses" . 49.2. Building the Client Target Overview After creating the initial Client instance, the next step is to build up the request URI. The WebTarget builder class enables you to configure all aspects of the URI, including the URI path and query parameters. WebTarget builder class The javax.ws.rs.client.WebTarget builder class provides the part of the fluent API that enables you to build up the REST URI for the request. Create the client target To create a WebTarget instance, invoke one of the target methods on a javax.ws.rs.client.Client instance. For example: Base path and path segments You can specify the complete path all in one go, using the target method; or you can specify a base path, and then add path segments piece by piece, using a combination of the target method and the path methods. The advantage of combining a base path with path segments is that you can easily re-use the base path WebTarget object for multiple invocations on slightly different targets. For example: URI template parameters The syntax of the target path also supports URI template parameters. That is, a path segment can be initialized with a template parameter, { param } , which subsequently gets resolved to a specific value. For example: Where the resolveTemplate method replaces the path segment, {id} , with the value 123 . Define query parameters Query parameters can be appended to the URI path, where the beginning of the query parameters is marked by a single ? character. This mechanism enables you to set a series of name/value pairs, using the syntax: ? name1 = value1 & name2 = value2 &... A WebTarget instance enables you to define query parameters using the queryParam method, as follows: Define matrix parameters Matrix parameters are somewhat similar to query parameters, but are not as widely supported and use a different syntax. To define a matrix parameter on a WebTarget instance, invoke the matrixParam(String, Object) method. 49.3. Building the Client Invocation Overview After building the target URI, using the WebTarget builder class, the next step is to configure the other aspects of the request, such as HTTP headers, cookies, and so on, using the Invocation.Builder class. The final step in building the invocation is to invoke the appropriate HTTP verb (GET, POST, PUT, or DELETE) and provide a message body, if required. Invocation.Builder class The javax.ws.rs.client.Invocation.Builder builder class provides the part of the fluent API that enables you to build up the contents of the HTTP message and to invoke an HTTP method. Create the invocation builder To create an Invocation.Builder instance, invoke one of the request methods on a javax.ws.rs.client.WebTarget instance.
For example: Define HTTP headers You can add a HTTP header to the request message using the header method, as follows: Define cookies You can add a cookie to the request message using the cookie method, as follows: Define properties You can set a property in the context of this request using the property method, as follows: Define accepted media types, languages, or encodings You can define accepted media types, languages, or encodings, as follows: Invoke HTTP method The process of building a REST invocation is terminated by invoking a HTTP method, which performs the HTTP invocation. The following methods (inherited from the javax.ws.rs.client.SyncInvoker base class) can be invoked: If the specific HTTP verb you want to invoke is not on this list, you can use the generic method method to invoke any HTTP method. Typed responses All of the HTTP invocation methods are provided with an untyped variant and a typed variant (which takes an extra argument). If you invoke a request using the default get() method (taking no arguments), a javax.ws.rs.core.Response object is returned from the invocation. For example: It is also possible, however, to ask for the response to be returned as a specific type, using the get(Class<T>) method. For example, to invoke a request and ask for the response to be returned as a BookInfo object: In order for this to work, however, you must register a suitable entity provider with the Client instance, which is capable of mapping the response format, application/xml , to the requested type. For more details about entity providers, see Section 49.4, "Parsing Requests and Responses" . Specifying the outgoing message in post or put For HTTP methods that include a message body in the request (such as POST or PUT), you must specify the message body as the first argument of the method. The message body must be specified as a javax.ws.rs.client.Entity object, where the Entity encapsulates the message contents and its associated media type. For example, to invoke a POST method, where the message contents are provided as a String type: If necessary, the Entity.entity() constructor method will automatically map the supplied message instance to the specified media type, using the registered entity providers. It is always possible to specify the message body as a simple String type. Delayed invocation Instead of invoking the HTTP request right away (for example, by invoking the get() method), you have the option of creating an javax.ws.rs.client.Invocation object, which can be invoked at a later time. The Invocation object encapsulates all of the details of the pending invocation, including the HTTP method. The following methods can be used to build an Invocation object: For example, to create a GET Invocation object and invoke it at a later time, you can use code like the following: Asynchronous invocation The JAX-RS 2.0 client API supports asynchronous invocations on the client side. To make an asynchronous invocation, simply invoke the async() method in the chain of methods following request() . For example: When you make an asynchronous invocation, the returned value is a java.util.concurrent.Future object. For more details about asynchronous invocations, see Section 49.6, "Asynchronous Processing on the Client" . 49.4. Parsing Requests and Responses Overview An essential aspect of making HTTP invocations is that the client must be able to parse the outgoing request messages and the incoming responses. 
In JAX-RS 2.0, the key concept is the Entity class, which represents a raw message tagged with a media type. In order to parse the raw message, you can register multiple entity providers , which have the capability to convert media types to and from particular Java types. In other words, in the context of JAX-RS 2.0, an Entity is the representation of a raw message and an entity provider is the plug-in that provides the capability to parse the raw message (based on the media type). Entities An Entity is a message body augmented by metadata (media type, language, and encoding). An Entity instance holds the message in a raw format and is associated with a specific media type. To convert the contents of an Entity object to a Java object, you require an entity provider , which is capable of mapping the given media type to the required Java type. Variants A javax.ws.rs.core.Variant object encapsulates the metadata associated with an Entity , as follows: Media type, Language, Encoding. Effectively, you can think of an Entity as consisting of the HTTP message contents, augmented by Variant metadata. Entity providers An entity provider is a class that provides the capability of mapping between a media type and a Java type. Effectively, you can think of an entity provider as a class that provides the ability to parse messages of a particular media type (or possibly of multiple media types). There are two different varieties of entity provider: MessageBodyReader Provides the capability of mapping from media type(s) to a Java type. MessageBodyWriter Provides the capability of mapping from a Java type to a media type. Standard entity providers Entity providers for the following Java and media type combinations are provided as standard: byte[] All media types ( */* ). java.lang.String All media types ( */* ). java.io.InputStream All media types ( */* ). java.io.Reader All media types ( */* ). java.io.File All media types ( */* ). javax.activation.DataSource All media types ( */* ). javax.xml.transform.Source XML types ( text/xml , application/xml , and media types of the form application/*+xml ). javax.xml.bind.JAXBElement and application-supplied JAXB classes XML types ( text/xml , application/xml , and media types of the form application/*+xml ). MultivaluedMap<String,String> Form content ( application/x-www-form-urlencoded ). StreamingOutput All media types ( */* ), MessageBodyWriter only. java.lang.Boolean , java.lang.Character , java.lang.Number Only for text/plain . Corresponding primitive types supported through boxing/unboxing conversion. Response object The default return type is the javax.ws.rs.core.Response type, which represents an untyped response. The Response object provides access to the complete HTTP response, including the message body, HTTP status, HTTP headers, media type, and so on. Accessing the response status You can access the response status, either through the getStatus method (which returns the HTTP status code): Or through the getStatusInfo method, which also provides a description string: Accessing the returned headers You can access the HTTP headers using any of the following methods: For example, if you know that the Response has a Date header, you could access it as follows: Accessing the returned cookies You can access any new cookies set on the Response using the getCookies method, as follows: Accessing the returned message content You can access the returned message content by invoking one of the readEntity methods on the Response object.
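As a rough sketch of a typical pattern (reusing the BookInfo class from the earlier examples and assuming an entity provider capable of mapping application/xml to BookInfo is registered), you might check the status before reading the entity and close the response when finished:

```java
// Java
import javax.ws.rs.core.Response;

Response resp = client.target("http://example.org/bookstore/books/123")
                      .request("application/xml")
                      .get();
try {
    if (resp.getStatus() == 200) {
        // readEntity delegates the conversion to the registered entity providers
        BookInfo book = resp.readEntity(BookInfo.class);
        // ... work with the BookInfo instance ...
    } else {
        System.err.println("Request failed: " + resp.getStatusInfo().getReasonPhrase());
    }
} finally {
    resp.close();
}
```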
The readEntity method automatically invokes the available entity providers to convert the message to the requested type (specified as the first argument of readEntity ). For example, to access the message content as a String type: Collection return value If you need to access the returned message as a Java generic type (for example, as a List or Collection type), you can specify the returned message type using the javax.ws.rs.core.GenericType<T> construction. For example: 49.5. Configuring the Client Endpoint Overview It is possible to augment the functionality of the base javax.ws.rs.client.Client object by registering and configuring features and providers. Example The following example shows a client configured to have a logging feature, a custom entity provider, and to set the prettyLogging property to true : Configurable API for registering objects The Client class supports the Configurable API for registering objects, which provides several variants of the register method. In most cases, you would register either a class or an object instance, as shown in the following examples: For more details about the register variants, see the reference documentation for Configurable . What can you configure on the client? You can configure the following aspects of a client endpoint: Features Providers Properties Filters Interceptors Features A javax.ws.rs.core.Feature is effectively a plug-in that adds an extra feature or functionality to a JAX-RS client. Often, a feature installs one or more interceptors in order to provide the required functionality. Providers A provider is a particular kind of client plug-in that provides a mapping capability. The JAX-RS 2.0 specification defines the following kinds of provider: Entity providers An entity provider provides the capability of mapping between a specific media type and a Java type. For more details, see Section 49.4, "Parsing Requests and Responses" . Exception mapping providers An exception mapping provider maps a checked or runtime exception to an instance of Response . Context providers A context provider is used on the server side, to supply context to resource classes and other service providers. Filters A JAX-RS 2.0 filter is a plug-in that gives you access to the URI, headers, and miscellaneous context data at various points (extension points) of the message processing pipeline. For details, see Chapter 61, JAX-RS 2.0 Filters and Interceptors . Interceptors A JAX-RS 2.0 interceptor is a plug-in that gives you access to the message body of a request or response as it is being read or written. For details, see Chapter 61, JAX-RS 2.0 Filters and Interceptors . Properties By setting one or more properties on the client, you can customize the configuration of a registered feature or a registered provider. Other configurable types It is possible, not only to configure a javax.ws.rs.client.Client (and javax.ws.rs.client.ClientBuilder ) object, but also a WebTarget object. When you change the configuration of a WebTarget object, the underlying client configuration is deep copied to give the new WebTarget configuration. Hence, it is possible to change the configuration of the WebTarget object without changing the configuration of the original Client object. 49.6. Asynchronous Processing on the Client Overview JAX-RS 2.0 supports asynchronous processing of invocations on the client side. Two different styles of asynchronous processing are supported: either using a java.util.concurrent.Future<V> return value; or by registering an invocation callback.
Asynchronous invocation with Future<V> return value Using the Future<V> approach to asynchronous processing, you can invoke a client request asynchronously, as follows: You can use a similar approach for typed responses. For example, to get a response of type BookInfo : Asynchronous invocation with invocation callback Instead of accessing the return value using a Future<V> object, you can define an invocation callback (using javax.ws.rs.client.InvocationCallback<RESPONSE> ), as follows: You can use a similar approach for typed responses:
[ "<dependency> <groupId>org.apache.cxf</groupId> <artifactId>cxf-rt-rs-client</artifactId> <version>3.3.6.fuse-7_13_0-00015-redhat-00001</version> </dependency>", "<dependency> <groupId>org.apache.cxf</groupId> <artifactId>cxf-rt-transports-http-hc</artifactId> <version>3.3.6.fuse-7_13_0-00015-redhat-00001</version> </dependency>", "javax.ws.rs.client", "javax.ws.rs.core", "// Java import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Client; import javax.ws.rs.core.Response; Client client = ClientBuilder.newClient(); Response res = client.target(\"http://example.org/bookstore/books/123\") .request(\"application/xml\").get();", "// Java import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Client; Client client = ClientBuilder.newClient();", "// Java import javax.ws.rs.client.WebTarget; WebTarget base = client.target(\"http://example.org/bookstore/\"); WebTarget books = base.path(\"books\").path(\"{id}\");", "// Java import javax.ws.rs.core.Response; Response resp = books.resolveTemplate(\"id\", \"123\").request(\"application/xml\").get();", "// Java String msg = resp.readEntity(String.class);", "// Java import javax.ws.rs.client.WebTarget; WebTarget base = client.target(\"http://example.org/bookstore/\");", "// Java import javax.ws.rs.client.WebTarget; WebTarget base = client.target(\"http://example.org/bookstore/\"); WebTarget headers = base.path(\"bookheaders\"); // Now make some invocations on the 'headers' target WebTarget collections = base.path(\"collections\"); // Now make some invocations on the 'collections' target", "// Java import javax.ws.rs.client.WebTarget; import javax.ws.rs.core.Response; WebTarget base = client.target(\"http://example.org/bookstore/\"); WebTarget books = base.path(\"books\").path(\"{id}\"); Response resp = books.resolveTemplate(\"id\", \"123\").request(\"application/xml\").get();", "// Java WebTarget target = client.target(\"http://example.org/bookstore/\") .queryParam(\"userId\",\"Agamemnon\") .queryParam(\"lang\",\"gr\");", "// Java import javax.ws.rs.client.WebTarget; import javax.ws.rs.client.Invocation.Builder; WebTarget books = client.target(\"http://example.org/bookstore/books/123\"); Invocation.Builder invbuilder = books.request();", "Invocation.Builder invheader = invbuilder.header(\"From\", \"[email protected]\");", "Invocation.Builder invcookie = invbuilder.cookie(\"myrestclient\", \"123xyz\");", "Invocation.Builder invproperty = invbuilder.property(\"Name\", \"Value\");", "Invocation.Builder invmedia = invbuilder.accept(\"application/xml\") .acceptLanguage(\"en-US\") .acceptEncoding(\"gzip\");", "get post delete put head trace options", "Response res = client.target(\"http://example.org/bookstore/books/123\") .request(\"application/xml\").get();", "BookInfo res = client.target(\"http://example.org/bookstore/books/123\") .request(\"application/xml\").get(BookInfo.class);", "import javax.ws.rs.client.Entity; Response res = client.target(\"http://example.org/bookstore/registerbook\") .request(\"application/xml\") .put(Entity.entity(\"Red Hat Install Guide\", \"text/plain\"));", "buildGet buildPost buildDelete buildPut build", "import javax.ws.rs.client.Invocation; import javax.ws.rs.core.Response; Invocation getBookInfo = client.target(\"http://example.org/bookstore/books/123\") .request(\"application/xml\").buildGet(); // Later on, in some other part of the application: Response = getBookInfo.invoke();", "Future<Response> res = client.target(\"http://example.org/bookstore/books/123\") .request(\"application/xml\") 
.async() .get();", "int status = resp.getStatus();", "String statusReason = resp.getStatusInfo().getReasonPhrase();", "MultivaluedMap<String,Object> getHeaders() MultivaluedMap<String,String> getStringHeaders() String getHeaderString(String name)", "String dateAsString = resp.getHeaderString(\"Date\");", "import javax.ws.rs.core.NewCookie; java.util.Map<String,NewCookie> cookieMap = resp.getCookies(); java.util.Collection<NewCookie> cookieCollection = cookieMap.values();", "String messageBody = resp.readEntity(String.class);", "import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Client; import javax.ws.rs.core.GenericType; import java.util.List; GenericType<List<String>> stringListType = new GenericType<List<String>>() {}; Client client = ClientBuilder.newClient(); List<String> bookNames = client.target(\"http://example.org/bookstore/booknames\") .request(\"text/plain\") .get(stringListType);", "// Java import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Client; import org.apache.cxf.feature.LoggingFeature; Client client = ClientBuilder.newClient(); client.register(LoggingFeature.class) .register(MyCustomEntityProvider.class) .property(\"LoggingFeature.prettyLogging\",\"true\");", "client.register(LoggingFeature.class) client.register(new LoggingFeature())", "// Java import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Client; import java.util.concurrent.Future; import javax.ws.rs.core.Response; Client client = ClientBuilder.newClient(); Future<Response> futureResp = client.target(\"http://example.org/bookstore/books/123\") .request(\"application/xml\") .async() .get(); // At a later time, check (and wait) for the response: Response resp = futureResp.get();", "Client client = ClientBuilder.newClient(); Future<BookInfo> futureResp = client.target(\"http://example.org/bookstore/books/123\") .request(\"application/xml\") .async() .get(BookInfo.class); // At a later time, check (and wait) for the response: BookInfo resp = futureResp.get();", "// Java import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Client; import java.util.concurrent.Future; import javax.ws.rs.core.Response; import javax.ws.rs.client.InvocationCallback; Client client = ClientBuilder.newClient(); Future<Response> futureResp = client.target(\"http://example.org/bookstore/books/123\") .request(\"application/xml\") .async() .get( new InvocationCallback<Response>() { @Override public void completed(final Response resp) { // Do something when invocation is complete } @Override public void failed(final Throwable throwable) { throwable.printStackTrace(); } });", "// Java import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Client; import java.util.concurrent.Future; import javax.ws.rs.core.Response; import javax.ws.rs.client.InvocationCallback; Client client = ClientBuilder.newClient(); Future<BookInfo> futureResp = client.target(\"http://example.org/bookstore/books/123\") .request(\"application/xml\") .async() .get( new InvocationCallback<BookInfo>() { @Override public void completed(final BookInfo resp) { // Do something when invocation is complete } @Override public void failed(final Throwable throwable) { throwable.printStackTrace(); } });" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/JAXRS2Client
Chapter 14. Optional: Installing on vSphere
Chapter 14. Optional: Installing on vSphere If you install OpenShift Container Platform on vSphere, the Assisted Installer can integrate the OpenShift Container Platform cluster with the vSphere platform, which exposes the Machine API to vSphere and enables autoscaling. 14.1. Adding hosts on vSphere You can add hosts to the Assisted Installer cluster using the online vSphere client or the govc vSphere CLI tool. The following procedure demonstrates adding hosts with the govc CLI tool. To use the online vSphere Client, refer to the documentation for vSphere. To add hosts on vSphere with the vSphere govc CLI, generate the discovery image ISO from the Assisted Installer. The minimal discovery image ISO is the default setting. This image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size. After this is complete, you must create an image for the vSphere platform and create the vSphere virtual machines. Prerequisites You are using vSphere 7.0.2 or higher. You have the vSphere govc CLI tool installed and configured. You have set clusterSet disk.enableUUID to true in vSphere. You have created a cluster in the Assisted Installer UI, or You have: Created an Assisted Installer cluster profile and infrastructure environment with the API. Exported your infrastructure environment ID in your shell as USDINFRA_ENV_ID . Completed the configuration. Procedure Configure the discovery image if you want it to boot with an ignition file. In Cluster details , select vSphere from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional. In Host discovery , click the Add hosts button and select the provisioning type. Add an SSH public key so that you can connect to the vSphere VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation. If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access . In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu. Select the desired discovery image ISO. Note Minimal image file: Provision with virtual media downloads a smaller image that will fetch the data needed to boot. In Networking , select Cluster-managed networking or User-managed networking : Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings . Enter the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server. Optional: If the cluster hosts are in a network with a re-encrypting man-in-the-middle (MITM) proxy or the cluster needs to trust certificates for other purposes such as container image registries, select Configure cluster-wide trusted certificates and add the additional certificates. Optional: Configure the discovery image if you want to boot it with an ignition file. See Configuring the discovery image for additional details. Click Generate Discovery ISO . Copy the Discovery ISO URL . Download the discovery ISO: USD wget - O vsphere-discovery-image.iso <discovery_url> Replace <discovery_url> with the Discovery ISO URL from the preceding step. 
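The govc commands in the following steps assume that the CLI is already configured to reach your vCenter instance. A common way to do this (a sketch with placeholder values; the GOVC_* variable names come from the govc tool itself and are not defined by this procedure) is through environment variables:

```bash
export GOVC_URL="https://vcenter.example.com"
export GOVC_USERNAME="administrator@vsphere.local"
export GOVC_PASSWORD='<vcenter_password>'
export GOVC_INSECURE=true   # only if vCenter presents a self-signed certificate

# Quick connectivity check
govc about
```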
On the command line, power down and destroy any pre-existing virtual machines: USD for VM in USD(/usr/local/bin/govc ls /<datacenter>/vm/<folder_name>) do /usr/local/bin/govc vm.power -off USDVM /usr/local/bin/govc vm.destroy USDVM done Replace <datacenter> with the name of the datacenter. Replace <folder_name> with the name of the VM inventory folder. Remove pre-existing ISO images from the data store, if there are any: USD govc datastore.rm -ds <iso_datastore> <image> Replace <iso_datastore> with the name of the data store. Replace image with the name of the ISO image. Upload the Assisted Installer discovery ISO: USD govc datastore.upload -ds <iso_datastore> vsphere-discovery-image.iso Replace <iso_datastore> with the name of the data store. Note All nodes in the cluster must boot from the discovery image. Boot three control plane (master) nodes: USD govc vm.create -net.adapter <network_adapter_type> \ -disk.controller <disk_controller_type> \ -pool=<resource_pool> \ -c=16 \ -m=32768 \ -disk=120GB \ -disk-datastore=<datastore_file> \ -net.address="<nic_mac_address>" \ -iso-datastore=<iso_datastore> \ -iso="vsphere-discovery-image.iso" \ -folder="<inventory_folder>" \ <hostname>.<cluster_name>.example.com See vm.create for details. Note The foregoing example illustrates the minimum required resources for control plane nodes. Boot at least two worker nodes: USD govc vm.create -net.adapter <network_adapter_type> \ -disk.controller <disk_controller_type> \ -pool=<resource_pool> \ -c=4 \ -m=8192 \ -disk=120GB \ -disk-datastore=<datastore_file> \ -net.address="<nic_mac_address>" \ -iso-datastore=<iso_datastore> \ -iso="vsphere-discovery-image.iso" \ -folder="<inventory_folder>" \ <hostname>.<cluster_name>.example.com See vm.create for details. Note The foregoing example illustrates the minimum required resources for worker nodes. Ensure the VMs are running: USD govc ls /<datacenter>/vm/<folder_name> Replace <datacenter> with the name of the datacenter. Replace <folder_name> with the name of the VM inventory folder. After 2 minutes, shut down the VMs: USD for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -s=true USDVM done Replace <datacenter> with the name of the datacenter. Replace <folder_name> with the name of the VM inventory folder. Set the disk.enableUUID setting to TRUE : USD for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.change -vm USDVM -e disk.enableUUID=TRUE done Replace <datacenter> with the name of the datacenter. Replace <folder_name> with the name of the VM inventory folder. Note You must set disk.enableUUID to TRUE on all of the nodes to enable autoscaling with vSphere. Restart the VMs: USD for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -on=true USDVM done Replace <datacenter> with the name of the datacenter. Replace <folder_name> with the name of the VM inventory folder. Return to the Assisted Installer user interface and wait until the Assisted Installer discovers the hosts and each of them have a Ready status. Select roles if needed. In Networking , uncheck Allocate IPs via DHCP server . Set the API VIP address. Set the Ingress VIP address. Continue with the installation procedure. 14.2. 
vSphere post-installation configuration using the CLI After installing an OpenShift Container Platform cluster using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually: vCenter username vCenter password vCenter address vCenter cluster datacenter datastore folder Prerequisites The Assisted Installer has finished installing the cluster successfully. The cluster is connected to console.redhat.com . Procedure Generate a base64-encoded username and password for vCenter: USD echo -n "<vcenter_username>" | base64 -w0 Replace <vcenter_username> with your vCenter username. USD echo -n "<vcenter_password>" | base64 -w0 Replace <vcenter_password> with your vCenter password. Back up the vSphere credentials: USD oc get secret vsphere-creds -o yaml -n kube-system > creds_backup.yaml Edit the vSphere credentials: USD cp creds_backup.yaml vsphere-creds.yaml USD vi vsphere-creds.yaml apiVersion: v1 data: <vcenter_address>.username: <vcenter_username_encoded> <vcenter_address>.password: <vcenter_password_encoded> kind: Secret metadata: annotations: cloudcredential.openshift.io/mode: passthrough creationTimestamp: "2022-01-25T17:39:50Z" name: vsphere-creds namespace: kube-system resourceVersion: "2437" uid: 06971978-e3a5-4741-87f9-2ca3602f2658 type: Opaque Replace <vcenter_address> with the vCenter address. Replace <vcenter_username_encoded> with the base64-encoded version of your vSphere username. Replace <vcenter_password_encoded> with the base64-encoded version of your vSphere password. Replace the vSphere credentials: USD oc replace -f vsphere-creds.yaml Redeploy the kube-controller-manager pods: USD oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Back up the vSphere cloud provider configuration: USD oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config_backup.yaml Edit the cloud provider configuration: USD cp cloud-provider-config_backup.yaml cloud-provider-config.yaml USD vi cloud-provider-config.yaml apiVersion: v1 data: config: | [Global] secret-name = "vsphere-creds" secret-namespace = "kube-system" insecure-flag = "1" [Workspace] server = "<vcenter_address>" datacenter = "<datacenter>" default-datastore = "<datastore>" folder = "/<datacenter>/vm/<folder>" [VirtualCenter "<vcenter_address>"] datacenters = "<datacenter>" kind: ConfigMap metadata: creationTimestamp: "2022-01-25T17:40:49Z" name: cloud-provider-config namespace: openshift-config resourceVersion: "2070" uid: 80bb8618-bf25-442b-b023-b31311918507 Replace <vcenter_address> with the vCenter address. Replace <datacenter> with the name of the data center. Replace <datastore> with the name of the data store. Replace <folder> with the folder containing the cluster VMs. Apply the cloud provider configuration: USD oc apply -f cloud-provider-config.yaml Taint the nodes with the uninitialized taint: Important Follow steps 9 through 12 if you are installing OpenShift Container Platform 4.13 or later. Identify the nodes to taint: USD oc get nodes Run the following command for each node: USD oc adm taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule Replace <node_name> with the name of the node.
Example USD oc get nodes NAME STATUS ROLES AGE VERSION master-0 Ready control-plane,master 45h v1.26.3+379cd9f master-1 Ready control-plane,master 45h v1.26.3+379cd9f worker-0 Ready worker 45h v1.26.3+379cd9f worker-1 Ready worker 45h v1.26.3+379cd9f master-2 Ready control-plane,master 45h v1.26.3+379cd9f USD oc adm taint node master-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule USD oc adm taint node master-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule USD oc adm taint node master-2 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule USD oc adm taint node worker-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule USD oc adm taint node worker-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule Back up the infrastructures configuration: USD oc get infrastructures.config.openshift.io -o yaml > infrastructures.config.openshift.io.yaml.backup Edit the infrastructures configuration: USD cp infrastructures.config.openshift.io.yaml.backup infrastructures.config.openshift.io.yaml USD vi infrastructures.config.openshift.io.yaml apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: creationTimestamp: "2022-05-07T10:19:55Z" generation: 1 name: cluster resourceVersion: "536" uid: e8a5742c-6d15-44e6-8a9e-064b26ab347d spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: assisted-generated-failure-domain region: assisted-generated-region server: <vcenter_address> topology: computeCluster: /<data_center>/host/<vcenter_cluster> datacenter: <data_center> datastore: /<data_center>/datastore/<datastore> folder: "/<data_center>/path/to/folder" networks: - "VM Network" resourcePool: /<data_center>/host/<vcenter_cluster>/Resources zone: assisted-generated-zone nodeNetworking: external: {} internal: {} vcenters: - datacenters: - <data_center> server: <vcenter_address> kind: List metadata: resourceVersion: "" Replace <vcenter_address> with your vCenter address. Replace <datacenter> with the name of your vCenter data center. Replace <datastore> with the name of your vCenter data store. Replace <folder> with the folder containing the cluster VMs. Replace <vcenter_cluster> with the vSphere vCenter cluster where OpenShift Container Platform is installed. Apply the infrastructures configuration: USD oc apply -f infrastructures.config.openshift.io.yaml --overwrite=true 14.3. vSphere post-installation configuration using the UI After installing an OpenShift Container Platform cluster using the Assisted Installer on vSphere with the platform integration feature enabled, you must update the following vSphere configuration settings manually: vCenter address vCenter cluster vCenter username vCenter password Datacenter Default data store Virtual machine folder Prerequisites The Assisted Installer has finished installing the cluster successfully. The cluster is connected to console.redhat.com . Procedure In the Administrator perspective, navigate to Home Overview . Under Status , click vSphere connection to open the vSphere connection configuration wizard. In the vCenter field, enter the network address of the vSphere vCenter server. This can be either a domain name or an IP address. It appears in the vSphere web client URL; for example https://[your_vCenter_address]/ui . In the vCenter cluster field, enter the name of the vSphere vCenter cluster where OpenShift Container Platform is installed. 
Important This step is mandatory if you installed OpenShift Container Platform 4.13 or later. In the Username field, enter your vSphere vCenter username. In the Password field, enter your vSphere vCenter password. Warning The system stores the username and password in the vsphere-creds secret in the kube-system namespace of the cluster. An incorrect vCenter username or password makes the cluster nodes unschedulable. In the Datacenter field, enter the name of the vSphere data center that contains the virtual machines used to host the cluster; for example, SDDC-Datacenter . In the Default data store field, enter the vSphere data store that stores the persistent data volumes; for example, /SDDC-Datacenter/datastore/datastorename . Warning Updating the vSphere data center or default data store after the configuration has been saved detaches any active vSphere PersistentVolumes . In the Virtual Machine Folder field, enter the data center folder that contains the virtual machines of the cluster; for example, /SDDC-Datacenter/vm/ci-ln-hjg4vg2-c61657-t2gzr . For the OpenShift Container Platform installation to succeed, all virtual machines comprising the cluster must be located in a single data center folder. Click Save Configuration . This updates the cloud-provider-config file in the openshift-config namespace, and starts the configuration process. Reopen the vSphere connection configuration wizard and expand the Monitored operators panel. Check that the status of the operators is either Progressing or Healthy . Verification The connection configuration process updates operator statuses and control plane nodes. It takes approximately an hour to complete. During the configuration process, the nodes will reboot. Previously bound PersistentVolumeClaims objects might become disconnected. Follow the steps below to monitor the configuration process. Check that the configuration process completed successfully: In the OpenShift Container Platform Administrator perspective, navigate to Home → Overview . Under Status , click Operators . Wait for all operator statuses to change from Progressing to All succeeded . A Failed status indicates that the configuration failed. Under Status , click Control Plane . Wait for the response rate of all Control Plane components to return to 100%. A Failed control plane component indicates that the configuration failed. A failure indicates that at least one of the connection settings is incorrect. Change the settings in the vSphere connection configuration wizard and save the configuration again. Check that you are able to bind PersistentVolumeClaims objects by performing the following steps: Create a StorageClass object using the following YAML: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate Create a PersistentVolumeClaims object using the following YAML: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem For instructions, see Dynamic provisioning in the OpenShift Container Platform documentation.
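Assuming the two YAML snippets above are saved as vsphere-sc.yaml and test-pvc.yaml (hypothetical file names), a quick command-line check that the claim binds might look like this:

```bash
oc apply -f vsphere-sc.yaml
oc apply -f test-pvc.yaml

# The claim should move from Pending to Bound once the vSphere provisioner creates the volume
oc get pvc test-pvc -n openshift-config

# If it stays Pending, the events usually explain why
oc describe pvc test-pvc -n openshift-config
```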
To troubleshoot a PersistentVolumeClaims object, navigate to Storage → PersistentVolumeClaims in the Administrator perspective of the OpenShift Container Platform UI.
[ "wget - O vsphere-discovery-image.iso <discovery_url>", "for VM in USD(/usr/local/bin/govc ls /<datacenter>/vm/<folder_name>) do /usr/local/bin/govc vm.power -off USDVM /usr/local/bin/govc vm.destroy USDVM done", "govc datastore.rm -ds <iso_datastore> <image>", "govc datastore.upload -ds <iso_datastore> vsphere-discovery-image.iso", "govc vm.create -net.adapter <network_adapter_type> -disk.controller <disk_controller_type> -pool=<resource_pool> -c=16 -m=32768 -disk=120GB -disk-datastore=<datastore_file> -net.address=\"<nic_mac_address>\" -iso-datastore=<iso_datastore> -iso=\"vsphere-discovery-image.iso\" -folder=\"<inventory_folder>\" <hostname>.<cluster_name>.example.com", "govc vm.create -net.adapter <network_adapter_type> -disk.controller <disk_controller_type> -pool=<resource_pool> -c=4 -m=8192 -disk=120GB -disk-datastore=<datastore_file> -net.address=\"<nic_mac_address>\" -iso-datastore=<iso_datastore> -iso=\"vsphere-discovery-image.iso\" -folder=\"<inventory_folder>\" <hostname>.<cluster_name>.example.com", "govc ls /<datacenter>/vm/<folder_name>", "for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -s=true USDVM done", "for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.change -vm USDVM -e disk.enableUUID=TRUE done", "for VM in USD(govc ls /<datacenter>/vm/<folder_name>) do govc vm.power -on=true USDVM done", "echo -n \"<vcenter_username>\" | base64 -w0", "echo -n \"<vcenter_password>\" | base64 -w0", "oc get secret vsphere-creds -o yaml -n kube-system > creds_backup.yaml", "cp creds_backup.yaml vsphere-creds.yaml", "vi vsphere-creds.yaml", "apiVersion: v1 data: <vcenter_address>.username: <vcenter_username_encoded> <vcenter_address>.password: <vcenter_password_encoded> kind: Secret metadata: annotations: cloudcredential.openshift.io/mode: passthrough creationTimestamp: \"2022-01-25T17:39:50Z\" name: vsphere-creds namespace: kube-system resourceVersion: \"2437\" uid: 06971978-e3a5-4741-87f9-2ca3602f2658 type: Opaque", "oc replace -f vsphere-creds.yaml", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config_backup.yaml", "cloud-provider-config_backup.yaml cloud-provider-config.yaml", "vi cloud-provider-config.yaml", "apiVersion: v1 data: config: | [Global] secret-name = \"vsphere-creds\" secret-namespace = \"kube-system\" insecure-flag = \"1\" [Workspace] server = \"<vcenter_address>\" datacenter = \"<datacenter>\" default-datastore = \"<datastore>\" folder = \"/<datacenter>/vm/<folder>\" [VirtualCenter \"<vcenter_address>\"] datacenters = \"<datacenter>\" kind: ConfigMap metadata: creationTimestamp: \"2022-01-25T17:40:49Z\" name: cloud-provider-config namespace: openshift-config resourceVersion: \"2070\" uid: 80bb8618-bf25-442b-b023-b31311918507", "oc apply -f cloud-provider-config.yaml", "oc get nodes", "oc adm taint node <node_name> node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule", "oc get nodes NAME STATUS ROLES AGE VERSION master-0 Ready control-plane,master 45h v1.26.3+379cd9f master-1 Ready control-plane,master 45h v1.26.3+379cd9f worker-0 Ready worker 45h v1.26.3+379cd9f worker-1 Ready worker 45h v1.26.3+379cd9f master-2 Ready control-plane,master 45h v1.26.3+379cd9f oc adm taint node master-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node master-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm 
taint node master-2 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node worker-0 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule oc adm taint node worker-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule", "oc get infrastructures.config.openshift.io -o yaml > infrastructures.config.openshift.io.yaml.backup", "cp infrastructures.config.openshift.io.yaml.backup infrastructures.config.openshift.io.yaml", "vi infrastructures.config.openshift.io.yaml", "apiVersion: v1 items: - apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: creationTimestamp: \"2022-05-07T10:19:55Z\" generation: 1 name: cluster resourceVersion: \"536\" uid: e8a5742c-6d15-44e6-8a9e-064b26ab347d spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: assisted-generated-failure-domain region: assisted-generated-region server: <vcenter_address> topology: computeCluster: /<data_center>/host/<vcenter_cluster> datacenter: <data_center> datastore: /<data_center>/datastore/<datastore> folder: \"/<data_center>/path/to/folder\" networks: - \"VM Network\" resourcePool: /<data_center>/host/<vcenter_cluster>/Resources zone: assisted-generated-zone nodeNetworking: external: {} internal: {} vcenters: - datacenters: - <data_center> server: <vcenter_address> kind: List metadata: resourceVersion: \"\"", "oc apply -f infrastructures.config.openshift.io.yaml --overwrite=true", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-sc provisioner: kubernetes.io/vsphere-volume parameters: datastore: YOURVCENTERDATASTORE diskformat: thin reclaimPolicy: Delete volumeBindingMode: Immediate", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc namespace: openshift-config annotations: volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: vsphere-sc volumeMode: Filesystem" ]
https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2023/html/assisted_installer_for_openshift_container_platform/installing-on-vsphere
3.2. PXE Boot Configuration
3.2. PXE Boot Configuration The next step is to copy the files necessary to start the installation to the tftp server so they can be found when the client requests them. The tftp server is usually the same server as the network server exporting the installation tree. To copy these files, run the Network Booting Tool on the NFS, FTP, or HTTP server. A separate PXE server is not necessary. For the command line version of these instructions, refer to Section 3.2.1, "Command Line Configuration" . To use the graphical version of the Network Booting Tool , you must be running the X Window System, have root privileges, and have the system-config-netboot RPM package installed. To start the Network Booting Tool from the desktop, go to Applications (the main menu on the panel) => System Settings => Server Settings => Network Booting Service . Or, type the command system-config-netboot at a shell prompt (for example, in an XTerm or a GNOME terminal ). If starting the Network Booting Tool for the first time, select Network Install from the First Time Druid . Otherwise, select Configure => Network Installation from the pulldown menu, and then click Add . The dialog in Figure 3.1, "Network Installation Setup" is displayed. Figure 3.1. Network Installation Setup Operating system identifier - Provide a unique name using one word to identify the Red Hat Enterprise Linux version and variant. It is used as the directory name in the /tftpboot/linux-install/ directory. Description - Provide a brief description of the Red Hat Enterprise Linux version and variant. Select protocol for installation - Selects NFS, FTP, or HTTP as the network installation type depending on which one was configured previously. If FTP is selected and anonymous FTP is not being used, uncheck Anonymous FTP and provide a valid username and password combination. Kickstart - Specify the location of the kickstart file. The file can be a URL or a file stored locally (diskette). The kickstart file can be created with the Kickstart Configurator . Refer to Chapter 2, Kickstart Configurator for details. Server - Provide the IP address or domain name of the NFS, FTP, or HTTP server. Location - Provide the directory shared by the network server. If FTP or HTTP was selected, the directory must be relative to the default directory for the FTP server or the document root for the HTTP server. For all network installations, the directory provided must contain the RedHat/ directory of the installation tree. After clicking OK , the initrd.img and vmlinuz files necessary to boot the installation program are transferred from images/pxeboot/ in the provided installation tree to /tftpboot/linux-install/ <os-identifier> / on the tftp server (the one you are running the Network Booting Tool on). 3.2.1. Command Line Configuration If the network server is not running X, the pxeos command line utility, which is part of the system-config-netboot package, can be used to configure the tftp server files: The following list explains the options: -a - Specifies that an OS instance is being added to the PXE configuration. -i " <description> " - Replace " <description> " with a description of the OS instance. This corresponds to the Description field in Figure 3.1, "Network Installation Setup" . -p <NFS|HTTP|FTP> - Specify which of the NFS, FTP, or HTTP protocols to use for installation. Only one may be specified. This corresponds to the Select protocol for installation menu in Figure 3.1, "Network Installation Setup" .
-D <0|1> - Specify " 0 " which indicates that it is not a diskless configuration since pxeos can be used to configure a diskless environment as well. -s client.example.com - Provide the name of the NFS, FTP, or HTTP server after the -s option. This corresponds to the Server field in Figure 3.1, "Network Installation Setup" . -L <net-location> - Provide the location of the installation tree on that server after the -L option. This corresponds to the Location field in Figure 3.1, "Network Installation Setup" . -k <kernel> - Provide the specific kernel version of the server installation tree for booting. -K <kickstart> - Provide the location of the kickstart file, if available. <os-identifer> - Specify the OS identifier, which is used as the directory name in the /tftpboot/linux-install/ directory. This corresponds to the Operating system identifier field in Figure 3.1, "Network Installation Setup" . If FTP is selected as the installation protocol and anonymous login is not available, specify a username and password for login, with the following options before <os-identifer> in the command: For more information on command line options available for the pxeos command, refer to the pxeos man page.
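For example, a complete pxeos invocation using hypothetical values (an NFS server named install.example.com exporting the installation tree at /exports/rhel4/, with rhel4u9 as the operating system identifier) might look like the following; the -k and -K options described above can be added in the same way if a specific kernel version or a kickstart file is required:

```bash
pxeos -a -i "Red Hat Enterprise Linux 4 Update 9" -p NFS -D 0 \
    -s install.example.com -L /exports/rhel4/ rhel4u9
```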
[ "pxeos -a -i \" <description> \" -p <NFS|HTTP|FTP> -D 0 -s client.example.com \\ -L <net-location> -k <kernel> -K <kickstart> <os-identifer>", "-A 0 -u <username> -p <password>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/PXE_Network_Installations-PXE_Boot_Configuration
Chapter 8. Red Hat Virtualization 4.3 Batch Update 6 (ovirt-4.3.9)
Chapter 8. Red Hat Virtualization 4.3 Batch Update 6 (ovirt-4.3.9) 8.1. Red Hat Virtualization Manager v4.3 (RHEL 7 Server) (RPMs) The following table outlines the packages included in the rhel-7-server-rhv-4.3-manager-rpms repository. Table 8.1. Red Hat Virtualization Manager v4.3 (RHEL 7 Server) (RPMs) Name Version Advisory openvswitch2.11 2.11.0-48.el7fdp RHBA-2020:1306 openvswitch2.11-devel 2.11.0-48.el7fdp RHBA-2020:1306 ovirt-engine 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-backend 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-dbscripts 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-extension-aaa-misc 1.0.4-1.el7ev RHSA-2020:1308 ovirt-engine-extensions-api-impl 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-extensions-api-impl-javadoc 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-health-check-bundler 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-restapi 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-setup 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-setup-base 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-setup-plugin-cinderlib 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-setup-plugin-ovirt-engine 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-setup-plugin-ovirt-engine-common 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-setup-plugin-vmconsole-proxy-helper 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-setup-plugin-websocket-proxy 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-tools 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-tools-backup 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-vmconsole-proxy-helper 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-webadmin-portal 4.3.9.3-1 RHSA-2020:1308 ovirt-engine-websocket-proxy 4.3.9.3-1 RHSA-2020:1308 ovirt-fast-forward-upgrade 1.0.0-17.el7ev RHSA-2020:1308 ovirt-host-deploy-common 1.8.5-1.el7ev RHBA-2020:1306 ovirt-host-deploy-java 1.8.5-1.el7ev RHBA-2020:1306 ovn2.11 2.11.1-33.el7fdp RHBA-2020:1306 ovn2.11-central 2.11.1-33.el7fdp RHBA-2020:1306 ovn2.11-vtep 2.11.1-33.el7fdp RHBA-2020:1306 python-openvswitch2.11 2.11.0-48.el7fdp RHBA-2020:1306 python2-ovirt-engine-lib 4.3.9.3-1 RHSA-2020:1308 python2-ovirt-host-deploy 1.8.5-1.el7ev RHBA-2020:1306 rhvm 4.3.9.3-1 RHSA-2020:1308 rhvm-dependencies 4.3.2-1.el7ev RHSA-2020:1308 8.2. Red Hat Virtualization Manager 4 Tools (RHEL 7 Server) (RPMs) The following table outlines the packages included in the rhel-7-server-rhv-4-manager-tools-rpms repository. Table 8.2. Red Hat Virtualization Manager 4 Tools (RHEL 7 Server) (RPMs) Name Version Advisory apache-commons-beanutils 1.8.3-15.el7_7 RHSA-2020:1308 apache-commons-beanutils-javadoc 1.8.3-15.el7_7 RHSA-2020:1308 8.3. Red Hat Virtualization 4 Management Agents (for RHEL 7 Server for IBM POWER9) RPMs The following table outlines the packages included in the rhel-7-server-rhv-4-mgmt-agent-for-power-9-rpms repository. Table 8.3. 
Red Hat Virtualization 4 Management Agents (for RHEL 7 Server for IBM POWER9) RPMs Name Version Advisory openvswitch2.11 2.11.0-48.el7fdp RHBA-2020:1306 openvswitch2.11-devel 2.11.0-48.el7fdp RHBA-2020:1306 ovirt-host-deploy-common 1.8.5-1.el7ev RHBA-2020:1306 ovirt-host-deploy-java 1.8.5-1.el7ev RHBA-2020:1306 ovirt-host-deploy-javadoc 1.8.5-1.el7ev RHBA-2020:1306 ovn2.11 2.11.1-33.el7fdp RHBA-2020:1306 ovn2.11-host 2.11.1-33.el7fdp RHBA-2020:1306 ovn2.11-vtep 2.11.1-33.el7fdp RHBA-2020:1306 python-openvswitch2.11 2.11.0-48.el7fdp RHBA-2020:1306 python2-ovirt-host-deploy 1.8.5-1.el7ev RHBA-2020:1306 vdsm 4.30.42-1.el7ev RHBA-2020:1307 vdsm-api 4.30.42-1.el7ev RHBA-2020:1307 vdsm-client 4.30.42-1.el7ev RHBA-2020:1307 vdsm-common 4.30.42-1.el7ev RHBA-2020:1307 vdsm-gluster 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-checkips 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-cpuflags 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-ethtool-options 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-extra-ipv4-addrs 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-fcoe 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-localdisk 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-macspoof 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-nestedvt 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-openstacknet 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-vhostmd 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-vmfex-dev 4.30.42-1.el7ev RHBA-2020:1307 vdsm-http 4.30.42-1.el7ev RHBA-2020:1307 vdsm-jsonrpc 4.30.42-1.el7ev RHBA-2020:1307 vdsm-network 4.30.42-1.el7ev RHBA-2020:1307 vdsm-python 4.30.42-1.el7ev RHBA-2020:1307 vdsm-yajsonrpc 4.30.42-1.el7ev RHBA-2020:1307 8.4. Red Hat Virtualization 4 Management Agents RHEL 7 for IBM Power (RPMs) The following table outlines the packages included in the rhel-7-server-rhv-4-mgmt-agent-for-power-le-rpms repository. Table 8.4. Red Hat Virtualization 4 Management Agents RHEL 7 for IBM Power (RPMs) Name Version Advisory openvswitch2.11 2.11.0-48.el7fdp RHBA-2020:1306 openvswitch2.11-devel 2.11.0-48.el7fdp RHBA-2020:1306 ovirt-host-deploy-common 1.8.5-1.el7ev RHBA-2020:1306 ovirt-host-deploy-java 1.8.5-1.el7ev RHBA-2020:1306 ovirt-host-deploy-javadoc 1.8.5-1.el7ev RHBA-2020:1306 ovn2.11 2.11.1-33.el7fdp RHBA-2020:1306 ovn2.11-host 2.11.1-33.el7fdp RHBA-2020:1306 ovn2.11-vtep 2.11.1-33.el7fdp RHBA-2020:1306 python-openvswitch2.11 2.11.0-48.el7fdp RHBA-2020:1306 python2-ovirt-host-deploy 1.8.5-1.el7ev RHBA-2020:1306 vdsm 4.30.42-1.el7ev RHBA-2020:1307 vdsm-api 4.30.42-1.el7ev RHBA-2020:1307 vdsm-client 4.30.42-1.el7ev RHBA-2020:1307 vdsm-common 4.30.42-1.el7ev RHBA-2020:1307 vdsm-gluster 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-checkips 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-cpuflags 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-ethtool-options 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-extra-ipv4-addrs 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-fcoe 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-localdisk 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-macspoof 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-nestedvt 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-openstacknet 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-vhostmd 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-vmfex-dev 4.30.42-1.el7ev RHBA-2020:1307 vdsm-http 4.30.42-1.el7ev RHBA-2020:1307 vdsm-jsonrpc 4.30.42-1.el7ev RHBA-2020:1307 vdsm-network 4.30.42-1.el7ev RHBA-2020:1307 vdsm-python 4.30.42-1.el7ev RHBA-2020:1307 vdsm-yajsonrpc 4.30.42-1.el7ev RHBA-2020:1307 8.5. 
Red Hat Virtualization 4 Management Agents for RHEL 7 (RPMs) The following table outlines the packages included in the rhel-7-server-rhv-4-mgmt-agent-rpms repository. Table 8.5. Red Hat Virtualization 4 Management Agents for RHEL 7 (RPMs) Name Version Advisory openvswitch2.11 2.11.0-48.el7fdp RHBA-2020:1306 openvswitch2.11-devel 2.11.0-48.el7fdp RHBA-2020:1306 ovirt-host-deploy-common 1.8.5-1.el7ev RHBA-2020:1306 ovirt-host-deploy-java 1.8.5-1.el7ev RHBA-2020:1306 ovirt-host-deploy-javadoc 1.8.5-1.el7ev RHBA-2020:1306 ovirt-hosted-engine-setup 2.3.13-1.el7ev RHBA-2020:1307 ovn2.11 2.11.1-33.el7fdp RHBA-2020:1306 ovn2.11-host 2.11.1-33.el7fdp RHBA-2020:1306 ovn2.11-vtep 2.11.1-33.el7fdp RHBA-2020:1306 python-openvswitch2.11 2.11.0-48.el7fdp RHBA-2020:1306 python2-ovirt-host-deploy 1.8.5-1.el7ev RHBA-2020:1306 vdsm 4.30.42-1.el7ev RHBA-2020:1307 vdsm-api 4.30.42-1.el7ev RHBA-2020:1307 vdsm-client 4.30.42-1.el7ev RHBA-2020:1307 vdsm-common 4.30.42-1.el7ev RHBA-2020:1307 vdsm-gluster 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-checkips 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-cpuflags 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-ethtool-options 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-extra-ipv4-addrs 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-fcoe 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-localdisk 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-macspoof 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-nestedvt 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-openstacknet 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-vhostmd 4.30.42-1.el7ev RHBA-2020:1307 vdsm-hook-vmfex-dev 4.30.42-1.el7ev RHBA-2020:1307 vdsm-http 4.30.42-1.el7ev RHBA-2020:1307 vdsm-jsonrpc 4.30.42-1.el7ev RHBA-2020:1307 vdsm-network 4.30.42-1.el7ev RHBA-2020:1307 vdsm-python 4.30.42-1.el7ev RHBA-2020:1307 vdsm-yajsonrpc 4.30.42-1.el7ev RHBA-2020:1307 8.6. Red Hat Virtualization 4 Tools (RHEL 7 Server) (RPMs) The following table outlines the packages included in the rhel-7-server-rhv-4-tools-rpms repository. Table 8.6. Red Hat Virtualization 4 Tools (RHEL 7 Server) (RPMs) Name Version Advisory apache-commons-beanutils 1.8.3-15.el7_7 RHSA-2020:1308 apache-commons-beanutils-javadoc 1.8.3-15.el7_7 RHSA-2020:1308
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/package_manifest/ovirt-4.3.9
Chapter 8. Authentication and Interoperability
Chapter 8. Authentication and Interoperability Better Interoperability with Active Directory Added functionality of System Security Services Daemon ( SSSD ) enables better interoperability of Red Hat Enterprise Linux clients with Active Directory, which makes identity management easier in Linux and Windows environments. The most notable enhancements include resolving users and groups and authenticating users from trusted domains in a single forest, DNS updates, site discovery, and using NetBIOS name for user and group lookups. Apache Modules for IPA A set of Apache modules has been added to Red Hat Enterprise Linux 6.6 as a Technology Preview. The Apache modules can be used by external applications to achieve tighter interaction with Identity Management beyond simple authentication.
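The release note above only announces the capability. As a purely illustrative sketch (not taken from these notes, and with option values that must be adapted to your environment), an sssd.conf domain section using the SSSD Active Directory provider might look roughly like this, after the host has been joined to the domain (for example, with Samba's net ads join):

```ini
# /etc/sssd/sssd.conf -- illustrative sketch only
[sssd]
services = nss, pam
domains = ad.example.com

[domain/ad.example.com]
id_provider = ad
access_provider = ad
# Map Active Directory SIDs to POSIX UIDs/GIDs automatically
ldap_id_mapping = true
# Common conveniences for AD users
default_shell = /bin/bash
fallback_homedir = /home/%u
```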
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_release_notes/authentication
Chapter 13. Deploy an AWS Global Accelerator load balancer
Chapter 13. Deploy an AWS Global Accelerator load balancer This topic describes the procedure required to deploy an AWS Global Accelerator to route traffic between multi-site Red Hat build of Keycloak deployments. This deployment is intended to be used with the setup described in the Concepts for multi-site deployments chapter. Use this deployment with the other building blocks outlined in the Building blocks multi-site deployments chapter. Note We provide these blueprints to show a minimal functionally complete example with a good baseline performance for regular installations. You would still need to adapt it to your environment and your organization's standards and security best practices. 13.1. Audience This chapter describes how to deploy an AWS Global Accelerator instance to handle Red Hat build of Keycloak client connection failover for multiple availability-zone Red Hat build of Keycloak deployments. 13.2. Architecture To ensure user requests are routed to each Red Hat build of Keycloak site we need to utilise a load balancer. To prevent issues with DNS caching on the client-side, the implementation should use a static IP address that remains the same when routing clients to both availability-zones. In this chapter we describe how to route all Red Hat build of Keycloak client requests via an AWS Global Accelerator load balancer. In the event of a Red Hat build of Keycloak site failing, the Accelerator ensures that all client requests are routed to the remaining healthy site. If both sites are marked as unhealthy, then the Accelerator will "fail-open" and forward requests to a site chosen at random. Figure 13.1. AWS Global Accelerator Failover An AWS Network Load Balancer (NLB) is created on both ROSA clusters in order to make the Keycloak pods available as Endpoints to an AWS Global Accelerator instance. Each cluster endpoint is assigned a weight of 128 (half of the maximum weight 255) to ensure that accelerator traffic is routed equally to both availability-zones when both clusters are healthy. 13.3. Prerequisites ROSA based Multi-AZ Red Hat build of Keycloak deployment 13.4. Procedure Create Network Load Balancers Perform the following on each of the Red Hat build of Keycloak clusters: Login to the ROSA cluster Create a Kubernetes load balancer service Command: cat <<EOF | oc apply -n USDNAMESPACE -f - 1 apiVersion: v1 kind: Service metadata: name: accelerator-loadbalancer annotations: service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: accelerator=USD{ACCELERATOR_NAME},site=USD{CLUSTER_NAME},namespace=USD{NAMESPACE} 2 service.beta.kubernetes.io/aws-load-balancer-type: "nlb" service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: "/lb-check" service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "https" service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10" 3 service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3" 4 service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3" 5 spec: ports: - name: https port: 443 protocol: TCP targetPort: 8443 selector: app: keycloak app.kubernetes.io/instance: keycloak app.kubernetes.io/managed-by: keycloak-operator sessionAffinity: None type: LoadBalancer EOF 1 USDNAMESPACE should be replaced with the namespace of your Red Hat build of Keycloak deployment 2 Add additional Tags to the resources created by AWS so that we can retrieve them later. 
ACCELERATOR_NAME should be the name of the Global Accelerator created in subsequent steps and CLUSTER_NAME should be the name of the current site. 3 How frequently the healthcheck probe is executed in seconds 4 How many healthchecks must pass for the NLB to be considered healthy 5 How many healthchecks must fail for the NLB to be considered unhealthy Take note of the DNS hostname as this will be required later: Command: oc -n USDNAMESPACE get svc accelerator-loadbalancer --template="{{range .status.loadBalancer.ingress}}{{.hostname}}{{end}}" Output: abab80a363ce8479ea9c4349d116bce2-6b65e8b4272fa4b5.elb.eu-west-1.amazonaws.com Create a Global Accelerator instance Command: aws globalaccelerator create-accelerator \ --name example-accelerator \ 1 --ip-address-type DUAL_STACK \ 2 --region us-west-2 3 1 The name of the accelerator to be created, update as required 2 Can be 'DUAL_STACK' or 'IPV4' 3 All globalaccelerator commands must use the region 'us-west-2' Output: { "Accelerator": { "AcceleratorArn": "arn:aws:globalaccelerator::606671647913:accelerator/e35a94dd-391f-4e3e-9a3d-d5ad22a78c71", 1 "Name": "example-accelerator", "IpAddressType": "DUAL_STACK", "Enabled": true, "IpSets": [ { "IpFamily": "IPv4", "IpAddresses": [ "75.2.42.125", "99.83.132.135" ], "IpAddressFamily": "IPv4" }, { "IpFamily": "IPv6", "IpAddresses": [ "2600:9000:a400:4092:88f3:82e2:e5b2:e686", "2600:9000:a516:b4ef:157e:4cbd:7b48:20f1" ], "IpAddressFamily": "IPv6" } ], "DnsName": "a099f799900e5b10d.awsglobalaccelerator.com", 2 "Status": "IN_PROGRESS", "CreatedTime": "2023-11-13T15:46:40+00:00", "LastModifiedTime": "2023-11-13T15:46:42+00:00", "DualStackDnsName": "ac86191ca5121e885.dualstack.awsglobalaccelerator.com" 3 } } 1 The ARN associated with the created Accelerator instance, this will be used in subsequent commands 2 The DNS name which IPv4 Red Hat build of Keycloak clients should connect to 3 The DNS name which IPv6 Red Hat build of Keycloak clients should connect to Create a Listener for the accelerator Command: aws globalaccelerator create-listener \ --accelerator-arn 'arn:aws:globalaccelerator::606671647913:accelerator/e35a94dd-391f-4e3e-9a3d-d5ad22a78c71' \ --port-ranges '[{"FromPort":443,"ToPort":443}]' \ --protocol TCP \ --region us-west-2 Output: { "Listener": { "ListenerArn": "arn:aws:globalaccelerator::606671647913:accelerator/e35a94dd-391f-4e3e-9a3d-d5ad22a78c71/listener/1f396d40", "PortRanges": [ { "FromPort": 443, "ToPort": 443 } ], "Protocol": "TCP", "ClientAffinity": "NONE" } } Create an Endpoint Group for the Listener Command: CLUSTER_1_ENDPOINT_ARN=USD(aws elbv2 describe-load-balancers \ --query "LoadBalancers[?DNSName=='abab80a363ce8479ea9c4349d116bce2-6b65e8b4272fa4b5.elb.eu-west-1.amazonaws.com'].LoadBalancerArn" \ 1 --region eu-west-1 \ 2 --output text ) CLUSTER_2_ENDPOINT_ARN=USD(aws elbv2 describe-load-balancers \ --query "LoadBalancers[?DNSName=='a1c76566e3c334e4ab7b762d9f8dcbcf-985941f9c8d108d4.elb.eu-west-1.amazonaws.com'].LoadBalancerArn" \ 3 --region eu-west-1 \ 4 --output text ) ENDPOINTS='[ { "EndpointId": "'USD{CLUSTER_1_ENDPOINT_ARN}'", "Weight": 128, "ClientIPPreservationEnabled": false }, { "EndpointId": "'USD{CLUSTER_2_ENDPOINT_ARN}'", "Weight": 128, "ClientIPPreservationEnabled": false } ]' aws globalaccelerator create-endpoint-group \ --listener-arn 'arn:aws:globalaccelerator::606671647913:accelerator/e35a94dd-391f-4e3e-9a3d-d5ad22a78c71/listener/1f396d40' \ 5 --traffic-dial-percentage 100 \ --endpoint-configurations USD{ENDPOINTS} \ --endpoint-group-region eu-west-1 \ 6 --region 
us-west-2 1 3 The DNS hostname of the Cluster's NLB 2 4 5 The ARN of the Listener created in the previous step 6 This should be the AWS region that hosts the clusters Output: { "EndpointGroup": { "EndpointGroupArn": "arn:aws:globalaccelerator::606671647913:accelerator/e35a94dd-391f-4e3e-9a3d-d5ad22a78c71/listener/1f396d40/endpoint-group/2581af0dc700", "EndpointGroupRegion": "eu-west-1", "EndpointDescriptions": [ { "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:606671647913:loadbalancer/net/abab80a363ce8479ea9c4349d116bce2/6b65e8b4272fa4b5", "Weight": 128, "HealthState": "HEALTHY", "ClientIPPreservationEnabled": false }, { "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:606671647913:loadbalancer/net/a1c76566e3c334e4ab7b762d9f8dcbcf/985941f9c8d108d4", "Weight": 128, "HealthState": "HEALTHY", "ClientIPPreservationEnabled": false } ], "TrafficDialPercentage": 100.0, "HealthCheckPort": 443, "HealthCheckProtocol": "TCP", "HealthCheckPath": "undefined", "HealthCheckIntervalSeconds": 30, "ThresholdCount": 3 } } Optional: Configure your custom domain If you are using a custom domain, point your custom domain to the AWS Global Accelerator by configuring an Alias or CNAME in your custom domain. Create or update the Red Hat build of Keycloak Deployment Perform the following on each of the Red Hat build of Keycloak clusters: Login to the ROSA cluster Ensure the Keycloak CR has the following configuration apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: keycloak spec: hostname: hostname: USDHOSTNAME 1 ingress: enabled: false 2 1 The hostname clients use to connect to Keycloak 2 Disable the default ingress as all Red Hat build of Keycloak access should be via the provisioned NLB To ensure that request forwarding works as expected, it is necessary for the Keycloak CR to specify the hostname through which clients will access the Red Hat build of Keycloak instances. This can either be the DualStackDnsName or DnsName hostname associated with the Global Accelerator. If you are using a custom domain, point your custom domain to the AWS Global Accelerator, and use your custom domain here. 13.5. Verify To verify that the Global Accelerator is correctly configured to connect to the clusters, navigate to the hostname configured above, and you should be presented with the Red Hat build of Keycloak admin console. A command-line spot check is also sketched below. 13.6. Further reading Bring site online Take site offline
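As a command-line complement to the verification step above, you can ask the accelerator which health state it reports for each NLB endpoint, and probe the health check path through the accelerator itself. This is only a sketch: the endpoint group ARN and accelerator DNS name below are the example values from the output shown earlier, so substitute your own, and -k is used only because this is a plain connectivity check.
aws globalaccelerator describe-endpoint-group \
  --endpoint-group-arn 'arn:aws:globalaccelerator::606671647913:accelerator/e35a94dd-391f-4e3e-9a3d-d5ad22a78c71/listener/1f396d40/endpoint-group/2581af0dc700' \
  --region us-west-2 \
  --query 'EndpointGroup.EndpointDescriptions[].HealthState'
curl -sk -o /dev/null -w '%{http_code}\n' https://a099f799900e5b10d.awsglobalaccelerator.com/lb-check
Both endpoints should report HEALTHY, and the curl check should return an HTTP 200 status code when at least one site is serving traffic.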
[ "cat <<EOF | oc apply -n USDNAMESPACE -f - 1 apiVersion: v1 kind: Service metadata: name: accelerator-loadbalancer annotations: service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: accelerator=USD{ACCELERATOR_NAME},site=USD{CLUSTER_NAME},namespace=USD{NAMESPACE} 2 service.beta.kubernetes.io/aws-load-balancer-type: \"nlb\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: \"/lb-check\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: \"https\" service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: \"10\" 3 service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: \"3\" 4 service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: \"3\" 5 spec: ports: - name: https port: 443 protocol: TCP targetPort: 8443 selector: app: keycloak app.kubernetes.io/instance: keycloak app.kubernetes.io/managed-by: keycloak-operator sessionAffinity: None type: LoadBalancer EOF", "-n USDNAMESPACE get svc accelerator-loadbalancer --template=\"{{range .status.loadBalancer.ingress}}{{.hostname}}{{end}}\"", "abab80a363ce8479ea9c4349d116bce2-6b65e8b4272fa4b5.elb.eu-west-1.amazonaws.com", "aws globalaccelerator create-accelerator --name example-accelerator \\ 1 --ip-address-type DUAL_STACK \\ 2 --region us-west-2 3", "{ \"Accelerator\": { \"AcceleratorArn\": \"arn:aws:globalaccelerator::606671647913:accelerator/e35a94dd-391f-4e3e-9a3d-d5ad22a78c71\", 1 \"Name\": \"example-accelerator\", \"IpAddressType\": \"DUAL_STACK\", \"Enabled\": true, \"IpSets\": [ { \"IpFamily\": \"IPv4\", \"IpAddresses\": [ \"75.2.42.125\", \"99.83.132.135\" ], \"IpAddressFamily\": \"IPv4\" }, { \"IpFamily\": \"IPv6\", \"IpAddresses\": [ \"2600:9000:a400:4092:88f3:82e2:e5b2:e686\", \"2600:9000:a516:b4ef:157e:4cbd:7b48:20f1\" ], \"IpAddressFamily\": \"IPv6\" } ], \"DnsName\": \"a099f799900e5b10d.awsglobalaccelerator.com\", 2 \"Status\": \"IN_PROGRESS\", \"CreatedTime\": \"2023-11-13T15:46:40+00:00\", \"LastModifiedTime\": \"2023-11-13T15:46:42+00:00\", \"DualStackDnsName\": \"ac86191ca5121e885.dualstack.awsglobalaccelerator.com\" 3 } }", "aws globalaccelerator create-listener --accelerator-arn 'arn:aws:globalaccelerator::606671647913:accelerator/e35a94dd-391f-4e3e-9a3d-d5ad22a78c71' --port-ranges '[{\"FromPort\":443,\"ToPort\":443}]' --protocol TCP --region us-west-2", "{ \"Listener\": { \"ListenerArn\": \"arn:aws:globalaccelerator::606671647913:accelerator/e35a94dd-391f-4e3e-9a3d-d5ad22a78c71/listener/1f396d40\", \"PortRanges\": [ { \"FromPort\": 443, \"ToPort\": 443 } ], \"Protocol\": \"TCP\", \"ClientAffinity\": \"NONE\" } }", "CLUSTER_1_ENDPOINT_ARN=USD(aws elbv2 describe-load-balancers --query \"LoadBalancers[?DNSName=='abab80a363ce8479ea9c4349d116bce2-6b65e8b4272fa4b5.elb.eu-west-1.amazonaws.com'].LoadBalancerArn\" \\ 1 --region eu-west-1 \\ 2 --output text ) CLUSTER_2_ENDPOINT_ARN=USD(aws elbv2 describe-load-balancers --query \"LoadBalancers[?DNSName=='a1c76566e3c334e4ab7b762d9f8dcbcf-985941f9c8d108d4.elb.eu-west-1.amazonaws.com'].LoadBalancerArn\" \\ 3 --region eu-west-1 \\ 4 --output text ) ENDPOINTS='[ { \"EndpointId\": \"'USD{CLUSTER_1_ENDPOINT_ARN}'\", \"Weight\": 128, \"ClientIPPreservationEnabled\": false }, { \"EndpointId\": \"'USD{CLUSTER_2_ENDPOINT_ARN}'\", \"Weight\": 128, \"ClientIPPreservationEnabled\": false } ]' aws globalaccelerator create-endpoint-group --listener-arn 'arn:aws:globalaccelerator::606671647913:accelerator/e35a94dd-391f-4e3e-9a3d-d5ad22a78c71/listener/1f396d40' \\ 5 --traffic-dial-percentage 100 
--endpoint-configurations USD{ENDPOINTS} --endpoint-group-region eu-west-1 \\ 6 --region us-west-2", "{ \"EndpointGroup\": { \"EndpointGroupArn\": \"arn:aws:globalaccelerator::606671647913:accelerator/e35a94dd-391f-4e3e-9a3d-d5ad22a78c71/listener/1f396d40/endpoint-group/2581af0dc700\", \"EndpointGroupRegion\": \"eu-west-1\", \"EndpointDescriptions\": [ { \"EndpointId\": \"arn:aws:elasticloadbalancing:eu-west-1:606671647913:loadbalancer/net/abab80a363ce8479ea9c4349d116bce2/6b65e8b4272fa4b5\", \"Weight\": 128, \"HealthState\": \"HEALTHY\", \"ClientIPPreservationEnabled\": false }, { \"EndpointId\": \"arn:aws:elasticloadbalancing:eu-west-1:606671647913:loadbalancer/net/a1c76566e3c334e4ab7b762d9f8dcbcf/985941f9c8d108d4\", \"Weight\": 128, \"HealthState\": \"HEALTHY\", \"ClientIPPreservationEnabled\": false } ], \"TrafficDialPercentage\": 100.0, \"HealthCheckPort\": 443, \"HealthCheckProtocol\": \"TCP\", \"HealthCheckPath\": \"undefined\", \"HealthCheckIntervalSeconds\": 30, \"ThresholdCount\": 3 } }", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: keycloak spec: hostname: hostname: USDHOSTNAME 1 ingress: enabled: false 2" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/high_availability_guide/deploy-aws-accelerator-loadbalancer-
21.3. Defining sudo Rules
21.3. Defining sudo Rules sudo rules are in a sense similar to access control rules: they define users who are granted access, the commands which are within the scope of the rule, and then the target hosts to which the rule applies. In IdM, additional information can be configured in the rule, such as sudoers options and run-as settings, but the basic elements always define who, what (services), and where (hosts). 21.3.1. About External Users sudo rules define four elements: who can do what , where , and as whom . The who is the regular user, and the as whom is the system or other user identity which the user uses to perform tasks. Those tasks are system commands that can be run (or specifically not run) on a target machine. Three of those elements - who, as whom, and where - are identities. They are users. Most of the time, those identities are going to be entities within the IdM domain because there will be overlap between the system users in the environment and the users and hosts belonging to the IdM domain. However, that is not necessarily the case with all identities that a sudo policy may realistically cover. For example, sudo rules could be used to grant root access to member of the IT group in IdM, and that root user is not a user in IdM. Or, for another example, administrators may want to block access to certain hosts that are on a network but are not part of the IdM domain. The sudo rules in Identity Management support the concept of external users - meaning, users which are stored and exist outside of the IdM configuration. Figure 21.1. External Entities When configuring a sudo rule, the user and run-as settings can point to an external identity to be included and evaluated in the sudo rule. 21.3.2. About sudo Options Format The sudo rule can be configured to use any supported sudoers options. The complete list of options is in the sudoers man page. However, the sudo rule configuration in Identity Management does not allow the same formatting as the configuration in the /etc/sudoers file. Specifically, Identity Management does not allow whitespaces in the options parameter, whether it is set in the UI or the CLI. For example, in the /etc/sudoers file, it is permissible to list options in a comma-separated list with spaces between: However, in Identity Management, that same configuration would be interpreted as different arguments - including the equals sign (=) since it has spaces around it. Instead, each option must be added individually, either through the UI or the command-line tools. Likewise, linebreaks that are ignored in the /etc/sudoers file are not allowed in the Identity Management configuration. For example, the same command in the IdM command line has all of the variables on one line and no spaces around the equals sign. To use multiple sudoers options in Identity Management, configure each one as a separate option setting, rather than all on one line. 21.3.3. Defining sudo Rules in the Web UI Click the Policy tab. Click the Sudo subtab, and then select the Sudo Rules link. Click the Add link at the top of the list of sudo rules. Enter the name for the rule. Click the Add and Edit button to go immediately to set the configuration for the rule. There are a number of configuration areas for the rule. The most basic elements are set in the Who , Access This Host , and Run Commands areas; the others are optional and are used to refine the rule. Optional. In the Options area, add any sudoers options. The complete list of options is in the sudoers man page. 
Note As described in Section 21.3.2, "About sudo Options Format" , do not use options with whitespace in the values. Rather than adding a list of options in one line, add a single option setting for each desired option. Click the + Add link at the right of the options list. Enter the sudoers option. Click Add . In the Who area, select the users or user groups to which the sudo rule is applied. Click the + Add link at the right of the users list. Click the checkbox by the users to add to the rule, and click the right arrows button, >> , to move the users to the selection box. Click Add . It is possible to configure both IdM users and external system users ( Section 21.3.1, "About External Users" ). In the Access This Host area, select the hosts on which the sudo rule is in effect. Click the + Add link at the right of the hosts list. Click the checkbox by the hosts to include with the rule, and click the right arrows button, >> , to move the hosts to the selection box. Click Add . In the Run Commands area, select the commands which are included in the sudo rule. The sudo rule can grant access or deny access to commands, and it can allow access to one command and also deny access to another. In the Allow/Deny area, click the + Add link at the right of the commands list. Click the checkbox by the commands or command groups to include with the rule, and click the right arrows button, >> , to move the commands to the selection box. Click Add . Optional. The sudo rule can be configured to run the given commands as a specific, non-root user. In the As Whom area, click the + Add link at the right of the users list. Click the checkbox by the users to run the command as, and click the right arrows button, >> , to move the users to the selection box. Click Add . 21.3.4. Defining sudo Rules in the Command Line Each element is added to the rule command using a different command (listed in Table 21.1, "sudo Commands" ). The basic outline of a sudo rule command is: Example 21.1. Creating Basic sudo Rules In the most basic case, the sudo configuration is going to grant the right to one user for one command on one host. The first step is to add the initial rule entry. Next, add the commands to grant access to. This can be a single command, using --sudocmds , or a group of commands, using --sudocmdgroups . Add a host or a host group to the rule. Last, add the user or group to the rule. This is the user who is allowed to use sudo as defined in the rule; if no "run-as" user is given, then this user will run the sudo commands as root. Example 21.2. Allowing and Denying Commands The sudo rule can grant access or deny access to commands. For example, this rule would allow read access to files but prevent editing: Example 21.3. Using sudoers Options The sudoers file has a lot of potential flags that can be set to control the behavior of sudo users, like requiring (or not requiring) passwords for a user to authenticate to sudo or using fully-qualified domain names in the sudoers file. The complete list of options is in the sudoers man page. Any of these options can be set for the IdM sudo rule using the sudorule-add-option command. When the command is run, it prompts for the option to add: Note As described in Section 21.3.2, "About sudo Options Format" , do not use options with whitespace in the values. Rather than adding a list of options in one line, add a single option setting for each desired option. Example 21.4.
Running as Other Users The sudo rule also has the option of specifying a non-root user or group to run the commands as. The initial rule has the user or group specified using the sudorule-add-runasuser or sudorule-add-runasgroup command, respectively. When creating a rule, the sudorule-add-runasuser or sudorule-add-runasgroup command can only set specific users or groups. However, when editing a rule, it is possible to run sudo as all users or all groups by using the --runasusercat or --runasgroupcat option. For example: Note The sudorule-add-runasuser and sudorule-add-runasgroup commands do not support an all option, only specific user or group names. Specifying all users or all groups is only possible with the options of the sudorule-mod command. Example 21.5. Referencing External Users The "who" in a sudo rule can be an IdM user, but there are many logical and useful rules where one of the referents is a system user. Similarly, a rule may need to grant or deny access to a host machine on the network which is not an IdM client. In those cases, the sudo policy can refer to an external user - an identity created and stored outside of IdM ( Section 21.3.1, "About External Users" ). The options to add an external identity to a sudo rule are: --externaluser --runasexternaluser For example: Table 21.1. sudo Commands Command Description sudorule-add Add a sudo rule entry. sudorule-add-user Add a user or a user group to the sudo rule. This user (or every member of the group) is then entitled to sudo any of the commands in the rule. sudorule-add-host Add a target host for the rule. These are the hosts where the users are granted sudo permissions. sudorule-add-runasgroup Set a group to run the sudo commands as. This must be a specific group; to specify all groups, modify the rule using sudorule-mod . sudorule-add-runasuser Set a user to run the sudo commands as. This must be a specific user; to specify all users, modify the rule using sudorule-mod . sudorule-add-allow-command Add a command that users in the rule have sudo permission to run. sudorule-add-deny-command Add a command that users in the rule are explicitly denied sudo permission to run. sudorule-add-option Add a sudoers flag to the sudo rule. sudorule-disable Temporarily deactivate a sudo rule entry. sudorule-enable Activate a previously suspended sudo rule. sudorule-del Remove a sudo rule entirely. Example 21.6. Adding and Modifying a New sudo Rule from the Command Line To allow a specific user group to use sudo with any command on selected servers: Obtain a Kerberos ticket for the admin user or any other user allowed to manage sudo rules. Add a new sudo rule to IdM. Define the who : specify the group of users who will be entitled to use the sudo rule. Define the where : specify the group of hosts where the users will be granted the sudo permissions. Define the what : to allow the users to run any sudo command, add the all command category to the rule. To let the sudo commands be executed as root, do not specify any run-as users or groups. Add the !authenticate sudoers option to specify that the users will not be required to authenticate when using the sudo command. Display the new sudo rule configuration to verify it is correct. 21.3.5. Suspending and Removing sudo Rules Defined sudo rules can either be temporarily deactivated or entirely deleted from the web UI or from the command line. Suspended rules are removed from the ou=sudoers compat tree without a need for a server restart.
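Whether you suspend or remove rules from the web UI or the command line, it can help to first confirm the rule name and whether the rule is currently enabled. A quick check from the command line (a minimal sketch; files-commands is the rule name used in the earlier examples):
ipa sudorule-find
ipa sudorule-show files-commands
The Enabled field in the output shows whether the rule is currently active.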
Suspending and Removing sudo Rules from the Web UI To suspend or completely delete a rule from the web UI, use the Disable or Delete buttons at the top of the list of sudo rules: Figure 21.2. Suspending or Deleting a sudo Rule from the Web UI Suspending and Removing sudo Rules from the Command Line To suspend a rule from the command line, run a command such as the following: To completely delete a rule from the command line, run a command such as the following:
[ "mail_badpass, mail_no_host, mail_no_perms, syslog = local2", "[jsmith@server ~]USD ipa sudorule-add-option readfiles Sudo Option: mail_badpass ----------------------------------------------------- Added option \"mail_badpass\" to Sudo rule \"readfiles\" ----------------------------------------------------- [jsmith@server ~]USD ipa sudorule-add-option readfiles Sudo Option: syslog=local2 ----------------------------------------------------- Added option \"syslog=local2\" to Sudo rule \"readfiles\" -----------------------------------------------------", "env_keep = \"COLORS DISPLAY EDITOR HOSTNAME HISTSIZE INPUTRC KDEDIR LESSSECURE LS_COLORS MAIL PATH PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY\"", "[jsmith@server ~]USD ipa sudorule-add-option readfiles Sudo Option: env_keep=\"COLORS DISPLAY EDITOR HOSTNAME HISTSIZE INPUTRC KDEDIR LESSSECURE LS_COLORS MAIL PATH PS1 PS2 ... XAUTHORITY\"", "ipa sudorule-add* options ruleName", "[jsmith@server ~]USD kinit admin [jsmith@server ~]USD ipa sudorule-add files-commands ----------------------------------- Added sudo rule \"files-commands\" ----------------------------------- Rule name: files-commands Enabled: TRUE", "[jsmith@server ~]USD ipa sudorule-add-allow-command --sudocmds \"/usr/bin/vim\" files-commands Rule name: files-commands Enabled: TRUE sudo Commands: /usr/bin/vim ------------------------- Number of members added 1 -------------------------", "[jsmith@server ~]USD ipa sudorule-add-host --host server.example.com files-commands Rule name: files-commands Enabled: TRUE Hosts: server.example.com sudo Commands: /usr/bin/vim ------------------------- Number of members added 1 -------------------------", "[jsmith@server ~]USD ipa sudorule-add-user --user jsmith files-commands Rule name: files-commands Enabled: TRUE Users: jsmith Hosts: server.example.com sudo Commands: /usr/bin/vim\" ------------------------- Number of members added 1 -------------------------", "[jsmith@server ~]USD kinit admin [jsmith@server ~]USD ipa sudorule-add-allow-command --sudocmds \"/usr/bin/less\" readfiles [jsmith@server ~]USD ipa sudorule-add-allow-command --sudocmds \"/usr/bin/tail\" readfiles [jsmith@server ~]USD ipa sudorule-add-deny-command --sudocmds \"/usr/bin/vim\" readfiles", "[jsmith@server ~]USD kinit admin [jsmith@server ~]USD ipa sudorule-add-option readfiles Sudo Option: !authenticate ----------------------------------------------------- Added option \"!authenticate\" to Sudo rule \"readfiles\" -----------------------------------------------------", "ipa sudorule-add-runasuser --users=jsmith readfiles ipa sudorule-add-runasgroup --groups=ITadmins readfiles", "ipa sudorule-mod --runasgroupcat=all ruleName", "ipa sudorule-add-user --externaluser=ITAdmin readfiles ipa sudorule-add-runasuser --runasexternaluser=root readfiles", "kinit admin Password for [email protected]:", "ipa sudorule-add new_sudo_rule --desc=\"Rule for user_group\" --------------------------------- Added Sudo Rule \"new_sudo_rule\" --------------------------------- Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE", "ipa sudorule-add-user new_sudo_rule --groups=user_group Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE User Groups: user_group ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add-host new_sudo_rule 
--hostgroups=host_group Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE User Groups: user_group Host Groups: host_group ------------------------- Number of members added 1 -------------------------", "ipa sudorule-mod new_sudo_rule --cmdcat=all ------------------------------ Modified Sudo Rule \"new_sudo_rule\" ------------------------------ Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE Command category: all User Groups: user_group Host Groups: host_group", "ipa sudorule-add-option new_sudo_rule Sudo Option: !authenticate ----------------------------------------------------- Added option \"!authenticate\" to Sudo Rule \"new_sudo_rule\" ----------------------------------------------------- Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE Command category: all User Groups: user_group Host Groups: host_group Sudo Option: !authenticate", "ipa sudorule-show new_sudo_rule Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE Command category: all User Groups: user_group Host Groups: host_group Sudo Option: !authenticate", "ipa sudorule-disable files-commands", "ipa sudorule-del files-commands" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/defining-sudorules
probe::netdev.set_promiscuity
probe::netdev.set_promiscuity Name probe::netdev.set_promiscuity - Called when the device enters/leaves promiscuity Synopsis netdev.set_promiscuity Values dev_name The device that is entering/leaving promiscuity mode enable If the device is entering promiscuity mode inc Count the number of promiscuity openers disable If the device is leaving promiscuity mode
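As a brief illustration of how this probe point can be used (a sketch only, assuming the systemtap package and matching kernel debuginfo are installed), the following one-liner prints the documented values each time a device's promiscuity changes:
stap -e 'probe netdev.set_promiscuity { printf("%s: enable=%d disable=%d inc=%d\n", dev_name, enable, disable, inc) }'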
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-netdev-set-promiscuity
Chapter 3. Attaching a host-based subscription to hypervisors
Chapter 3. Attaching a host-based subscription to hypervisors Use this procedure to attach a host-based subscription, such as Red Hat Enterprise Linux for Virtual Datacenters , to hypervisors that are already registered to Red Hat Satellite. Note This procedure is only valid if you have Simple Content Access (SCA) disabled on Satellite. You are not required to manually attach the subscriptions if you have SCA enabled. Note that SCA is enabled by default for newly created organizations. To learn more about SCA, see Simple Content Access . To register a new hypervisor, ensure your host activation key includes a host-based subscription, then see Registering Hosts to Satellite in Managing Hosts . You must register a hypervisor before configuring virt-who to query it. Prerequisites Import a Subscription Manifest that includes a host-based subscription into Satellite Server. Ensure you have sufficient entitlements for the host-based subscription to cover all of the hypervisors you plan to use. Procedure In the Satellite web UI, navigate to Hosts > Content Hosts . In the Content Hosts list, select the check box to the name of each hypervisor you want to attach the subscription to. From the Select Action list, select Manage Subscriptions . In the Content Host Bulk Subscriptions window, select a host-based subscription, then click Add Selected . For CLI users On Satellite Server, list the available subscriptions to find the host-based subscription's ID: Attach the host-based subscription to a hypervisor: Repeat these steps for each hypervisor you plan to use.
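For a larger number of hypervisors, the two hammer commands can be combined in a small shell loop. This is only a sketch: the subscription ID and hypervisor host names below are placeholders that you must replace with values from your own environment.
# Hypothetical values; take the real ID from `hammer subscription list`
SUBSCRIPTION_ID=123
for HYPERVISOR in hypervisor01.example.com hypervisor02.example.com; do
  hammer host subscription attach --host "${HYPERVISOR}" --subscription-id "${SUBSCRIPTION_ID}"
done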
[ "hammer subscription list --organization-id organization_id", "hammer host subscription attach --host host_name --subscription-id subscription_id" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/configuring_virtual_machine_subscriptions/attaching-a-host-based-subscription-to-hypervisors_vm-subs-satellite
Part I. New Features
Part I. New Features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 6.10.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_release_notes/part-red_hat_enterprise_linux-6.10_release_notes-new-features
Chapter 3. Automatic starting of a dynamic JFR recording
Chapter 3. Automatic starting of a dynamic JFR recording When the Cryostat agent is enabled to start JFR recordings and the custom trigger condition for a dynamic recording is met, the Cryostat agent automatically starts the recording from within the target application. The Cryostat agent automatically assigns a name to the JFR recording, which is always in a cryostat-smart-trigger- X format, where X represents the recording ID. The JVM automatically generates the recording ID, which is an incremental numeric value that is unique for each JFR recording that is started within the JVM. When the Cryostat agent starts a dynamic JFR recording, you can subsequently view this recording in the Active Recordings tab in the Cryostat web console. For more information about using the Active Recordings tab, see Creating a JFR recording with Cryostat .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/enabling_dynamic_jfr_recordings_based_on_mbean_custom_triggers/con_automatic-starting-of-dynamic-recording_cryostat
Chapter 12. Red Hat Process Automation Manager decision and process engines with Fuse on Apache Karaf
Chapter 12. Red Hat Process Automation Manager decision and process engines with Fuse on Apache Karaf Apache Karaf is a standalone open-source runtime environment. It is based on the OSGi standard from the OSGi Alliance. Karaf provides support for modularisation through OSGi bundles with sophisticated class-loading support. You can deploy multiple versions of a dependency side by side in a Karaf container. You can use hot code swapping to upgrade or replace a module without shutting down the container. Red Hat Process Automation Manager integration with Fuse on Karaf is provided through Karaf features. You can install individual components of Red Hat Process Automation Manager for Fuse on Karaf using these features. Features files are XML files that specify which OSGI bundles are installed for a particular feature. The following features XML files facilitate Red Hat Process Automation Manager and Fuse on Karaf integration: rhba-features-<FUSE-VERSION>-features.xml This file is a part of Fuse installed in Karaf where <FUSE-VERSION> is the version of Fuse. This file is stored in the Karaf system repository, in the system/org/jboss/fuse/features/rhba-features directory. This file contains prerequisites for installing Red Hat Process Automation Manager features. kie-karaf-features-7.67.0.Final-redhat-00024-features-fuse.xml This file is a part of Red Hat Process Automation Manager and provides Red Hat Process Automation Manager features, which define the OSGi features that can be deployed into Red Hat Fuse. OSGi users can install features from this file to install Red Hat Process Automation Manager into Fuse and use it in their applications. You can find this features file in the online and offline Maven repository that is distributed with Red Hat Process Automation Manager. The group ID, artifact ID, and version (GAV) identifier of this file is org.kie:kie-karaf-features:7.67.0.Final-redhat-00024 . 12.1. Uninstalling obsolete Red Hat Process Automation Manager features XML files on Karaf If your installation contains older versions of the Red Hat Process Automation Manager features XML files (for example, kie-karaf-features-<VERSION>-features.xml ), you must remove these files and all associated files before installing the most recent features XML files. Prerequisites Obsolete features XML files exist in your Apache Karaf installation. Procedure Enter the following commands to determine whether your installation contains obsolete Red Hat Process Automation Manager features XML files: Enter the following command, where <FUSE_HOME> is the Fuse installation directory, to start the Red Hat Fuse console: Enter the following command, where <FEATURE_NAME> is the name of the feature that you want to uninstall, to uninstall features or applications that use obsolete features XML files: The following example shows how to remove features: Search Karaf home for references to bundles that use drools , kie , or jbpm . The following example shows how to use grep to search for these components: The example shows the output from these commands: Enter the following command, where BUNDLE_ID is a bundle ID returned in the search, to remove the bundles found in the step: Enter the following command to remove the obsolete drools-karaf-features URL: Restart Fuse. 12.2. Installing Red Hat Process Automation Manager features on Karaf using XML files You can install Red Hat Process Automation Manager features on Karaf to create a dynamic runtime environment for your Red Hat Process Automation Manager processes. 
Prerequisites A Red Hat Fuse installation in an Apache Karaf container is available. For information about installing Fuse in Apache Karaf, see Installing Red Hat Fuse on the Apache Karaf container . You have removed any obsolete Red Hat Process Automation Manager features XML files as described in Section 12.1, "Uninstalling obsolete Red Hat Process Automation Manager features XML files on Karaf" . Procedure To install Red Hat Process Automation Manager features, enter the following command: Note Use org.drools.osgi.spring.OsgiKModuleBeanFactoryPostProcessor instead of org.kie.spring.KModuleBeanFactoryPostProcessor to postprocess KIE elements in an OSGi environment. Do not install the drools-module feature before the kie-spring feature. If you do, the drools-compiler bundle will not detect packages exported by kie-spring . If you install the features in the incorrect order, run osgi:refresh drools-compiler_bundle_ID to force the drools-compiler to rebuild its Import-Package metadata. In this command, <FEATURE_NAME> is one of the features listed in Section 12.4, "Red Hat Process Automation Manager Karaf features" . 12.3. Installing Red Hat Process Automation Manager features on Karaf through maven Install Red Hat Process Automation Manager with Fuse on Apache Karaf to deploy integrated services where required. Prerequisites A Red Hat Fuse 7.12 on Apache Karaf installation exists. For installation instructions, see Installing Red Hat Fuse on the Apache Karaf container . Any obsolete features XML files have been removed, as described in Section 12.1, "Uninstalling obsolete Red Hat Process Automation Manager features XML files on Karaf" . Procedure To configure the Maven repository, open the FUSE_HOME/etc/org.ops4j.pax.url.mvn.cfg file in a text editor. Make sure that the https://maven.repository.redhat.com/ga/ repository is present in the org.ops4j.pax.url.mvn.repositories variable and add it if necessary. Note Separate entries in the org.ops4j.pax.url.mvn.repositories variable with a comma, space, and backslash ( , \ ). The backslash forces a new line. To start Fuse, enter the following command, where FUSE_HOME is the Fuse installation directory: To add a reference to the features file that contains installation prerequisites, enter the following command, where <FUSE_VERSION is the version of Fuse that you are installing: Enter the following command to add a reference to the Red Hat Process Automation Manager features XML file: To see the current drools-karaf-features version, see the Red Hat Process Automation Manager 7 Supported Configurations page. Enter the following command to install a feature provided by Red Hat Process Automation Manager features XML file. In this command, <FEATURE_NAME> is one of the features listed in Section 12.4, "Red Hat Process Automation Manager Karaf features" . Enter the following command to verify the installation: Successfully installed features have the status started . 12.4. Red Hat Process Automation Manager Karaf features The following table lists Red Hat Process Automation Manager Karaf features. Feature Description drools-module Contains the core and compiler of Drools, used to create KIE bases and KIE sessions from plain DRL. It also contains the implementation of the executable model. Uses Drools for rules evaluation, without requiring persistence, processes, or decision tables. drools-template Contains the Drools templates. 
drools-jpa Uses Drools for rules evaluation with persistence and transactions, but without requiring processes or decision tables. The drools-jpa feature includes the drools-module . However, you might also need to install the droolsjbpm-hibernate feature or ensure that a compatible hibernate bundle is installed. drools-decisiontable Uses Drools with decision tables. jbpm Uses jBPM. The jbpm feature includes the drools-module and drools-jpa . You might need to install the droolsjbpm-hibernate feature, or ensure that a compatible hibernate bundle is installed. jbpm and jbpm-human-task Uses jBPM with human tasks. jbpm-workitems-camel Provides the jbpm-workitems-camel component. Core engine JARs and kie-ci Uses Red Hat Process Automation Manager with the KIE scanner ( kie-ci ) to download kJARs from a Maven repository. kie-camel Provides the kie-camel component, an Apache Camel endpoint that integrates Fuse with Red Hat Process Automation Manager. kie-spring Installs the kie-spring component that enables you to configure listeners to KIE sessions using XML tags.
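For example, a typical console session that respects the ordering note above (installing kie-spring before any feature that pulls in drools-module, such as jbpm) might look like the following. This is a sketch only; the feature names are taken from the table above and the exact prompt depends on your Fuse version.
JBossFuse:karaf@root> feature:install kie-spring
JBossFuse:karaf@root> feature:install jbpm
JBossFuse:karaf@root> feature:list | grep -i kie
Successfully installed features are listed with the status started.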
[ "JBossFuse:karaf@root> feature:repo-list JBossFuse:karaf@root> feature:list", "./<FUSE_HOME>/bin/fuse", "JBossFuse:karaf@root> features:uninstall <FEATURE_NAME>", "JBossFuse:karaf@root> features:uninstall drools-module JBossFuse:karaf@root> features:uninstall jbpm JBossFuse:karaf@root> features:uninstall kie-ci", "karaf@root> list -t 0 -s | grep drools karaf@root> list -t 0 -s | grep kie karaf@root> list -t 0 -s | grep jbpm", "250 โ”‚ Active โ”‚ 80 โ”‚ 7.19.0.201902201522 โ”‚ org.drools.canonical-model 251 โ”‚ Active โ”‚ 80 โ”‚ 7.19.0.201902201522 โ”‚ org.drools.cdi 252 โ”‚ Active โ”‚ 80 โ”‚ 7.19.0.201902201522 โ”‚ org.drools.compiler", "karaf@root> osgi:uninstall BUNDLE_ID", "karaf@root> features:removeurl mvn:org.kie/kie-karaf-features/VERSION.Final-redhat-VERSION/xml/features", "JBossFuse:karaf@root> feature:install <FEATURE_NAME>", "./FUSE_HOME/bin/fuse", "feature:repo-add mvn:org.jboss.fuse.features/rhba-features/<FUSE-VERSION>/xml/features", "JBossFuse:karaf@root> features:addurl mvn:org.kie/kie-karaf-features/VERSION/xml/features-fuse", "JBossFuse:karaf@root> features:install <FEATURE_NAME>", "JBossFuse:karaf@root>feature:list" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/ba-engines-karaf-con
Part II. Integrating Red Hat Fuse with Red Hat Decision Manager
Part II. Integrating Red Hat Fuse with Red Hat Decision Manager As a system administrator, you can integrate Red Hat Decision Manager with Red Hat Fuse on Red Hat JBoss Enterprise Application Platform to facilitate communication between integrated services.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/integrating_red_hat_decision_manager_with_other_products_and_components/assembly-integrating-fuse
Chapter 2. Training a model
Chapter 2. Training a model RHEL AI can use your taxonomy tree and synthetic data to create a newly trained model with your domain-specific knowledge or skills using multi-phase training and evaluation. You can run the full training and evaluation process using the synthetic dataset you generated. The LAB optimized technique of multi-phase training is a type of LLM training that goes through multiple stages of training and evaluation. In these various stages, RHEL AI runs the training process and produces model checkpoints. The best checkpoint is selected for the phase. This process creates many checkpoints and selects the best scored checkpoint. This best scored checkpoint is your newly trained LLM. The entire process creates a newly generated model that is trained and evaluated using the synthetic data from your taxonomy tree. 2.1. Training the model on your data You can use Red Hat Enterprise Linux AI to train a model with your synthetically generated data. The following procedures show how to do this using the LAB multi-phase training strategy. Important Red Hat Enterprise Linux AI general availability does not support training and inference serving at the same time. If you have an inference server running, you must close it before you start the training process. Prerequisites You installed RHEL AI with the bootable container image. You downloaded the granite-7b-starter model. You created a custom qna.yaml file with knowledge data. You ran the synthetic data generation (SDG) process. You downloaded the prometheus-8x7b-v2-0 judge model. You have root user access on your machine. Procedure You can run multi-phase training and evaluation by running the following command with the data files generated from SDG. Note You can use the --enable-serving-output flag with the ilab model train commmand to display the training logs. USD ilab model train --strategy lab-multiphase \ --phased-phase1-data ~/.local/share/instructlab/datasets/<generation-date>/<knowledge-train-messages-jsonl-file> \ --phased-phase2-data ~/.local/share/instructlab/datasets/<generation-date>/<skills-train-messages-jsonl-file> where <generation-date> The date of when you ran Synthetic Data Generation (SDG). <knowledge-train-messages-file> The location of the knowledge_messages.jsonl file generated during SDG. RHEL AI trains the student model granite-7b-starter using the data from this .jsonl file. Example path: ~/.local/share/instructlab/datasets/2024-09-07_194933/knowledge_train_msgs_2024-09-07T20_54_21.jsonl . <skills-train-messages-file> The location of the skills_messages.jsonl file generated during SDG. RHEL AI trains the student model granite-7b-starter using the data from the .jsonl file. Example path: ~/.local/share/instructlab/datasets/2024-09-07_194933/skills_train_msgs_2024-09-07T20_54_21.jsonl . Note You can use the --strategy lab-skills-only value to train a model on skills only. Example skills only training command: USD ilab model train --strategy lab-skills-only --phased-phase2-data ~/.local/share/instructlab/datasets/<skills-train-messages-jsonl-file> The first phase trains the model using the synthetic data from your knowledge contribution. Example output of training knowledge Training Phase 1/2... 
TrainingArgs for current phase: TrainingArgs(model_path='/opt/app-root/src/.cache/instructlab/models/granite-7b-starter', chat_tmpl_path='/opt/app-root/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py', data_path='/tmp/jul19-knowledge-26k.jsonl', ckpt_output_dir='/tmp/e2e/phase1/checkpoints', data_output_dir='/opt/app-root/src/.local/share/instructlab/internal', max_seq_len=4096, max_batch_len=55000, num_epochs=2, effective_batch_size=128, save_samples=0, learning_rate=2e-05, warmup_steps=25, is_padding_free=True, random_seed=42, checkpoint_at_epoch=True, mock_data=False, mock_data_len=0, deepspeed_options=DeepSpeedOptions(cpu_offload_optimizer=False, cpu_offload_optimizer_ratio=1.0, cpu_offload_optimizer_pin_memory=False, save_samples=None), disable_flash_attn=False, lora=LoraOptions(rank=0, alpha=32, dropout=0.1, target_modules=('q_proj', 'k_proj', 'v_proj', 'o_proj'), quantize_data_type=<QuantizeDataType.NONE: None>)) Then, RHEL AI selects the best checkpoint to use for the phase. The phase trains the model using the synthetic data from the skills data. Example output of training skills Training Phase 2/2... TrainingArgs for current phase: TrainingArgs(model_path='/tmp/e2e/phase1/checkpoints/hf_format/samples_52096', chat_tmpl_path='/opt/app-root/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py', data_path='/usr/share/instructlab/sdg/datasets/skills.jsonl', ckpt_output_dir='/tmp/e2e/phase2/checkpoints', data_output_dir='/opt/app-root/src/.local/share/instructlab/internal', max_seq_len=4096, max_batch_len=55000, num_epochs=2, effective_batch_size=3840, save_samples=0, learning_rate=2e-05, warmup_steps=25, is_padding_free=True, random_seed=42, checkpoint_at_epoch=True, mock_data=False, mock_data_len=0, deepspeed_options=DeepSpeedOptions(cpu_offload_optimizer=False, cpu_offload_optimizer_ratio=1.0, cpu_offload_optimizer_pin_memory=False, save_samples=None), disable_flash_attn=False, lora=LoraOptions(rank=0, alpha=32, dropout=0.1, target_modules=('q_proj', 'k_proj', 'v_proj', 'o_proj'), quantize_data_type=<QuantizeDataType.NONE: None>)) Then, RHEL AI evaluates all of the checkpoints from phase 2 model training using the Multi-turn Benchmark (MT-Bench) and returns the best performing checkpoint as the fully trained output model. Example output of evaluating skills MT-Bench evaluation for Phase 2... Using gpus from --gpus or evaluate config and ignoring --tensor-parallel-size configured in serve vllm_args INFO 2024-08-15 10:04:51,065 instructlab.model.backends.backends:437: Trying to connect to model server at http://127.0.0.1:8000/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.vllm:208: vLLM starting up on pid 79388 at http://127.0.0.1:54265/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.backends:450: Starting a temporary vLLM server at http://127.0.0.1:54265/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 1/300 INFO 2024-08-15 10:04:58,003 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 2/300 INFO 2024-08-15 10:05:02,314 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 3/300 moment... 
Attempt: 3/300 INFO 2024-08-15 10:06:07,611 instructlab.model.backends.backends:472: vLLM engine successfully started at http://127.0.0.1:54265/v1 After training is complete, a confirmation appears and displays your best performing checkpoint. Example output of a complete multi-phase training run Make a note of this checkpoint because the path is necessary for evaluation and serving. Verification When training a model with ilab model train , multiple checkpoints are saved with the samples_ prefix based on how many data points they have been trained on. These are saved to the ~/.local/share/instructlab/phase/ directory. USD ls ~/.local/share/instructlab/phase/<phase1-or-phase2>/checkpoints/ Example output of the new models samples_1711 samples_1945 samples_1456 samples_1462 samples_1903 2.1.1. Continuing or restarting a training run RHEL AI allows you to continue a training run that may have failed during multi-phase training. There are a few ways a training run can fail: The vLLM server may not start correctly. An accelerator or GPU may freeze, causing training to abort. There may be an error in your InstructLab config.yaml file. When you run multi-phase training for the first time, the initial training data gets saved into a journalfile.yaml file. If necessary, the metadata in this file can be used to restart a failed training run. You can also restart a training run, which clears the training data, by following the CLI prompts when running multi-phase training. Prerequisites You ran multi-phase training with your synthetic data and the run failed. Procedure Run the multi-phase training command again. USD ilab model train --strategy lab-multiphase \ --phased-phase1-data ~/.local/share/instructlab/datasets/<generation-date>/<knowledge-train-messages-jsonl-file> \ --phased-phase2-data ~/.local/share/instructlab/datasets/<generation-date>/<skills-train-messages-jsonl-file> The Red Hat Enterprise Linux AI CLI checks whether the journalfile.yaml file exists and continues the training run from that point. The CLI prompts you to either continue the training run or start from the beginning. Type n in your shell to continue from your previous training run. Metadata (checkpoints, the training journal) may have been saved from a previous training run. By default, training will resume from this metadata if it exists Alternatively, the metadata can be cleared, and training can start from scratch Would you like to START TRAINING FROM THE BEGINNING? n Type y into the terminal to restart a training run. Metadata (checkpoints, the training journal) may have been saved from a previous training run. By default, training will resume from this metadata if it exists Alternatively, the metadata can be cleared, and training can start from scratch Would you like to START TRAINING FROM THE BEGINNING? y Restarting also clears your system's cache of checkpoints, journal files, and other training data.
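If you are deciding whether to resume or restart, it can be useful to first see which checkpoints are still present from the earlier run. A quick check (a sketch; it lists the same checkpoint directories used in the Verification step above):
for PHASE in phase1 phase2; do
  echo "== ${PHASE} =="
  ls ~/.local/share/instructlab/phase/${PHASE}/checkpoints/
done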
[ "ilab model train --strategy lab-multiphase --phased-phase1-data ~/.local/share/instructlab/datasets/<generation-date>/<knowledge-train-messages-jsonl-file> --phased-phase2-data ~/.local/share/instructlab/datasets/<generation-date>/<skills-train-messages-jsonl-file>", "ilab model train --strategy lab-skills-only --phased-phase2-data ~/.local/share/instructlab/datasets/<skills-train-messages-jsonl-file>", "Training Phase 1/2 TrainingArgs for current phase: TrainingArgs(model_path='/opt/app-root/src/.cache/instructlab/models/granite-7b-starter', chat_tmpl_path='/opt/app-root/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py', data_path='/tmp/jul19-knowledge-26k.jsonl', ckpt_output_dir='/tmp/e2e/phase1/checkpoints', data_output_dir='/opt/app-root/src/.local/share/instructlab/internal', max_seq_len=4096, max_batch_len=55000, num_epochs=2, effective_batch_size=128, save_samples=0, learning_rate=2e-05, warmup_steps=25, is_padding_free=True, random_seed=42, checkpoint_at_epoch=True, mock_data=False, mock_data_len=0, deepspeed_options=DeepSpeedOptions(cpu_offload_optimizer=False, cpu_offload_optimizer_ratio=1.0, cpu_offload_optimizer_pin_memory=False, save_samples=None), disable_flash_attn=False, lora=LoraOptions(rank=0, alpha=32, dropout=0.1, target_modules=('q_proj', 'k_proj', 'v_proj', 'o_proj'), quantize_data_type=<QuantizeDataType.NONE: None>))", "Training Phase 2/2 TrainingArgs for current phase: TrainingArgs(model_path='/tmp/e2e/phase1/checkpoints/hf_format/samples_52096', chat_tmpl_path='/opt/app-root/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py', data_path='/usr/share/instructlab/sdg/datasets/skills.jsonl', ckpt_output_dir='/tmp/e2e/phase2/checkpoints', data_output_dir='/opt/app-root/src/.local/share/instructlab/internal', max_seq_len=4096, max_batch_len=55000, num_epochs=2, effective_batch_size=3840, save_samples=0, learning_rate=2e-05, warmup_steps=25, is_padding_free=True, random_seed=42, checkpoint_at_epoch=True, mock_data=False, mock_data_len=0, deepspeed_options=DeepSpeedOptions(cpu_offload_optimizer=False, cpu_offload_optimizer_ratio=1.0, cpu_offload_optimizer_pin_memory=False, save_samples=None), disable_flash_attn=False, lora=LoraOptions(rank=0, alpha=32, dropout=0.1, target_modules=('q_proj', 'k_proj', 'v_proj', 'o_proj'), quantize_data_type=<QuantizeDataType.NONE: None>))", "MT-Bench evaluation for Phase 2 Using gpus from --gpus or evaluate config and ignoring --tensor-parallel-size configured in serve vllm_args INFO 2024-08-15 10:04:51,065 instructlab.model.backends.backends:437: Trying to connect to model server at http://127.0.0.1:8000/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.vllm:208: vLLM starting up on pid 79388 at http://127.0.0.1:54265/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.backends:450: Starting a temporary vLLM server at http://127.0.0.1:54265/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 1/300 INFO 2024-08-15 10:04:58,003 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 2/300 INFO 2024-08-15 10:05:02,314 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 3/300 moment... 
Attempt: 3/300 INFO 2024-08-15 10:06:07,611 instructlab.model.backends.backends:472: vLLM engine successfully started at http://127.0.0.1:54265/v1", "Training finished! Best final checkpoint: samples_1945 with score: 6.813759384", "ls ~/.local/share/instructlab/phase/<phase1-or-phase2>/checkpoints/", "samples_1711 samples_1945 samples_1456 samples_1462 samples_1903", "ilab model train --strategy lab-multiphase --phased-phase1-data ~/.local/share/instructlab/datasets/<generation-date>/<knowledge-train-messages-jsonl-file> --phased-phase2-data ~/.local/share/instructlab/datasets/<generation-date>/<skills-train-messages-jsonl-file>", "Metadata (checkpoints, the training journal) may have been saved from a previous training run. By default, training will resume from this metadata if it exists Alternatively, the metadata can be cleared, and training can start from scratch Would you like to START TRAINING FROM THE BEGINNING? n", "Metadata (checkpoints, the training journal) may have been saved from a previous training run. By default, training will resume from this metadata if it exists Alternatively, the metadata can be cleared, and training can start from scratch Would you like to START TRAINING FROM THE BEGINNING? y" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/generating_a_custom_llm_using_rhel_ai/train_and_eval
3.7. Managing Subject Names and Subject Alternative Names
3.7. Managing Subject Names and Subject Alternative Names The subject name of a certificate is a distinguished name (DN) that contains identifying information about the entity to which the certificate is issued. This subject name can be built from standard LDAP directory components, such as common names and organizational units. These components are defined in X.500. In addition to - or even in place of - the subject name, the certificate can have a subject alternative name , which is a kind of extension set for the certificate that includes additional information that is not defined in X.500. The naming components for both subject names and subject alternative names can be customized. Important If the subject name is empty, then the Subject Alternative Name extension must be present and marked critical. 3.7.1. Using the Requester CN or UID in the Subject Name The cn or uid value from a certificate request can be used to build the subject name of the issued certificate. This section demonstrates a profile that requires the naming attribute (CN or UID) being specified in the Subject Name Constraint to be present in the certificate request. If the naming attribute is missing, the request is rejected. There are two parts to this configuration: The CN or UID format is set in the pattern configuration in the Subject Name Constraint. The format of the subject DN, including the CN or UID token and the specific suffix for the certificate, is set in the Subject Name Default. For example, to use the CN in the subject DN: In this example, if a request comes in with the CN of cn=John Smith , then the certificate will be issued with a subject DN of cn=John Smith,DC=example, DC=com . If the request comes in but it has a UID of uid=jsmith and no CN, then the request is rejected. The same configuration is used to pull the requester UID into the subject DN: The format for the pattern parameter is covered in Section B.2.11, "Subject Name Constraint" and Section B.1.27, "Subject Name Default" . 3.7.2. Inserting LDAP Directory Attribute Values and Other Information into the Subject Alt Name Information from an LDAP directory or that was submitted by the requester can be inserted into the subject alternative name of the certificate by using matching variables in the Subject Alt Name Extension Default configuration. This default sets the type (format) of information and then the matching pattern (variable) to use to retrieve the information. For example: This inserts the requester's email as the first CN component in the subject alt name. To use additional components, increment the Type_ , Pattern_ , and Enable_ values numerically, such as Type_1 . Configuring the subject alt name is detailed in Section B.1.23, "Subject Alternative Name Extension Default" , as well. To insert LDAP components into the subject alt name of the certificate: Inserting LDAP attribute values requires enabling the user directory authentication plug-in, SharedSecret . Open the CA Console. Select Authentication in the left navigation tree. In the Authentication Instance tab, click Add , and add an instance of the SharedSecret authentication plug-in. Enter the following information: Save the new plug-in instance. Note pkiconsole is being deprecated. For information on setting a CMC shared token, see Section 10.4.2, "Setting a CMC Shared Secret" . The ldapStringAttributes parameter instructs the authentication plug-in to read the value of the mail attribute from the user's LDAP entry and put that value in the certificate request. 
When the value is in the request, the certificate profile policy can be set to insert that value for an extension value. The format for the dnpattern parameter is covered in Section B.2.11, "Subject Name Constraint" and Section B.1.27, "Subject Name Default" . To enable the CA to insert the LDAP attribute value in the certificate extension, edit the profile's configuration file, and insert a policy set parameter for an extension. For example, to insert the mail attribute value in the Subject Alternative Name extension in the caFullCMCSharedTokenCert profile, change the following code: For more details about editing a profile, see Section 3.2.1.3, "Editing a Certificate Profile in Raw Format" . Restart the CA. For this example, certificates submitted through the caFullCMCSharedTokenCert profile enrollment form will have the Subject Alternative Name extension added with the value of the requester's mail LDAP attribute. For example: There are many attributes which can be automatically inserted into certificates by being set as a token ( $X$ ) in any of the Pattern_ parameters in the policy set. The common tokens are listed in Table 3.1, "Variables Used to Populate Certificates" , and the default profiles contain examples for how these tokens are used. Table 3.1. Variables Used to Populate Certificates Policy Set Token Description $request.auth_token.cn[0]$ The LDAP common name ( cn ) attribute of the user who requested the certificate. $request.auth_token.mail[0]$ The value of the LDAP email ( mail ) attribute of the user who requested the certificate. $request.auth_token.tokencertsubject$ The certificate subject name. $request.auth_token.uid$ The LDAP user ID ( uid ) attribute of the user who requested the certificate. $request.auth_token.userdn$ The user DN of the user who requested the certificate. $request.auth_token.userid$ The value of the user ID attribute for the user who requested the certificate. $request.uid$ The value of the user ID attribute for the user who requested the certificate. $request.requestor_email$ The email address of the person who submitted the request. $request.requestor_name$ The person who submitted the request. $request.upn$ The Microsoft UPN. This has the format (UTF8String)1.3.6.1.4.1.311.20.2.3,$request.upn$ . $server.source$ Instructs the server to generate a version 4 UUID (random number) component in the subject name. This always has the format (IA5String)1.2.3.4,$server.source$ . $request.auth_token.user$ Used when the request was submitted by TPS. The TPS subsystem trusted manager who requested the certificate. $request.subject$ Used when the request was submitted by TPS. The subject name DN of the entity to which TPS has resolved and requested for. For example, cn=John.Smith.123456789,o=TMS Org 3.7.3. Using the CN Attribute in the SAN Extension Several client applications and libraries no longer support using the Common Name (CN) attribute of the Subject DN for domain name validation, which has been deprecated in RFC 2818 . Instead, these applications and libraries use the dNSName Subject Alternative Name (SAN) value in the certificate request. Certificate System copies the CN only if it matches the preferred name syntax according to RFC 1034 Section 3.5 and has more than one component. Additionally, existing SAN values are preserved. For example, the dNSName value based on the CN is appended to existing SANs.
To configure Certificate System to automatically use the CN attribute in the SAN extension, edit the certificate profile used to issue the certificates. For example: Disable the profile: Edit the profile: Add the following configuration with a unique set number for the profile. For example: The example uses 12 as the set number. Append the new policy set number to the policyset.userCertSet.list parameter. For example: Save the profile. Enable the profile: Note All default server profiles contain the commonNameToSANDefaultImpl default. 3.7.4. Accepting SAN Extensions from a CSR In certain environments, administrators want to allow specifying Subject Alternative Name (SAN) extensions in a Certificate Signing Request (CSR). 3.7.4.1. Configuring a Profile to Retrieve SANs from a CSR To allow retrieving SANs from a CSR, use the User Extension Default. For details, see Section B.1.32, "User Supplied Extension Default" . Note A SAN extension can contain one or more SANs. To accept SANs from a CSR, add the following default and constraint to a profile, such as caCMCECserverCert : 3.7.4.2. Generating a CSR with SANs For example, to generate a CSR with two SANs using the certutil utility: After generating the CSR, follow the steps described in Section 5.5.2, "The CMC Enrollment Process" to complete the CMC enrollment.
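After generating the CSR, it can be helpful to confirm that both SANs were actually encoded before starting the enrollment. A minimal check, assuming openssl is available on the system and the CSR was written in ASCII (PEM) form to /root/request.csr.p10 as in the example; this verification step is an editorial aid, not part of the documented procedure:
$ openssl req -in /root/request.csr.p10 -noout -text | grep -A1 "Subject Alternative Name"
The line following the extension header should list both dNSName values, www.example.com and www.example.org.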
[ "policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl policyset.serverCertSet.1.constraint.name=Subject Name Constraint policyset.serverCertSet.1.constraint.params. pattern=CN=[^,]+,.+ policyset.serverCertSet.1.constraint.params.accept=true policyset.serverCertSet.1.default.class_id=subjectNameDefaultImpl policyset.serverCertSet.1.default.name=Subject Name Default policyset.serverCertSet.1.default.params. name=CN=USDrequest.req_subject_name.cnUSD,DC=example, DC=com", "policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl policyset.serverCertSet.1.constraint.name=Subject Name Constraint policyset.serverCertSet.1.constraint.params. pattern=UID=[^,]+,.+ policyset.serverCertSet.1.constraint.params.accept=true policyset.serverCertSet.1.default.class_id=subjectNameDefaultImpl policyset.serverCertSet.1.default.name=Subject Name Default policyset.serverCertSet.1.default.params. name=UID=USDrequest.req_subject_name.uidUSD,DC=example, DC=com", "policyset.userCertSet.8.default.class_id=subjectAltNameExtDefaultImpl policyset.userCertSet.8.default.name=Subject Alt Name Constraint policyset.userCertSet.8.default.params.subjAltNameExtCritical=false policyset.userCertSet.8.default.params.subjAltExtType_0=RFC822Name policyset.userCertSet.8.default.params.subjAltExtPattern_0=USDrequest.requestor_emailUSD policyset.userCertSet.8.default.params.subjAltExtGNEnable_0=true", "pkiconsole https://server.example.com:8443/ca", "Authentication InstanceID=SharedToken shrTokAttr=shrTok ldap.ldapconn.host= server.example.com ldap.ldapconn.port= 636 ldap.ldapconn.secureConn=true ldap.ldapauth.bindDN= cn=Directory Manager password= password ldap.ldapauth.authtype=BasicAuth ldap.basedn= ou=People,dc=example,dc=org", "policyset.setID.8.default.params. subjAltExtPattern_0=USDrequest.auth_token.mail[0]USD", "systemctl restart pki-tomcatd-nuxwdog@ instance_name .service", "Identifier: Subject Alternative Name - 2.5.29.17 Critical: no Value: RFC822Name: [email protected]", "pki -c password -p 8080 -n \" PKI Administrator for example.com \" ca-profile-disable profile_name", "pki -c password -p 8080 -n \" PKI Administrator for example.com \" ca-profile-edit profile_name", "policyset.serverCertSet.12.constraint.class_id=noConstraintImpl policyset.serverCertSet.12.constraint.name=No Constraint policyset.serverCertSet.12.default.class_id= commonNameToSANDefaultImpl policyset.serverCertSet.12.default.name= Copy Common Name to Subject", "policyset.userCertSet.list=1,10,2,3,4,5,6,7,8,9 ,12", "pki -c password -p 8080 -n \" PKI Administrator for example.com \" ca-profile-enable profile_name", "prefix .constraint.class_id=noConstraintImpl prefix .constraint.name=No Constraint prefix .default.class_id=userExtensionDefaultImpl prefix .default.name=User supplied extension in CSR prefix .default.params.userExtOID=2.5.29.17", "certutil -R -k ec -q nistp256 -d . -s \"cn= Example Multiple SANs \" --extSAN dns: www.example.com ,dns: www.example.org -a -o /root/request.csr.p10" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Managing_Subject_Names_and_Subject_Alternative_Names
6.12. Red Hat Virtualization 4.4 Batch Update 2 (ovirt-4.4.3)
6.12. Red Hat Virtualization 4.4 Batch Update 2 (ovirt-4.4.3) 6.12.1. Bug Fix These bugs were fixed in this release of Red Hat Virtualization: BZ# 1702016 Previously, the Manager allowed adding or migrating hosts configured as self-hosted engine hosts to a data center or cluster other than the one in which the self-hosted engine VM is running, even though all self-hosted engine hosts should be in the same data center and cluster. The hosts' IDs were identical to what they were when initially deployed, causing a Sanlock error. Consequently, the agent failed to start. With this update, an error is raised when adding a new self-hosted engine host or migrating an existing one to a data center or cluster other than the one in which the self-hosted engine is running. To add or migrate a self-hosted engine host to a data center or cluster other than the one in which the self-hosted engine is running, you need to disable the host from being a self-hosted engine host by reinstalling it. Follow these steps in the Administration Portal: Move the host to Maintenance mode. Invoke Reinstall with the Hosted Engine UNDEPLOY option selected. If using the REST API, use the undeploy_hosted_engine parameter. Edit the host and select the target data center and cluster. Activate the host. For details, see the Administration Guide or REST API Guide. BZ# 1760170 Previously, the MAC Pool search functionality failed to find unused addresses. As a result, creating a vNIC failed. In this release, the MAC pool search is now able to locate an unused address in the pool, and all unused addresses are assigned from the pool. BZ# 1808320 Previously, users with specific Data Center or Cluster permissions could not edit the cluster they had access to. In this release, users with specific Data Center or Cluster permissions can edit the cluster they have access to if they don't change the MAC pool associated with the cluster or attempt to add a new MAC pool. BZ# 1821425 Previously, when deploying Self-Hosted Engine, the Appliance size was not estimated correctly, and as a result, not enough space was allotted, and unpacking the Appliance failed. In this release, the Appliance size estimation and unpacking space allotment are correct, and deployment succeeds. BZ# 1835550 Previously, when the RHV Manager requested a listing of available ports from the ovirt-provider-ovn, the implementation was not optimized for scaling scenarios. As a result, in scenarios with many active OVN vNICs on virtual machines, starting a virtual machine using OVN vNICs was slow and sometimes timed out. In this release, the implementation of listing ports has been optimized for scaling, so starting a VM that uses OVN vNICs is quicker, even with many active OVN vNICs. BZ# 1855305 Previously, hot-plugging a disk to a Virtual Machine sometimes failed if the disk was assigned an address that was already assigned to a host-passthrough disk device. In this release, conflicts are avoided by preventing an address that is assigned to a host-passthrough disk device from being assigned to a disk that is hot-plugged to the Virtual Machine. BZ# 1859314 Previously, unicode strings were not handled properly by the rhv-log-collector-analyzer after porting to python3. In this release, unicode strings are now handled properly. BZ# 1866862 Previously, Virtual Machines deployed on AMD EPYC hosts without NUMA enabled sometimes failed to start, with an unsupported configuration error reported. In this release, Virtual Machines start successfully on AMD EPYC hosts.
BZ# 1866981 Previously, unicode strings were not handled properly by the ovirt-engine-db-query after porting to Python3. In this release, unicode strings are now handled properly. BZ# 1871694 Previously, changing a cluster's bios type to UEFI or UEFI+SecureBoot changed the Self-Hosted Engine Virtual Machine that runs within the cluster as well. As a result, the Self-Hosted Engine Virtual Machine failed to reboot upon restart. In this release, the Self-Hosted Engine Virtual Machine is configured with a custom bios type, and does not change if the cluster's bios type changes. BZ# 1871819 Previously, when changes were made in the logical network, the ovn-controller on the host sometimes exceeded the timeout interval during recalculation, and calculation was triggered repeatedly. As a result, OVN networking failed. In this release, recalculation by the ovn-controller is only triggered once per change, and OVN networking is maintained. BZ# 1877632 Previously, when the VDSM was restarted during a Virtual Machine migration on the migration destination host, the VM status wasn't identified correctly. In this release, the VDSM identifies the migration destination status correctly. BZ# 1878005 Previously, when a RHV-H 4.4 host was being prepared as a conversion host for Infrastructure Migration (IMS) using CloudForms 5, installing the v2v-conversion-host-wrapper failed due to a dependency on the missing libcgroup-tools package. The current release fixes this issue. It ships the missing package in the rhvh-4-for-rhel-8-x86_64-rpms repository. 6.12.2. Enhancements This release of Red Hat Virtualization features the following enhancements: BZ# 1613514 This enhancement adds the 'nowait' option to the domain stats to help avoid instances of non-responsiveness in the VDSM. As a result, libvirt now receives the 'nowait' option to avoid non-responsiveness. BZ# 1657294 With this enhancement, the user can change the HostedEngine VM name after deployment. BZ# 1745024 With this enhancement, the Intel Icelake Server Family is now supported in 4.4 and 4.5 compatibility levels. BZ# 1752751 This enhancement enables customization of the columns displayed in the Virtual Machines table of the Administration Portal. Two new columns have been added to the Virtual Machines table - (number of) 'vCPUs', and 'Memory (MB)'. These columns are not displayed by default. A new pop-up menu has been added to the Virtual Machines table that allows you to reset the table column settings, and to add or remove columns from the display. The selected column display settings (column visibility and order) are now persistent on the server by default, and are migrated (uploaded) to the server. This functionality can be disabled in the User > Options popup, by de-selecting the 'Persist grid settings' option. BZ# 1797717 With this enhancement, you can now perform a free text search in the Administration Portal that includes internally defined keywords. BZ# 1812316 With this enhancement, when scheduling a Virtual Machine with pinned NUMA nodes, memory requirements are calculated correctly by taking into account the available memory as well as hugepages allocated on NUMA nodes. BZ# 1828347 Previously, you used Windows Guest Tools to install the required drivers for virtual machines running Microsoft Windows. Now, RHV version 4.4 uses VirtIO-Win to provide these drivers. For clusters with a compatibility level of 4.4 and later, the engine sign of the guest-agent depends on the available VirtIO-Win. 
The auto-attaching of a driver ISO is dropped in favor of Microsoft Windows updates. However, the initial installation needs to be done manually. BZ# 1845397 With this enhancement, the migration transfer speed in the VDSM log is now displayed as Mbps (Megabits per second). BZ# 1854888 This enhancement adds error handling for OVA import and export operations, providing successful detection and reporting to the Red Hat Virtualization Manager if the qemu-img process fails to complete. BZ# 1862968 This enhancement introduces a new configuration parameter, auto_pinning_policy, for automatically setting the CPU and NUMA pinning of a Virtual Machine. This option can be set to existing , using the current topology of the Virtual Machine's CPU, or it can be set to adjust , using the dedicated host CPU topology and changing it according to the Virtual Machine. BZ# 1879280 Default Data Center and Default Cluster, which are created during Red Hat Virtualization installation, are created with 4.5 compatibility level by default in Red Hat Virtualization 4.4.3. Please be aware that compatibility level 4.5 requires RHEL 8.3 with Advanced Virtualization 8.3. 6.12.3. Technology Preview The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to Technology Preview Features Support Scope . BZ# 1361718 This enhancement provides support for attaching an emulated NVDIMM to virtual machines that are backed by NVDIMM on the host machine. For details, see the Virtual Machine Management Guide . 6.12.4. Release Notes This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment. BZ# 1888626 Ansible-2.9.14 is required for proper setup and functioning of Red Hat Virtualization Manager 4.4.3. BZ# 1888628 Ansible-2.9.14 is required for proper setup and functioning of Red Hat Virtualization Manager 4.4.3. 6.12.5. Known Issues These known issues exist in Red Hat Virtualization at this time: BZ# 1886487 RHV-H 4.4.3 is based on RHEL 8.3, which uses a new version of Anaconda (BZ#1691319). This new combination introduces a regression that breaks the features that BZ#1777886 "[RFE] Support minimal storage layout for RHVH" added to RHV-H 4.4 GA. This regression affects only new installations of RHV-H 4.4.3. To work around this issue, first install the RHV-H 4.4 GA ISO and then upgrade the host to RHV-H 4.4.3. 6.12.6. Removed Functionality BZ# 1884146 The ovirt-engine-api-explorer package has been deprecated and removed in Red Hat Virtualization Manager 4.4.3. Customers should use the official REST API Guide instead, which provides the same information as ovirt-engine-api-explorer. See the Red Hat Virtualization REST API Guide .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/release_notes/red_hat_virtualization_4_4_batch_update_2_ovirt_4_4_3
Part II. Managing projects in Business Central
Part II. Managing projects in Business Central As a process administrator, you can use Business Central in Red Hat Process Automation Manager to manage new, sample, and imported projects on a single branch or on multiple branches. Prerequisites Red Hat JBoss Enterprise Application Platform 7.4 is installed. For details, see the Red Hat JBoss Enterprise Application Platform 7.4 Installation Guide . Red Hat Process Automation Manager is installed and configured with KIE Server. For more information, see Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 . Red Hat Process Automation Manager is running and you can log in to Business Central with the developer role. For more information, see Planning a Red Hat Process Automation Manager installation .
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/assembly-managing-projects
7.98. kdebase
7.98. kdebase 7.98.1. RHBA-2012:1371 - kdebase bug fix update Updated kdebase packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The K Desktop Environment (KDE) is a graphical desktop environment for the X Window System. The kdebase packages include core applications for KDE. Bug Fixes BZ#608007 Prior to this update, the Konsole context menu item "Show menu bar" was always checked in new windows even if this menu item was disabled before. This update modifies the underlying code to handle the menu item "Show menu bar" as expected. BZ# 729307 Prior to this update, users could not define a default size for xterm windows when using the Konsole terminal in KDE. This update modifies the underlying code and adds the functionality to define a default size. All users of kdebase are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/kdebase
Chapter 5. Quota management
Chapter 5. Quota management As a cloud administrator, you can set and manage quotas for a project. Each project is allocated resources, and project users are granted access to consume these resources. This enables multiple projects to use a single cloud without interfering with each other's permissions and resources. A set of resource quotas are preconfigured when a new project is created. The quotas include the amount of VCPUs, instances, RAM, and floating IPs that can be assigned to projects. Quotas can be enforced at both the project and the project-user level. You can set or modify Compute and Block Storage quotas for new and existing projects using the dashboard. For more information, see Managing projects . 5.1. Viewing Compute quotas for a user Run the following command to list the currently set quota values for a user. Procedure Example 5.2. Updating compute quotas for a user Run the following commands to update a particular quota value: Example Note To view a list of options for the quota-update command, run: 5.3. Setting Object Storage quotas for a user Object Storage quotas can be classified under the following categories: Container quotas - Limits the total size (in bytes) or number of objects that can be stored in a single container. Account quotas - Limits the total size (in bytes) that a user has available in the Object Storage service. To set either container quotas or the account quotas, the Object Storage proxy server must have the parameters container_quotas or account_quotas (or both) added to the [pipeline:main] section of the proxy-server.conf file: Use the following command to view and update the Object Storage quotas. All users included in a project can view the quotas placed on the project. To update the Object Storage quotas on a project, you must have the role of a ResellerAdmin in the project. To view account quotas: To update quotas: For example, to place a 5 GB quota on an account:
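To update Object Storage quotas on a project, the account used must hold the ResellerAdmin role mentioned above; granting it is normally done with the Identity service CLI. A hedged sketch, where the user and project names are placeholders and the exact role name available depends on how the proxy server is configured in the deployment:
$ openstack role add --user operator --project demo ResellerAdmin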
[ "nova quota-show --user [USER-ID] --tenant [TENANT-ID]", "nova quota-show --user 3b9763e4753843529db15085874b1e84 --tenant a4ee0cbb97e749dca6de584c0b1568a6 +-----------------------------+-------+ | Quota | Limit | +-----------------------------+-------+ | instances | 10 | | cores | 20 | | ram | 51200 | | floating_ips | 5 | | fixed_ips | -1 | | metadata_items | 128 | | injected_files | 5 | | injected_file_content_bytes | 10240 | | injected_file_path_bytes | 255 | | key_pairs | 100 | | security_groups | 10 | | security_group_rules | 20 | | server_groups | 10 | | server_group_members | 10 | +-----------------------------+-------+", "nova quota-update --user [USER-ID] --[QUOTA_NAME] [QUOTA_VALUE] [TENANT-ID] nova quota-show --user [USER-ID] --tenant [TENANT-ID]", "nova quota-update --user 3b9763e4753843529db15085874b1e84 --floating-ips 10 a4ee0cbb97e749dca6de584c0b1568a6 nova quota-show --user 3b9763e4753843529db15085874b1e84 --tenant a4ee0cbb97e749dca6de584c0b1568a6 +-----------------------------+-------+ | Quota | Limit | +-----------------------------+-------+ | instances | 10 | | cores | 20 | | ram | 51200 | | floating_ips | 10 | | ... | | +-----------------------------+-------+", "nova help quota-update", "[pipeline:main] pipeline = catch_errors [...] tempauth container-quotas account-quotas slo dlo proxy-logging proxy-server [filter:account_quotas] use = egg:swift#account_quotas [filter:container_quotas] use = egg:swift#container_quotas", "swift stat Account: AUTH_b36ed2d326034beba0a9dd1fb19b70f9 Containers: 0 Objects: 0 Bytes: 0 Meta Quota-Bytes: 214748364800 X-Timestamp: 1351050521.29419 Content-Type: text/plain; charset=utf-8 Accept-Ranges: bytes", "swift post -m quota-bytes:<BYTES>", "swift post -m quota-bytes:5368709120" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/users_and_identity_management_guide/quota_management
Appendix A. Comparison between Ceph Ansible and Cephadm
Appendix A. Comparison between Ceph Ansible and Cephadm Cephadm is used for the containerized deployment of the storage cluster. The tables compare Cephadm with Ceph-Ansible playbooks for managing the containerized deployment of a Ceph cluster for day one and day two operations. Table A.1. Day one operations Description Ceph-Ansible Cephadm Installation of the Red Hat Ceph Storage cluster Run the site-container.yml playbook. Run cephadm bootstrap command to bootstrap the cluster on the admin node. Addition of hosts Use the Ceph Ansible inventory. Run ceph orch host add HOST_NAME to add hosts to the cluster. Addition of monitors Run the add-mon.yml playbook. Run the ceph orch apply mon command. Addition of managers Run the site-container.yml playbook. Run the ceph orch apply mgr command. Addition of OSDs Run the add-osd.yml playbook. Run the ceph orch apply osd command to add OSDs on all available devices or on specific hosts. Addition of OSDs on specific devices Select the devices in the osd.yml file and then run the add-osd.yml playbook. Select the paths filter under the data_devices in the osd.yml file and then run ceph orch apply -i FILE_NAME .yml command. Addition of MDS Run the site-container.yml playbook. Run the ceph orch apply mds FILESYSTEM_NAME command to add MDS. Addition of Ceph Object Gateway Run the site-container.yml playbook. Run the ceph orch apply rgw commands to add Ceph Object Gateway. Table A.2. Day two operations Description Ceph-Ansible Cephadm Removing hosts Use the Ansible inventory. Run ceph orch host rm HOST_NAME to remove the hosts. Removing monitors Run the shrink-mon.yml playbook. Run ceph orch apply mon to redeploy other monitors. Removing managers Run the shrink-mgr.yml playbook. Run ceph orch apply mgr to redeploy other managers. Removing OSDs Run the shrink-osd.yml playbook. Run ceph orch osd rm OSD_ID to remove the OSDs. Removing MDS Run the shrink-mds.yml playbook. Run ceph orch rm SERVICE_NAME to remove the specific service. Exporting Ceph File System over NFS Protocol. Not supported on Red Hat Ceph Storage 4. Run ceph nfs export create command. Deployment of Ceph Object Gateway Run the site-container.yml playbook. Run ceph orch apply rgw SERVICE_NAME to deploy Ceph Object Gateway service. Removing Ceph Object Gateway Run the shrink-rgw.yml playbook. Run ceph orch rm SERVICE_NAME to remove the specific service. Block device mirroring Run the site-container.yml playbook. Run ceph orch apply rbd-mirror command. Minor version upgrade of Red Hat Ceph Storage Run the infrastructure-playbooks/rolling_update.yml playbook. Run ceph orch upgrade start command. Deployment of monitoring stack Edit the all.yml file during installation. Run the ceph orch apply -i FILE .yml after specifying the services. Additional Resources For more details on using the Ceph Orchestrator, see the Red Hat Ceph Storage Operations Guide .
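Reading the day-one column top to bottom, a typical Cephadm flow looks roughly like the following sketch; the monitor IP, host name, and placement counts are placeholders, and the exact options depend on the cluster design:
# cephadm bootstrap --mon-ip 192.0.2.10          # bootstrap the cluster on the admin node
# ceph orch host add host02                      # add an additional host
# ceph orch apply mon 3                          # place three monitors
# ceph orch apply mgr 2                          # place two managers
# ceph orch apply osd --all-available-devices    # add OSDs on all available devices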
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/installation_guide/comparison-between-ceph-ansible-and-cephadm_install
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the AMQ Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ Streams product. The Software Downloads page opens. Click the Download link for your component. Installing packages with DNF To install a package and all the package dependencies, use: dnf install <package_name> To install a previously-downloaded package from a local directory, use: dnf install <path_to_download_package> Revised on 2024-08-12 11:05:12 UTC
[ "dnf install <package_name>", "dnf install <path_to_download_package>" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/getting_started_with_amq_streams_on_openshift/using_your_subscription
8.232. sysstat
8.232. sysstat 8.232.1. RHBA-2014:1468 - sysstat bug fix and enhancement update Updated sysstat packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The sysstat packages provide a set of utilities which enable system monitoring of disks, network, and other I/O activity. Bug Fixes BZ# 921612 When the sysstat utility appended new statistics to the "sa" daily data files, it did not check whether those files existed from the previous month. Under certain circumstances, for example when a month was short, sysstat appended new statistics to the old files. With this update, the utility has been modified to check whether the old files exist and to remove them if they do. As a result, new statistics are not appended to the old "sa" files. BZ# 1088998 Previously, the "sa2" script did not support the xz compression. As a consequence, old daily data files were not deleted. The support for the xz compression has been added to the script and the data is now deleted as expected. BZ# 1124180 The dynamic ticks kernel feature can currently make the /proc/stat file unreliable because a CPU cannot provide reliable statistics if it is stopped. Even though the kernel is trying to provide the best guess, the statistics are not always accurate. As a consequence, some sysstat commands could show overflowed values. This update detects values going backwards in sysstat, and sysstat commands no longer show overflowed values. In addition, this update adds the following Enhancements BZ# 1102603 With this update, the user is able to set a compression method in the /etc/sysconfig/sysstat configuration file using the ZIP variable. This enhancement provides an easier configuration of the data file compression method. BZ# 1110851 The sysstat(5) manual page now provides documentation of the /etc/sysconfig/sysstat configuration file as well as a detailed description of the HISTORY configuration variable. Users of sysstat are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
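For example, to select xz as the compression method for aged data files, the /etc/sysconfig/sysstat file could contain a line like the following; the exact set of accepted values depends on the sysstat version, so check the sysstat(5) manual page first:
ZIP="xz"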
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/sysstat
Backup and restore
Backup and restore Red Hat Advanced Cluster Security for Kubernetes 4.7 Backing up and restoring Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/backup_and_restore/index
Chapter 9. Migrating your applications
Chapter 9. Migrating your applications You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or the command line . Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. You can use stage migration and cutover migration to migrate an application between clusters: Stage migration copies data from the source cluster to the target cluster without stopping the application. You can run a stage migration multiple times to reduce the duration of the cutover migration. Cutover migration stops the transactions on the source cluster and moves the resources to the target cluster. You can use state migration to migrate an application's state: State migration copies selected persistent volume claims (PVCs). You can use state migration to migrate a namespace within the same cluster. During migration, the MTC preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. 9.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure a Stunnel TCP proxy. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. 9.2. Migrating your applications by using the MTC web console You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan. 9.2.1. Launching the MTC web console You can launch the Migration Toolkit for Containers (MTC) web console in a browser. Prerequisites The MTC web console must have network access to the OpenShift Container Platform web console. The MTC web console must have network access to the OAuth authorization server. Procedure Log in to the OpenShift Container Platform cluster on which you have installed MTC. Obtain the MTC web console URL by entering the following command: $ oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}' The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com .
Launch a browser and navigate to the MTC web console. Note If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. The web page guides you through the process of accepting the remaining certificates. Log in with your OpenShift Container Platform username and password . 9.2.2. Adding a cluster to the MTC web console You can add a cluster to the Migration Toolkit for Containers (MTC) web console. Prerequisites Cross-origin resource sharing must be configured on the source cluster. If you are using Azure snapshots to copy data: You must specify the Azure resource group name for the cluster. The clusters must be in the same Azure resource group. The clusters must be in the same geographic location. If you are using direct image migration, you must expose a route to the image registry of the source cluster. Procedure Log in to the cluster. Obtain the migration-controller service account token: $ oc create token migration-controller -n openshift-migration Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ Log in to the MTC web console. In the MTC web console, click Clusters . Click Add cluster . Fill in the following fields: Cluster name : The cluster name can contain lower-case letters ( a-z ) and numbers ( 0-9 ). It must not contain spaces or international characters. URL : Specify the API server URL, for example, https://<www.example.com>:8443 . Service account token : Paste the migration-controller service account token. Exposed route host to image registry : If you are using direct image migration, specify the exposed route to the image registry of the source cluster. To create the route, run the following command: For OpenShift Container Platform 3: $ oc create route passthrough --service=docker-registry --port=5000 -n default For OpenShift Container Platform 4: $ oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry Azure cluster : You must select this option if you use Azure snapshots to copy your data. Azure resource group : This field is displayed if Azure cluster is selected. Specify the Azure resource group. When an OpenShift Container Platform cluster is created on Microsoft Azure, an Azure Resource Group is created to contain all resources associated with the cluster.
In the Azure CLI, you can display all resource groups by issuing the following command: $ az group list ResourceGroups associated with OpenShift Container Platform clusters are tagged, where sample-rg-name is the value you would extract and supply to the UI: { "id": "/subscriptions/...//resourceGroups/sample-rg-name", "location": "centralus", "name": "...", "properties": { "provisioningState": "Succeeded" }, "tags": { "kubernetes.io_cluster.sample-ld57c": "owned", "openshift_creationDate": "2019-10-25T23:28:57.988208+00:00" }, "type": "Microsoft.Resources/resourceGroups" }, This information is also available from the Azure Portal in the Resource groups blade. Require SSL verification : Optional: Select this option to verify the Secure Socket Layer (SSL) connection to the cluster. CA bundle file : This field is displayed if Require SSL verification is selected. If you created a custom CA certificate bundle file for self-signed certificates, click Browse , select the CA bundle file, and upload it. Click Add cluster . The cluster appears in the Clusters list. 9.2.3. Adding a replication repository to the MTC web console You can add an object storage as a replication repository to the Migration Toolkit for Containers (MTC) web console. MTC supports the following storage providers: Amazon Web Services (AWS) S3 Multi-Cloud Object Gateway (MCG) Generic S3 object storage, for example, Minio or Ceph S3 Google Cloud Provider (GCP) Microsoft Azure Blob Prerequisites You must configure the object storage as a replication repository. Procedure In the MTC web console, click Replication repositories . Click Add repository . Select a Storage provider type and fill in the following fields: AWS for S3 providers, including AWS and MCG: Replication repository name : Specify the replication repository name in the MTC web console. S3 bucket name : Specify the name of the S3 bucket. S3 bucket region : Specify the S3 bucket region. Required for AWS S3. Optional for some S3 providers. Check the product documentation of your S3 provider for expected values. S3 endpoint : Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com> . Required for a generic S3 provider. You must use the https:// prefix. S3 provider access key : Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG and other S3 providers. S3 provider secret access key : Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG and other S3 providers. Require SSL verification : Clear this checkbox if you are using a generic S3 provider. If you created a custom CA certificate bundle for self-signed certificates, click Browse and browse to the Base64-encoded file. GCP : Replication repository name : Specify the replication repository name in the MTC web console. GCP bucket name : Specify the name of the GCP bucket. GCP credential JSON blob : Specify the string in the credentials-velero file. Azure : Replication repository name : Specify the replication repository name in the MTC web console. Azure resource group : Specify the resource group of the Azure Blob storage. Azure storage account name : Specify the Azure Blob storage account name. Azure credentials - INI file contents : Specify the string in the credentials-velero file. Click Add repository and wait for connection validation. Click Close . The new repository appears in the Replication repositories list. 9.2.4.
Creating a migration plan in the MTC web console You can create a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must ensure that the same MTC version is installed on all clusters. You must add the clusters and the replication repository to the MTC web console. If you want to use the move data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume. If you want to use direct image migration, you must specify the exposed route to the image registry of the source cluster. This can be done by using the MTC web console or by updating the MigCluster custom resource manifest. Procedure In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must not exceed 253 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). Select a Source cluster , a Target cluster , and a Repository . Click Next . Select the projects for migration. Optional: Click the edit icon beside a project to change the target namespace. Click Next . Select a Migration type for each PV: The Copy option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster. The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. Click Next . Select a Copy method for each PV: Snapshot copy backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than Filesystem copy . Filesystem copy backs up the files on the source cluster and restores them on the target cluster. The file system copy method is required for direct volume migration. You can select Verify copy to verify data migrated with Filesystem copy . Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance. Select a Target storage class . If you selected Filesystem copy , you can change the target storage class. Click Next . On the Migration options page, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy . The direct migration options copy images and files directly from the source cluster to the target cluster. This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster. Click Next . Optional: Click Add Hook to add a hook to the migration plan. A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step. Enter the name of the hook to display in the web console. If the hook is an Ansible playbook, select Ansible playbook and click Browse to upload the playbook or paste the contents of the playbook in the field. Optional: Specify an Ansible runtime image if you are not using the default hook image.
If the hook is not an Ansible playbook, select Custom container image and specify the image name and path. A custom container image can include Ansible playbooks. Select Source cluster or Target cluster . Enter the Service account name and the Service account namespace . Select the migration step for the hook: preBackup : Before the application workload is backed up on the source cluster postBackup : After the application workload is backed up on the source cluster preRestore : Before the application workload is restored on the target cluster postRestore : After the application workload is restored on the target cluster Click Add . Click Finish . The migration plan is displayed in the Migration plans list. Additional resources for persistent volume copy methods MTC file system copy method MTC snapshot copy method 9.2.5. Running a migration plan in the MTC web console You can migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console. Note During migration, MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup custom resource contains a PVOriginalReclaimPolicy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Prerequisites The MTC web console must contain the following: Source cluster in a Ready state Target cluster in a Ready state Replication repository Valid migration plan Procedure Log in to the MTC web console and click Migration plans . Click the Options menu next to a migration plan and select one of the following options under Migration : Stage copies data from the source cluster to the target cluster without stopping the application. Cutover stops the transactions on the source cluster and moves the resources to the target cluster. Optional: In the Cutover migration dialog, you can clear the Halt transactions on the source cluster during migration checkbox. State copies selected persistent volume claims (PVCs). Important Do not use state migration to migrate a namespace between clusters. Use stage or cutover migration instead. Select one or more PVCs in the State migration dialog and click Migrate . When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volumes are correctly provisioned.
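As noted above, MTC sets the reclaim policy of migrated PVs to Retain on the target cluster. To restore the policy manually after verifying the migration, a standard patch can be used; this is only a sketch, and the PV name and target policy are placeholders you must adjust:
$ oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'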
[ "oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'", "oc create token migration-controller -n openshift-migration", "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ", "oc create route passthrough --service=docker-registry --port=5000 -n default", "oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry", "az group list", "{ \"id\": \"/subscriptions/...//resourceGroups/sample-rg-name\", \"location\": \"centralus\", \"name\": \"...\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": { \"kubernetes.io_cluster.sample-ld57c\": \"owned\", \"openshift_creationDate\": \"2019-10-25T23:28:57.988208+00:00\" }, \"type\": \"Microsoft.Resources/resourceGroups\" }," ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_containers/1.8/html/migration_toolkit_for_containers/migrating-applications-with-mtc
5.8. Multipath Queries with multipath Command
5.8. Multipath Queries with multipath Command You can use the -l and -ll options of the multipath command to display the current multipath configuration. The -l option displays multipath topology gathered from information in sysfs and the device mapper. The -ll option displays the information the -l displays in addition to all other available components of the system. When displaying the multipath configuration, there are three verbosity levels you can specify with the -v option of the multipath command. Specifying -v0 yields no output. Specifying -v1 outputs the created or updated multipath names only, which you can then feed to other tools such as kpartx . Specifying -v2 prints all detected paths, multipaths, and device maps. The following example shows the output of a multipath -l command. The following example shows the output of a multipath -ll command.
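For example, the lower verbosity levels can be combined with other tooling as described above; a small sketch, which is useful only immediately after maps are created or updated:
# multipath -v2                                                            # print all detected paths, multipaths, and device maps
# multipath -v1 | while read name; do kpartx -a /dev/mapper/"$name"; done  # feed the created or updated map names to kpartx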
[ "multipath -l 3600d0230000000000e13955cc3757800 dm-1 WINSYS,SF2372 size=269G features='0' hwhandler='0' wp=rw |-+- policy='round-robin 0' prio=1 status=active | `- 6:0:0:0 sdb 8:16 active ready running `-+- policy='round-robin 0' prio=1 status=enabled `- 7:0:0:0 sdf 8:80 active ready running", "multipath -ll 3600d0230000000000e13955cc3757801 dm-10 WINSYS,SF2372 size=269G features='0' hwhandler='0' wp=rw |-+- policy='round-robin 0' prio=1 status=enabled | `- 19:0:0:1 sdc 8:32 active ready running `-+- policy='round-robin 0' prio=1 status=enabled `- 18:0:0:1 sdh 8:112 active ready running 3600d0230000000000e13955cc3757803 dm-2 WINSYS,SF2372 size=125G features='0' hwhandler='0' wp=rw `-+- policy='round-robin 0' prio=1 status=active |- 19:0:0:3 sde 8:64 active ready running `- 18:0:0:3 sdj 8:144 active ready running" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/multipath_queries
AMQ Clients overview
AMQ Clients overview Red Hat AMQ Clients 2023.Q4 Overview of AMQ Clients suite 2023.Q4
null
https://docs.redhat.com/en/documentation/red_hat_amq_clients/2023.q4/html/amq_clients_overview/index
6.3. NFS
6.3. NFS Red Hat Gluster Storage has two NFS server implementations, Gluster NFS and NFS-Ganesha. Gluster NFS supports only the NFSv3 protocol, whereas NFS-Ganesha supports both the NFSv3 and NFSv4 protocols. Section 6.3.1, "Support Matrix" Section 6.3.2, "Gluster NFS (Deprecated)" Section 6.3.3, "NFS Ganesha" 6.3.1. Support Matrix The following table contains the feature matrix of the NFS support on Red Hat Gluster Storage 3.1 and later: Table 6.5. NFS Support Matrix Features glusterFS NFS (NFSv3) NFS-Ganesha (NFSv3) NFS-Ganesha (NFSv4) Root-squash Yes Yes Yes All-squash No Yes Yes Sub-directory exports Yes Yes Yes Locking Yes Yes Yes Client based export permissions Yes Yes Yes Netgroups Yes Yes Yes Mount protocols UDP, TCP UDP, TCP Only TCP NFS transport protocols TCP UDP, TCP TCP AUTH_UNIX Yes Yes Yes AUTH_NONE Yes Yes Yes AUTH_KRB No Yes Yes ACLs Yes No Yes Delegations N/A N/A No High availability Yes (but with certain limitations. For more information, see "Setting up CTDB for NFS") Yes Yes Multi-head Yes Yes Yes Gluster RDMA volumes Yes Not supported Not supported DRC Not supported Yes Yes Dynamic exports No Yes Yes pseudofs N/A N/A Yes NFSv4.1 N/A N/A Yes Note Red Hat does not recommend running NFS-Ganesha with any other NFS servers, such as kernel-NFS and Gluster NFS servers. Only one of NFS-Ganesha, gluster-NFS or kernel-NFS servers can be enabled on a given machine/host as all NFS implementations use the port 2049 and only one can be active at a given time. Hence you must disable kernel-NFS before NFS-Ganesha is started. 6.3.2. Gluster NFS (Deprecated) Warning Gluster-NFS is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends the use of Gluster-NFS, and does not support its use in new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. Linux and other operating systems that support the NFSv3 standard can use NFS to access the Red Hat Gluster Storage volumes. Note From the Red Hat Gluster Storage 3.2 release onwards, Gluster NFS server will be disabled by default for any new volumes that are created. You can restart Gluster NFS server on the new volumes explicitly if needed. This can be done by running the " mount -t nfs " command on the client as shown below. On any one of the server nodes: However, existing volumes (using Gluster NFS server) will not be impacted even after upgrade to Red Hat Gluster Storage 3.2 and will have implicit enablement of Gluster NFS server. Differences in implementation of the NFSv3 standard in operating systems may result in some operational issues. If issues are encountered when using NFSv3, contact Red Hat support to receive more information on Red Hat Gluster Storage client operating system compatibility, and information about known issues affecting NFSv3. NFS ACL v3 is supported, which allows getfacl and setfacl operations on NFS clients. Access Control Lists (ACLs) are configured in the glusterFS NFS server with the nfs.acl volume option. For example: To set nfs.acl ON, run the following command: To set nfs.acl OFF, run the following command (both commands are sketched below): Note ACL is ON by default. Red Hat Gluster Storage includes Network Lock Manager (NLM) v4. NLM protocol allows NFSv3 clients to lock files across the network. NLM is required to make applications running on top of NFSv3 mount points use the standard fcntl() (POSIX) and flock() (BSD) lock system calls to synchronize access across clients.
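A sketch of the nfs.acl commands referenced above, using the usual gluster volume set form; VOLNAME is a placeholder for the volume name:
# gluster volume set VOLNAME nfs.acl on     # enable NFSv3 ACL support on the volume
# gluster volume set VOLNAME nfs.acl off    # disable NFSv3 ACL support on the volume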
This section describes how to use NFS to mount Red Hat Gluster Storage volumes (both manually and automatically) and how to verify that the volume has been mounted successfully. Important On Red Hat Enterprise Linux 7, enable the firewall service in the active zones for runtime and permanent mode using the following commands: To get a list of active zones, run the following command: To allow the firewall service in the active zones, run the following commands: Section 6.3.2.1, "Setting up CTDB for Gluster NFS (Deprecated) " Section 6.3.2.1.1, "Prerequisites" Section 6.3.2.1.2, "Port and Firewall Information for Gluster NFS" Section 6.3.2.1.3, "Configuring CTDB on Red Hat Gluster Storage Server" Section 6.3.2.2, "Using Gluster NFS to Mount Red Hat Gluster Storage Volumes (Deprecated)" Section 6.3.2.2.1, "Manually Mounting Volumes Using Gluster NFS (Deprecated)" Section 6.3.2.2.2, "Automatically Mounting Volumes Using Gluster NFS (Deprecated)" Section 6.3.2.2.3, "Automatically Mounting Subdirectories Using NFS (Deprecated)" Section 6.3.2.2.4, "Testing Volumes Mounted Using Gluster NFS (Deprecated)" Section 6.3.2.3, "Troubleshooting Gluster NFS (Deprecated)" 6.3.2.1. Setting up CTDB for Gluster NFS (Deprecated) In a replicated volume environment, the CTDB software (Cluster Trivial Database) has to be configured to provide high availability and lock synchronization for Samba shares. CTDB provides high availability by adding virtual IP addresses (VIPs) and a heartbeat service. When a node in the trusted storage pool fails, CTDB enables a different node to take over the virtual IP addresses that the failed node was hosting. This ensures the IP addresses for the services provided are always available. However, locks are not migrated as part of failover. Important On Red Hat Enterprise Linux 7, enable the CTDB firewall service in the active zones for runtime and permanent mode using the below commands: To get a list of active zones, run the following command: To add ports to the active zones, run the following commands: Note Amazon Elastic Compute Cloud (EC2) does not support VIPs and is hence not compatible with this solution. 6.3.2.1.1. Prerequisites Follow these steps before configuring CTDB on a Red Hat Gluster Storage Server: If you already have an older version of CTDB (version <= ctdb1.x), then remove CTDB by executing the following command: After removing the older version, proceed with installing the latest CTDB. Note Ensure that the system is subscribed to the samba channel to get the latest CTDB packages. Install CTDB on all the nodes that are used as NFS servers to the latest version using the following command: CTDB uses TCP port 4379 by default. Ensure that this port is accessible between the Red Hat Gluster Storage servers. 6.3.2.1.2. Port and Firewall Information for Gluster NFS On the GNFS-Client machine, configure firewalld to add ports used by statd, nlm and portmapper services by executing the following commands: Execute the following step on the client and server machines: On Red Hat Enterprise Linux 7, edit /etc/sysconfig/nfs file as mentioned below: Note This step is not applicable for Red Hat Enterprise Linux 8. Restart the services: For Red Hat Enterprise Linux 6: Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. 
See Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide For Red Hat Enterprise Linux 7: Note This step is not applicable for Red Hat Enterprise Linux 8. 6.3.2.1.3. Configuring CTDB on Red Hat Gluster Storage Server To configure CTDB on Red Hat Gluster Storage server, execute the following steps: Create a replicate volume. This volume will host only a zero byte lock file, hence choose minimal sized bricks. To create a replicate volume run the following command: where, N: The number of nodes that are used as Gluster NFS servers. Each node must host one brick. For example: In the following files, replace "all" in the statement META="all" to the newly created volume name For example: Start the volume. As part of the start process, the S29CTDBsetup.sh script runs on all Red Hat Gluster Storage servers, adds an entry in /etc/fstab for the mount, and mounts the volume at /gluster/lock on all the nodes with Gluster NFS server. It also enables automatic start of CTDB service on reboot. Note When you stop the special CTDB volume, the S29CTDB-teardown.sh script runs on all Red Hat Gluster Storage servers and removes an entry in /etc/fstab for the mount and unmounts the volume at /gluster/lock. Verify if the file /etc/sysconfig/ctdb exists on all the nodes that is used as Gluster NFS server. This file contains Red Hat Gluster Storage recommended CTDB configurations. Create /etc/ctdb/nodes file on all the nodes that is used as Gluster NFS servers and add the IPs of these nodes to the file. The IPs listed here are the private IPs of NFS servers. On all the nodes that are used as Gluster NFS server which require IP failover, create /etc/ctdb/public_addresses file and add the virtual IPs that CTDB should create to this file. Add these IP address in the following format: For example: Start the CTDB service on all the nodes by executing the following command: Note CTDB with gNFS only provides node level high availability and is not capable of detecting NFS service failure. Therefore, CTDB does not provide high availability if the NFS service goes down while the node is still up and running. 6.3.2.2. Using Gluster NFS to Mount Red Hat Gluster Storage Volumes (Deprecated) You can use either of the following methods to mount Red Hat Gluster Storage volumes: Note Currently GlusterFS NFS server only supports version 3 of NFS protocol. As a preferred option, always configure version 3 as the default version in the nfsmount.conf file at /etc/nfsmount.conf by adding the following text in the file: In case the file is not modified, then ensure to add vers=3 manually in all the mount commands. RDMA support in GlusterFS that is mentioned in the sections is with respect to communication between bricks and Fuse mount/GFAPI/NFS server. NFS kernel client will still communicate with GlusterFS NFS server over tcp. In case of volumes which were created with only one type of transport, communication between GlusterFS NFS server and bricks will be over that transport type. In case of tcp,rdma volume it could be changed using the volume set option nfs.transport-type . Section 6.3.2.2.1, "Manually Mounting Volumes Using Gluster NFS (Deprecated)" Section 6.3.2.2.2, "Automatically Mounting Volumes Using Gluster NFS (Deprecated)" After mounting a volume, you can test the mounted volume using the procedure described in . Section 6.3.2.2.4, "Testing Volumes Mounted Using Gluster NFS (Deprecated)" 6.3.2.2.1. 
Manually Mounting Volumes Using Gluster NFS (Deprecated) Create a mount point and run the mount command to manually mount a Red Hat Gluster Storage volume using Gluster NFS. If a mount point has not yet been created for the volume, run the mkdir command to create a mount point. Run the correct mount command for the system. For Linux For Solaris Manually Mount a Red Hat Gluster Storage Volume using Gluster NFS over TCP Create a mount point and run the mount command to manually mount a Red Hat Gluster Storage volume using Gluster NFS over TCP. Note glusterFS NFS server does not support UDP. If a NFS client such as Solaris client, connects by default using UDP, the following message appears: requested NFS version or transport protocol is not supported The option nfs.mount-udp is supported for mounting a volume, by default it is disabled. The following are the limitations: If nfs.mount-udp is enabled, the MOUNT protocol needed for NFSv3 can handle requests from NFS-clients that require MOUNT over UDP. This is useful for at least some versions of Solaris, IBM AIX and HP-UX. Currently, MOUNT over UDP does not have support for mounting subdirectories on a volume. Mounting server:/volume/subdir exports is only functional when MOUNT over TCP is used. MOUNT over UDP does not currently have support for different authentication options that MOUNT over TCP honors. Enabling nfs.mount-udp may give more permissions to NFS clients than intended via various authentication options like nfs.rpc-auth-allow , nfs.rpc-auth-reject and nfs.export-dir . If a mount point has not yet been created for the volume, run the mkdir command to create a mount point. Run the correct mount command for the system, specifying the TCP protocol option for the system. For Linux For Solaris 6.3.2.2.2. Automatically Mounting Volumes Using Gluster NFS (Deprecated) Red Hat Gluster Storage volumes can be mounted automatically using Gluster NFS, each time the system starts. Note In addition to the tasks described below, Red Hat Gluster Storage supports Linux, UNIX, and similar operating system's standard method of auto-mounting Gluster NFS mounts. Update the /etc/auto.master and /etc/auto.misc files, and restart the autofs service. Whenever a user or process attempts to access the directory it will be mounted in the background on-demand. Mounting a Volume Automatically using NFS Mount a Red Hat Gluster Storage Volume automatically using NFS at server start. Open the /etc/fstab file in a text editor. Append the following configuration to the fstab file. Using the example server names, the entry contains the following replaced values. Mounting a Volume Automatically using NFS over TCP Mount a Red Hat Gluster Storage Volume automatically using NFS over TCP at server start. Open the /etc/fstab file in a text editor. Append the following configuration to the fstab file. Using the example server names, the entry contains the following replaced values. 6.3.2.2.3. Automatically Mounting Subdirectories Using NFS (Deprecated) The nfs.export-dir and nfs.export-dirs options provide granular control to restrict or allow specific clients to mount a sub-directory. These clients can be authenticated during sub-directory mount with either an IP, host name or a Classless Inter-Domain Routing (CIDR) range. nfs.export-dirs This option is enabled by default. It allows the sub-directories of exported volumes to be mounted by clients without needing to export individual sub-directories. When enabled, all sub-directories of all volumes are exported. 
When disabled, sub-directories must be exported individually in order to mount them on clients. To disable this option for all volumes, run the following command: nfs.export-dir When nfs.export-dirs is set to on , the nfs.export-dir option allows you to specify one or more sub-directories to export, rather than exporting all subdirectories ( nfs.export-dirs on ), or only exporting individually exported subdirectories ( nfs.export-dirs off ). To export certain subdirectories, run the following command: The subdirectory path should be the path from the root of the volume. For example, in a volume with six subdirectories, to export the first three subdirectories, the command would be the following: Subdirectories can also be exported based on the IP address, hostname, or a Classless Inter-Domain Routing (CIDR) range by adding these details in parentheses after the directory path: 6.3.2.2.4. Testing Volumes Mounted Using Gluster NFS (Deprecated) You can confirm that Red Hat Gluster Storage directories are mounting successfully. To test mounted volumes Testing Mounted Red Hat Gluster Storage Volumes Using the command-line, verify the Red Hat Gluster Storage volumes have been successfully mounted. All three commands can be run in the order listed, or used independently to verify a volume has been successfully mounted. Prerequisites Section 6.3.2.2.2, "Automatically Mounting Volumes Using Gluster NFS (Deprecated)" , or Section 6.3.2.2.1, "Manually Mounting Volumes Using Gluster NFS (Deprecated)" Run the mount command to check whether the volume was successfully mounted. Run the df command to display the aggregated storage space from all the bricks in a volume. Move to the mount directory using the cd command, and list the contents. Note The LOCK functionality in NFS protocol is advisory, it is recommended to use locks if the same volume is accessed by multiple clients. 6.3.2.3. Troubleshooting Gluster NFS (Deprecated) Q: The mount command on the NFS client fails with RPC Error: Program not registered. This error is encountered due to one of the following reasons: Q: The rpcbind service is not running on the NFS client. This could be due to the following reasons: Q: The NFS server glusterfsd starts but the initialization fails with nfsrpc- service: portmap registration of program failed error message in the log. Q: The NFS server start-up fails with the message Port is already in use in the log file. Q: The mount command fails with NFS server failed error: Q: The showmount command fails with clnt_create: RPC: Unable to receive error. This error is encountered due to the following reasons: Q: The application fails with Invalid argument or Value too large for defined data type Q: After the machine that is running NFS server is restarted the client fails to reclaim the locks held earlier. Q: The rpc actor failed to complete successfully error is displayed in the nfs.log, even after the volume is mounted successfully. Q: The mount command fails with No such file or directory. Q: The mount command on the NFS client fails with RPC Error: Program not registered . This error is encountered due to one of the following reasons: The NFS server is not running. You can check the status using the following command: The volume is not started. You can check the status using the following command: rpcbind is restarted. 
To check if rpcbind is running, execute the following command: # ps ax| grep rpcbind A: If the NFS server is not running, then restart the NFS server using the following command: If the volume is not started, then start the volume using the following command: If both rpcbind and NFS server is running then restart the NFS server using the following commands: # gluster volume stop VOLNAME # gluster volume start VOLNAME Q: The rpcbind service is not running on the NFS client. This could be due to the following reasons: The portmap is not running. Another instance of kernel NFS server or glusterNFS server is running. A: Start the rpcbind service by running the following command: Q: The NFS server glusterfsd starts but the initialization fails with nfsrpc- service: portmap registration of program failed error message in the log. A: NFS start-up succeeds but the initialization of the NFS service can still fail preventing clients from accessing the mount points. Such a situation can be confirmed from the following error messages in the log file: Start the rpcbind service on the NFS server by running the following command: After starting rpcbind service, glusterFS NFS server needs to be restarted. Stop another NFS server running on the same machine. Such an error is also seen when there is another NFS server running on the same machine but it is not the glusterFS NFS server. On Linux systems, this could be the kernel NFS server. Resolution involves stopping the other NFS server or not running the glusterFS NFS server on the machine. Before stopping the kernel NFS server, ensure that no critical service depends on access to that NFS server's exports. On Linux, kernel NFS servers can be stopped by using either of the following commands depending on the distribution in use: Restart glusterFS NFS server. Q: The NFS server start-up fails with the message Port is already in use in the log file. A: This error can arise in case there is already a glusterFS NFS server running on the same machine. This situation can be confirmed from the log file, if the following error lines exist: In this release, the glusterFS NFS server does not support running multiple NFS servers on the same machine. To resolve the issue, one of the glusterFS NFS servers must be shutdown. Q: The mount command fails with NFS server failed error: A: mount: mount to NFS server '10.1.10.11' failed: timed out (retrying). Review and apply the suggested solutions to correct the issue. Disable name lookup requests from NFS server to a DNS server. The NFS server attempts to authenticate NFS clients by performing a reverse DNS lookup to match host names in the volume file with the client IP addresses. There can be a situation where the NFS server either is not able to connect to the DNS server or the DNS server is taking too long to respond to DNS request. These delays can result in delayed replies from the NFS server to the NFS client resulting in the timeout error. NFS server provides a work-around that disables DNS requests, instead relying only on the client IP addresses for authentication. The following option can be added for successful mounting in such situations: Note Remember that disabling the NFS server forces authentication of clients to use only IP addresses. If the authentication rules in the volume file use host names, those authentication rules will fail and client mounting will fail. NFS version used by the NFS client is other than version 3 by default. glusterFS NFS server supports version 3 of NFS protocol by default. 
In recent Linux kernels, the default NFS version has been changed from 3 to 4. It is possible that the client machine is unable to connect to the glusterFS NFS server because it is using version 4 messages which are not understood by glusterFS NFS server. The timeout can be resolved by forcing the NFS client to use version 3. The vers option to mount command is used for this purpose: # mount nfsserver :export -o vers=3 / MOUNTPOINT Q: The showmount command fails with clnt_create: RPC: Unable to receive error. This error is encountered due to the following reasons: The firewall might have blocked the port. rpcbind might not be running. A: Check the firewall settings, and open ports 111 for portmap requests/replies and glusterFS NFS server requests/replies. glusterFS NFS server operates over the following port numbers: 38465, 38466, and 38467. Q: The application fails with Invalid argument or Value too large for defined data type A: These two errors generally happen for 32-bit NFS clients, or applications that do not support 64-bit inode numbers or large files. Use the following option from the command-line interface to make glusterFS NFS return 32-bit inode numbers instead: This option is off by default, which permits NFS to return 64-bit inode numbers by default. Applications that will benefit from this option include those that are: built and run on 32-bit machines, which do not support large files by default, built to 32-bit standards on 64-bit systems. Applications which can be rebuilt from source are recommended to be rebuilt using the following flag with gcc: Q: After the machine that is running NFS server is restarted the client fails to reclaim the locks held earlier. A: The Network Status Monitor (NSM) service daemon (rpc.statd) is started before gluster NFS server. Hence, NSM sends a notification to the client to reclaim the locks. When the clients send the reclaim request, the NFS server does not respond as it is not started yet. Hence the client request fails. Solution : To resolve the issue, prevent the NSM daemon from starting when the server starts. Run chkconfig --list nfslock to check if NSM is configured during OS boot. If any of the entries are on, run chkconfig nfslock off to disable NSM clients during boot, which resolves the issue. Q: The rpc actor failed to complete successfully error is displayed in the nfs.log, even after the volume is mounted successfully. A: gluster NFS supports only NFS version 3. When nfs-utils mounts a client when the version is not mentioned, it tries to negotiate using version 4 before falling back to version 3. This is the cause of the messages in both the server log and the nfs.log file. To resolve the issue, declare NFS version 3 and the noacl option in the mount command as follows: Q: The mount command fails with No such file or directory . A: This problem is encountered as the volume is not present. 6.3.3. NFS Ganesha NFS-Ganesha is a user space file server for the NFS protocol with support for NFSv3, NFSv4.0, and NFSv4.1. Red Hat Gluster Storage 3.5 is supported with the community's V2.7 stable release of NFS-Ganesha on Red Hat Enterprise Linux 7. To understand the various supported features of NFS-ganesha see, Supported Features of NFS-Ganesha . Note To install NFS-Ganesha refer, Deploying NFS-Ganesha on Red Hat Gluster Storage in the Red Hat Gluster Storage 3.5 Installation Guide . 
Section 6.3.3.1, "Supported Features of NFS-Ganesha" Section 6.3.3.2, "Setting up NFS Ganesha" Section 6.3.3.2.1, "Port and Firewall Information for NFS-Ganesha" Section 6.3.3.2.2, "Prerequisites to run NFS-Ganesha" Section 6.3.3.2.3, "Configuring the Cluster Services" Section 6.3.3.2.4, "Creating the ganesha-ha.conf file" Section 6.3.3.2.5, "Configuring NFS-Ganesha using Gluster CLI" Section 6.3.3.2.6, "Exporting and Unexporting Volumes through NFS-Ganesha" Section 6.3.3.2.7, "Verifying the NFS-Ganesha Status" Section 6.3.3.3, "Accessing NFS-Ganesha Exports" Section 6.3.3.3.1, "Mounting exports in NFSv3 Mode" Section 6.3.3.3.2, "Mounting exports in NFSv4 Mode" Section 6.3.3.3.3, "Finding clients of an NFS server using dbus" Section 6.3.3.3.4, " Finding authorized client list and other information from an NFS server using dbus" Section 6.3.3.4, "Modifying the NFS-Ganesha HA Setup" Section 6.3.3.4.1, "Adding a Node to the Cluster" Section 6.3.3.4.2, "Deleting a Node in the Cluster" Section 6.3.3.4.3, "Replacing a Node in the Cluster" Section 6.3.3.5, "Modifying the Default Export Configurations" Section 6.3.3.5.1, "Providing Permissions for Specific Clients" Section 6.3.3.5.2, "Enabling and Disabling NFSv4 ACLs " Section 6.3.3.5.3, "Providing Pseudo Path for NFSv4 Mount" Section 6.3.3.5.4, "Exporting Subdirectories" Section 6.3.3.5.5, "Unexporting Subdirectories" Section 6.3.3.6, "Configuring Kerberized NFS-Ganesha" Section 6.3.3.6.1, "Setting up the NFS-Ganesha Server" Section 6.3.3.6.2, "Setting up the NFS Client" Section 6.3.3.7, "NFS-Ganesha Service Downtime" Section 6.3.3.7.1, "Modifying the Fail-over Time" Section 6.3.3.9, "Troubleshooting NFS Ganesha" 6.3.3.1. Supported Features of NFS-Ganesha The following list briefly describes the supported features of NFS-Ganesha: Highly Available Active-Active NFS-Ganesha In a highly available active-active environment, if a NFS-Ganesha server that is connected to a NFS client running a particular application goes down, the application/NFS client is seamlessly connected to another NFS-Ganesha server without any administrative intervention. Data coherency across the multi-head NFS-Ganesha servers in the cluster is achieved using the Gluster's Upcall infrastructure. Gluster's Upcall infrastructure is a generic and extensible framework that sends notifications to the respective glusterfs clients (in this case NFS-Ganesha server) when changes are detected in the back-end file system. Dynamic Export of Volumes NFS-Ganesha supports addition and removal of exports dynamically. Dynamic exports is managed by the DBus interface. DBus is a system local IPC mechanism for system management and peer-to-peer application communication. Exporting Multiple Entries In NFS-Ganesha, multiple Red Hat Gluster Storage volumes or sub-directories can be exported simultaneously. Pseudo File System NFS-Ganesha creates and maintains a NFSv4 pseudo-file system, which provides clients with seamless access to all exported objects on the server. Access Control List NFS-Ganesha NFSv4 protocol includes integrated support for Access Control List (ACL)s, which are similar to those used by Windows. These ACLs can be used to identify a trustee and specify the access rights allowed, or denied for that trustee.This feature is disabled by default. Note AUDIT and ALARM ACE types are not currently supported. 6.3.3.2. Setting up NFS Ganesha To set up NFS Ganesha, follow the steps mentioned in the further sections. 
Note You can also set up NFS-Ganesha using gdeploy, that automates the steps mentioned below. For more information, see "Deploying NFS-Ganesha" 6.3.3.2.1. Port and Firewall Information for NFS-Ganesha You must ensure to open the ports and firewall services: The following table lists the port details for NFS-Ganesha cluster setup: Table 6.6. NFS Port Details Service Port Number Protocol sshd 22 TCP rpcbind/portmapper 111 TCP/UDP NFS 2049 TCP/UDP mountd 20048 TCP/UDP NLM 32803 TCP/UDP RQuota 875 TCP/UDP statd 662 TCP/UDP pcsd 2224 TCP pacemaker_remote 3121 TCP corosync 5404 and 5405 UDP dlm 21064 TCP Note The port details for the Red Hat Gluster Storage services are listed under section 3. Verifying Port Access . Defining Service Ports Ensure the statd service is configured to use the ports mentioned above by executing the following commands on every node in the nfs-ganesha cluster: On Red Hat Enterprise Linux 7, edit /etc/sysconfig/nfs file as mentioned below: Note This step is not applicable for Red Hat Enterprise Linux 8. Restart the statd service: For Red Hat Enterprise Linux 7: Note This step is not applicable for Red Hat Enterprise Linux 8. Note For the NFS client to use the LOCK functionality, the ports used by LOCKD and STATD daemons has to be configured and opened via firewalld on the client machine: Edit '/etc/sysconfig/nfs' using following commands: Restart the services: For Red Hat Enterprise Linux 7: Open the ports that are configured in the first step using the following command: To ensure NFS client UDP mount does not fail, ensure to open port 2049 by executing the following command: Firewall Settings On Red Hat Enterprise Linux 7, enable the firewall services mentioned below. Get a list of active zones using the following command: Allow the firewall service in the active zones, run the following commands: 6.3.3.2.2. Prerequisites to run NFS-Ganesha Ensure that the following prerequisites are taken into consideration before you run NFS-Ganesha in your environment: A Red Hat Gluster Storage volume must be available for export and NFS-Ganesha rpms are installed. Ensure that the fencing agents are configured. For more information on configuring fencing agents, refer to the following documentation: Fencing Configuration section in the High Availability Add-On Administration guide: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-fenceconfig-haaa Fence Devices section in the High Availability Add-On Reference guide: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-guiclustcomponents-haar#s2-guifencedevices-HAAR Note The required minimum number of nodes for a highly available installation/configuration of NFS Ganesha is 3 and a maximum number of supported nodes is 8. Only one of NFS-Ganesha, gluster-NFS or kernel-NFS servers can be enabled on a given machine/host as all NFS implementations use the port 2049 and only one can be active at a given time. Hence you must disable kernel-NFS before NFS-Ganesha is started. Disable the kernel-nfs using the following command: For Red Hat Enterprise Linux 7 To verify if kernel-nfs is disabled, execute the following command: The service should be in stopped state. Note Gluster NFS will be stopped automatically when NFS-Ganesha is enabled. Ensure that none of the volumes have the variable nfs.disable set to 'off'. Ensure to configure the ports as mentioned in Port/Firewall Information for NFS-Ganesha . 
Edit the ganesha-ha.conf file based on your environment. Reserve virtual IPs on the network for each of the servers configured in the ganesha.conf file. Ensure that these IPs are different than the hosts' static IPs and are not used anywhere else in the trusted storage pool or in the subnet. Ensure that all the nodes in the cluster are DNS resolvable. For example, you can populate the /etc/hosts with the details of all the nodes in the cluster. Make sure the SELinux is in Enforcing mode. Start network service on all machines using the following command: For Red Hat Enterprise Linux 7: Create and mount a gluster shared volume by executing the following command: For more information, see Section 11.12, "Setting up Shared Storage Volume" Create a directory named nfs-ganesha under /var/run/gluster/shared_storage Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . Copy the ganesha.conf and ganesha-ha.conf files from /etc/ganesha to /var/run/gluster/shared_storage/nfs-ganesha . Enable the glusterfssharedstorage.service service using the following command: Enable the nfs-ganesha service using the following command: 6.3.3.2.3. Configuring the Cluster Services The HA cluster is maintained using Pacemaker and Corosync. Pacemaker acts a resource manager and Corosync provides the communication layer of the cluster. For more information about Pacemaker/Corosync see the documentation under the Clustering section of the Red Hat Enterprise Linux 7 documentation: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/ Note It is recommended to use 3 or more nodes to configure NFS Ganesha HA cluster, in order to maintain cluster quorum. Enable the pacemaker service using the following command: For Red Hat Enterprise Linux 7: Start the pcsd service using the following command. For Red Hat Enterprise Linux 7: Note To start pcsd by default after the system is rebooted, execute the following command: For Red Hat Enterprise Linux 7: Set a password for the user 'hacluster' on all the nodes using the following command. Use the same password for all the nodes: Perform cluster authentication between the nodes, where, username is 'hacluster', and password is the one you used in the step. Ensure to execute the following command on every node: For Red Hat Enterprise Linux 7: For Red Hat Enterprise Linux 8: Note The hostname of all the nodes in the Ganesha-HA cluster must be included in the command when executing it on every node. For example, in a four node cluster; nfs1, nfs2, nfs3, and nfs4, execute the following command on every node: For Red Hat Enterprise Linux 7: For Red Hat Enterprise Linux 8: Key-based SSH authentication without password for the root user has to be enabled on all the HA nodes. Follow these steps: On one of the nodes (node1) in the cluster, run: Deploy the generated public key from node1 to all the nodes (including node1) by executing the following command for every node: Copy the ssh keypair from node1 to all the nodes in the Ganesha-HA cluster by executing the following command for every node: As part of cluster setup, port 875 is used to bind to the Rquota service. If this port is already in use, assign a different port to this service by modifying following line in '/etc/ganesha/ganesha.conf' file on all the nodes. 6.3.3.2.4. Creating the ganesha-ha.conf file The ganesha-ha.conf.sample is created in the following location /etc/ganesha when Red Hat Gluster Storage is installed. 
Rename the file to ganesha-ha.conf and make the changes based on your environment. Create a directory named nfs-ganesha under /var/run/gluster/shared_storage Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . Copy the ganesha.conf and ganesha-ha.conf files from /etc/ganesha to /var/run/gluster/shared_storage/nfs-ganesha. Sample ganesha-ha.conf file: Note Pacemaker handles the creation of the VIP and assigning an interface. Ensure that the VIP is in the same network range. Ensure that the HA_CLUSTER_NODES are specified as hostnames. Using IP addresses will cause clustering to fail. 6.3.3.2.5. Configuring NFS-Ganesha using Gluster CLI Setting up the HA cluster To setup the HA cluster, enable NFS-Ganesha by executing the following command: Enable NFS-Ganesha by executing the following command Note Before enabling or disabling NFS-Ganesha, ensure that all the nodes that are part of the NFS-Ganesha cluster are up. For example, Note After enabling NFS-Ganesha, if rpcinfo -p shows the statd port different from 662, then, restart the statd service: For Red Hat Enterprise Linux 7: Tearing down the HA cluster To tear down the HA cluster, execute the following command: For example, Verifying the status of the HA cluster To verify the status of the HA cluster, execute the following script: Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . For example: Note It is recommended to manually restart the ganesha.nfsd service after the node is rebooted, to fail back the VIPs. Disabling NFS Ganesha does not enable Gluster NFS by default. If required, Gluster NFS must be enabled manually. Note Ensure to disable the RQUOTA port to avoid the issues described in Section 6.3.3.9, "Troubleshooting NFS Ganesha" NFS-Ganesha fails to start. NFS-Ganesha port 875 is unavailable. To disable RQUOTA port, run the following steps: The ganesha.conf file is available at /etc/ganesha/ganesha.conf. Uncomment the line #Enable_RQUOTA = false; to disable RQUOTA. Restart the nfs-ganesha service on all nodes. 6.3.3.2.6. Exporting and Unexporting Volumes through NFS-Ganesha Note Start Red Hat Gluster Storage Volume before enabling NFS-Ganesha. Exporting Volumes through NFS-Ganesha To export a Red Hat Gluster Storage volume, execute the following command: For example: Unexporting Volumes through NFS-Ganesha To unexport a Red Hat Gluster Storage volume, execute the following command: This command unexports the Red Hat Gluster Storage volume without affecting other exports. For example: 6.3.3.2.7. Verifying the NFS-Ganesha Status To verify the status of the volume set options, follow the guidelines mentioned below: Check if NFS-Ganesha is started by executing the following commands: On Red Hat Enterprise Linux-7 For example: Check if the volume is exported. For example: The logs of ganesha.nfsd daemon are written to /var/log/ganesha/ganesha.log. Check the log file on noticing any unexpected behavior. 6.3.3.3. Accessing NFS-Ganesha Exports NFS-Ganesha exports can be accessed by mounting them in either NFSv3 or NFSv4 mode. Since this is an active-active HA configuration, the mount operation can be performed from the VIP of any node. 
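Before mounting from a VIP, it can be useful to confirm that the volume is actually exported and that the HA cluster is healthy. The following is a minimal check, assuming a volume named testvol (a placeholder) and the default shared-storage path, which may be /var/run/gluster/shared_storage or /run/gluster/shared_storage depending on the batch update level:

# On any NFS-Ganesha node: export the volume, if not already done
gluster volume set testvol ganesha.enable on
# Confirm the ganesha.nfsd service is active
systemctl status nfs-ganesha
# The export list should contain /testvol
showmount -e localhost
# The HA status should report HEALTHY
/usr/libexec/ganesha/ganesha-ha.sh --status /run/gluster/shared_storage/nfs-ganesha

Only after these checks succeed does it make sense to troubleshoot client-side mount options such as the NFS version or firewall settings.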
For better large file performance on all workloads that is generated on Red Hat Enterprise Linux 7 clients, it is recommended to set the following tunable before mounting the volume: Execute the following commands to set the tunable: To make the tunable persistent on reboot, execute the following commands: Note Ensure that NFS clients and NFS-Ganesha servers in the cluster are DNS resolvable with unique host-names to use file locking through Network Lock Manager (NLM) protocol. 6.3.3.3.1. Mounting exports in NFSv3 Mode To mount an export in NFSv3 mode, execute the following command: For example: 6.3.3.3.2. Mounting exports in NFSv4 Mode To mount an export in NFSv4 mode on RHEL 7 client(s), execute the following command: For example: Important The default version for RHEL 8 is NFSv4.2 To mount an export in a specific NFS version on RHEL 8 client(s), execute the following command: For example: 6.3.3.3.3. Finding clients of an NFS server using dbus To display the IP addresses of clients that have mounted the NFS exports, execute the following command: Note If the NFS export is unmounted or if a client is disconnected from the server, it may take a few minutes for this to be updated in the command output. 6.3.3.3.4. Finding authorized client list and other information from an NFS server using dbus To display the authorized client access list and other export options configured from an NFS server, execute the following command: This command, along with the ACLs, fetches other information like fullpath, pseudopath and tag of the export volume. The fullpath and the pseudopath is used for mounting the export volume. The dbus DisplayExport command will give clients details of the export volume. The output syntax is as follows: In the above output, client_type is the client's IP address, CIDR_version , CIDR_address , CIDR_mask and CIDR_proto are the CIDR representation details of the client and uint32 anonymous_uid , uint32 anonymous_gid , uint32 expire_time_attr , uint32 options and uint32 set are the Client Permissions. For example: 6.3.3.4. Modifying the NFS-Ganesha HA Setup To modify the existing HA cluster and to change the default values of the exports use the ganesha-ha.sh script located at /usr/libexec/ganesha/. 6.3.3.4.1. Adding a Node to the Cluster Before adding a node to the cluster, ensure that the firewall services are enabled as mentioned in Port Information for NFS-Ganesha and also the prerequisites mentioned in section Pre-requisites to run NFS-Ganesha are met. Note Since shared storage and /var/lib/glusterd/nfs/secret.pem SSH key are already generated, those steps should not be repeated. To add a node to the cluster, execute the following command on any of the nodes in the existing NFS-Ganesha cluster: where, HA_CONF_DIR: The directory path containing the ganesha-ha.conf file. By default it is /run/gluster/shared_storage/nfs-ganesha. HOSTNAME: Hostname of the new node to be added NODE-VIP: Virtual IP of the new node to be added. For example: Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . 6.3.3.4.2. Deleting a Node in the Cluster To delete a node from the cluster, execute the following command on any of the nodes in the existing NFS-Ganesha cluster: where, HA_CONF_DIR: The directory path containing the ganesha-ha.conf file. By default it is located at /run/gluster/shared_storage/nfs-ganesha . 
HOSTNAME: Hostname of the node to be deleted For example: Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . 6.3.3.4.3. Replacing a Node in the Cluster To replace a node in the existing NFS-Ganesha cluster: Delete the node from the cluster. Refer Section 6.3.3.4.2, "Deleting a Node in the Cluster" Create a node with the same hostname.Refer Section 11.10.2, "Replacing a Host Machine with the Same Hostname" Note It is not required for the new node to have the same name as that of the old node. Add the node to the cluster. Refer Section 6.3.3.4.1, "Adding a Node to the Cluster" Note Ensure that firewall services are enabled as mentioned in Section 6.3.3.2.1, "Port and Firewall Information for NFS-Ganesha" and also the Section 6.3.3.2.2, "Prerequisites to run NFS-Ganesha" are met. 6.3.3.5. Modifying the Default Export Configurations It is recommended to use gluster CLI options to export or unexport volumes through NFS-Ganesha. However, this section provides some information on changing configurable parameters in NFS-Ganesha. Such parameter changes require NFS-Ganesha to be started manually. For various supported export options see the ganesha-export-config 8 man page. To modify the default export configurations perform the following steps on any of the nodes in the existing ganesha cluster: Edit/add the required fields in the corresponding export file located at /run/gluster/shared_storage/nfs-ganesha/exports/ . Execute the following command where: HA_CONF_DIR: The directory path containing the ganesha-ha.conf file. By default it is located at /run/gluster/shared_storage/nfs-ganesha . volname: The name of the volume whose export configuration has to be changed. Sample export configuration file: The following are the default set of parameters required to export any entry. The values given here are the default values used by the CLI options to start or stop NFS-Ganesha. The following sections describe various configurations possible via NFS-Ganesha. Minor changes have to be made to the export.conf file to see the expected behavior. Providing Permissions for Specific Clients Enabling and Disabling NFSv4 ACLs Providing Pseudo Path for NFSv4 Mount Exporting Subdirectories 6.3.3.5.1. Providing Permissions for Specific Clients The parameter values and permission values given in the EXPORT block applies to any client that mounts the exported volume. To provide specific permissions to specific clients , introduce a client block inside the EXPORT block. For example, to assign specific permissions for client 10.00.00.01, add the following block in the EXPORT block. All the other clients inherit the permissions that are declared outside the client block. 6.3.3.5.2. Enabling and Disabling NFSv4 ACLs To enable NFSv4 ACLs , edit the following parameter: Note NFS clients should remount their share after enabling/disabling ACLs on the NFS-Ganesha server. 6.3.3.5.3. Providing Pseudo Path for NFSv4 Mount To set NFSv4 pseudo path , edit the below parameter: This path has to be used while mounting the export entry in NFSv4 mode. 6.3.3.5.4. Exporting Subdirectories You can follow either of the following two methods to export subdir in NFS-ganesha Method 1: Creating a separate export file. This will export the sub-directory shares without disrupting existing clients connected to other shares Create a separate export file for the sub-directory. 
Change the Export_ID to any unique unused ID. Edit the Path and Pseudo parameters and add the volpath entry to the export file.

If a new export file is created for the sub-directory, you must add its entry in the ganesha.conf file.

Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ .

For example:

Execute the following script to export the sub-directory shares without disrupting existing clients connected to other shares:

For example:

Method 2: Editing the volume export file with subdir entries in it. This method will only export the subdir and not the parent volume.

Edit the volume export file with the subdir entry. For example:

Change the Export_ID to any unique unused ID. Edit the Path and Pseudo parameters and add the volpath entry to the export file.

Execute the following script to export the sub-directory shares without disrupting existing clients connected to other shares:

For example:

Note
If the same export file contains multiple EXPORT{} entries, then a volume restart or nfs-ganesha service restart is required.

6.3.3.5.4.1. Enabling all_squash option

To enable all_squash, edit the following parameter:

6.3.3.5.5. Unexporting Subdirectories

A sub-directory in NFS-Ganesha can be unexported by performing the following steps:

Note the export ID of the share that you want to unexport from the configuration file (/var/run/gluster/shared_storage/nfs-ganesha/exports/file-name.conf).

Deleting the configuration:

Delete the configuration file (if there is a separate configuration file):

Delete the entry of the conf file from /etc/ganesha/ganesha.conf

Remove the line:

Run the following command:

The Export_id in the above command should be that of the export entry obtained in step 1.

6.3.3.6. Configuring Kerberized NFS-Ganesha

Note
NTP is no longer supported on Red Hat Enterprise Linux 8. For Red Hat Enterprise Linux 8, to configure the chrony daemon, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/using-chrony-to-configure-ntp

Execute the following steps on all the machines:

Install the krb5-workstation and the ntpdate (RHEL 7) or the chrony (RHEL 8) packages on all the machines:

For Red Hat Enterprise Linux 7:

For Red Hat Enterprise Linux 8:

Note
The krb5-libs package will be updated as a dependent package.

For RHEL 7, configure ntpdate based on the valid time server according to the environment:

For RHEL 8, configure chrony based on the valid time server according to the environment:

For both RHEL 7 and RHEL 8, perform the following steps:

Ensure that all systems can resolve each other by FQDN in DNS.

Configure the /etc/krb5.conf file and add relevant changes accordingly. For example:

Note
For further details regarding the file configuration, refer to man krb5.conf .

On the NFS server and client, update the /etc/idmapd.conf file by making the required change. For example:

6.3.3.6.1. Setting up the NFS-Ganesha Server

Execute the following steps to set up the NFS-Ganesha server:

Note
Before setting up the NFS-Ganesha server, make sure to set up the KDC based on the requirements.

Install the following packages:

Install the relevant gluster and NFS-Ganesha rpms. For more information, see the Red Hat Gluster Storage 3.5 Installation Guide .
Create a Kerberos principle and add it to krb5.keytab on the NFS-Ganesha server For example: Update /etc/ganesha/ganesha.conf file as mentioned below: Based on the different kerberos security flavours (krb5, krb5i and krb5p) supported by nfs-ganesha, configure the 'SecType' parameter in the volume export file (/var/run/gluster/shared_storage/nfs-ganesha/exports) with appropriate security flavour. Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . Create an unprivileged user and ensure that the users that are created are resolvable to the UIDs through the central user database. For example: Note The username of this user has to be the same as the one on the NFS-client. 6.3.3.6.2. Setting up the NFS Client Execute the following steps to set up the NFS client: Note For a detailed information on setting up NFS-clients for security on Red Hat Enterprise Linux, see Section 8.8.2 NFS Security , in the Red Hat Enterprise Linux 7 Storage Administration Guide . Install the following packages: Create a kerberos principle and add it to krb5.keytab on the client side. For example: Check the status of nfs-client.target service and start it, if not already started: Create an unprivileged user and ensure that the users that are created are resolvable to the UIDs through the central user database. For example: Note The username of this user has to be the same as the one on the NFS-server. Mount the volume specifying kerberos security type: As root, all access should be granted. For example: Creation of a directory on the mount point and all other operations as root should be successful. Login as a guest user: Without a kerberos ticket, all access to /mnt should be denied. For example: Get the kerberos ticket for the guest and access /mnt: Important With this ticket, some access must be allowed to /mnt. If there are directories on the NFS-server where "guest" does not have access to, it should work correctly. 6.3.3.7. NFS-Ganesha Service Downtime In a highly available active-active environment, if a NFS-Ganesha server that is connected to a NFS client running a particular application goes down, the application/NFS client is seamlessly connected to another NFS-Ganesha server without any administrative intervention. However, there is a delay or fail-over time in connecting to another NFS-Ganesha server. This delay can be experienced during fail-back too, that is, when the connection is reset to the original node/server. The following list describes how the time taken for the NFS server to detect a server reboot or resume is calculated. If the ganesha.nfsd dies (crashes, oomkill, admin kill), the maximum time to detect it and put the ganesha cluster into grace is 20sec, plus whatever time pacemaker needs to effect the fail-over. Note This time taken to detect if the service is down, can be edited using the following command on all the nodes: If the whole node dies (including network failure) then this down time is the total of whatever time pacemaker needs to detect that the node is gone, the time to put the cluster into grace, and the time to effect the fail-over. This is ~20 seconds. So the max-fail-over time is approximately 20-22 seconds, and the average time is typically less. In other words, the time taken for NFS clients to detect server reboot or resume I/O is 20 - 22 seconds. 6.3.3.7.1. 
Modifying the Fail-over Time After failover, there is a short period of time during which clients try to reclaim their lost OPEN/LOCK state. Servers block certain file operations during this period, as per the NFS specification. The file operations blocked are as follows: Table 6.7. Protocols File Operations NFSV3 SETATTR NLM LOCK UNLOCK SHARE UNSHARE CANCEL LOCKT NFSV4 LOCK LOCKT OPEN REMOVE RENAME SETATTR Note LOCK, SHARE, and UNSHARE will be blocked only if it is requested with reclaim set to FALSE. OPEN will be blocked if requested with claim type other than CLAIM_PREVIOUS or CLAIM_DELEGATE_PREV. The default value for the grace period is 90 seconds. This value can be changed by adding the following lines in the /etc/ganesha/ganesha.conf file. After editing the /etc/ganesha/ganesha.conf file, restart the NFS-Ganesha service using the following command on all the nodes : On Red Hat Enterprise Linux 7 6.3.3.8. Tuning Readdir Performance for NFS-Ganesha The NFS-Ganesha process reads entire content of a directory at an instance. Any parallel operations on that directory are paused until the readdir operation is complete. With Red Hat Gluster Storage 3.5, the Dir_Chunk parameter enables the directory content to be read in chunks at an instance. This parameter is enabled by default. The default value of this parameter is 128 . The range for this parameter is 1 to UINT32_MAX . To disable this parameter, set the value to 0 Procedure 6.1. Configuring readdir perform for NFS-Ganesha Edit the /etc/ganesha/ganesha.conf file. Locate the CACHEINODE block. Add the Dir_Chunk parameter inside the block: Save the ganesha.conf file and restart the NFS-Ganesha service on all nodes: 6.3.3.9. Troubleshooting NFS Ganesha Mandatory checks Ensure you execute the following commands for all the issues/failures that is encountered: Make sure all the prerequisites are met. Execute the following commands to check the status of the services: Review the followings logs to understand the cause of failure. Situation NFS-Ganesha fails to start. Solution Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue: Ensure the kernel and gluster nfs services are inactive. Ensure that the port 875 is free to connect to the RQUOTA service. Ensure that the shared storage volume mount exists on the server after node reboot/shutdown. If it does not, then mount the shared storage volume manually using the following command: Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . For more information see, section Exporting and Unexporting Volumes through NFS-Ganesha. Situation NFS-Ganesha port 875 is unavailable. Solution Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue: Run the following command to extract the PID of the process using port 875: Determine if the process using port 875 is an important system or user process. 
Perform one of the following, depending upon the importance of the process:

If the process using port 875 is an important system or user process:

Assign a different port to this service by modifying the following line in the /etc/ganesha/ganesha.conf file on all the nodes:

Run the following commands after modifying the port number:

Run the following command to restart NFS-Ganesha:

If the process using port 875 is not an important system or user process:

Run the following command to kill the process using port 875:

Use the process ID extracted in the previous step.

Run the following command to ensure that the process is killed and port 875 is free to use:

Run the following command to restart NFS-Ganesha:

If required, restart the killed process.

Situation
NFS-Ganesha Cluster setup fails.

Solution
Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps.

Ensure the kernel and gluster nfs services are inactive.

Ensure that the pcs cluster auth command is executed on all the nodes with the same password for the user hacluster.

Ensure that the shared storage volume is mounted on all the nodes.

Ensure that the name of the HA Cluster does not exceed 15 characters.

Ensure UDP multicast packets are pingable using OMPING.

Ensure that Virtual IPs are not assigned to any NIC.

Situation
NFS-Ganesha has started and fails to export a volume.

Solution
Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue:

Ensure that the volume is in the Started state using the following command:

Execute the following commands to check the status of the services:

Review the following logs to understand the cause of failure.

Ensure that the dbus service is running using the following command:

If the volume is not in a started state, run the following command to start the volume.

If the volume is not exported as part of volume start, run the following command to re-export the volume:

Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ .

Situation
Adding a new node to the HA cluster fails.

Solution
Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue:

Ensure to run the following command from one of the nodes that is already part of the cluster:

Ensure that the gluster_shared_storage volume is mounted on the node that needs to be added.

Make sure that all the nodes of the cluster are DNS resolvable from the node that needs to be added.

Execute the following command for each of the hosts in the HA cluster on the node that needs to be added:

For Red Hat Enterprise Linux 7:

For Red Hat Enterprise Linux 8:

Situation
Cleanup required when the nfs-ganesha HA cluster setup fails.

Solution
To restore the machines to their original state, execute the following commands on each node forming the cluster:

Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ .

Situation
Permission issues.

Solution
By default, the root squash option is disabled when you start NFS-Ganesha using the CLI. If you encounter any permission issues, check the UNIX permissions of the exported entry.
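A quick way to inspect the permissions of an exported entry is to mount the volume with the native glusterfs (FUSE) client on one of the server nodes and examine it there. This is only a sketch under assumed names; testvol, server1, /mnt/check, and appuser are placeholders and not part of this guide:

mkdir -p /mnt/check
# Native client mount, used only for inspection
mount -t glusterfs server1:/testvol /mnt/check
# Owner, group, and mode of the export root
ls -ld /mnt/check
# Any ACLs that may further restrict access
getfacl /mnt/check
# If an unprivileged client user needs access, adjust ownership or mode here, e.g.:
# chown appuser:appuser /mnt/check && chmod 0775 /mnt/check
umount /mnt/check

Because root squash is disabled by default, access problems seen from NFS clients are usually a matter of ordinary file ownership and mode on the exported directory rather than NFS-Ganesha configuration.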
[ "mount -t nfs HOSTNAME:VOLNAME MOUNTPATH", "gluster volume set VOLNAME nfs.disable off", "gluster volume set VOLNAME nfs.acl on", "gluster volume set VOLNAME nfs.acl off", "firewall-cmd --get-active-zones", "firewall-cmd --zone=zone_name --add-service=nfs --add-service=rpc-bind firewall-cmd --zone=zone_name --add-service=nfs --add-service=rpc-bind --permanent", "firewall-cmd --get-active-zones", "firewall-cmd --zone=zone_name --add-port=4379/tcp firewall-cmd --zone=zone_name --add-port=4379/tcp --permanent", "yum remove ctdb", "yum install ctdb", "firewall-cmd --zone=public --add-port=662/tcp --add-port=662/udp --add-port=32803/tcp --add-port=32769/udp --add-port=111/tcp --add-port=111/udp", "firewall-cmd --zone=public --add-port=662/tcp --add-port=662/udp --add-port=32803/tcp --add-port=32769/udp --add-port=111/tcp --add-port=111/udp --permanent", "sed -i '/STATD_PORT/s/^#//' /etc/sysconfig/nfs", "service nfslock restart service nfs restart", "systemctl restart nfs-config systemctl restart rpc-statd systemctl restart nfs-mountd systemctl restart nfslock", "gluster volume create volname replica n ipaddress:/brick path.......N times", "gluster volume create ctdb replica 3 10.16.157.75:/rhgs/brick1/ctdb/b1 10.16.157.78:/rhgs/brick1/ctdb/b2 10.16.157.81:/rhgs/brick1/ctdb/b3", "/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh", "META=\"all\" to META=\"ctdb\"", "gluster volume start ctdb", "10.16.157.0 10.16.157.3 10.16.157.6", "<Virtual IP>/<routing prefix><node interface>", "192.168.1.20/24 eth0 192.168.1.21/24 eth0", "service ctdb start", "Defaultvers=3", "mount nfsserver:export -o vers=3 /MOUNTPOINT", "mkdir /mnt/glusterfs", "mount -t nfs -o vers=3 server1:/test-volume /mnt/glusterfs", "mount -o vers=3 nfs://server1:38467/test-volume /mnt/glusterfs", "mkdir /mnt/glusterfs", "mount -t nfs -o vers=3,mountproto=tcp server1:/test-volume /mnt/glusterfs", "mount -o proto=tcp, nfs://server1:38467/test-volume /mnt/glusterfs", "HOSTNAME|IPADDRESS :/ VOLNAME / MOUNTDIR nfs defaults,_netdev, 0 0", "server1:/test-volume /mnt/glusterfs nfs defaults,_netdev, 0 0", "HOSTNAME|IPADDRESS :/ VOLNAME / MOUNTDIR nfs defaults,_netdev,mountproto=tcp 0 0", "server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,mountproto=tcp 0 0", "gluster volume set VOLNAME nfs.export-dirs off", "gluster volume set VOLNAME nfs.export-dir subdirectory", "gluster volume set myvolume nfs.export-dir /dir1,/dir2,/dir3", "gluster volume set VOLNAME nfs.export-dir subdirectory(IPADDRESS) , subdirectory(HOSTNAME) , subdirectory(CIDR)", "gluster volume set myvolume nfs.export-dir /dir1(192.168.10.101),/dir2(storage.example.com),/dir3(192.168.98.0/24)", "mount server1:/test-volume on /mnt/glusterfs type nfs (rw,addr=server1)", "df -h /mnt/glusterfs Filesystem Size Used Avail Use% Mounted on server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs", "cd /mnt/glusterfs ls", "gluster volume status", "gluster volume info", "gluster volume start VOLNAME", "gluster volume start VOLNAME", "service rpcbind start", "[2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could notregister with portmap [2010-05-26 23:33:47] E [rpcsvc.c:2682:rpcsvc_program_register] rpc-service: portmap registration of program failed [2010-05-26 23:33:47] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465 [2010-05-26 23:33:47] E [nfs.c:125:nfs_init_versions] nfs: Program init failed [2010-05-26 23:33:47] C 
[nfs.c:531:notify] nfs: Failed to initialize protocols [2010-05-26 23:33:49] E [rpcsvc.c:2614:rpcsvc_program_unregister_portmap] rpc-service: Could not unregister with portmap [2010-05-26 23:33:49] E [rpcsvc.c:2731:rpcsvc_program_unregister] rpc-service: portmap unregistration of program failed [2010-05-26 23:33:49] E [rpcsvc.c:2744:rpcsvc_program_unregister] rpc-service: Program unregistration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465", "service rpcbind start", "service nfs-kernel-server stop service nfs stop", "[2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed:Address already in use [2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use [2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection [2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed [2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465 [2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed [2010-05-26 23:40:49] C [nfs.c:531:notify] nfs: Failed to initialize protocols", "mount: mount to NFS server '10.1.10.11' failed: timed out (retrying).", "option nfs.addr.namelookup off", "NFS.enable-ino32 <on | off>", "-D_FILE_OFFSET_BITS=64", "[2013-06-25 00:03:38.160547] W [rpcsvc.c:180:rpcsvc_program_actor] 0-rpc-service: RPC program version not available (req 100003 4) [2013-06-25 00:03:38.160669] E [rpcsvc.c:448:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully", "mount -t nfs -o vers=3,noacl server1:/test-volume /mnt/glusterfs", "sed -i '/STATD_PORT/s/^#//' /etc/sysconfig/nfs", "systemctl restart nfs-config systemctl restart rpc-statd", "sed -i '/STATD_PORT/s/^#//' /etc/sysconfig/nfs sed -i '/LOCKD_TCPPORT/s/^#//' /etc/sysconfig/nfs sed -i '/LOCKD_UDPPORT/s/^#//' /etc/sysconfig/nfs", "systemctl restart nfs-config systemctl restart rpc-statd systemctl restart nfslock", "firewall-cmd --zone=public --add-port=662/tcp --add-port=662/udp --add-port=32803/tcp --add-port=32769/udp --add-port=111/tcp --add-port=111/udp", "firewall-cmd --zone=public --add-port=662/tcp --add-port=662/udp --add-port=32803/tcp --add-port=32769/udp --add-port=111/tcp --add-port=111/udp --permanent", "firewall-cmd --zone=zone_name --add-port=2049/udp firewall-cmd --zone=zone_name --add-port=2049/udp --permanent", "firewall-cmd --get-active-zones", "firewall-cmd --zone= zone_name --add-service=nlm --add-service=nfs --add-service=rpc-bind --add-service=high-availability --add-service=mountd --add-service=rquota firewall-cmd --zone= zone_name --add-service=nlm --add-service=nfs --add-service=rpc-bind --add-service=high-availability --add-service=mountd --add-service=rquota --permanent firewall-cmd --zone= zone_name --add-port=662/tcp --add-port=662/udp firewall-cmd --zone= zone_name --add-port=662/tcp --add-port=662/udp --permanent", "systemctl stop nfs-server systemctl disable nfs-server", "systemctl status nfs-server", "systemctl start network", "gluster volume set all cluster.enable-shared-storage enable volume set: success", "systemctl enable glusterfssharedstorage.service", "systemctl enable nfs-ganesha", "systemctl enable pacemaker.service", "systemctl start pcsd", "systemctl enable pcsd", "echo <password> | passwd --stdin hacluster", "pcs cluster auth <hostname1> <hostname2>", "pcs host auth 
<hostname1> <hostname2>", "pcs cluster auth nfs1 nfs2 nfs3 nfs4 Username: hacluster Password: nfs1: Authorized nfs2: Authorized nfs3: Authorized nfs4: Authorized", "pcs host auth nfs1 nfs2 nfs3 nfs4 Username: hacluster Password: nfs1: Authorized nfs2: Authorized nfs3: Authorized nfs4: Authorized", "ssh-keygen -f /var/lib/glusterd/nfs/secret.pem -t rsa -N ''", "ssh-copy-id -i /var/lib/glusterd/nfs/secret.pem.pub root@<node-ip/hostname>", "scp -i /var/lib/glusterd/nfs/secret.pem /var/lib/glusterd/nfs/secret.* root@<node-ip/hostname>:/var/lib/glusterd/nfs/", "Use a non-privileged port for RQuota Rquota_Port = 875;", "Name of the HA cluster created. must be unique within the subnet HA_NAME=\"ganesha-ha-360\" # # You may use short names or long names; you may not use IP addresses. Once you select one, stay with it as it will be mildly unpleasant to clean up if you switch later on. Ensure that all names - short and/or long - are in DNS or /etc/hosts on all machines in the cluster. # The subset of nodes of the Gluster Trusted Pool that form the ganesha HA cluster. Hostname is specified. HA_CLUSTER_NODES=\"server1.lab.redhat.com,server2.lab.redhat.com,...\" # Virtual IPs for each of the nodes specified above. VIP_server1=\"10.0.2.1\" VIP_server2=\"10.0.2.2\" #VIP_server1_lab_redhat_com=\"10.0.2.1\" #VIP_server2_lab_redhat_com=\"10.0.2.2\" . .", "gluster nfs-ganesha enable", "gluster nfs-ganesha enable Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted pool. Do you still want to continue? (y/n) y This will take a few minutes to complete. Please wait .. nfs-ganesha : success", "systemctl restart rpc-statd", "gluster nfs-ganesha disable", "gluster nfs-ganesha disable Disabling NFS-Ganesha will tear down entire ganesha cluster across the trusted pool. Do you still want to continue? (y/n) y This will take a few minutes to complete. Please wait .. 
nfs-ganesha : success", "/usr/libexec/ganesha/ganesha-ha.sh --status /var/run/gluster/shared_storage/nfs-ganesha", "/usr/libexec/ganesha/ganesha-ha.sh --status /var/run/gluster/shared_storage/nfs-ganesha", "Online: [ server1 server2 server3 server4 ] server1-cluster_ip-1 server1 server2-cluster_ip-1 server2 server3-cluster_ip-1 server3 server4-cluster_ip-1 server4 Cluster HA Status: HEALTHY", "systemctl restart nfs-ganesha", "gluster volume set <volname> ganesha.enable on", "gluster vol set testvol ganesha.enable on volume set: success", "gluster volume set <volname> ganesha.enable off", "gluster vol set testvol ganesha.enable off volume set: success", "systemctl status nfs-ganesha", "systemctl status nfs-ganesha nfs-ganesha.service - NFS-Ganesha file server Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled) Active: active (running) since Tue 2015-07-21 05:08:22 IST; 19h ago Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki Main PID: 15440 (ganesha.nfsd) CGroup: /system.slice/nfs-ganesha.service โ””โ”€15440 /usr/bin/ganesha.nfsd -L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT Jul 21 05:08:22 server1 systemd[1]: Started NFS-Ganesha file server.]", "showmount -e localhost", "showmount -e localhost Export list for localhost: /volname (everyone)", "sysctl -w sunrpc.tcp_slot_table_entries=128 echo 128 > /proc/sys/sunrpc/tcp_slot_table_entries echo 128 > /proc/sys/sunrpc/tcp_max_slot_table_entries", "echo \"options sunrpc tcp_slot_table_entries=128\" >> /etc/modprobe.d/sunrpc.conf echo \"options sunrpc tcp_max_slot_table_entries=128\" >> /etc/modprobe.d/sunrpc.conf", "mount -t nfs -o vers=3 virtual_ip :/ volname /mountpoint", "mount -t nfs -o vers=3 10.70.0.0:/testvol /mnt", "mount -t nfs -o vers=4 virtual_ip :/ volname /mountpoint", "mount -t nfs -o vers=4 10.70.0.0:/testvol /mnt", "mount -t nfs -o vers= 4.0 or 4.1 virtual_ip :/ volname /mountpoint", "mount -t nfs -o vers=4.1 10.70.0.0:/testvol /mnt", "dbus-send --type=method_call --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ClientMgr org.ganesha.nfsd.clientmgr.ShowClients", "dbus-send --type=method_call --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.DisplayExport uint16: Export_Id", "uint16 export_id string fullpath string pseudopath string tag array[ struct { string client_type int32 CIDR_version byte CIDR_address byte CIDR_mask int32 CIDR_proto uint32 anonymous_uid uint32 anonymous_gid uint32 expire_time_attr uint32 options uint32 set } struct { . . . } . . . 
]", "#dbus-send --type=method_call --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.DisplayExport uint16:2 method return time=1559209192.642525 sender=:1.5491 -> destination=:1.5510 serial=370 reply_serial=2 uint16 2 string \"/mani1\" string \"/mani1\" string \"\" array [ struct { string \"10.70.46.107/32\" int32 0 byte 0 byte 255 int32 1 uint32 1440 uint32 72 uint32 0 uint32 52441250 uint32 7340536 } struct { string \"10.70.47.152/32\" int32 0 byte 0 byte 255 int32 1 uint32 1440 uint32 72 uint32 0 uint32 51392994 uint32 7340536 } ]", "/usr/libexec/ganesha/ganesha-ha.sh --add <HA_CONF_DIR> <HOSTNAME> <NODE-VIP>", "/usr/libexec/ganesha/ganesha-ha.sh --add /var/run/gluster/shared_storage/nfs-ganesha server16 10.00.00.01", "/usr/libexec/ganesha/ganesha-ha.sh --delete <HA_CONF_DIR> <HOSTNAME>", "/usr/libexec/ganesha/ganesha-ha.sh --delete /var/run/gluster/shared_storage/nfs-ganesha server16", "/usr/libexec/ganesha/ganesha-ha.sh --refresh-config <HA_CONF_DIR> <volname>", "cat export.conf EXPORT{ Export_Id = 1 ; # Export ID unique to each export Path = \"volume_path\"; # Path of the volume to be exported. Eg: \"/test_volume\" FSAL { name = GLUSTER; hostname = \"10.xx.xx.xx\"; # IP of one of the nodes in the trusted pool volume = \"volume_name\"; # Volume name. Eg: \"test_volume\" } Access_type = RW; # Access permissions Squash = No_root_squash; # To enable/disable root squashing Disable_ACL = true; # To enable/disable ACL Pseudo = \"pseudo_path\"; # NFSv4 pseudo path for this export. Eg: \"/test_volume_pseudo\" Protocols = \"3\", \"4\" ; # NFS protocols supported Transports = \"UDP\", \"TCP\" ; # Transport protocols supported SecType = \"sys\"; # Security flavors supported }", "client { clients = 10.00.00.01; # IP of the client. access_type = \"RO\"; # Read-only permissions Protocols = \"3\"; # Allow only NFSv3 protocol. anonymous_uid = 1440; anonymous_gid = 72; }", "Disable_ACL = false;", "Pseudo = \" pseudo_path \"; # NFSv4 pseudo path for this export. Eg: \"/test_volume_pseudo\"", "cat export.ganesha-dir.conf # WARNING : Using Gluster CLI will overwrite manual # changes made to this file. To avoid it, edit the # file and run ganesha-ha.sh --refresh-config. EXPORT{ Export_Id = 3; Path = \"/ganesha/dir\"; FSAL { name = GLUSTER; hostname=\"localhost\"; volume=\"ganesha\"; volpath=\"/dir\"; } Access_type = RW; Disable_ACL = true; Squash=\"No_root_squash\"; Pseudo=\"/ganesha/dir\"; Protocols = \"3\", \"4\"; Transports = \"UDP\",\"TCP\"; SecType = \"sys\"; }", "%include \"/var/run/gluster/shared_storage/nfs-ganesha/exports/export.<share-name>.conf\"", "%include \"/var/run/gluster/shared_storage/nfs-ganesha/exports/export.ganesha.conf\" --> Volume entry %include >/var/run/gluster/shared_storage/nfs-ganesha/exports/export.ganesha-dir.conf\" --> Subdir entry", "/usr/libexec/ganesha/ganesha-ha.sh --refresh-config <HA_CONF_DIR> <share-name>", "/usr/libexec/ganesha/ganesha-ha.sh --refresh-config /run/gluster/shared_storage/nfs-ganesha/ ganesha-dir", "cat export.ganesha.conf # WARNING : Using Gluster CLI will overwrite manual # changes made to this file. To avoid it, edit the # file and run ganesha-ha.sh --refresh-config. 
EXPORT{ Export_Id = 4; Path = \"/ganesha/dir1\"; FSAL { name = GLUSTER; hostname=\"localhost\"; volume=\"ganesha\"; volpath=\"/dir1\"; } Access_type = RW; Disable_ACL = true; Squash=\"No_root_squash\"; Pseudo=\"/ganesha/dir1\"; Protocols = \"3\", \"4\"; Transports = \"UDP\",\"TCP\"; SecType = \"sys\"; }", "/usr/libexec/ganesha/ganesha-ha.sh --refresh-config <HA_CONF_DIR> <share-name>", "/usr/libexec/ganesha/ganesha-ha.sh --refresh-config /run/gluster/shared_storage/nfs-ganesha/ ganesha", "Squash = all_squash ; # To enable/disable root squashing", "rm -rf /var/run/gluster/shared_storage/nfs-ganesha/exports/file-name.conf", "%include \"/var/run/gluster/shared_storage/nfs-ganesha/export/export.conf", "dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport uint16:export_id", "yum install krb5-workstation", "yum install ntpdate", "dnf install chrony", "echo <valid_time_server> >> /etc/ntp/step-tickers systemctl enable ntpdate systemctl start ntpdate", "vi /etc/chrony.conf # systemctl enable chrony # systemctl start chrony", "[logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] dns_lookup_realm = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true rdns = false default_realm = EXAMPLE.COM default_ccache_name = KEYRING:persistent:%{uid} [realms] EXAMPLE.COM = { kdc = kerberos.example.com admin_server = kerberos.example.com } [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM", "Domain = example.com", "yum install nfs-utils yum install rpcbind", "kadmin kadmin: addprinc -randkey nfs/<host_name>@EXAMPLE.COM kadmin: ktadd nfs/<host_name>@EXAMPLE.COM", "kadmin Authenticating as principal root/[email protected] with password. Password for root/[email protected]: kadmin: addprinc -randkey nfs/<host_name>@EXAMPLE.COM WARNING: no policy specified for nfs/<host_name>@EXAMPLE.COM; defaulting to no policy Principal \"nfs/<host_name>@EXAMPLE.COM\" created. kadmin: ktadd nfs/<host_name>@EXAMPLE.COM Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno2, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab. Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab. Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab FILE:/etc/krb5.keytab. Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab. Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type camellia256-cts-cmac added to keytab FILE:/etc/krb5.keytab. Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type camellia128-cts-cmac added to keytab FILE:/etc/krb5.keytab. Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type des-hmac-sha1 added to keytab FILE:/etc/krb5.keytab. Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type des-cbc-md5 added to keytab FILE:/etc/krb5.keytab.", "NFS_KRB5 { PrincipalName = nfs ; KeytabPath = /etc/krb5.keytab ; Active_krb5 = true ; }", "useradd guest", "yum install nfs-utils yum install rpcbind", "kadmin kadmin: addprinc -randkey host/<host_name>@EXAMPLE.COM kadmin: ktadd host/<host_name>@EXAMPLE.COM", "kadmin Authenticating as principal root/[email protected] with password. 
Password for root/[email protected]: kadmin: addprinc -randkey host/<host_name>@EXAMPLE.COM WARNING: no policy specified for host/<host_name>@EXAMPLE.COM; defaulting to no policy Principal \"host/<host_name>@EXAMPLE.COM\" created. kadmin: ktadd host/<host_name>@EXAMPLE.COM Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab. Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab. Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab FILE:/etc/krb5.keytab. Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab. Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type camellia256-cts-cmac added to keytab FILE:/etc/krb5.keytab. Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type camellia128-cts-cmac added to keytab FILE:/etc/krb5.keytab. Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type des-hmac-sha1 added to keytab FILE:/etc/krb5.keytab. Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type des-cbc-md5 added to keytab FILE:/etc/krb5.keytab.", "systemctl status nfs-client.target systemctl start nfs-client.target systemctl enable nfs-client.target", "useradd guest", "mount -t nfs -o sec=krb5 <host_name>:/testvolume /mnt", "mkdir <directory name>", "su - guest", "su guest ls ls: cannot open directory .: Permission denied", "kinit Password for [email protected]: ls <directory created>", "pcs resource op remove nfs-mon monitor pcs resource op add nfs-mon monitor interval=<interval_period_value>", "NFSv4 { Grace_Period=<grace_period_value_in_sec>; }", "systemctl restart nfs-ganesha", "CACHEINODE { Entries_HWMark = 125000; Chunks_HWMark = 1000; Dir_Chunk = 128; # Range: 1 to UINT32_MAX , 0 to disable }", "systemctl restart nfs-ganesha", "service nfs-ganesha status service pcsd status service pacemaker status pcs status", "/var/log/ganesha/ganesha.log /var/log/ganesha/ganesha-gfapi.log /var/log/messages /var/log/pcsd.log", "mount -t glusterfs < local_node's_hostname >:gluster_shared_storage /var/run/gluster/shared_storage", "netstat -anlp | grep 875", "Use a non-privileged port for RQuota Rquota_Port = port_number ;", "semanage port -a -t mountd_port_t -p tcp port_number semanage port -a -t mountd_port_t -p udp port_number", "systemctl restart nfs-ganesha", "kill pid ;", "ps aux | grep pid ;", "systemctl restart nfs-ganesha", "gluster volume status <volname>", "service nfs-ganesha status showmount -e localhost", "/var/log/ganesha/ganesha.log /var/log/ganesha/ganesha-gfapi.log /var/log/messages", "service messagebus status", "gluster volume start <volname>", "/usr/libexec/ganesha/dbus-send.sh /var/run/gluster/shared_storage on <volname>", "ganesha-ha.sh --add <HA_CONF_DIR> <NODE-HOSTNAME> <NODE-VIP>", "pcs cluster auth <hostname>", "pcs host auth <hostname>", "/usr/libexec/ganesha/ganesha-ha.sh --teardown /var/run/gluster/shared_storage/nfs-ganesha /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha systemctl stop nfs-ganesha" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/nfs
Chapter 2. Managing LVM physical volumes
Chapter 2. Managing LVM physical volumes A physical volume (PV) is a physical storage device or a partition on a storage device that LVM uses. During the initialization process, an LVM disk label and metadata are written to the device, which allows LVM to track and manage it as part of the logical volume management scheme. Note You cannot increase the size of the metadata after the initialization. If you need larger metadata, you must set the appropriate size during the initialization process. When the initialization process is complete, you can allocate the PV to a volume group (VG). You can divide this VG into logical volumes (LVs), which are the virtual block devices that operating systems and applications can use for storage. To ensure optimal performance, partition the whole disk as a single PV for LVM use. 2.1. Creating an LVM physical volume You can use the pvcreate command to initialize a physical volume for LVM usage. Prerequisites Administrative access. The lvm2 package is installed. Procedure Identify the storage device you want to use as a physical volume. To list all available storage devices, use: Create an LVM physical volume: Replace /dev/sdb with the name of the device you want to initialize as a physical volume. Verification steps Display the created physical volume: Additional resources pvcreate(8) , pvdisplay(8) , pvs(8) , pvscan(8) , and lvm(8) man pages on your system 2.2. Removing LVM physical volumes You can use the pvremove command to remove a physical volume for LVM usage. Prerequisites Administrative access. Procedure List the physical volumes to identify the device you want to remove: Remove the physical volume: Replace /dev/sdb1 with the name of the device associated with the physical volume. Note If your physical volume is part of a volume group, you need to remove it from the volume group first. If your volume group contains more than one physical volume, use the vgreduce command: Replace VolumeGroupName with the name of the volume group. Replace /dev/sdb1 with the name of the device. If your volume group contains only one physical volume, use the vgremove command: Replace VolumeGroupName with the name of the volume group. Verification Verify the physical volume is removed: Additional resources pvremove(8) man page on your system 2.3. Creating logical volumes in the web console Logical volumes act as physical drives. You can use the RHEL 8 web console to create LVM logical volumes in a volume group. Prerequisites You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console . The cockpit-storaged package is installed on your system. The volume group is created. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the volume group in which you want to create logical volumes. On the Logical volume group page, scroll to the LVM2 logical volumes section and click Create new logical volume . In the Name field, enter a name for the new logical volume. Do not include spaces in the name. In the Purpose drop-down menu, select Block device for filesystems . This configuration enables you to create a logical volume with the maximum volume size, which is equal to the sum of the capacities of all drives included in the volume group. Define the size of the logical volume. Consider: How much space the system using this logical volume will need. How many logical volumes you want to create. You do not have to use the whole space.
If necessary, you can grow the logical volume later. Click Create . The logical volume is created. To use the logical volume, you must format and mount the volume. Verification On the Logical volume page, scroll to the LVM2 logical volumes section and verify whether the new logical volume is listed. 2.4. Formatting logical volumes in the web console Logical volumes act as physical drives. To use them, you must format them with a file system. Warning Formatting logical volumes erases all data on the volume. The file system you select determines the configuration parameters you can use for logical volumes. For example, the XFS file system does not support shrinking volumes. Prerequisites You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console . The cockpit-storaged package is installed on your system. The logical volume is created. You have root access privileges to the system. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the volume group in which the logical volume is created. On the Logical volume group page, scroll to the LVM2 logical volumes section. Click the menu button, ... , next to the logical volume you want to format. From the drop-down menu, select Format . In the Name field, enter a name for the file system. In the Mount Point field, add the mount path. In the Type drop-down menu, select a file system: XFS file system supports large logical volumes, switching physical drives online without an outage, and growing an existing file system. Leave this file system selected if you do not have a different strong preference. XFS does not support reducing the size of a volume formatted with an XFS file system. ext4 file system supports: Logical volumes Switching physical drives online without an outage Growing a file system Shrinking a file system Select the Overwrite existing data with zeros checkbox if you want the RHEL web console to rewrite the whole disk with zeros. This option is slower because the program has to go through the whole disk, but it is more secure. Use this option if the disk includes any data and you need to overwrite it. If you do not select the Overwrite existing data with zeros checkbox, the RHEL web console rewrites only the disk header. This increases the speed of formatting. From the Encryption drop-down menu, select the type of encryption if you want to enable it on the logical volume. You can select a version with either the LUKS1 (Linux Unified Key Setup) or LUKS2 encryption, which allows you to encrypt the volume with a passphrase. In the At boot drop-down menu, select when you want the logical volume to mount after the system boots. Select the required Mount options . Format the logical volume: If you want to format the volume and immediately mount it, click Format and mount . If you want to format the volume without mounting it, click Format only . Formatting can take several minutes depending on the volume size and which formatting options are selected. Verification On the Logical volume group page, scroll to the LVM2 logical volumes section and click the logical volume to check the details and additional options. If you selected the Format only option, click the menu button at the end of the line of the logical volume, and select Mount to use the logical volume. 2.5. Resizing logical volumes in the web console You can extend or reduce logical volumes in the RHEL 8 web console.
The example procedure demonstrates how to grow and shrink the size of a logical volume without taking the volume offline. Warning You cannot reduce volumes that contain a GFS2 or XFS file system. Prerequisites You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console . The cockpit-storaged package is installed on your system. An existing logical volume containing a file system that supports resizing logical volumes. Procedure Log in to the RHEL web console. Click Storage . In the Storage table, click the volume group in which the logical volume is created. On the Logical volume group page, scroll to the LVM2 logical volumes section and click the menu button, ... , next to the logical volume you want to resize. From the menu, select Grow or Shrink to resize the volume: Growing the Volume: Select Grow to increase the size of the volume. In the Grow logical volume dialog box, adjust the size of the logical volume. Click Grow . LVM grows the logical volume without causing a system outage. Shrinking the Volume: Select Shrink to reduce the size of the volume. In the Shrink logical volume dialog box, adjust the size of the logical volume. Click Shrink . LVM shrinks the logical volume without causing a system outage. 2.6. Additional resources pvcreate(8) man page. Creating a partition table on a disk with parted . parted(8) man page on your system
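The web console steps above correspond to a short sequence of LVM commands. The following is a minimal command-line sketch of the same workflow, assuming a spare disk at /dev/sdb and the volume group and logical volume names myvg and mylv; these names are illustrative assumptions, not values taken from this guide:

lsblk                          # identify the target disk
pvcreate /dev/sdb              # initialize the disk as a physical volume
vgcreate myvg /dev/sdb         # create a volume group on the new physical volume
lvcreate -n mylv -L 5G myvg    # create a 5 GB logical volume in the group
mkfs.xfs /dev/myvg/mylv        # format the logical volume with XFS
mount /dev/myvg/mylv /mnt      # mount it so applications can use it

To reverse the process, unmount the logical volume and run lvremove, vgremove, and finally pvremove /dev/sdb, mirroring the removal procedure described in section 2.2.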
[ "lsblk", "pvcreate /dev/sdb", "pvs PV VG Fmt Attr PSize PFree /dev/sdb lvm2 a-- 28.87g 13.87g", "pvs PV VG Fmt Attr PSize PFree /dev/sdb1 lvm2 --- 28.87g 28.87g", "pvremove /dev/sdb1", "vgreduce VolumeGroupName /dev/sdb1", "vgremove VolumeGroupName", "pvs" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/managing-lvm-physical-volumes_configuring-and-managing-logical-volumes
Chapter 2. Requirements
Chapter 2. Requirements 2.1. Red Hat Virtualization Manager Requirements 2.1.1. Hardware Requirements The minimum and recommended hardware requirements outlined here are based on a typical small to medium-sized installation. The exact requirements vary between deployments based on sizing and load. Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see https://access.redhat.com/solutions/725243 . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see https://access.redhat.com/ecosystem/#certifiedHardware . Table 2.1. Red Hat Virtualization Manager Hardware Requirements Resource Minimum Recommended CPU A dual core CPU. A quad core CPU or multiple dual core CPUs. Memory 4 GB of available system RAM if Data Warehouse is not installed and if memory is not being consumed by existing processes. 16 GB of system RAM. Hard Disk 25 GB of locally accessible, writable disk space. 50 GB of locally accessible, writable disk space. You can use the RHV Manager History Database Size Calculator to calculate the appropriate disk space for the Manager history database size. Network Interface 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 2.1.2. Browser Requirements The following browser versions and operating systems can be used to access the Administration Portal and the VM Portal. Browser support is divided into tiers: Tier 1: Browser and operating system combinations that are fully tested and fully supported. Red Hat Engineering is committed to fixing issues with browsers on this tier. Tier 2: Browser and operating system combinations that are partially tested, and are likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with browsers on this tier. Tier 3: Browser and operating system combinations that are not tested, but may work. Minimal support is provided for this tier. Red Hat Engineering will attempt to fix only minor issues with browsers on this tier. Table 2.2. Browser Requirements Support Tier Operating System Family Browser Tier 1 Red Hat Enterprise Linux Mozilla Firefox Extended Support Release (ESR) version Any Most recent version of Google Chrome, Mozilla Firefox, or Microsoft Edge Tier 2 Tier 3 Any Earlier versions of Google Chrome or Mozilla Firefox Any Other browsers 2.1.3. Client Requirements Virtual machine consoles can only be accessed using supported Remote Viewer ( virt-viewer ) clients on Red Hat Enterprise Linux and Windows. To install virt-viewer , see Installing Supporting Components on Client Machines in the Virtual Machine Management Guide . Installing virt-viewer requires Administrator privileges. Virtual machine consoles are accessed through the SPICE, VNC, or RDP (Windows only) protocols. The QXL graphical driver can be installed in the guest operating system for improved/enhanced SPICE functionalities. SPICE currently supports a maximum resolution of 2560x1600 pixels. Supported QXL drivers are available on Red Hat Enterprise Linux, Windows XP, and Windows 7. SPICE support is divided into tiers: Tier 1: Operating systems on which Remote Viewer has been fully tested and is supported. Tier 2: Operating systems on which Remote Viewer is partially tested and is likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with remote-viewer on this tier. Table 2.3. 
Client Operating System SPICE Support Support Tier Operating System Tier 1 Red Hat Enterprise Linux 7.2 and later Microsoft Windows 7 Tier 2 Microsoft Windows 8 Microsoft Windows 10 2.1.4. Operating System Requirements The Red Hat Virtualization Manager must be installed on a base installation of Red Hat Enterprise Linux 7 that has been updated to the latest minor release. Do not install any additional packages after the base installation, as they may cause dependency issues when attempting to install the packages required by the Manager. Do not enable additional repositories other than those required for the Manager installation. 2.2. Host Requirements Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see https://access.redhat.com/solutions/725243 . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see https://access.redhat.com/ecosystem/#certifiedHardware . For more information on the requirements and limitations that apply to guests see https://access.redhat.com/articles/rhel-limits and https://access.redhat.com/articles/906543 . 2.2.1. CPU Requirements All CPUs must have support for the Intel(R) 64 or AMD64 CPU extensions, and the AMD-VTM or Intel VT(R) hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required. The following CPU models are supported: AMD Opteron G4 Opteron G5 EPYC Intel Nehalem Westmere Sandybridge Haswell Haswell-noTSX Broadwell Broadwell-noTSX Skylake (client) Skylake (server) IBM POWER8 2.2.1.1. Checking if a Processor Supports the Required Flags You must enable virtualization in the BIOS. Power off and reboot the host after this change to ensure that the change is applied. At the Red Hat Enterprise Linux or Red Hat Virtualization Host boot screen, press any key and select the Boot or Boot with serial console entry from the list. Press Tab to edit the kernel parameters for the selected option. Ensure there is a space after the last kernel parameter listed, and append the parameter rescue . Press Enter to boot into rescue mode. At the prompt, determine that your processor has the required extensions and that they are enabled by running this command: If any output is shown, the processor is hardware virtualization capable. If no output is shown, your processor may still support hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer. 2.2.2. Memory Requirements The minimum required RAM is 2 GB. The maximum supported RAM per VM in Red Hat Virtualization Host is 4 TB. However, the amount of RAM required varies depending on guest operating system requirements, guest application requirements, and guest memory activity and usage. KVM can also overcommit physical RAM for virtualized guests, allowing you to provision guests with RAM requirements greater than what is physically present, on the assumption that the guests are not all working concurrently at peak load. KVM does this by only allocating RAM for guests as required and shifting underutilized guests into swap. 2.2.3. Storage Requirements Hosts require storage to store configuration, logs, kernel dumps, and for use as swap space. Storage can be local or network-based. 
Red Hat Virtualization Host (RHVH) can boot with one, some, or all of its default allocations in network storage. Booting from network storage can result in a freeze if there is a network disconnect. Adding a drop-in multipath configuration file can help address losses in network connectivity. If RHVH boots from SAN storage and loses connectivity, the files become read-only until network connectivity is restored. Using network storage might result in a performance downgrade. The minimum storage requirements of RHVH are documented in this section. The storage requirements for Red Hat Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration but are expected to be greater than those of RHVH. The minimum storage requirements for host installation are listed below. However, Red Hat recommends using the default allocations, which use more storage space. / (root) - 6 GB /home - 1 GB /tmp - 1 GB /boot - 1 GB /var - 15 GB /var/crash - 10 GB /var/log - 8 GB /var/log/audit - 2 GB swap - 1 GB (for the recommended swap size, see https://access.redhat.com/solutions/15244 ) Anaconda reserves 20% of the thin pool size within the volume group for future metadata expansion. This is to prevent an out-of-the-box configuration from running out of space under normal usage conditions. Overprovisioning of thin pools during installation is also not supported. Minimum Total - 55 GB If you are also installing the RHV-M Appliance for self-hosted engine installation, /var/tmp must be at least 5 GB. If you plan to use memory overcommitment, add enough swap space to provide virtual memory for all of the virtual machines. See Memory Optimization . 2.2.4. PCI Device Requirements Hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. Red Hat recommends that each host have two network interfaces, with one dedicated to supporting network-intensive activities, such as virtual machine migration. The performance of such operations is limited by the bandwidth available. For information about how to use PCI Express and conventional PCI devices with Intel Q35-based virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine . 2.2.5. Device Assignment Requirements If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met: CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default. Firmware must support IOMMU. CPU root ports used must support ACS or ACS-equivalent capability. PCIe devices must support ACS or ACS-equivalent capability. Red Hat recommends that all PCIe switches and bridges between the PCIe device and the root port support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group, and can only be assigned to the same virtual machine. For GPU support, Red Hat Enterprise Linux 7 supports PCI device assignment of PCIe-based NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card.
Check vendor specification and datasheets to confirm that your hardware meets these requirements. The lspci -v command can be used to print information for PCI devices already installed on a system. 2.2.6. vGPU Requirements A host must meet the following requirements in order for virtual machines on that host to use a vGPU: vGPU-compatible GPU GPU-enabled host kernel Installed GPU with correct drivers Predefined mdev_type set to correspond with one of the mdev types supported by the device vGPU-capable drivers installed on each host in the cluster vGPU-supported virtual machine operating system with vGPU drivers installed 2.3. Networking Requirements 2.3.1. General Requirements Red Hat Virtualization requires IPv6 to remain enabled on the computer or virtual machine where you are running the Manager (also called "the Manager machine"). Do not disable IPv6 on the Manager machine, even if your systems do not use it. 2.3.2. Firewall Requirements for DNS, NTP, IPMI Fencing, and Metrics Store The firewall requirements for all of the following topics are special cases that require individual consideration. DNS and NTP Red Hat Virtualization does not create a DNS or NTP server, so the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, define exceptions for requests that are sent to DNS and NTP servers. Important The Red Hat Virtualization Manager and all hosts (Red Hat Virtualization Host and Red Hat Enterprise Linux host) must have a fully qualified domain name and full, perfectly-aligned forward and reverse name resolution. Running a DNS service as a virtual machine in the Red Hat Virtualization environment is not supported. All DNS services the Red Hat Virtualization environment uses must be hosted outside of the environment. Red Hat strongly recommends using DNS instead of the /etc/hosts file for name resolution. Using a hosts file typically requires more work and has a greater chance for errors. IPMI and Other Fencing Mechanisms (optional) For IPMI (Intelligent Platform Management Interface) and other fencing mechanisms, the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound IPMI traffic to ports on any destination address. If you disable outgoing traffic, make exceptions for requests being sent to your IPMI or fencing servers. Each Red Hat Virtualization Host and Red Hat Enterprise Linux host in the cluster must be able to connect to the fencing devices of all other hosts in the cluster. If the cluster hosts are experiencing an error (network error, storage error... ) and cannot function as hosts, they must be able to connect to other hosts in the data center. The specific port number depends on the type of the fence agent you are using and how it is configured. The firewall requirement tables in the following sections do not represent this option. Metrics Store, Kibana, and ElasticSearch For Metrics Store, Kibana, and ElasticSearch, see Network Configuration for Metrics Store virtual machines . 2.3.3. Red Hat Virtualization Manager Firewall Requirements The Red Hat Virtualization Manager requires that a number of ports be opened to allow network traffic through the system's firewall. The engine-setup script can configure the firewall automatically, but this overwrites any pre-existing firewall configuration if you are using iptables . 
If you want to keep the existing firewall configuration, you must manually insert the firewall rules required by the Manager. The engine-setup command saves a list of the iptables rules required in the /etc/ovirt-engine/iptables.example file. If you are using firewalld , engine-setup does not overwrite the existing configuration. The firewall configuration documented here assumes a default configuration. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.4. Red Hat Virtualization Manager Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default M1 - ICMP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Optional. May help in diagnosis. No M2 22 TCP System(s) used for maintenance of the Manager including backend configuration, and software upgrades. Red Hat Virtualization Manager Secure Shell (SSH) access. Optional. Yes M3 2222 TCP Clients accessing virtual machine serial consoles. Red Hat Virtualization Manager Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes M4 80, 443 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts REST API clients Red Hat Virtualization Manager Provides HTTP (port 80, not encrypted) and HTTPS (port 443, encrypted) access to the Manager. HTTP redirects connections to HTTPS. Yes M5 6100 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Manager Provides websocket proxy access for a web-based console client, noVNC , when the websocket proxy is running on the Manager. If the websocket proxy is running on a different host, however, this port is not used. No M6 7410 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager If Kdump is enabled on the hosts, open this port for the fence_kdump listener on the Manager. See fence_kdump Advanced Configuration . fence_kdump doesn't provide a way to encrypt the connection. However, you can manually configure this port to block access from hosts that are not eligible. No M7 54323 TCP Administration Portal clients Red Hat Virtualization Manager (ImageIO Proxy server) Required for communication with the ImageIO Proxy ( ovirt-imageio-proxy ). Yes M8 6442 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Open Virtual Network (OVN) southbound database Connect to Open Virtual Network (OVN) database Yes M9 9696 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Networking API Yes, with configuration generated by engine-setup. M10 35357 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Identity API Yes, with configuration generated by engine-setup. M11 53 TCP, UDP Red Hat Virtualization Manager DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. Open by default. No M12 123 UDP Red Hat Virtualization Manager NTP Server NTP requests from ports above 1023 to port 123, and responses. Open by default. No Note A port for the OVN northbound database (6641) is not listed because, in the default configuration, the only client for the OVN northbound database (6641) is ovirt-provider-ovn . Because they both run on the same host, their communication is not visible to the network. 
By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Manager to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.4. Host Firewall Requirements Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts (RHVH) require a number of ports to be opened to allow network traffic through the system's firewall. The firewall rules are automatically configured by default when adding a new host to the Manager, overwriting any pre-existing firewall configuration. To disable automatic firewall configuration when adding a new host, clear the Automatically configure host firewall check box under Advanced Parameters . To customize the host firewall rules, see https://access.redhat.com/solutions/2772331 . Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.5. Virtualization Host Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default H1 22 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access. Optional. Yes H2 2223 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes H3 161 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Simple network management protocol (SNMP). Only required if you want Simple Network Management Protocol traps sent from the host to one or more external SNMP managers. Optional. No H4 111 TCP NFS storage server Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NFS connections. Optional. No H5 5900 - 6923 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines. Yes (optional) H6 5989 TCP, UDP Common Information Model Object Manager (CIMOM) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the host. Only required if you want to use a CIMOM to monitor the virtual machines in your virtualization environment. Optional. No H7 9090 TCP Red Hat Virtualization Manager Client machines Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required to access the Cockpit web interface, if installed. Yes H8 16514 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration using libvirt . Yes H9 49152 - 49215 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manual migration of virtual machines. Yes. Depending on agent for fencing, migration is done through libvirt. H10 54321 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts VDSM communications with the Manager and other virtualization hosts. 
Yes H11 54322 TCP Red Hat Virtualization Manager (ImageIO Proxy server) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required for communication with the ImageIO daemon ( ovirt-imageio-daemon ). Yes H12 6081 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required, when Open Virtual Network (OVN) is used as a network provider, to allow OVN to create tunnels between hosts. No H13 53 TCP, UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. This port is required and open by default. No Note By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.5. Database Server Firewall Requirements Red Hat Virtualization supports the use of a remote database server for the Manager database ( engine ) and the Data Warehouse database ( ovirt-engine-history ). If you plan to use a remote database server, it must allow connections from the Manager and the Data Warehouse service (which can be separate from the Manager). Similarly, if you plan to access a local or remote Data Warehouse database from an external system, such as Red Hat CloudForms, the database must allow connections from that system. Important Accessing the Manager database from external systems is not supported. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.6. Database Server Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default D1 5432 TCP, UDP Red Hat Virtualization Manager Data Warehouse service Manager ( engine ) database server Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. No, but can be enabled . D2 5432 TCP, UDP External systems Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. Disabled by default. No, but can be enabled .
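If you use a remote Manager or Data Warehouse database, the PostgreSQL port listed in Table 2.6 must be reachable from the Manager machine. The following is a minimal firewalld sketch for the remote database server; the public zone is an assumption, and your environment may use a different zone or different firewall tooling:

firewall-cmd --zone=public --add-port=5432/tcp --add-port=5432/udp              # open the PostgreSQL port in the running configuration
firewall-cmd --zone=public --add-port=5432/tcp --add-port=5432/udp --permanent  # persist the rule across reboots
firewall-cmd --zone=public --list-ports                                         # verify that 5432 is now listed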
[ "grep -E 'svm|vmx' /proc/cpuinfo | grep nx" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/rhv_requirements
Chapter 67. Kamelet Main
Chapter 67. Kamelet Main Since Camel 3.11 A main class that is opinionated to bootstrap and run Camel standalone with Kamelets (or plain YAML routes) for development and demo purposes. 67.1. Initial configuration The KameletMain is pre-configured with the following properties: camel.component.kamelet.location = classpath:/kamelets,github:apache:camel-kamelets/kamelets camel.component.rest.consumerComponentName = platform-http camel.component.rest.producerComponentName = vertx-http You can override these settings by updating the configuration in application.properties . 67.2. Automatic dependencies downloading The Kamelet Main can automatically download Kamelet YAML files from a remote location over http/https, and from github as well. The official Kamelets from the Apache Camel Kamelet Catalog are stored on github and can be used out of the box as-is. For example, a Camel route can be coded in YAML which uses the Earthquake Kamelet from the catalog, as shown below: - route: from: "kamelet:earthquake-source" steps: - unmarshal: json: {} - log: "Earthquake with magnitude ${body[properties][mag]} at ${body[properties][place]}" In the above example, the earthquake kamelet will be downloaded from github, along with its required dependencies. For more information, see Kamelet Main example
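As a minimal sketch of the override mentioned above, an application.properties file can redefine where KameletMain looks for Kamelet YAML files. The file:/opt/my-kamelets path below is only an illustrative assumption, not a location required by Camel:

# application.properties
# Search a local directory first, then fall back to the locations KameletMain is pre-configured with
camel.component.kamelet.location = file:/opt/my-kamelets,classpath:/kamelets,github:apache:camel-kamelets/kamelets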
[ "camel.component.kamelet.location = classpath:/kamelets,github:apache:camel-kamelets/kamelets camel.component.rest.consumerComponentName = platform-http camel.component.rest.producerComponentName = vertx-http", "- route: from: \"kamelet:earthquake-source\" steps: - unmarshal: json: {} - log: \"Earthquake with magnitude USD{body[properties][mag]} at USD{body[properties][place]}\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kamelet-main-component-starter
Storage Administration Guide
Storage Administration Guide Red Hat Enterprise Linux 7 Deploying and configuring single-node storage in RHEL 7 Edited by Marek Suchanek Red Hat Customer Content Services [email protected] Edited by Apurva Bhide Red Hat Customer Content Services [email protected] Milan Navratil Red Hat Customer Content Services Jacquelynn East Red Hat Customer Content Services Don Domingo Red Hat Customer Content Services
[ "Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/VolGroup00-LogVol00 11675568 6272120 4810348 57% / /dev/sda1 100691 9281 86211 10% /boot none 322856 0 322856 0% /dev/shm", "Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 12G 6.0G 4.6G 57% / /dev/sda1 99M 9.1M 85M 10% /boot none 316M 0 316M 0% /dev/shm", "fstrim -v /mnt/ non_discard fstrim: /mnt/ non_discard : the discard operation is not supported", "mkfs.xfs block_device", "meta-data=/dev/device isize=256 agcount=4, agsize=3277258 blks = sectsz=512 attr=2 data = bsize=4096 blocks=13109032, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 log =internal log bsize=4096 blocks=6400, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0", "mkfs.xfs -d su=64k,sw=4 /dev/ block_device", "mount /dev/ device /mount/point", "mount -o nobarrier /dev/device /mount/point", "xfs_quota -x", "User quota on /home (/dev/blockdevice) Blocks User ID Used Soft Hard Warn/Grace ---------- --------------------------------- root 0 0 0 00 [------] testuser 103.4G 0 0 00 [------]", "xfs_quota -x -c 'limit isoft=500 ihard=700 john' /home/", "xfs_quota -x -c 'limit -g bsoft=1000m bhard=1200m accounting' /target/path", "echo 11:/var/log >> /etc/projects", "echo logfiles:11 >> /etc/projid", "xfs_quota -x -c 'project -s logfiles' /var", "xfs_quota -x -c 'limit -p bhard=lg logfiles' /var", "xfs_growfs /mount/point -D size", "xfs_repair /dev/device", "xfs_freeze mount-point", "xfs_freeze -f /mount/point", "xfs_freeze -u /mount/point", "xfsdump -l level [ -L label ] -f backup-destination path-to-xfs-filesystem", "xfsdump -l 0 -f /backup-files/boot.xfsdump /boot # xfsdump -l 0 -f /backup-files/data.xfsdump /data", "xfsdump -l 0 -L \"backup_boot\" -f /dev/ st0 /boot # xfsdump -l 0 -L \"backup_data\" -f /dev/ st0 /data", "xfsrestore [ -r ] [ -S session-id ] [ -L session-label ] [ -i ] -f backup-location restoration-path", "xfsrestore -f /backup-files/boot.xfsdump /mnt/boot/ # xfsrestore -f /backup-files/data.xfsdump /mnt/data/", "xfsrestore -f /dev/st0 -L \"backup_boot\" /mnt/boot/ # xfsrestore -f /dev/st0 -S \"45e9af35-efd2-4244-87bc-4762e476cbab\" /mnt/data/", "xfsrestore: preparing drive xfsrestore: examining media file 0 xfsrestore: inventory session uuid (8590224e-3c93-469c-a311-fc8f23029b2a) does not match the media header's session uuid (7eda9f86-f1e9-4dfd-b1d4-c50467912408) xfsrestore: examining media file 1 xfsrestore: inventory session uuid (8590224e-3c93-469c-a311-fc8f23029b2a) does not match the media header's session uuid (7eda9f86-f1e9-4dfd-b1d4-c50467912408) [...]", "echo value > /sys/fs/xfs/ device /error/metadata/ condition /max_retries", "echo value > /sys/fs/xfs/ device /error/metadata/default/max_retries", "echo value > /sys/fs/xfs/ device /error/metadata/ condition /retry_timeout_seconds", "echo value > /sys/fs/xfs/ device /error/metadata/default/retry_timeout_seconds", "echo value > /sys/fs/xfs/ device /error/fail_at_unmount", "mkfs.ext3 block_device", "e2label block_device volume_label", "mkfs.ext3 -U UUID device", "tune2fs -j block_device", "umount /dev/mapper/VolGroup00-LogVol02", "tune2fs -O ^has_journal /dev/mapper/VolGroup00-LogVol02", "e2fsck -y /dev/mapper/VolGroup00-LogVol02", "mount -t ext2 /dev/mapper/VolGroup00-LogVol02 /mount/point", "mkfs.ext4 block_device", "~]# mkfs.ext4 /dev/sdb1 mke2fs 1.41.12 (17-May-2010) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=0 blocks, Stripe width=0 
blocks 245280 inodes, 979456 blocks 48972 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=1006632960 30 block groups 32768 blocks per group, 32768 fragments per group 8176 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736 Writing inode tables: done Creating journal (16384 blocks): done Writing superblocks and filesystem accounting information: done", "mkfs.ext4 -E stride=16,stripe-width=64 /dev/ block_device", "mkfs.ext4 -U UUID device", "mount /dev/ device /mount/point", "mount -o acl,user_xattr /dev/ device /mount/point", "mount -o data_err=abort / dev / device / mount / point", "mount -o nobarrier /dev/ device /mount/point", "resize2fs /mount/device size", "resize2fs /dev/ device size", "e2fsck /dev/ device", "dump -0uf backup-file /dev/ device", "dump -0uf /backup-files/sda1.dump /dev/sda1 # dump -0uf /backup-files/sda2.dump /dev/sda2 # dump -0uf /backup-files/sda3.dump /dev/sda3", "dump -0u -f - /dev/ device | ssh root@ remoteserver.example.com dd of= backup-file", "mkfs.ext4 /dev/ device", "e2label /dev/ device label", "mkdir /mnt/ device # mount -t ext4 /dev/ device /mnt/ device", "cd /mnt/ device # restore -rf device-backup-file", "ssh remote-address \"cd /mnt/ device && cat backup-file | /usr/sbin/restore -r -f -\"", "ssh remote-machine-1 \"cd /mnt/ device && RSH=/usr/bin/ssh /usr/sbin/restore -rf remote-machine-2 : backup-file \"", "systemctl reboot", "mkfs.ext4 /dev/sda1 # mkfs.ext4 /dev/sda2 # mkfs.ext4 /dev/sda3", "e2label /dev/sda1 Boot1 # e2label /dev/sda2 Root # e2label /dev/sda3 Data", "mkdir /mnt/sda1 # mount -t ext4 /dev/sda1 /mnt/sda1 # mkdir /mnt/sda2 # mount -t ext4 /dev/sda2 /mnt/sda2 # mkdir /mnt/sda3 # mount -t ext4 /dev/sda3 /mnt/sda3", "mkdir /backup-files # mount -t ext4 /dev/sda6 /backup-files", "cd /mnt/sda1 # restore -rf /backup-files/sda1.dump # cd /mnt/sda2 # restore -rf /backup-files/sda2.dump # cd /mnt/sda3 # restore -rf /backup-files/sda3.dump", "systemctl reboot", "mkfs.btrfs / dev / device", "mount / dev / device / mount-point", "btrfs filesystem resize amount / mount-point", "btrfs filesystem resize +200M /btrfssingle Resize '/btrfssingle' of '+200M'", "btrfs filesystem show /mount-point", "btrfs filesystem show /btrfstest Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39 Total devices 4 FS bytes used 192.00KiB devid 1 size 1.00GiB used 224.75MiB path /dev/vdc devid 2 size 524.00MiB used 204.75MiB path /dev/vdd devid 3 size 1.00GiB used 8.00MiB path /dev/vde devid 4 size 1.00GiB used 8.00MiB path /dev/vdf Btrfs v3.16.2", "btrfs filesystem resize devid : amount /mount-point", "btrfs filesystem resize 2:+200M /btrfstest Resize '/btrfstest/' of '2:+200M'", "btrfs filesystem resize amount / mount-point", "btrfs filesystem resize -200M /btrfssingle Resize '/btrfssingle' of '-200M'", "btrfs filesystem show /mount-point", "btrfs filesystem show /btrfstest Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39 Total devices 4 FS bytes used 192.00KiB devid 1 size 1.00GiB used 224.75MiB path /dev/vdc devid 2 size 524.00MiB used 204.75MiB path /dev/vdd devid 3 size 1.00GiB used 8.00MiB path /dev/vde devid 4 size 1.00GiB used 8.00MiB path /dev/vdf Btrfs v3.16.2", "btrfs filesystem resize devid : amount /mount-point", "btrfs filesystem resize 2:-200M /btrfstest Resize '/btrfstest' of '2:-200M'", "btrfs filesystem resize amount / mount-point", "btrfs filesystem resize 700M /btrfssingle Resize '/btrfssingle' of '700M'", "btrfs filesystem show / mount-point", "btrfs 
filesystem show /btrfstest Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39 Total devices 4 FS bytes used 192.00KiB devid 1 size 1.00GiB used 224.75MiB path /dev/vdc devid 2 size 724.00MiB used 204.75MiB path /dev/vdd devid 3 size 1.00GiB used 8.00MiB path /dev/vde devid 4 size 1.00GiB used 8.00MiB path /dev/vdf Btrfs v3.16.2", "btrfs filesystem resize devid : amount /mount-point", "btrfs filesystem resize 2:300M /btrfstest Resize '/btrfstest' of '2:300M'", "mkfs.btrfs /dev/ device1 /dev/ device2 /dev/ device3 /dev/ device4", "mkfs.btrfs -m raid0 /dev/ device1 /dev/ device2", "mkfs.btrfs -m raid10 -d raid10 /dev/ device1 /dev/ device2 /dev/ device3 /dev/ device4", "mkfs.btrfs -m single / dev / device", "mkfs.btrfs -d single /dev/ device1 /dev/ device2 /dev/ device3", "btrfs device add /dev/ device1 / mount-point", "btrfs device scan", "btrfs device scan /dev/ device", "mkfs.btrfs /dev/ device1 mount /dev/ device1", "btrfs device add /dev/ device2 / mount-point", "btrfs filesystem balance / mount-point", "mount /dev/sdb1 /mnt btrfs device add /dev/sdc1 /mnt btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt", "mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde mount /dev/sdb /mnt", "btrfs device delete /dev/sdc /mnt", "mkfs.btrfs -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde ssd is destroyed or removed, use -o degraded to force the mount to ignore missing devices mount -o degraded /dev/sdb /mnt 'missing' is a special device name btrfs device delete missing /mnt", "/dev/sdb /mnt btrfs device=/dev/sdb,device=/dev/sdc,device=/dev/sdd,device=/dev/sde 0", "mount -t nfs -o options server : /remote/export /local/directory", "server:/usr/local/pub /pub nfs defaults 0 0", "server : /remote/export /local/directory nfs options 0 0", "systemctl daemon-reload", "/- /tmp/auto_dcthon /- /tmp/auto_test3_direct /- /tmp/auto_test4_direct", "mount-point map-name options", "/home /etc/auto.misc", "mount-point [ options ] location", "payroll -fstype=nfs personnel:/dev/hda3 sales -fstype=ext3 :/dev/hda4", "systemctl start autofs", "systemctl restart autofs", "systemctl status autofs", "automount: files nis", "+auto.master", "/home auto.home", "beth fileserver.example.com:/export/home/beth joe fileserver.example.com:/export/home/joe * fileserver.example.com:/export/home/&", "/home \\u00ad/etc/auto.home +auto.master", "* labserver.example.com:/export/home/&", "mydir someserver:/export/mydir +auto.home", "beth joe mydir", "DEFAULT_MAP_OBJECT_CLASS=\"automountMap\" DEFAULT_ENTRY_OBJECT_CLASS=\"automount\" DEFAULT_MAP_ATTRIBUTE=\"automountMapName\" DEFAULT_ENTRY_ATTRIBUTE=\"automountKey\" DEFAULT_VALUE_ATTRIBUTE=\"automountInformation\"", "extended LDIF # LDAPv3 base <> with scope subtree filter: (&(objectclass=automountMap)(automountMapName=auto.master)) requesting: ALL # auto.master, example.com dn: automountMapName=auto.master,dc=example,dc=com objectClass: top objectClass: automountMap automountMapName: auto.master extended LDIF # LDAPv3 base <automountMapName=auto.master,dc=example,dc=com> with scope subtree filter: (objectclass=automount) requesting: ALL # /home, auto.master, example.com dn: automountMapName=auto.master,dc=example,dc=com objectClass: automount cn: /home automountKey: /home automountInformation: auto.home extended LDIF # LDAPv3 base <> with scope subtree filter: (&(objectclass=automountMap)(automountMapName=auto.home)) requesting: ALL # auto.home, example.com dn: automountMapName=auto.home,dc=example,dc=com objectClass: automountMap automountMapName: auto.home extended LDIF # LDAPv3 base 
<automountMapName=auto.home,dc=example,dc=com> with scope subtree filter: (objectclass=automount) requesting: ALL # foo, auto.home, example.com dn: automountKey=foo,automountMapName=auto.home,dc=example,dc=com objectClass: automount automountKey: foo automountInformation: filer.example.com:/export/foo /, auto.home, example.com dn: automountKey=/,automountMapName=auto.home,dc=example,dc=com objectClass: automount automountKey: / automountInformation: filer.example.com:/export/&", "systemctl status rpcbind", "systemctl start nfs-lock # systemctl enable nfs-lock", "systemctl start nfs", "systemctl enable nfs", "systemctl stop nfs", "systemctl restart nfs", "systemctl restart nfs-config", "systemctl try-restart nfs", "systemctl reload nfs", "export host ( options )", "export host1 ( options1 ) host2 ( options2 ) host3 ( options3 )", "/exported/directory bob.example.com", "export host (anonuid= uid ,anongid= gid )", "/home bob.example.com(rw) /home bob.example.com (rw)", "systemctl restart nfs-config", "systemctl restart nfs-server", "showmount -e myserver Export list for mysever /exports/ foo /exports/ bar", "mount myserver :/ /mnt/ # cd /mnt/ exports # ls exports foo bar", "systemctl enable rpc-rquotad", "systemctl start rpc-rquotad", "systemctl restart rpc-rquotad", "systemctl restart rpc-rquotad", "systemctl restart nfs", "Requested NFS version or transport protocol is not supported.", "RPCNFSDARGS=\"-N 2 -N 3 -U\"", "RPCMOUNTDOPTS=\"-N 2 -N 3\"", "systemctl mask --now rpc-statd.service rpcbind.service rpcbind.socket", "systemctl restart nfs", "netstat -ltu Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 0.0.0.0:nfs 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:ssh 0.0.0.0:* LISTEN tcp 0 0 localhost:smtp 0.0.0.0:* LISTEN tcp6 0 0 [::]:nfs [::]:* LISTEN tcp6 0 0 [::]:12432 [::]:* LISTEN tcp6 0 0 [::]:12434 [::]:* LISTEN tcp6 0 0 localhost:7092 [::]:* LISTEN tcp6 0 0 [::]:ssh [::]:* LISTEN udp 0 0 localhost:323 0.0.0.0:* udp 0 0 0.0.0.0:bootpc 0.0.0.0:* udp6 0 0 localhost:323 [::]:*", "netstat -ltu Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 0.0.0.0:nfs 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:36069 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:52364 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:sunrpc 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:mountd 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:ssh 0.0.0.0:* LISTEN tcp 0 0 localhost:smtp 0.0.0.0:* LISTEN tcp6 0 0 [::]:34941 [::]:* LISTEN tcp6 0 0 [::]:nfs [::]:* LISTEN tcp6 0 0 [::]:sunrpc [::]:* LISTEN tcp6 0 0 [::]:mountd [::]:* LISTEN tcp6 0 0 [::]:12432 [::]:* LISTEN tcp6 0 0 [::]:56881 [::]:* LISTEN tcp6 0 0 [::]:12434 [::]:* LISTEN tcp6 0 0 localhost:7092 [::]:* LISTEN tcp6 0 0 [::]:ssh [::]:* LISTEN udp 0 0 localhost:323 0.0.0.0:* udp 0 0 0.0.0.0:37190 0.0.0.0:* udp 0 0 0.0.0.0:876 0.0.0.0:* udp 0 0 localhost:877 0.0.0.0:* udp 0 0 0.0.0.0:mountd 0.0.0.0:* udp 0 0 0.0.0.0:38588 0.0.0.0:* udp 0 0 0.0.0.0:nfs 0.0.0.0:* udp 0 0 0.0.0.0:bootpc 0.0.0.0:* udp 0 0 0.0.0.0:sunrpc 0.0.0.0:* udp6 0 0 localhost:323 [::]:* udp6 0 0 [::]:57683 [::]:* udp6 0 0 [::]:876 [::]:* udp6 0 0 [::]:mountd [::]:* udp6 0 0 [::]:40874 [::]:* udp6 0 0 [::]:nfs [::]:* udp6 0 0 [::]:sunrpc [::]:*", "/export *(sec=sys:krb5:krb5i:krb5p)", "mount -o sec=krb5 server:/export /mnt", "rpcinfo -p", "program vers proto port service 100021 1 udp 32774 nlockmgr 100021 3 udp 32774 nlockmgr 100021 4 udp 32774 nlockmgr 100021 1 tcp 34437 nlockmgr 100021 3 tcp 34437 nlockmgr 100021 4 tcp 34437 nlockmgr 100011 1 udp 819 rquotad 
100011 2 udp 819 rquotad 100011 1 tcp 822 rquotad 100011 2 tcp 822 rquotad 100003 2 udp 2049 nfs 100003 3 udp 2049 nfs 100003 2 tcp 2049 nfs 100003 3 tcp 2049 nfs 100005 1 udp 836 mountd 100005 1 tcp 839 mountd 100005 2 udp 836 mountd 100005 2 tcp 839 mountd 100005 3 udp 836 mountd 100005 3 tcp 839 mountd", "mount -t nfs -o v4.1 server:/remote-export /local-directory", "lsmod | grep nfs_layout_nfsv41_files", "mount -t nfs -o v4.2 server:/remote-export /local-directory", "lsmod | grep nfs_layout_flexfiles", "yum install sg3_utils", "sg_persist --in --report-capabilities --verbose path-to-scsi-device", "inquiry cdb: 12 00 00 00 24 00 Persistent Reservation In cmd: 5e 02 00 00 00 00 00 20 00 00 LIO-ORG block11 4.0 Peripheral device type: disk Report capabilities response: Compatible Reservation Handling(CRH): 1 Specify Initiator Ports Capable(SIP_C): 1 All Target Ports Capable(ATP_C): 1 Persist Through Power Loss Capable(PTPL_C): 1 Type Mask Valid(TMV): 1 Allow Commands: 1 Persist Through Power Loss Active(PTPL_A): 1 Support indicated in Type mask: Write Exclusive, all registrants: 1 Exclusive Access, registrants only: 1 Write Exclusive, registrants only: 1 Exclusive Access: 1 Write Exclusive: 1 Exclusive Access, all registrants: 1", "[nfsd] vers4.1=y", "/exported/directory allowed.example.com(pnfs)", "mount -t nfs -o nfsvers=4.1 host : /remote/export /local/directory", "yum install sg3_utils", "sg_persist --read-reservation path-to-scsi-device", "sg_persist --read-reservation /dev/sda LIO-ORG block_1 4.0 Peripheral device type: disk PR generation=0x8, Reservation follows: Key=0x100000000000000 scope: LU_SCOPE, type: Exclusive Access, registrants only", "sg_persist --out --release --param-rk= reservation-key --prout-type=6 path-to-scsi-device", "sg_persist --out --release --param-rk=0x100000000000000 --prout-type=6 /dev/sda LIO-ORG block_1 4.0 Peripheral device type: disk", "watch --differences \"nfsstat --server | egrep --after-context=1 read\\|write\\|layout\" Every 2.0s: nfsstat --server | egrep --after-context=1 read\\|write\\|layout putrootfh read readdir readlink remove rename 2 0% 0 0% 1 0% 0 0% 0 0% 0 0% -- setcltidconf verify write rellockowner bc_ctl bind_conn 0 0% 0 0% 0 0% 0 0% 0 0% 0 0% -- getdevlist layoutcommit layoutget layoutreturn secinfononam sequence 0 0% 29 1% 49 1% 5 0% 0 0% 2435 86%", "cat /proc/self/mountstats | awk /scsi_lun_0/,/^USD/ | egrep device\\|READ\\|WRITE\\|LAYOUT device 192.168.122.73:/exports/scsi_lun_0 mounted on /mnt/rhel7/scsi_lun_0 with fstype nfs4 statvers=1.1 nfsv4: bm0=0xfdffbfff,bm1=0x40f9be3e,bm2=0x803,acl=0x3,sessions,pnfs=LAYOUT_SCSI READ: 0 0 0 0 0 0 0 0 WRITE: 0 0 0 0 0 0 0 0 READLINK: 0 0 0 0 0 0 0 0 READDIR: 0 0 0 0 0 0 0 0 LAYOUTGET: 49 49 0 11172 9604 2 19448 19454 LAYOUTCOMMIT: 28 28 0 7776 4808 0 24719 24722 LAYOUTRETURN: 0 0 0 0 0 0 0 0 LAYOUTSTATS: 0 0 0 0 0 0 0 0", "yum install cifs-utils", "mount -t cifs -o vers=1.0 ,username= user_name //server_name/share_name /mnt/", "mount // server / share on /mnt type cifs (..., unix ,...)", "mount -t cifs -o username= user_name // server_name / share_name /mnt/ Password for user_name @// server_name / share_name : ********", "mount -t cifs -o username= DOMAIN \\ Administrator ,seal,vers=3.0 // server / example /mnt/ Password for user_name @// server_name / share_name : ********", "// server_name / share_name /mnt cifs credentials= /root/smb.cred 0 0", "mount /mnt/", "username= user_name password= password domain= domain_name", "chown user_name ~/smb.cred # chmod 600 ~/smb.cred", "// server_name 
/ share_name /mnt cifs multiuser,sec=ntlmssp, credentials= /root/smb.cred 0 0", "mount /mnt/", "mount // server_name / share_name on /mnt type cifs (sec=ntlmssp, multiuser ,...)", "cifscreds add -u SMB_user_name server_name Password: ********", "dir /path/to/cache", "dir /var/cache/fscache", "semanage fcontext -a -e /var/cache/fscache /path/to/cache # restorecon -Rv /path/to/cache", "semanage permissive -a cachefilesd_t # semanage permissive -a cachefiles_kernel_t", "tune2fs -o user_xattr /dev/ device", "mount /dev/ device /path/to/cache -o user_xattr", "systemctl start cachefilesd", "systemctl enable cachefilesd", "mount nfs-share :/ /mount/point -o fsc", "cat /proc/fs/fscache/stats", "dmraid -r -E / device /", "parted /dev/sda", "partx --update --nr partition-number disk", "(parted) print", "Model: ATA ST3160812AS (scsi) Disk /dev/sda: 160GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 107MB 107MB primary ext3 boot 2 107MB 105GB 105GB primary ext3 3 105GB 107GB 2147MB primary linux-swap 4 107GB 160GB 52.9GB extended root 5 107GB 133GB 26.2GB logical ext3 6 133GB 133GB 107MB logical ext3 7 133GB 160GB 26.6GB logical lvm", "(parted) select /dev/sda", "parted /dev/sda", "(parted) print", "(parted) mkpart part-type name fs-type start end", "(parted) mkpart primary 1024 2048", "(parted) print", "(parted) quit", "cat /proc/partitions", "mkfs.ext4 /dev/ sda6", "e2label /dev/sda6 \"Work\"", "systemctl daemon-reload", "mount /work", "parted device", "(parted) print", "(parted) rm 3", "(parted) print", "(parted) quit", "cat /proc/partitions", "systemctl daemon-reload", "fdisk /dev/sdc Command (m for help): t Selected partition 1 Partition type (type L to list all types): 83 Changed type of partition 'Linux LVM' to 'Linux'.", "parted /dev/sdc 'set 1 lvm off'", "umount /dev/vda", "fdisk /dev/vda Welcome to fdisk (util-linux 2.23.2). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. 
Command (m for help):", "Command (m for help): p Disk /dev/vda: 16.1 GB, 16106127360 bytes, 31457280 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk label type: dos Disk identifier: 0x0006d09a Device Boot Start End Blocks Id System /dev/vda1 * 2048 1026047 512000 83 Linux /dev/vda2 1026048 31457279 15215616 8e Linux LVM", "Command (m for help): d Partition number (1,2, default 2): 2 Partition 2 is deleted", "Command (m for help): n Partition type: p primary (1 primary, 0 extended, 3 free) e extended Select (default p): *Enter* Using default response p Partition number (2-4, default 2): *Enter* First sector (1026048-31457279, default 1026048): *Enter* Using default value 1026048 Last sector, +sectors or +size{K,M,G} (1026048-31457279, default 31457279): +500M Partition 2 of type Linux and of size 500 MiB is set", "Command (m for help): t Partition number (1,2, default 2): *Enter* Hex code (type L to list all codes): 8e Changed type of partition 'Linux' to 'Linux LVM'", "e2fsck /dev/vda e2fsck 1.41.12 (17-May-2010) Pass 1:Checking inodes, blocks, and sizes Pass 2:Checking directory structure Pass 3:Checking directory connectivity Pass 4:Checking reference counts Pass 5:Checking group summary information ext4-1:11/131072 files (0.0% non-contiguous),27050/524128 blocks", "mount /dev/vda", "snapper -c config_name create-config -f \"lvm( fs_type )\" /mount-point", "snapper -c lvm_config create-config -f \"lvm(ext4)\" /lvm_mount", "snapper -c config_name create-config -f btrfs /mount-point", "snapper -c btrfs_config create-config -f btrfs /btrfs_mount", "snapper -c config_name create -t pre", "snapper -c SnapperExample create -t pre -p 1", "snapper -c config_name list", "snapper -c lvm_config list Type | # | Pre # | Date | User | Cleanup | Description | Userdata -------+---+-------+-------------------+------+----------+-------------+--------- single | 0 | | | root | | current | pre | 1 | | Mon 06<...> | root | | |", "snapper -c config_file create -t post --pre-num pre_snapshot_number", "snapper -c lvm_config create -t post --pre-num 1 -p 2", "snapper -c lvm_config list Type | # | Pre # | Date | User | Cleanup | Description | Userdata -------+---+-------+-------------------+------+----------+-------------+--------- single | 0 | | | root | | current | pre | 1 | | Mon 06<...> | root | | | post | 2 | 1 | Mon 06<...> | root | | |", "snapper -c lvm_config create --command \" command_to_be_tracked \"", "snapper -c lvm_config create --command \"echo Hello > /lvm_mount/hello_file\"", "snapper -c config_file status first_snapshot_number .. second_snapshot_number", "snapper -c lvm_config status 3..4 +..... /lvm_mount/hello_file", "snapper -c config_name create -t single", "snapper -c lvm_config create -t single", "snapper -c config_file status first_snapshot_number .. second_snapshot_number", "snapper -c lvm_config status 1..2 tp.... /lvm_mount/dir1 -..... /lvm_mount/dir1/file_a c.ug.. /lvm_mount/file2 +..... /lvm_mount/file3 ....x. /lvm_mount/file4 cp..xa /lvm_mount/file5", "+..... /lvm_mount/file3 |||||| 123456", "snapper -c config_name diff first_snapshot_number .. second_snapshot_number", "snapper -c lvm_config diff 1..2 --- /lvm_mount/.snapshots/13/snapshot/file4 19<...> +++ /lvm_mount/.snapshots/14/snapshot/file4 20<...> @@ -0,0 +1 @@ +words", "snapper -c config_name xadiff first_snapshot_number .. 
second_snapshot_number", "snapper -c lvm_config xadiff 1..2", "snapper -c config_name undochange 1 .. 2", "snapper -c config_name delete snapshot_number", "swapoff -v /dev/VolGroup00/LogVol01", "lvresize /dev/VolGroup00/LogVol01 -L +2G", "mkswap /dev/VolGroup00/LogVol01", "swapon -v /dev/VolGroup00/LogVol01", "cat /proc/swaps free -h", "lvcreate VolGroup00 -n LogVol02 -L 2G", "mkswap /dev/VolGroup00/LogVol02", "/dev/VolGroup00/LogVol02 swap swap defaults 0 0", "systemctl daemon-reload", "swapon -v /dev/VolGroup00/LogVol02", "cat /proc/swaps free -h", "dd if=/dev/zero of=/swapfile bs=1024 count= 65536", "mkswap /swapfile", "chmod 0600 /swapfile", "/swapfile swap swap defaults 0 0", "systemctl daemon-reload", "swapon /swapfile", "cat /proc/swaps free -h", "swapoff -v /dev/VolGroup00/LogVol01", "lvreduce /dev/VolGroup00/LogVol01 -L -512M", "mkswap /dev/VolGroup00/LogVol01", "swapon -v /dev/VolGroup00/LogVol01", "cat /proc/swaps free -h", "swapoff -v /dev/VolGroup00/LogVol02", "lvremove /dev/VolGroup00/LogVol02", "/dev/VolGroup00/LogVol02 swap swap defaults 0 0", "systemctl daemon-reload", "vi /etc/default/grub", "grub2-mkconfig -o /boot/grub2/grub.cfg", "grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg", "cat /proc/swaps free -h", "swapoff -v /swapfile", "systemctl daemon-reload", "rm /swapfile", "yum install system-storage-manager", "ssm list ---------------------------------------------------------- Device Free Used Total Pool Mount point ---------------------------------------------------------- /dev/sda 2.00 GB PARTITIONED /dev/sda1 47.83 MB /test /dev/vda 15.00 GB PARTITIONED /dev/vda1 500.00 MB /boot /dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel ---------------------------------------------------------- ------------------------------------------------ Pool Type Devices Free Used Total ------------------------------------------------ rhel lvm 1 0.00 KB 14.51 GB 14.51 GB ------------------------------------------------ --------------------------------------------------------------------------------- Volume Pool Volume size FS FS size Free Type Mount point --------------------------------------------------------------------------------- /dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB linear / /dev/rhel/swap rhel 1000.00 MB linear /dev/sda1 47.83 MB xfs 44.50 MB 44.41 MB part /test /dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB part /boot ---------------------------------------------------------------------------------", "ssm create --fs xfs -s 1G /dev/vdb /dev/vdc Physical volume \"/dev/vdb\" successfully created Physical volume \"/dev/vdc\" successfully created Volume group \"lvm_pool\" successfully created Logical volume \"lvol001\" created", "ssm create --fs xfs -p new_pool -n XFS_Volume /dev/vdd Volume group \"new_pool\" successfully created Logical volume \"XFS_Volume\" created", "ssm check /dev/lvm_pool/lvol001 Checking xfs file system on '/dev/mapper/lvm_pool-lvol001'. 
Phase 1 - find and verify superblock Phase 2 - using internal log - scan filesystem freespace and inode maps - found root inode chunk Phase 3 - for each AG - scan (but don't clear) agi unlinked lists - process known inodes and perform inode discovery - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - process newly discovered inodes Phase 4 - check for duplicate blocks - setting up duplicate extent list - check for inodes claiming duplicate blocks - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 No modify flag set, skipping phase 5 Phase 6 - check inode connectivity - traversing filesystem - traversal finished - moving disconnected inodes to lost+found Phase 7 - verify link counts No modify flag set, skipping filesystem flush and exiting.", "ssm list ----------------------------------------------------------------- Device Free Used Total Pool Mount point ----------------------------------------------------------------- /dev/vda 15.00 GB PARTITIONED /dev/vda1 500.00 MB /boot /dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel /dev/vdb 120.00 MB 900.00 MB 1.00 GB lvm_pool /dev/vdc 1.00 GB ----------------------------------------------------------------- --------------------------------------------------------- Pool Type Devices Free Used Total --------------------------------------------------------- lvm_pool lvm 1 120.00 MB 900.00 MB 1020.00 MB rhel lvm 1 0.00 KB 14.51 GB 14.51 GB --------------------------------------------------------- -------------------------------------------------------------------------------------------- Volume Pool Volume size FS FS size Free Type Mount point -------------------------------------------------------------------------------------------- /dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB linear / /dev/rhel/swap rhel 1000.00 MB linear /dev/lvm_pool/lvol001 lvm_pool 900.00 MB xfs 896.67 MB 896.54 MB linear /dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB part /boot --------------------------------------------------------------------------------------------", "~]# ssm resize -s +500M /dev/lvm_pool/lvol001 /dev/vdc Physical volume \"/dev/vdc\" successfully created Volume group \"lvm_pool\" successfully extended Phase 1 - find and verify superblock Phase 2 - using internal log - scan filesystem freespace and inode maps - found root inode chunk Phase 3 - for each AG - scan (but don't clear) agi unlinked lists - process known inodes and perform inode discovery - agno = 0 - agno = 1 - agno = 2 - agno = 3 - process newly discovered inodes Phase 4 - check for duplicate blocks - setting up duplicate extent list - check for inodes claiming duplicate blocks - agno = 0 - agno = 1 - agno = 2 - agno = 3 No modify flag set, skipping phase 5 Phase 6 - check inode connectivity - traversing filesystem - traversal finished - moving disconnected inodes to lost+found Phase 7 - verify link counts No modify flag set, skipping filesystem flush and exiting. 
Extending logical volume lvol001 to 1.37 GiB Logical volume lvol001 successfully resized meta-data=/dev/mapper/lvm_pool-lvol001 isize=256 agcount=4, agsize=57600 blks = sectsz=512 attr=2, projid32bit=1 = crc=0 data = bsize=4096 blocks=230400, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 ftype=0 log =internal bsize=4096 blocks=853, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 data blocks changed from 230400 to 358400", "ssm list ------------------------------------------------------------------ Device Free Used Total Pool Mount point ------------------------------------------------------------------ /dev/vda 15.00 GB PARTITIONED /dev/vda1 500.00 MB /boot /dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel /dev/vdb 0.00 KB 1020.00 MB 1.00 GB lvm_pool /dev/vdc 640.00 MB 380.00 MB 1.00 GB lvm_pool ------------------------------------------------------------------ ------------------------------------------------------ Pool Type Devices Free Used Total ------------------------------------------------------ lvm_pool lvm 2 640.00 MB 1.37 GB 1.99 GB rhel lvm 1 0.00 KB 14.51 GB 14.51 GB ------------------------------------------------------ ---------------------------------------------------------------------------------------------- Volume Pool Volume size FS FS size Free Type Mount point ---------------------------------------------------------------------------------------------- /dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB linear / /dev/rhel/swap rhel 1000.00 MB linear /dev/lvm_pool/lvol001 lvm_pool 1.37 GB xfs 1.36 GB 1.36 GB linear /dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB part /boot ----------------------------------------------------------------------------------------------", "ssm resize -s-50M /dev/lvm_pool/lvol002 Rounding size to boundary between physical extents: 972.00 MiB WARNING: Reducing active logical volume to 972.00 MiB THIS MAY DESTROY YOUR DATA (filesystem etc.) Do you really want to reduce lvol002? 
[y/n]: y Reducing logical volume lvol002 to 972.00 MiB Logical volume lvol002 successfully resized", "ssm snapshot /dev/lvm_pool/lvol001 Logical volume \"snap20150519T130900\" created", "ssm list ---------------------------------------------------------------- Device Free Used Total Pool Mount point ---------------------------------------------------------------- /dev/vda 15.00 GB PARTITIONED /dev/vda1 500.00 MB /boot /dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel /dev/vdb 0.00 KB 1020.00 MB 1.00 GB lvm_pool /dev/vdc 1.00 GB ---------------------------------------------------------------- -------------------------------------------------------- Pool Type Devices Free Used Total -------------------------------------------------------- lvm_pool lvm 1 0.00 KB 1020.00 MB 1020.00 MB rhel lvm 1 0.00 KB 14.51 GB 14.51 GB -------------------------------------------------------- ---------------------------------------------------------------------------------------------- Volume Pool Volume size FS FS size Free Type Mount point ---------------------------------------------------------------------------------------------- /dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB linear / /dev/rhel/swap rhel 1000.00 MB linear /dev/lvm_pool/lvol001 lvm_pool 900.00 MB xfs 896.67 MB 896.54 MB linear /dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB part /boot ---------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------- Snapshot Origin Pool Volume size Size Type ---------------------------------------------------------------------------------- /dev/lvm_pool/snap20150519T130900 lvol001 lvm_pool 120.00 MB 0.00 KB linear ----------------------------------------------------------------------------------", "ssm remove lvm_pool Do you really want to remove volume group \"lvm_pool\" containing 2 logical volumes? [y/n]: y Do you really want to remove active logical volume snap20150519T130900? [y/n]: y Logical volume \"snap20150519T130900\" successfully removed Do you really want to remove active logical volume lvol001? [y/n]: y Logical volume \"lvol001\" successfully removed Volume group \"lvm_pool\" successfully removed", "vim /etc/fstab", "/dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 /dev/VolGroup00/LogVol02 /home ext3 defaults,usrquota,grpquota 1 2 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 . . 
.", "quotacheck -cug / file system", "quotacheck -avug", "edquota username", "quota username", "Disk quotas for user testuser (uid 501): Filesystem blocks soft hard inodes soft hard /dev/VolGroup00/LogVol02 440436 0 0 37418 0 0", "Disk quotas for user testuser (uid 501): Filesystem blocks soft hard inodes soft hard /dev/VolGroup00/LogVol02 440436 500000 550000 37418 0 0", "quota testuser Disk quotas for user username (uid 501): Filesystem blocks quota limit grace files quota limit grace /dev/sdb 1000* 1000 1000 0 0 0", "edquota -g groupname", "quota -g groupname", "edquota -g devel", "Disk quotas for group devel (gid 505): Filesystem blocks soft hard inodes soft hard /dev/VolGroup00/LogVol02 440400 0 0 37418 0 0", "quota -g devel", "edquota -t", "quotaoff -vaug", "quotaon", "quotaon -vaug", "quotaon -vug /home", "*** Report for user quotas on device /dev/mapper/VolGroup00-LogVol02 Block grace time: 7days; Inode grace time: 7days Block limits File limits User used soft hard grace used soft hard grace ---------------------------------------------------------------------- root -- 36 0 0 4 0 0 kristin -- 540 0 0 125 0 0 testuser -- 440400 500000 550000 37418 0 0", "repquota -a", "quotacheck", "crontab -e", "quotaoff -vug / file_system", "quotacheck -vug / file_system", "quotaon -vug / file_system", "dd if=/dev/sda1 of=/dev/sdb1", "parted /dev/sda set 1 prep on USD parted /dev/sda set 1 boot on USD parted /dev/sdb set 1 prep on USD parted /dev/sdb set 1 boot on", "mount", "findmnt", "mount -t type", "findmnt -t type", "mount -t ext4 /dev/sda2 on / type ext4 (rw) /dev/sda1 on /boot type ext4 (rw)", "findmnt -t ext4 TARGET SOURCE FSTYPE OPTIONS / /dev/sda2 ext4 rw,realtime,seclabel,barrier=1,data=ordered /boot /dev/sda1 ext4 rw,realtime,seclabel,barrier=1,data=ordered", "mount [ option ... ] device directory", "findmnt directory ; echo USD?", "mount [ option ... ] directory mount [ option ... 
] device", "blkid device", "blkid /dev/sda3 /dev/sda3: LABEL=\"home\" UUID=\"34795a28-ca6d-4fd8-a347-73671d0c19cb\" TYPE=\"ext3\"", "mount -t type device directory", "~]# mount -t vfat /dev/sdc1 /media/flashdisk", "mount -o options device directory", "mount -o ro,loop Fedora-14-x86_64-Live-Desktop.iso /media/cdrom", "mount --bind old_directory new_directory", "mount --rbind old_directory new_directory", "mount --make-shared mount_point", "mount --make-rshared mount_point", "mount --bind /media /media # mount --make-shared /media", "mount --bind /media /mnt", "mount /dev/cdrom /media/cdrom # ls /media/cdrom EFI GPL isolinux LiveOS # ls /mnt/cdrom EFI GPL isolinux LiveOS", "# mount /dev/sdc1 /mnt/flashdisk # ls /media/flashdisk en-US publican.cfg # ls /mnt/flashdisk en-US publican.cfg", "mount --make-slave mount_point", "mount --make-rslave mount_point", "~]# mount --bind /media /media ~]# mount --make-shared /media", "~]# mount --bind /media /mnt ~]# mount --make-slave /mnt", "~]# mount /dev/cdrom /media/cdrom ~]# ls /media/cdrom EFI GPL isolinux LiveOS ~]# ls /mnt/cdrom EFI GPL isolinux LiveOS", "~]# mount /dev/sdc1 /mnt/flashdisk ~]# ls /media/flashdisk ~]# ls /mnt/flashdisk en-US publican.cfg", "mount --make-private mount_point", "mount --make-rprivate mount_point", "~]# mount --bind /media /media ~]# mount --make-shared /media ~]# mount --bind /media /mnt", "~]# mount --make-private /mnt", "~]# mount /dev/cdrom /media/cdrom ~]# ls /media/cdrom EFI GPL isolinux LiveOS ~]# ls /mnt/cdrom ~]#", "~]# mount /dev/sdc1 /mnt/flashdisk ~]# ls /media/flashdisk ~]# ls /mnt/flashdisk en-US publican.cfg", "mount --make-unbindable mount_point", "mount --make-runbindable mount_point", "mount --bind /media /media # mount --make-unbindable /media", "mount --bind /media /mnt mount: wrong fs type, bad option, bad superblock on /media, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so", "mount --move old_directory new_directory", "mount --move /mnt/userdirs /home", "ls /mnt/userdirs # ls /home jill joe", "Set to 'yes' to mount the file systems as read-only. READONLY=yes [output truncated]", "/dev/mapper/luks-c376919e... / ext4 ro ,x-systemd.device-timeout=0 1 1", "GRUB_CMDLINE_LINUX=\"crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet ro \"", "grub2-mkconfig -o /boot/grub2/grub.cfg", "files /etc/example/file", "mount -o remount,rw /", "mount -o remount,ro /", "dirs /var/cache/man dirs /var/gdm [output truncated] empty /tmp empty /var/cache/foomatic [output truncated] files /etc/adjtime files /etc/ntp.conf [output truncated]", "how the file or directory is copied to tmpfs path to the file or directory", "umount directory USD umount device", "fuser -m directory", "fuser -m /media/cdrom /media/cdrom: 1793 2013 2022 2435 10532c 10672c", "umount /media/cdrom", "volume_key [OPTION]... 
OPERAND", "volume_key --save /path/to/volume -o escrow-packet", "volume_key --restore /path/to/volume escrow-packet", "certutil -d /the/nss/directory -N", "pk12util -d /the/nss/directory -i the-pkcs12-file", "volume_key --save /path/to/volume -c /path/to/cert escrow-packet", "volume_key --reencrypt -d /the/nss/directory escrow-packet-in -o escrow-packet-out", "volume_key --restore /path/to/volume escrow-packet-out", "volume_key --save /path/to/volume -c /path/to/ert --create-random-passphrase passphrase-packet", "volume_key --secrets -d /your/nss/directory passphrase-packet", "hdparm -I /dev/sda | grep TRIM Data Set Management TRIM supported (limit 8 block) Deterministic read data after TRIM", "cat /sys/block/ disk-name /queue/discard_zeroes_data", "options raid456 devices_handle_discard_safely=Y", "cat /sys/block/ disk-name /queue/discard_zeroes_data", "raid456.devices_handle_discard_safely=Y", "grub2-mkconfig -o /boot/grub2/grub.cfg", "grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg", "mount -t ext4 -o discard /dev/sda2 /mnt", "hdparm -W0 / device /", "MegaCli64 -LDGetProp -DskCache -LAll -aALL", "MegaCli64 -LDSetProp -DisDskCache -Lall -aALL", "alignment_offset: 0 physical_block_size: 512 logical_block_size: 512 minimum_io_size: 512 optimal_io_size: 0", "sg_inq -p 0xb0 disk", "add_dracutmodules+=\" nfs\"", "systemctl enable --now tftp", "cp /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot/", "mkdir -p /var/lib/tftpboot/pxelinux.cfg/", "allow booting; allow bootp; class \"pxeclients\" { match if substring(option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server server-ip ; filename \"pxelinux.0\"; }", "rsync -a -e ssh --exclude='/proc/*' --exclude='/sys/*' hostname.com :/ exported-root-directory", "yum install @Base kernel dracut-network nfs-utils --installroot= exported-root-directory --releasever=/", "cp /boot/vmlinuz- kernel-version /var/lib/tftpboot/", "dracut --add nfs initramfs- kernel-version .img kernel-version", "chmod 644 initramfs- kernel-version .img", "default rhel7 label rhel7 kernel vmlinuz- kernel-version append initrd=initramfs- kernel-version .img root=nfs: server-ip : /exported/root/directory rw", "systemctl start target # systemctl enable target", "yum install targetcli", "systemctl start target", "systemctl enable target", "firewall-cmd --permanent --add-port=3260/tcp Success # firewall-cmd --reload Success", "targetcli : /> ls o- /........................................[...] o- backstores.............................[...] | o- block.................[Storage Objects: 0] | o- fileio................[Storage Objects: 0] | o- pscsi.................[Storage Objects: 0] | o- ramdisk...............[Storage Ojbects: 0] o- iscsi...........................[Targets: 0] o- loopback........................[Targets: 0]", "/> /backstores/fileio create file1 /tmp/disk1.img 200M write_back=false Created fileio file1 with size 209715200", "fdisk /dev/ vdb Welcome to fdisk (util-linux 2.23.2). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. Device does not contain a recognized partition table Building a new DOS disklabel with disk identifier 0x39dc48fb. 
Command (m for help): n Partition type: p primary (0 primary, 0 extended, 4 free) e extended Select (default p): *Enter* Using default response p Partition number (1-4, default 1): *Enter* First sector (2048-2097151, default 2048): *Enter* Using default value 2048 Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151): +250M Partition 1 of type Linux and of size 250 MiB is set Command (m for help): w The partition table has been altered! Calling ioctl() to re-read partition table. Syncing disks.", "/> /backstores/block create name=block_backend dev=/dev/ vdb Generating a wwn serial. Created block storage object block_backend using /dev/ vdb .", "/> backstores/pscsi/ create name=pscsi_backend dev=/dev/sr0 Generating a wwn serial. Created pscsi storage object pscsi_backend using /dev/sr0", "/> backstores/ramdisk/ create name=rd_backend size=1GB Generating a wwn serial. Created rd_mcp ramdisk rd_backend with size 1GB.", "/> iscsi/", "/iscsi> create Created target iqn.2003-01.org.linux-iscsi.hostname.x8664:sn.78b473f296ff Created TPG1", "/iscsi > create iqn.2006-04.com.example:444 Created target iqn.2006-04.com.example:444 Created TPG1", "/iscsi > ls o- iscsi.......................................[1 Target] o- iqn.2006-04.com.example:444................[1 TPG] o- tpg1...........................[enabled, auth] o- acls...............................[0 ACL] o- luns...............................[0 LUN] o- portals.........................[0 Portal]", "/iscsi> iqn.2006-04.example:444/tpg1/", "/iscsi/iqn.20...mple:444/tpg1> portals/ create Using default IP port 3260 Binding to INADDR_Any (0.0.0.0) Created network portal 0.0.0.0:3260", "/iscsi/iqn.20...mple:444/tpg1> portals/ create 192.168.122.137 Using default IP port 3260 Created network portal 192.168.122.137:3260", "/iscsi/iqn.20...mple:444/tpg1> ls o- tpg.................................. [enambled, auth] o- acls ......................................[0 ACL] o- luns ......................................[0 LUN] o- portals ................................[1 Portal] o- 192.168.122.137:3260......................[OK]", "/iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/ramdisk/rd_backend Created LUN 0. /iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/block/block_backend Created LUN 1. /iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/fileio/file1 Created LUN 2.", "/iscsi/iqn.20...mple:444/tpg1> ls o- tpg.................................. [enambled, auth] o- acls ......................................[0 ACL] o- luns .....................................[3 LUNs] | o- lun0.........................[ramdisk/ramdisk1] | o- lun1.................[block/block1 (/dev/vdb1)] | o- lun2...................[fileio/file1 (/foo.img)] o- portals ................................[1 Portal] o- 192.168.122.137:3260......................[OK]", "/> set global auto_add_mapped_luns=false Parameter auto_add_mapped_luns is now 'false'.", "/> iscsi/iqn.2015-06.com.redhat:target/tpg1/acls/iqn.2015-06.com.redhat:initiator/ create mapped_lun=1 tpg_lun_or_backstore=/backstores/block/block2 write_protect=1 Created LUN 1. Created Mapped LUN 1. /> ls o- / ...................................................... [...] o- backstores ........................................... [...] <snip> o- iscsi ......................................... [Targets: 1] | o- iqn.2015-06.com.redhat:target .................. [TPGs: 1] | o- tpg1 ............................ [no-gen-acls, no-auth] | o- acls ....................................... 
[ACLs: 2] | | o- iqn.2015-06.com.redhat:initiator .. [Mapped LUNs: 2] | | | o- mapped_lun0 .............. [lun0 block/disk1 (rw)] | | | o- mapped_lun1 .............. [lun1 block/disk2 (ro)] | o- luns ....................................... [LUNs: 2] | | o- lun0 ...................... [block/disk1 (/dev/vdb)] | | o- lun1 ...................... [block/disk2 (/dev/vdc)] <snip>", "/iscsi/iqn.20...mple:444/tpg1> acls/", "/iscsi/iqn.20...444/tpg1/acls> create iqn.2006-04.com.example.foo:888 Created Node ACL for iqn.2006-04.com.example.foo:888 Created mapped LUN 2. Created mapped LUN 1. Created mapped LUN 0.", "/iscsi/iqn.20...scsi:444/tpg1> set attribute generate_node_acls=1", "/iscsi/iqn.20...444/tpg1/acls> ls o- acls .................................................[1 ACL] o- iqn.2006-04.com.example.foo:888 ....[3 Mapped LUNs, auth] o- mapped_lun0 .............[lun0 ramdisk/ramdisk1 (rw)] o- mapped_lun1 .................[lun1 block/block1 (rw)] o- mapped_lun2 .................[lun2 fileio/file1 (rw)]", "/> tcm_fc/ create 00:11:22:33:44:55:66:77", "/> tcm_fc/ 00:11:22:33:44:55:66:77", "/> luns/ create /backstores/fileio/ example2", "/> acls/ create 00:99:88:77:66:55:44:33", "/> /backstores/ backstore-type / backstore-name", "/> /iscsi/ iqn-name /tpg/ acls / delete iqn-name", "/> /iscsi delete iqn-name", "yum install iscsi-initiator-utils -y", "cat /etc/iscsi/initiatorname.iscsi InitiatorName=iqn.2006-04.com.example.node1 # vi /etc/iscsi/initiatorname.iscsi", "iscsiadm -m discovery -t st -p target-ip-address 10.64.24.179:3260,1 iqn.2006-04.com.example:3260", "iscsiadm -m node -T iqn.2006-04.com.example:3260 -l Logging in to [iface: default, target: iqn.2006-04.com.example:3260, portal: 10.64.24.179,3260] (multiple) Login to [iface: default, target: iqn.2006-04.com.example:3260, portal: 10.64.24.179,3260] successful.", "grep \"Attached SCSI\" /var/log/messages # mkfs.ext4 /dev/ disk_name", "mkdir /mount/point # mount /dev/ disk_name /mount/point", "vim /etc/fstab /dev/ disk_name /mount/point ext4 _netdev 0 0", "iscsiadm -m node -T iqn.2006-04.com.example:3260 -u", "/iscsi/iqn.20...mple:444/tpg1> set attribute authentication=1 Parameter authentication is now '1'.", "/iscsi/iqn.20...mple:444/tpg1> set auth userid= redhat Parameter userid is now 'redhat'. /iscsi/iqn.20...mple:444/tpg1> set auth password= redhat_passwd Parameter password is now 'redhat_passwd'.", "vi /etc/iscsi/iscsid.conf node.session.auth.authmethod = CHAP", "node.session.auth.username = redhat node.session.auth.password = redhat_passwd", "systemctl restart iscsid.service", "options qla2xxx qlini_mode=disabled", "cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth X", "systemctl start lldpad", "dcbtool sc eth X dcb on", "dcbtool sc eth X app:fcoe e:1", "ip link set dev eth X up", "systemctl start fcoe", "fcoeadm -i", "systemctl enable lldpad", "systemctl enable fcoe", "mount_fcoe_disks_from_fstab() { local timeout=20 local done=1 local fcoe_disks=(USD(egrep 'by-path\\/fc-.*_netdev' /etc/fstab | cut -d ' ' -f1)) test -z USDfcoe_disks && return 0 echo -n \"Waiting for fcoe disks . \" while [ USDtimeout -gt 0 ]; do for disk in USD{fcoe_disks[*]}; do if ! test -b USDdisk; then done=0 break fi done test USDdone -eq 1 && break; sleep 1 echo -n \". 
\" done=1 let timeout-- done if test USDtimeout -eq 0; then echo \"timeout!\" else echo \"done!\" fi # mount any newly discovered disk mount -a 2>/dev/null }", "/dev/disk/by-path/fc-0xXX:0xXX /mnt/fcoe-disk1 ext3 defaults,_netdev 0 0 /dev/disk/by-path/fc-0xYY:0xYY /mnt/fcoe-disk2 ext3 defaults,_netdev 0 0", "iscsiadm -m session -P 3", "iscsiadm -m session -P 0", "iscsiadm -m session", "driver [ sid ] target_ip:port,target_portal_group_tag proper_target_name", "iscsiadm -m session tcp [2] 10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311 tcp [3] 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311", "scsi-3600508b400105e210000900000490000 -> ../../sda", "scsi-SSEAGATE_ST373453LW_3HW1RHM6 -> ../../sda", "3600508b400105df70000e00000ac0000 dm-2 vendor,product [size=20G][features=1 queue_if_no_path][hwhandler=0][rw] \\_ round-robin 0 [prio=0][active] \\_ 5:0:1:1 sdc 8:32 [active][undef] \\_ 6:0:1:1 sdg 8:96 [active][undef] \\_ round-robin 0 [prio=0][enabled] \\_ 5:0:0:1 sdb 8:16 [active][undef] \\_ 6:0:0:1 sdf 8:80 [active][undef]", "/dev/disk/by-label/Boot", "LABEL=Boot", "UUID=3e6be9de-8139-11d1-9106-a43f08d823a6", "/dev/disk/by-id/scsi-3600508e000000000ce506dc50ab0ad05", "/dev/disk/by-id/wwn-0x600508e000000000ce506dc50ab0ad05", "/dev/disk/by-id/scsi-3600508e000000000ce506dc50ab0ad05-part1", "/dev/disk/by-id/wwn-0x600508e000000000ce506dc50ab0ad05-part1", "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:0", "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:0-part1", "umount /dev/ device # xfs_admin [ -U new_uuid ] [ -L new_label ] /dev/ device # udevadm settle", "tune2fs [ -U new_uuid ] [ -L new_label ] /dev/ device # udevadm settle", "blockdev --flushbufs device", "echo \" c t l \" > /sys/class/scsi_host/host h /scan", "grep 5006016090203181 /sys/class/fc_transport/*/node_name", "/sys/class/fc_transport/target5:0:2/node_name:0x5006016090203181 /sys/class/fc_transport/target5:0:3/node_name:0x5006016090203181 /sys/class/fc_transport/target6:0:2/node_name:0x5006016090203181 /sys/class/fc_transport/target6:0:3/node_name:0x5006016090203181", "echo \"0 2 56\" > /sys/class/scsi_host/host5/scan", "iscsiadm -m discovery -t discovery_type -p target_IP : port -o delete", "iscsiadm -m discovery -t discovery_type -p target_IP : port", "iscsiadm -m discovery -t discovery_type -p target_IP : port -o update -n setting -v % value", "ping -I eth X target_IP", "iface_name transport_name , hardware_address , ip_address , net_ifacename , initiator_name", "iface0 qla4xxx,00:c0:dd:08:63:e8,20.15.0.7,default,iqn.2005-06.com.redhat:madmax iface1 qla4xxx,00:c0:dd:08:63:ea,20.15.0.9,default,iqn.2005-06.com.redhat:madmax", "default tcp,<empty>,<empty>,<empty>,<empty> iser iser,<empty>,<empty>,<empty>,<empty> cxgb3i.00:07:43:05:97:07 cxgb3i,00:07:43:05:97:07,<empty>,<empty>,<empty>", "iface. setting = value", "BEGIN RECORD 2.0-871 iface.iscsi_ifacename = cxgb3i.00:07:43:05:97:07 iface.net_ifacename = <empty> iface.ipaddress = <empty> iface.hwaddress = 00:07:43:05:97:07 iface.transport_name = cxgb3i iface.initiatorname = <empty> END RECORD", "iscsiadm -m iface -I iface_name --op=new", "iscsiadm -m iface -I iface_name --op=update -n iface. 
setting -v hw_address", "iscsiadm -m iface -I iface0 --op=update -n iface.hwaddress -v 00:0F:1F:92:6B:BF", "iscsiadm -m iface -I iface_name -o update -n iface.ipaddress -v initiator_ip_address", "iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07 -o update -n iface.ipaddress -v 20.15.0.66", "iscsiadm -m discovery -t st -p target_IP:port -I iface_name -P 1 [5]", "iscsiadm -m discovery -t st -p IP:port -I default -P 1", "iscsiadm -m node -targetname proper_target_name -I iface0 --op=delete [6]", "iscsiadm -m node -I iface_name --op=delete", "iscsiadm -m node -p IP:port -I iface_name --op=delete", "iscsiadm -m discovery -t sendtargets -p target_IP:port [5]", "target_IP:port , target_portal_group_tag proper_target_name", "10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311", "Target: proper_target_name Portal: target_IP:port , target_portal_group_tag Iface Name: iface_name", "Target: iqn.1992-08.com.netapp:sn.33615311 Portal: 10.15.84.19:3260,2 Iface Name: iface2 Portal: 10.15.85.19:3260,3 Iface Name: iface2", "iscsiadm -m session --rescan", "iscsiadm -m session -r SID --rescan [7]", "iscsiadm -m discovery -t st -p target_IP -o new", "iscsiadm -m discovery -t st -p target_IP -o delete", "iscsiadm -m discovery -t st -p target_IP -o delete -o new", "ip:port,target_portal_group_tag proper_target_name", "10.16.41.155:3260,0 iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1", "iscsiadm --mode node --targetname proper_target_name --portal ip:port,target_portal_group_tag \\ --login [8]", "iscsiadm --mode node --targetname \\ iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1 \\ --portal 10.16.41.155:3260,0 --login [8]", "systemctl start iscsi", "iscsiadm -m node --targetname proper_target_name -p target_IP:port -o update -n node.startup -v manual", "iscsiadm -m node --targetname proper_target_name -p target_IP:port -o delete", "/dev/sdb /mnt/iscsi ext3 _netdev 0 0", "iscsiadm -m node --targetname proper_target_name -p target_IP:port -l", "echo 1 > /sys/block/sd X /device/rescan", "iscsiadm -m node --targetname target_name -R [5]", "iscsiadm -m node -R -I interface", "multipathd -k\"resize map multipath_device \"", "blockdev --getro /dev/sd XYZ", "cat /sys/block/sd XYZ /ro 1 = read-only 0 = read-write", "36001438005deb4710000500000640000 dm-8 GZ,GZ500 [size=20G][features=0][hwhandler=0][ro] \\_ round-robin 0 [prio=200][active] \\_ 6:0:4:1 sdax 67:16 [active][ready] \\_ 6:0:5:1 sday 67:32 [active][ready] \\_ round-robin 0 [prio=40][enabled] \\_ 6:0:6:1 sdaz 67:48 [active][ready] \\_ 6:0:7:1 sdba 67:64 [active][ready]", "blockdev --flushbufs /dev/ device", "echo 1 > /sys/block/sd X /device/rescan", "echo 1 > /sys/block/sd ax /device/rescan # echo 1 > /sys/block/sd ay /device/rescan # echo 1 > /sys/block/sd az /device/rescan # echo 1 > /sys/block/sd ba /device/rescan", "multipath -r", "cat /sys/block/ device /device/state", "cat /sys/class/fc_remote_port/rport- H : B : R /port_state", "echo 30 > /sys/class/fc_remote_port/rport- H : B : R /dev_loss_tmo", "features \"1 queue_if_no_path\"", "node.conn[0].timeo.noop_out_interval = [interval value]", "node.conn[0].timeo.noop_out_timeout = [timeout value]", "iscsiadm -m session -P 3", "node.session.timeo.replacement_timeout = [replacement_timeout]", "node.conn[0].timeo.noop_out_interval = 0 node.conn[0].timeo.noop_out_timeout = 0", "node.session.timeo.replacement_timeout = replacement_timeout", "iscsiadm -m node -T target_name -p target_IP : port -o update -n 
node.session.timeo.replacement_timeout -v USD timeout_value", "cat /sys/block/ device-name /device/state", "echo running > /sys/block/ device-name /device/state", "echo value > /sys/block/ device-name /device/timeout", "ls -l /dev/mpath | grep stale-logical-unit", "lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00 -> ../dm-4 lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00p1 -> ../dm-5", "/dev/dm-4 /dev/dm-5 /dev/mapper/3600d0230003414f30000203a7bc41a00 /dev/mapper/3600d0230003414f30000203a7bc41a00p1 /dev/mpath/3600d0230003414f30000203a7bc41a00 /dev/mpath/3600d0230003414f30000203a7bc41a00p1", "yum install libstoragemgmt libstoragemgmt-python", "yum install libstoragemgmt-devel", "yum install libstoragemgmt- name -plugin", "systemctl status libstoragemgmt", "systemctl stop libstoragemgmt", "systemctl start libstoragemgmt", "lsmcli -u sim://", "export LSMCLI_URI=sim://", "lsmcli list --type SYSTEMS ID | Name | Status -------+-------------------------------+-------- sim-01 | LSM simulated storage plug-in | OK", "lsmcli list --type POOLS -H ID | Name | Total space | Free space | System ID -----+---------------+----------------------+----------------------+----------- POO2 | Pool 2 | 18446744073709551616 | 18446744073709551616 | sim-01 POO3 | Pool 3 | 18446744073709551616 | 18446744073709551616 | sim-01 POO1 | Pool 1 | 18446744073709551616 | 18446744073709551616 | sim-01 POO4 | lsm_test_aggr | 18446744073709551616 | 18446744073709551616 | sim-01", "lsmcli volume-create --name volume_name --size 20G --pool POO1 -H ID | Name | vpd83 | bs | #blocks | status | -----+-------------+----------------------------------+-----+----------+--------+---- Vol1 | volume_name | F7DDF7CA945C66238F593BC38137BD2F | 512 | 41943040 | OK |", "lsmcli --create-access-group example_ag --id iqn.1994-05.com.domain:01.89bd01 --type ISCSI --system sim-01 ID | Name | Initiator ID |SystemID ---------------------------------+------------+----------------------------------+-------- 782d00c8ac63819d6cca7069282e03a0 | example_ag | iqn.1994-05.com.domain:01.89bd01 |sim-01", "lsmcli access-group-create --name example_ag --init iqn.1994-05.com.domain:01.89bd01 --init-type ISCSI --sys sim-01 ID | Name | Initiator IDs | System ID ---------------------------------+------------+----------------------------------+----------- 782d00c8ac63819d6cca7069282e03a0 | example_ag | iqn.1994-05.com.domain:01.89bd01 | sim-01", "lsmcli access-group-grant --ag 782d00c8ac63819d6cca7069282e03a0 --vol Vol1 --access RW", "lsmcli volume-create --name async_created --size 20G --pool POO1 -b JOB_3", "echo USD? 7", "lsmcli job-status --job JOB_3 33", "echo USD? 
7", "lsmcli job-status --job JOB_3 ID | Name | vpd83 | Block Size | -----+---------------+----------------------------------+-------------+----- Vol2 | async_created | 855C9BA51991B0CC122A3791996F6B15 | 512 |", "lsmcli list --type volumes -t# Vol1#volume_name#049167B5D09EC0A173E92A63F6C3EA2A#512#41943040#21474836480#OK#sim-01#POO1 Vol2#async_created#3E771A2E807F68A32FA5E15C235B60CC#512#41943040#21474836480#OK#sim-01#POO1", "lsmcli list --type volumes -t \" | \" Vol1 | volume_name | 049167B5D09EC0A173E92A63F6C3EA2A | 512 | 41943040 | 21474836480 | OK | 21474836480 | sim-01 | POO1 Vol2 | async_created | 3E771A2E807F68A32FA5E15C235B60CC | 512 | 41943040 | 21474836480 | OK | sim-01 | POO1", "lsmcli list --type volumes -s --------------------------------------------- ID | Vol1 Name | volume_name VPD83 | 049167B5D09EC0A173E92A63F6C3EA2A Block Size | 512 #blocks | 41943040 Size | 21474836480 Status | OK System ID | sim-01 Pool ID | POO1 --------------------------------------------- ID | Vol2 Name | async_created VPD83 | 3E771A2E807F68A32FA5E15C235B60CC Block Size | 512 #blocks | 41943040 Size | 21474836480 Status | OK System ID | sim-01 Pool ID | POO1 ---------------------------------------------", "yum install ndctl", "ndctl list --regions [ { \"dev\":\" region1 \", \"size\": 34359738368 , \"available_size\":0, \"type\":\"pmem\" }, { \"dev\":\" region0 \", \"size\": 34359738368 , \"available_size\":0, \"type\":\"pmem\" } ]", "ndctl list --namespaces --idle [ { \"dev\":\" namespace1.0 \", \"mode\":\"raw\", \"size\": 34359738368 , \"state\":\"disabled\", \"numa_node\": 1 }, { \"dev\":\" namespace0.0 \", \"mode\":\"raw\", \"size\": 34359738368 , \"state\":\"disabled\", \"numa_node\": 0 } ]", "ndctl create-namespace --force --reconfig= namespace0.0 --mode=fsdax --map=mem { \"dev\":\" namespace0.0 \", \"mode\":\"fsdax\", \"size\":\" 32.00 GiB (34.36 GB) \", \"uuid\":\" ab91cc8f-4c3e-482e-a86f-78d177ac655d \", \"blockdev\":\" pmem0 \", \"numa_node\": 0 }", "ndctl list --regions [ { \"dev\":\" region5 \", \"size\": 270582939648 , \"available_size\": 270582939648 , \"type\":\"pmem\", \"iset_id\": -7337419320239190016 }, { \"dev\":\" region4 \", \"size\": 270582939648 , \"available_size\": 270582939648 , \"type\":\"pmem\", \"iset_id\": -137289417188962304 } ]", "ndctl create-namespace --region= region4 --mode=fsdax --map=dev --size= 36G { \"dev\":\" namespace4.0 \", \"mode\":\"fsdax\", \"size\":\" 35.44 GiB (38.05 GB) \", \"uuid\":\" 9c5330b5-dc90-4f7a-bccd-5b558fa881fe \", \"blockdev\":\"pmem4\", \"numa_node\": 0 }", "ndctl create-namespace --region= region4 --mode=fsdax --map=dev --size= 36G { \"dev\":\" namespace4.1 \", \"mode\":\"fsdax\", \"size\":\" 35.44 GiB (38.05 GB) \", \"uuid\":\" 91868e21-830c-4b8f-a472-353bf482a26d \", \"blockdev\":\"pmem4.1\", \"numa_node\": 0 }", "ndctl create-namespace --region= region4 --mode=devdax --align= 2M --size= 36G { \"dev\":\" namespace4.2 \", \"mode\":\"devdax\", \"size\":\" 35.44 GiB (38.05 GB) \", \"uuid\":\" a188c847-4153-4477-81bb-7143e32ffc5c \", \"daxregion\": { \"id\": 4 , \"size\":\" 35.44 GiB (38.05 GB) \", \"align\": 2097152 , \"devices\":[ { \"chardev\":\" dax4.2 \", \"size\":\" 35.44 GiB (38.05 GB) \" }] }, \"numa_node\": 0 }", "ndctl create-namespace --force --reconfig= namespace1.0 --mode=sector { \"dev\":\" namespace1.0 \", \"mode\":\"sector\", \"size\": 17162027008 , \"uuid\":\" 029caa76-7be3-4439-8890-9c2e374bcc76 \", \"sector_size\":4096, \"blockdev\":\" pmem1s \" }", "ndctl create-namespace --force --reconfig= namespace0.0 --mode=fsdax 
--map=mem { \"dev\":\" namespace0.0 \", \"mode\":\"fsdax\", \"size\": 17177772032 , \"uuid\":\" e6944638-46aa-4e06-a722-0b3f16a5acbf \", \"blockdev\":\" pmem0 \" }", "mkfs -t xfs /dev/ pmem0 # mount -o dax /dev/ pmem0 /mnt/ pmem /", "ndctl create-namespace --force --reconfig= namespace0.0 --mode=devdax --align= 2M", "modprobe acpi_ipmi", "ndctl list --dimms --health { \"dev\":\" nmem0 \", \"id\":\" 802c-01-1513-b3009166 \", \"handle\": 1 , \"phys_id\": 22 , \"health\": { \"health_state\":\" ok \", \"temperature_celsius\": 25.000000 , \"spares_percentage\": 99 , \"alarm_temperature\": false , \"alarm_spares\": false , \"temperature_threshold\": 50.000000 , \"spares_threshold\": 20 , \"life_used_percentage\": 1 , \"shutdown_state\":\" clean \" } }", "ndctl list --dimms --regions --health --media-errors --human", "ndctl list --dimms --regions --health --media-errors --human \"regions\":[ { \"dev\":\"region0\", \"size\":\"250.00 GiB (268.44 GB)\", \"available_size\":0, \"type\":\"pmem\", \"numa_node\":0, \"iset_id\":\"0xXXXXXXXXXXXXXXXX\", \"mappings\":[ { \"dimm\":\"nmem1\", \"offset\":\"0x10000000\", \"length\":\"0x1f40000000\", \"position\":1 }, { \"dimm\":\"nmem0\", \"offset\":\"0x10000000\", \"length\":\"0x1f40000000\", \"position\":0 } ], \"badblock_count\":1, \"badblocks\":[ { \"offset\":65536, \"length\":1, \"dimms\":[ \"nmem0\" ] } ] , \"persistence_domain\":\"memory_controller\" } ] }", "ndctl list --dimms --human", "ndctl list --dimms --human [ { \"dev\":\"nmem1\", \"id\":\"XXXX-XX-XXXX-XXXXXXXX\", \"handle\":\"0x120\", \"phys_id\":\"0x1c\" }, { \"dev\":\"nmem0\" , \"id\":\"XXXX-XX-XXXX-XXXXXXXX\", \"handle\":\"0x20\", \"phys_id\":\"0x10\" , \"flag_failed_flush\":true, \"flag_smart_event\":true } ]", "dmidecode", "dmidecode Handle 0x0010, DMI type 17, 40 bytes Memory Device Array Handle: 0x0004 Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: 125 GB Form Factor: DIMM Set: 1 Locator: DIMM-XXX-YYYY Bank Locator: Bank0 Type: Other Type Detail: Non-Volatile Registered (Buffered)", "ndctl list --namespaces --dimm= DIMM-ID-number", "ndctl list --namespaces --dimm=0 [ { \"dev\":\"namespace0.2\" , \"mode\":\"sector\", \"size\":67042312192, \"uuid\":\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\", \"raw_uuid\":\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\", \"sector_size\":4096, \"blockdev\":\"pmem0.2s\", \"numa_node\":0 }, { \"dev\":\"namespace0.0\" , \"mode\":\"sector\", \"size\":67042312192, \"uuid\":\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\", \"raw_uuid\":\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\", \"sector_size\":4096, \"blockdev\":\"pmem0s\", \"numa_node\":0 } ]", "yum install nvme-cli", "modprobe nvme-rdma", "nvme discover -t rdma -a 172.31.0.202 -s 4420 Discovery Log Number of Records 1, Generation counter 2 =====Discovery Log Entry 0====== trtype: rdma adrfam: ipv4 subtype: nvme subsystem treq: not specified, sq flow control disable supported portid: 1 trsvcid: 4420 subnqn: testnqn traddr: 172.31.0.202 rdma_prtype: not specified rdma_qptype: connected rdma_cms: rdma-cm rdma_pkey: 0x0000", "nvme connect -t rdma -n testnqn -a 172.31.0.202 -s 4420 # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk โ”œโ”€sda1 8:1 0 1G 0 part /boot โ””โ”€sda2 8:2 0 464.8G 0 part โ”œโ”€rhel_rdma--virt--03-root 253:0 0 50G 0 lvm / โ”œโ”€rhel_rdma--virt--03-swap 253:1 0 4G 0 lvm [SWAP] โ””โ”€rhel_rdma--virt--03-home 253:2 0 410.8G 0 lvm /home nvme0n1 # cat /sys/class/nvme/nvme0/transport rdma", "nvme list", "nvme disconnect -n testnqn NQN:testnqn disconnected 1 
controller(s) # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk โ”œโ”€sda1 8:1 0 1G 0 part /boot โ””โ”€sda2 8:2 0 464.8G 0 part โ”œโ”€rhel_rdma--virt--03-root 253:0 0 50G 0 lvm / โ”œโ”€rhel_rdma--virt--03-swap 253:1 0 4G 0 lvm [SWAP] โ””โ”€rhel_rdma--virt--03-home 253:2 0 410.8G 0 lvm /home", "yum install nvme-cli", "nvme gen-hostnqn", "options lpfc lpfc_enable_fc4_type=3", "dracut --force", "systemctl reboot", "cat /sys/class/scsi_host/host*/nvme_info NVME Initiator Enabled XRI Dist lpfc0 Total 6144 IO 5894 ELS 250 NVME LPORT lpfc0 WWPN x10000090fae0b5f5 WWNN x20000090fae0b5f5 DID x010f00 ONLINE NVME RPORT WWPN x204700a098cbcac6 WWNN x204600a098cbcac6 DID x01050e TARGET DISCSRVC ONLINE NVME Statistics LS: Xmt 000000000e Cmpl 000000000e Abort 00000000 LS XMIT: Err 00000000 CMPL: xb 00000000 Err 00000000 Total FCP Cmpl 00000000000008ea Issue 00000000000008ec OutIO 0000000000000002 abort 00000000 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000 FCP CMPL: xb 00000000 Err 00000000", "nvme discover --transport fc \\ --traddr nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 \\ --host-traddr nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 Discovery Log Number of Records 2, Generation counter 49530 =====Discovery Log Entry 0====== trtype: fc adrfam: fibre-channel subtype: nvme subsystem treq: not specified portid: 0 trsvcid: none subnqn: nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1 traddr: nn-0x204600a098cbcac6:pn-0x204700a098cbcac6", "nvme connect --transport fc --traddr nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 --host-traddr nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 -n nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1", "nvme list Node SN Model Namespace Usage Format FW Rev ---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- -------- /dev/nvme0n1 80BgLFM7xMJbAAAAAAAC NetApp ONTAP Controller 1 107.37 GB / 107.37 GB 4 KiB + 0 B FFFFFFFF # lsblk |grep nvme nvme0n1 259:0 0 100G 0 disk", "yum install nvme-cli", "nvme gen-hostnqn", "rmmod qla2xxx # modprobe qla2xxx", "dmesg |grep traddr [ 6.139862] qla2xxx [0000:04:00.0]-ffff:0: register_localport: host-traddr=nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 on portID:10700 [ 6.241762] qla2xxx [0000:04:00.0]-2102:0: qla_nvme_register_remote: traddr=nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 PortID:01050d", "nvme discover --transport fc --traddr nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 --host-traddr nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 Discovery Log Number of Records 2, Generation counter 49530 =====Discovery Log Entry 0====== trtype: fc adrfam: fibre-channel subtype: nvme subsystem treq: not specified portid: 0 trsvcid: none subnqn: nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468 traddr: nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6", "nvme connect --transport fc --traddr nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 --host_traddr nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 -n nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468", "nvme list Node SN Model Namespace Usage Format FW Rev ---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- -------- /dev/nvme0n1 80BgLFM7xMJbAAAAAAAC NetApp ONTAP Controller 1 107.37 GB / 107.37 GB 4 KiB + 0 B FFFFFFFF # lsblk |grep nvme 
nvme0n1 259:0 0 100G 0 disk", "yum install vdo kmod-kvdo", "vdo create --name= vdo_name --device= block_device --vdoLogicalSize= logical_size [ --vdoSlabSize= slab_size ]", "vdo: ERROR - vdoformat: formatVDO failed on '/dev/ device ': VDO Status: Exceeds maximum number of slabs supported", "vdo create --name=vdo1 --device=/dev/disk/by-id/scsi-3600508b1001c264ad2af21e903ad031f --vdoLogicalSize=10T", "mkfs.xfs -K /dev/mapper/ vdo_name", "mkfs.ext4 -E nodiscard /dev/mapper/ vdo_name", "mkdir -m 1777 /mnt/ vdo_name # mount /dev/mapper/ vdo_name /mnt/ vdo_name", "/dev/mapper/ vdo_name /mnt/ vdo_name xfs defaults,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0", "/dev/mapper/ vdo_name /mnt/ vdo_name ext4 defaults,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0", "[Unit] Description = VDO unit file to mount file system name = vdo_name .mount Requires = vdo.service After = multi-user.target Conflicts = umount.target [Mount] What = /dev/mapper/ vdo_name Where = /mnt/ vdo_name Type = xfs [Install] WantedBy = multi-user.target", "vdostats --human-readable Device 1K-blocks Used Available Use% Space saving% /dev/mapper/node1osd1 926.5G 21.0G 905.5G 2% 73% /dev/mapper/node1osd2 926.5G 28.2G 898.3G 3% 64%", "Oct 2 17:13:39 system lvm[13863]: Monitoring VDO pool vdo_name. Oct 2 17:27:39 system lvm[13863]: WARNING: VDO pool vdo_name is now 80.69% full. Oct 2 17:28:19 system lvm[13863]: WARNING: VDO pool vdo_name is now 85.25% full. Oct 2 17:29:39 system lvm[13863]: WARNING: VDO pool vdo_name is now 90.64% full. Oct 2 17:30:29 system lvm[13863]: WARNING: VDO pool vdo_name is now 96.07% full.", "vdo start --name= my_vdo # vdo start --all", "vdo stop --name= my_vdo # vdo stop --all", "vdo changeWritePolicy --writePolicy= sync|async|auto --name= vdo_name", "cat '/sys/block/sda/device/scsi_disk/7:0:0:0/cache_type' write back", "cat '/sys/block/sdb/device/scsi_disk/1:2:0:0/cache_type' None", "sd 7:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 1:2:0:0: [sdb] Write cache: disabled, read cache: disabled, supports DPO and FUA", "vdo remove --name= my_vdo", "vdo remove --force --name= my_vdo", "[...] A previous operation failed. Recovery from the failure either failed or was interrupted. Add '--force' to 'remove' to perform the following cleanup. Steps to clean up VDO my_vdo : umount -f /dev/mapper/ my_vdo udevadm settle dmsetup remove my_vdo vdo: ERROR - VDO volume my_vdo previous operation (create) is incomplete", "vdo stop --name= my_vdo", "vdo start --name= my_vdo --forceRebuild", "vdo deactivate --name= my_vdo", "vdo deactivate --all", "vdo activate --name= my_vdo", "vdo activate --all", "vdo disableDeduplication --name= my_vdo", "vdo enableDeduplication --name= my_vdo", "vdo disableCompression --name= my_vdo", "vdo enableCompression --name= my_vdo", "Device 1K-blocks Used Available Use% /dev/mapper/ my_vdo 211812352 105906176 105906176 50%", "Oct 2 17:13:39 system lvm[13863]: Monitoring VDO pool my_vdo. Oct 2 17:27:39 system lvm[13863]: WARNING: VDO pool my_vdo is now 80.69% full. Oct 2 17:28:19 system lvm[13863]: WARNING: VDO pool my_vdo is now 85.25% full. Oct 2 17:29:39 system lvm[13863]: WARNING: VDO pool my_vdo is now 90.64% full. 
Oct 2 17:30:29 system lvm[13863]: WARNING: VDO pool my_vdo is now 96.07% full.", "sg_vpd --page=0xb0 /dev/ device", "vdo growLogical --name= my_vdo --vdoLogicalSize= new_logical_size", "vdo growPhysical --name= my_vdo", "vdo { activate | changeWritePolicy | create | deactivate | disableCompression | disableDeduplication | enableCompression | enableDeduplication | growLogical | growPhysical | list | modify | printConfigFile | remove | start | status | stop } [ options... ]", "vdostats [ --verbose | --human-readable | --si | --all ] [ --version ] [ device ...]", "Device 1K-blocks Used Available Use% Space Saving% /dev/mapper/my_vdo 1932562432 427698104 1504864328 22% 21%", "Device Size Used Available Use% Space Saving% /dev/mapper/my_vdo 1.8T 407.9G 1.4T 22% 21%", "Device Size Used Available Use% Space Saving% /dev/mapper/my_vdo 2.0T 438G 1.5T 22% 21%", "echo \"deadline\" > /sys/block/ device /queue/scheduler", "echo \"noop\" > /sys/block/ device /queue/scheduler", "vdo status --name= my_vdo", "vdo create --name=vdo0 --device= /dev/sdb --vdoLogicalSize=1T --writePolicy=async --verbose", "vdo create --name=vdo0 --device= /dev/sdb --vdoLogicalSize=1T --writePolicy=sync --verbose", "mkfs.xfs -K /dev/mapper/vdo0", "mkfs.ext4 -E nodiscard /dev/mapper/vdo0", "mkdir /mnt/VDOVolume mount /dev/mapper/vdo0 /mnt/VDOVolume && chmod a+rwx /mnt/VDOVolume", "dd if=/dev/urandom of=/mnt/VDOVolume/testfile bs=4096 count=8388608", "dd if=/mnt/VDOVolume/testfile of=/home/user/testfile bs=4096", "diff -s /mnt/VDOVolume/testfile /home/user/testfile", "dd if=/home/user/testfile of=/mnt/VDOVolume/testfile2 bs=4096", "diff -s /mnt/VDOVolume/testfile2 /home/user/testfile", "umount /mnt/VDOVolume", "vdo remove --name=vdo0", "vdo list --all | grep vdo", "mkdir /mnt/VDOVolume/vdo{01..10}", "df -h /mnt/VDOVolume Filesystem Size Used Avail Use% Mounted on /dev/mapper/vdo0 1.5T 198M 1.4T 1% /mnt/VDOVolume", "vdostats --verbose | grep \"blocks used\" data blocks used : 1090 overhead blocks used : 538846 logical blocks used : 6059434", "dd if=/dev/urandom of=/mnt/VDOVolume/sourcefile bs=4096 count=1048576 4294967296 bytes (4.3 GB) copied, 540.538 s, 7.9 MB/s", "df -h /mnt/VDOVolume Filesystem Size Used Avail Use% Mounted on /dev/mapper/vdo0 1.5T 4.2G 1.4T 1% /mnt/VDOVolume", "vdostats --verbose | grep \"blocks used\" data blocks used : 1050093 (increased by 4GB) overhead blocks used : 538846 (Did not change) logical blocks used : 7108036 (increased by 4GB)", "for i in {01..10}; do cp /mnt/VDOVolume/sourcefile /mnt/VDOVolume/vdoUSDi done", "df -h /mnt/VDOVolume Filesystem Size Used Avail Use% Mounted on /dev/mapper/vdo0 1.5T 45G 1.3T 4% /mnt/VDOVolume", "vdostats --verbose | grep \"blocks used\" data blocks used : 1050836 (increased by 3M) overhead blocks used : 538846 logical blocks used : 17594127 (increased by 41G)", "vdo create --name=vdo0 --device= /dev/sdb --vdoLogicalSize= 10G --verbose --deduplication=disabled --compression=enabled", "vdostats --verbose | grep \"blocks used\"", "mkfs.xfs -K /dev/mapper/vdo0", "mkfs.ext4 -E nodiscard /dev/mapper/vdo0", "mkdir /mnt/VDOVolume mount /dev/mapper/vdo0 /mnt/VDOVolume && chmod a+rwx /mnt/VDOVolume", "sync && dmsetup message vdo0 0 sync-dedupe", "vdostats --verbose | grep \"blocks used\"", "cp -vR /lib /mnt/VDOVolume sent 152508960 bytes received 60448 bytes 61027763.20 bytes/sec total size is 152293104 speedup is 1.00", "sync && dmsetup message vdo0 0 sync-dedupe", "vdostats --verbose | grep \"blocks used\"", "umount /mnt/VDOVolume && vdo remove --name=vdo0", "fstrim 
/mnt/VDOVolume", "df -m /mnt/VDOVolume", "dd if=/dev/urandom of=/mnt/VDOVolume/file bs=1M count=1K", "rm /mnt/VDOVolume/file", "for depth in 1 2 4 8 16 32 64 128 256 512 1024 2048; do fio --rw=write --bs=4096 --name=vdo --filename=/dev/mapper/vdo0 --ioengine=libaio --numjobs=1 --thread --norandommap --runtime=300 --direct=1 --iodepth=USDdepth --scramble_buffers=1 --offset=0 --size=100g done", "z= [see previous step] for iosize in 4 8 16 32 64 128 256 512 1024; do fio --rw=write --bs=USDiosize\\k --name=vdo --filename=/dev/mapper/vdo0 --ioengine=libaio --numjobs=1 --thread --norandommap --runtime=300 --direct=1 --iodepth=USDz --scramble_buffers=1 --offset=0 --size=100g done", "z= [see previous step] for readmix in 0 10 20 30 40 50 60 70 80 90 100; do for iosize in 4 8 16 32 64 128 256 512 1024; do fio --rw=rw --rwmixread=USDreadmix --bs=USDiosize\\k --name=vdo --filename=/dev/mapper/vdo0 --ioengine=libaio --numjobs=1 --thread --norandommap --runtime=300 --direct=0 --iodepth=USDz --scramble_buffers=1 --offset=0 --size=100g done done", "for readmix in 20 50 80; do for iosize in 4 8 16 32 64 128 256 512 1024; do fio --rw=rw --rwmixread=USDreadmix --bsrange=4k-256k --name=vdo --filename=/dev/mapper/vdo0 --ioengine=libaio --numjobs=1 --thread --norandommap --runtime=300 --direct=0 --iodepth=USDiosize --scramble_buffers=1 --offset=0 --size=100g done done" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html-single/storage_administration_guide/index
B.2. Audit Record Types
B.2. Audit Record Types Table B.2, "Record Types" lists all currently-supported types of Audit records. The event type is specified in the type= field at the beginning of every Audit record. Table B.2. Record Types Event Type Explanation ADD_GROUP Triggered when a user-space group is added. ADD_USER Triggered when a user-space user account is added. ANOM_ABEND [a] Triggered when a processes ends abnormally (with a signal that could cause a core dump, if enabled). ANOM_ACCESS_FS [a] Triggered when a file or a directory access ends abnormally. ANOM_ADD_ACCT [a] Triggered when a user-space account addition ends abnormally. ANOM_AMTU_FAIL [a] Triggered when a failure of the Abstract Machine Test Utility (AMTU) is detected. ANOM_CRYPTO_FAIL [a] Triggered when a failure in the cryptographic system is detected. ANOM_DEL_ACCT [a] Triggered when a user-space account deletion ends abnormally. ANOM_EXEC [a] Triggered when an execution of a file ends abnormally. ANOM_LOGIN_ACCT [a] Triggered when an account login attempt ends abnormally. ANOM_LOGIN_FAILURES [a] Triggered when the limit of failed login attempts is reached. ANOM_LOGIN_LOCATION [a] Triggered when a login attempt is made from a forbidden location. ANOM_LOGIN_SESSIONS [a] Triggered when a login attempt reaches the maximum amount of concurrent sessions. ANOM_LOGIN_TIME [a] Triggered when a login attempt is made at a time when it is prevented by, for example, pam_time . ANOM_MAX_DAC [a] Triggered when the maximum amount of Discretionary Access Control (DAC) failures is reached. ANOM_MAX_MAC [a] Triggered when the maximum amount of Mandatory Access Control (MAC) failures is reached. ANOM_MK_EXEC [a] Triggered when a file is made executable. ANOM_MOD_ACCT [a] Triggered when a user-space account modification ends abnormally. ANOM_PROMISCUOUS [a] Triggered when a device enables or disables promiscuous mode. ANOM_RBAC_FAIL [a] Triggered when a Role-Based Access Control (RBAC) self-test failure is detected. ANOM_RBAC_INTEGRITY_FAIL [a] Triggered when a Role-Based Access Control (RBAC) file integrity test failure is detected. ANOM_ROOT_TRANS [a] Triggered when a user becomes root. AVC Triggered to record an SELinux permission check. AVC_PATH Triggered to record the dentry and vfsmount pair when an SELinux permission check occurs. BPRM_FCAPS Triggered when a user executes a program with a file system capability. CAPSET Triggered to record any changes in process-based capabilities. CHGRP_ID Triggered when a user-space group ID is changed. CHUSER_ID Triggered when a user-space user ID is changed. CONFIG_CHANGE Triggered when the Audit system configuration is modified. CRED_ACQ Triggered when a user acquires user-space credentials. CRED_DISP Triggered when a user disposes of user-space credentials. CRED_REFR Triggered when a user refreshes their user-space credentials. CRYPTO_FAILURE_USER Triggered when a decrypt, encrypt, or randomize cryptographic operation fails. CRYPTO_KEY_USER Triggered to record the cryptographic key identifier used for cryptographic purposes. CRYPTO_LOGIN Triggered when a cryptographic officer login attempt is detected. CRYPTO_LOGOUT Triggered when a crypto officer logout attempt is detected. CRYPTO_PARAM_CHANGE_USER Triggered when a change in a cryptographic parameter is detected. CRYPTO_REPLAY_USER Triggered when a replay attack is detected. CRYPTO_SESSION Triggered to record parameters set during a TLS session establishment. CRYPTO_TEST_USER Triggered to record cryptographic test results as required by the FIPS-140 standard. 
CWD Triggered to record the current working directory. DAC_CHECK Triggered to record DAC check results. DAEMON_ABORT Triggered when a daemon is stopped due to an error. DAEMON_ACCEPT Triggered when the auditd daemon accepts a remote connection. DAEMON_CLOSE Triggered when the auditd daemon closes a remote connection. DAEMON_CONFIG Triggered when a daemon configuration change is detected. DAEMON_END Triggered when a daemon is successfully stopped. DAEMON_RESUME Triggered when the auditd daemon resumes logging. DAEMON_ROTATE Triggered when the auditd daemon rotates the Audit log files. DAEMON_START Triggered when the auditd daemon is started. DEL_GROUP Triggered when a user-space group is deleted DEL_USER Triggered when a user-space user is deleted DEV_ALLOC Triggered when a device is allocated. DEV_DEALLOC Triggered when a device is deallocated. EOE Triggered to record the end of a multi-record event. EXECVE Triggered to record arguments of the execve(2) system call. FD_PAIR Triggered to record the use of the pipe and socketpair system calls. FS_RELABEL Triggered when a file system relabel operation is detected. GRP_AUTH Triggered when a group password is used to authenticate against a user-space group. INTEGRITY_DATA [b] Triggered to record a data integrity verification event run by the kernel. INTEGRITY_HASH [b] Triggered to record a hash type integrity verification event run by the kernel. INTEGRITY_METADATA [b] Triggered to record a metadata integrity verification event run by the kernel. INTEGRITY_PCR [b] Triggered to record Platform Configuration Register (PCR) invalidation messages. INTEGRITY_RULE [b] Triggered to record a policy rule. INTEGRITY_STATUS [b] Triggered to record the status of integrity verification. IPC Triggered to record information about a Inter-Process Communication object referenced by a system call. IPC_SET_PERM Triggered to record information about new values set by an IPC_SET control operation on an IPC object. KERNEL Triggered to record the initialization of the Audit system. KERNEL_OTHER Triggered to record information from third-party kernel modules. LABEL_LEVEL_CHANGE Triggered when an object's level label is modified. LABEL_OVERRIDE Triggered when an administrator overrides an object's level label. LOGIN Triggered to record relevant login information when a user log in to access the system. MAC_CIPSOV4_ADD Triggered when a Commercial Internet Protocol Security Option (CIPSO) user adds a new Domain of Interpretation (DOI). Adding DOIs is a part of the packet labeling capabilities of the kernel provided by NetLabel. MAC_CIPSOV4_DEL Triggered when a CIPSO user deletes an existing DOI. Adding DOIs is a part of the packet labeling capabilities of the kernel provided by NetLabel. MAC_CONFIG_CHANGE Triggered when an SELinux Boolean value is changed. MAC_IPSEC_EVENT Triggered to record information about an IPSec event, when one is detected, or when the IPSec configuration changes. MAC_MAP_ADD Triggered when a new Linux Security Module (LSM) domain mapping is added. LSM domain mapping is a part of the packet labeling capabilities of the kernel provided by NetLabel. MAC_MAP_DEL Triggered when an existing LSM domain mapping is added. LSM domain mapping is a part of the packet labeling capabilities of the kernel provided by NetLabel. MAC_POLICY_LOAD Triggered when a SELinux policy file is loaded. MAC_STATUS Triggered when the SELinux mode (enforcing, permissive, off) is changed. 
MAC_UNLBL_ALLOW Triggered when unlabeled traffic is allowed when using the packet labeling capabilities of the kernel provided by NetLabel. MAC_UNLBL_STCADD Triggered when a static label is added when using the packet labeling capabilities of the kernel provided by NetLabel. MAC_UNLBL_STCDEL Triggered when a static label is deleted when using the packet labeling capabilities of the kernel provided by NetLabel. MMAP Triggered to record a file descriptor and flags of the mmap(2) system call. MQ_GETSETATTR Triggered to record the mq_getattr(3) and mq_setattr(3) message queue attributes. MQ_NOTIFY Triggered to record arguments of the mq_notify(3) system call. MQ_OPEN Triggered to record arguments of the mq_open(3) system call. MQ_SENDRECV Triggered to record arguments of the mq_send(3) and mq_receive(3) system calls. NETFILTER_CFG Triggered when Netfilter chain modifications are detected. NETFILTER_PKT Triggered to record packets traversing Netfilter chains. OBJ_PID Triggered to record information about a process to which a signal is sent. PATH Triggered to record file name path information. RESP_ACCT_LOCK [c] Triggered when a user account is locked. RESP_ACCT_LOCK_TIMED [c] Triggered when a user account is locked for a specified period of time. RESP_ACCT_REMOTE [c] Triggered when a user account is locked from a remote session. RESP_ACCT_UNLOCK_TIMED [c] Triggered when a user account is unlocked after a configured period of time. RESP_ALERT [c] Triggered when an alert email is sent. RESP_ANOMALY [c] Triggered when an anomaly was not acted upon. RESP_EXEC [c] Triggered when an intrusion detection program responds to a threat originating from the execution of a program. RESP_HALT [c] Triggered when the system is shut down. RESP_KILL_PROC [c] Triggered when a process is terminated. RESP_SEBOOL [c] Triggered when an SELinux Boolean value is set. RESP_SINGLE [c] Triggered when the system is put into single-user mode. RESP_TERM_ACCESS [c] Triggered when a session is terminated. RESP_TERM_LOCK [c] Triggered when a terminal is locked. ROLE_ASSIGN Triggered when an administrator assigns a user to an SELinux role. ROLE_MODIFY Triggered when an administrator modifies an SELinux role. ROLE_REMOVE Triggered when an administrator removes a user from an SELinux role. SELINUX_ERR Triggered when an internal SELinux error is detected. SERVICE_START Triggered when a service is started. SERVICE_STOP Triggered when a service is stopped. SOCKADDR Triggered to record a socket address. SOCKETCALL Triggered to record arguments of the sys_socketcall system call (used to multiplex many socket-related system calls). SYSCALL Triggered to record a system call to the kernel. SYSTEM_BOOT Triggered when the system is booted up. SYSTEM_RUNLEVEL Triggered when the system's run level is changed. SYSTEM_SHUTDOWN Triggered when the system is shut down. TEST Triggered to record the success value of a test message. TRUSTED_APP The record of this type can be used by third party application that require auditing. TTY Triggered when TTY input was sent to an administrative process. USER_ACCT Triggered when a user-space user account is modified. USER_AUTH Triggered when a user-space authentication attempt is detected. USER_AVC Triggered when a user-space AVC message is generated. USER_CHAUTHTOK Triggered when a user account attribute is modified. USER_CMD Triggered when a user-space shell command is executed. USER_END Triggered when a user-space session is terminated. USER_ERR Triggered when a user account state error is detected. 
USER_LABELED_EXPORT Triggered when an object is exported with an SELinux label. USER_LOGIN Triggered when a user logs in. USER_LOGOUT Triggered when a user logs out. USER_MAC_POLICY_LOAD Triggered when a user-space daemon loads an SELinux policy. USER_MGMT Triggered to record user-space management data. USER_ROLE_CHANGE Triggered when a user's SELinux role is changed. USER_SELINUX_ERR Triggered when a user-space SELinux error is detected. USER_START Triggered when a user-space session is started. USER_TTY Triggered when an explanatory message about TTY input to an administrative process is sent from user-space. USER_UNLABELED_EXPORT Triggered when an object is exported without SELinux label. USYS_CONFIG Triggered when a user-space system configuration change is detected. VIRT_CONTROL Triggered when a virtual machine is started, paused, or stopped. VIRT_MACHINE_ID Triggered to record the binding of a label to a virtual machine. VIRT_RESOURCE Triggered to record resource assignment of a virtual machine. [a] All Audit event types prepended with ANOM are intended to be processed by an intrusion detection program. [b] This event type is related to the Integrity Measurement Architecture (IMA), which functions best with a Trusted Platform Module (TPM) chip. [c] All Audit event types prepended with RESP are intended responses of an intrusion detection system in case it detects malicious activity on the system.
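For example, these type values can be used to filter the Audit log with the ausearch utility. The following is a minimal sketch, assuming the default auditd configuration and log location; adjust the record types and time range to your needs: # ausearch -m USER_LOGIN -ts today -i # ausearch -m ADD_USER,DEL_USER,ADD_GROUP,DEL_GROUP -ts this-week -i The -m option selects one or more of the record types listed above, -ts limits the search to a start time keyword such as today or this-week, and -i interprets numeric fields such as UIDs into readable names.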
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-audit_record_types
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_llvm_17.0.6_toolset/making-open-source-more-inclusive
Chapter 4. Installing a user-provisioned bare metal cluster on a restricted network
Chapter 4. Installing a user-provisioned bare metal cluster on a restricted network In OpenShift Container Platform 4.12, you can install a cluster on bare metal infrastructure that you provision in a restricted network. Important While you might be able to follow this procedure to deploy a cluster on virtualized or cloud environments, you must be aware of additional considerations for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in such an environment. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 4.2. About installations in restricted networks In OpenShift Container Platform 4.12, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 4.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 4.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to obtain the images that are necessary to install your cluster. 
You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 4.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 4.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Note As an exception, you can run zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. Running one compute machine is not supported. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 4.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 4.2. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. 
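As an illustrative sketch only, you can read the values for this formula from lscpu on an existing Linux host; the output below is hypothetical: $ lscpu | grep -E '^(Thread|Core|Socket)' Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 2 With SMT enabled, this hypothetical host provides (2 threads per core x 8 cores) x 2 sockets = 32 CPUs.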
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 4.4.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources See Configuring a three-node cluster for details about deploying three-node clusters in bare metal environments. See Approving the certificate signing requests for your machines for more information about approving cluster certificate signing requests after installation. 4.4.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. 
If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 4.4.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 4.4.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 4.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 4.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 4.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 4.4.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. 
DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 4.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 4.4.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 
The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 4.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 4.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 
4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. Additional resources Validating DNS resolution for user-provisioned infrastructure 4.4.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 4.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 4.8. 
Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 4.4.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 4.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 
5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 4.5. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. 
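As an example only, if the firewall in front of your cluster machines happens to be a Linux host running firewalld, a minimal sketch for opening the API, machine config server, and ingress ports might look like the following; the zone and the exact port list are assumptions, so derive the complete list from the tables in the Networking requirements for user-provisioned infrastructure section: $ sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=22623/tcp $ sudo firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp $ sudo firewall-cmd --reload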
Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. Additional resources Requirements for a cluster with user-provisioned infrastructure Installing RHCOS and starting the OpenShift Container Platform bootstrap process Setting the cluster node hostnames through DHCP Advanced RHCOS installation configuration Networking requirements for user-provisioned infrastructure User-provisioned DNS requirements Validating DNS resolution for user-provisioned infrastructure Load balancing requirements for user-provisioned infrastructure 4.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: $ dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: $ dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com.
604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: $ dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: $ dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: $ dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: $ dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: $ dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. Additional resources User-provisioned DNS requirements Load balancing requirements for user-provisioned infrastructure 4.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core .
To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. Additional resources Verifying node health 4.8. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain the imageContentSources section from the output of the command to mirror the repository. Obtain the contents of the certificate for your mirror registry. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory.
Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Unless you use a registry that RHCOS trusts by default, such as docker.io , you must provide the contents of the certificate for your mirror repository in the additionalTrustBundle section. In most cases, you must provide the certificate for your mirror. You must include the imageContentSources section from the output of the command to mirror the repository. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 4.8.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note After installation, you cannot modify these parameters in the install-config.yaml file. 4.8.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 4.9. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 4.8.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you use the Red Hat OpenShift Networking OpenShift SDN network plugin, only the IPv4 address family is supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 4.10. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. If you use the OpenShift SDN network plugin, specify an IPv4 network. If you use the OVN-Kubernetes network plugin, you can specify IPv4 and IPv6 networks. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The prefix length for an IPv6 block is between 0 and 128 . For example, 10.128.0.0/14 or fd01::/48 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. For an IPv4 network the default value is 23 . For an IPv6 network the default value is 64 . The default value is also the minimum value for IPv6. networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. If you use the OVN-Kubernetes network plugin, you can specify an IP address block for both of the IPv4 and IPv6 address families. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. 
For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 or fd00::/48 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 4.8.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 4.11. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . 
String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 
For example, sshKey: ssh-ed25519 AAAA.. . 4.8.2. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. 
To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Provide the contents of the certificate file that you used for your mirror registry. 18 Provide the imageContentSources section from the output of the command to mirror the repository. Additional resources See Load balancing requirements for user-provisioned infrastructure for more information on the API and application ingress load balancing requirements. 4.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. 
Note For bare metal installations, if you do not assign node IP addresses from the range that is specified in the networking.machineNetwork[].cidr field in the install-config.yaml file, you must include them in the proxy.noProxy field. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. 
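After the cluster is running, a quick way to confirm the result is to print that Proxy object and compare its spec with the values from your install-config.yaml file. This is a verification sketch only and assumes that you are logged in to the new cluster with oc :

$ oc get proxy/cluster -o yaml

The httpProxy , httpsProxy , and noProxy values that the cluster is actually using are reported in the status section of the object.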
If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.8.4. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 4.9. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources See Recovering from expired control plane certificates for more information about recovering kubelet certificates. 4.10. Configuring chrony time service You must set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.12.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. 
Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 4.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. 
In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 4.11.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. 
RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. 
You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 4.11.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available.
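If you prefer to check all three files in one pass, a short shell loop is a convenient alternative to repeating the curl command. This is an optional sketch and assumes that the files are published as bootstrap.ign , master.ign , and worker.ign on your HTTP server:

$ for node_type in bootstrap master worker; do
    # Request only the headers and print the HTTP status code for each Ignition config file
    curl -skI "http://<HTTP_server>/${node_type}.ign" -o /dev/null -w "${node_type}.ign: HTTP %{http_code}\n"
  done

Each file should return an HTTP 200 status code.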
Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.12-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. 
The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. 
Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 4.11.3. Advanced RHCOS installation configuration A key benefit for manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 4.11.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/sda Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system. 
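In the procedure above, a static IPv4 configuration entered from the live shell might look like the following sketch. The connection name, addresses, and DNS server are placeholders; run nmcli con show first to find the connection name that NetworkManager assigned on your hardware:

# Assign a static address, gateway, and DNS server to the live system
$ sudo nmcli con mod 'Wired connection 1' ipv4.method manual ipv4.addresses 192.168.1.20/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.5
# Reactivate the connection so that the settings take effect, then run coreos-installer with --copy-network as shown above
$ sudo nmcli con up 'Wired connection 1'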
Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 4.11.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk size to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 4.11.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. 
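To see which filesystems the kubelet treats as nodefs and imagefs on an installed node, you can compare the mounts that back /var/lib/kubelet and /var/lib/containers . This is a quick check only; the node name is a placeholder:

# Inspect the filesystems backing nodefs and imagefs on a node
oc debug node/<node_name> -- chroot /host df -h /var/lib/kubelet /var/lib/containers

With the default partition scheme, both paths report the same root filesystem. After you add a separate /var/lib/containers partition, for example, the two paths report different devices.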
The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes if those instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The files in the <installation_directory>/manifests and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. Next steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 4.11.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. Saved partitions might be data partitions from an existing OpenShift Container Platform system.
You can identify the disk partitions you want to keep either by partition label or by number. Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/sda The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/sda This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/sda In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 4.11.3.3. Identifying Ignition configs When performing a manual RHCOS installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign , and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append the ignition.firstboot ignition.platform.id=metal kernel arguments, or the ignition.config.url option will be ignored. 4.11.3.4. Default console configuration Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.12 boot image use a default console that is meant to accommodate most virtualized and bare metal setups. Different cloud and virtualization platforms may use different default settings depending on the chosen architecture.
Bare metal installations use the kernel default settings, which typically means that the graphical console is the primary console and the serial console is disabled. The default consoles may not match your specific hardware configuration, or you might have specific needs that require you to adjust the default console. For example: You want to access the emergency shell on the console for debugging purposes. Your cloud platform does not provide interactive access to the graphical console, but provides a serial console. You want to enable multiple consoles. Console configuration is inherited from the boot image. This means that new nodes in existing clusters are unaffected by changes to the default console. You can configure the console for bare metal installations in the following ways: Using coreos-installer manually on the command line. Using the coreos-installer iso customize or coreos-installer pxe customize subcommands with the --dest-console option to create a custom image that automates the process. Note For advanced customization, perform console configuration using the coreos-installer iso or coreos-installer pxe subcommands, and not kernel arguments. 4.11.3.5. Enabling the serial console for PXE and ISO installations By default, the Red Hat Enterprise Linux CoreOS (RHCOS) serial console is disabled and all output is written to the graphical console. You can enable the serial console for an ISO installation and reconfigure the bootloader so that output is sent to both the serial console and the graphical console. Procedure Boot the ISO installer. Run the coreos-installer command to install the system, adding the --console option once to specify the graphical console, and a second time to specify the serial console: USD coreos-installer install \ --console=tty0 \ 1 --console=ttyS0,<options> \ 2 --ignition-url=http://host/worker.ign /dev/sda 1 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 2 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. Reboot into the installed system. Note A similar outcome can be obtained by using the coreos-installer install --append-karg option, and specifying the console with console= . However, this will only set the console for the kernel and not the bootloader. To configure a PXE installation, make sure the coreos.inst.install_dev kernel command line option is omitted, and use the shell prompt to run coreos-installer manually using the above ISO installation procedure. 4.11.3.6. Customizing a live RHCOS ISO or PXE install You can use the live ISO image or PXE environment to install RHCOS by injecting an Ignition config file directly into the image. This creates a customized image that you can use to provision your system. For an ISO image, the mechanism to do this is the coreos-installer iso customize subcommand, which modifies the .iso file with your configuration. Similarly, the mechanism for a PXE environment is the coreos-installer pxe customize subcommand, which creates a new initramfs file that includes your customizations. The customize subcommand is a general purpose tool that can embed other types of customizations as well.
The following tasks are examples of some of the more common customizations: Inject custom CA certificates for when corporate security policy requires their use. Configure network settings without the need for kernel arguments. Embed arbitrary preinstall and post-install scripts or binaries. 4.11.3.7. Customizing a live RHCOS ISO image You can customize a live RHCOS ISO image directly with the coreos-installer iso customize subcommand. When you boot the ISO image, the customizations are applied automatically. You can use this feature to configure the ISO image to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and the Ignition config file, and then run the following command to inject the Ignition config directly into the ISO image: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/sda 2 1 The Ignition config file that is generated from the openshift-installer installation program. 2 When you specify this option, the ISO image automatically runs an installation. Otherwise, the image remains configured for installation, but does not install automatically unless you specify the coreos.inst.install_dev kernel argument. Optional: To remove the ISO image customizations and return the image to its pristine state, run: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now re-customize the live ISO image or use it in its pristine state. Applying your customizations affects every subsequent boot of RHCOS. 4.11.3.7.1. Modifying a live install ISO image to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image to enable the serial console to receive output: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/sda 4 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. In this case, /dev/sda . If you omit this option, the ISO image automatically runs the installation program which will fail unless you also specify the coreos.inst.install_dev kernel argument. Note The --dest-console option affects the installed system and not the live ISO system. To modify the console for a live ISO system, use the --live-karg-append option and specify the console with console= . Your customizations are applied and affect every subsequent boot of the ISO image. 
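To illustrate that note, the following sketch extends the preceding customization so that the live ISO environment itself also uses the serial console. The flags shown are the documented customize options; the image name and Ignition path are placeholders. Because --dest-device is omitted here, the image does not install automatically unless you also specify the coreos.inst.install_dev kernel argument:

# Enable the serial console for both the installed system and the live environment
coreos-installer iso customize rhcos-<version>-live.x86_64.iso \
  --dest-ignition <path> \
  --dest-console tty0 \
  --dest-console ttyS0,115200n8 \
  --live-karg-append console=ttyS0,115200n8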
Optional: To remove the ISO image customizations and return the image to its original state, run the following command: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now recustomize the live ISO image or use it in its original state. 4.11.3.7.2. Modifying a live install ISO image to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image for use with a custom CA: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 4.11.3.7.3. Modifying a live install ISO image with customized network settings You can embed a NetworkManager keyfile into the live ISO image and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with your configured networking: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection Network settings are applied to the live system and are carried over to the destination system. 4.11.3.8. Customizing a live RHCOS PXE environment You can customize a live RHCOS PXE environment directly with the coreos-installer pxe customize subcommand. 
When you boot the PXE environment, the customizations are applied automatically. You can use this feature to configure the PXE environment to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new initramfs file that contains the customizations from your Ignition config: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/sda \ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3 1 The Ignition config file that is generated from openshift-installer . 2 When you specify this option, the PXE environment automatically runs an install. Otherwise, the image remains configured for installing, but does not do so automatically unless you specify the coreos.inst.install_dev kernel argument. 3 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Applying your customizations affects every subsequent boot of RHCOS. 4.11.3.8.1. Modifying a live install PXE environment to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new customized initramfs file that enables the serial console to receive output: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/sda \ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the PXE environment automatically runs the installer which will fail unless you also specify the coreos.inst.install_dev kernel argument. 5 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Your customizations are applied and affect every subsequent boot of the PXE environment. 4.11.3.8.2. Modifying a live install PXE environment to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. 
Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file for use with a custom CA: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --ignition-ca cert.pem \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 4.11.3.8.3. Modifying a live install PXE environment with customized network settings You can embed a NetworkManager keyfile into the live PXE environment and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file that contains your configured networking: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Network settings are applied to the live system and are carried over to the destination system. 4.11.3.9. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. 
The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 4.11.3.9.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. 
Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=name[:network_interfaces][:options] name is the bonding device name ( bond0 ), network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple network interfaces to a single interface Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter and to use DHCP, for example: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . 
Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 4.11.3.9.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 4.12. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Install RHCOS to the specified destination device. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i, --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. --network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>... Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation.
--post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o, --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 4.11.3.9.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 4.13. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted.
The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot. For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 4.11.4. Enabling multipathing with kernel arguments on RHCOS RHCOS supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container Platform 4.8 or later. While postinstallation support is available by activating multipathing via the machine config, enabling multipathing during installation is recommended. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. Important On IBM Z and IBM(R) LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z and IBM(R) LinuxONE . The following procedure enables multipath at installation time and appends kernel arguments to the coreos-installer install command so that the installed system itself will use multipath beginning from the first boot. Note OpenShift Container Platform does not support enabling multipathing as a day-2 activity on nodes that have been upgraded from 4.6 or earlier. Prerequisites You have created the Ignition config files for your cluster. You have reviewed Installing RHCOS and starting the OpenShift Container Platform bootstrap process . 
Procedure To enable multipath and start the multipathd daemon, run the following command on the installation host: USD mpathconf --enable && systemctl start multipathd.service Optional: If booting the PXE or ISO, you can instead enable multipath by adding rd.multipath=default from the kernel command line. Append the kernel arguments by invoking the coreos-installer program: If there is only one multipath device connected to the machine, it should be available at path /dev/mapper/mpatha . For example: USD coreos-installer install /dev/mapper/mpatha \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the path of the single multipathed device. If there are multiple multipath devices connected to the machine, or to be more explicit, instead of using /dev/mapper/mpatha , it is recommended to use the World Wide Name (WWN) symlink available in /dev/disk/by-id . For example: USD coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the WWN ID of the target multipathed device. For example, 0xx194e957fcedb4841 . This symlink can also be used as the coreos.inst.install_dev kernel argument when using special coreos.inst.* arguments to direct the live installer. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process". Reboot into the installed system. Check that the kernel arguments worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. 4.11.4.1. Enabling multipathing on secondary disks RHCOS also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to enable multipathing for the secondary disk at installation time. Prerequisites You have read the section Disk partitioning . You have read Enabling multipathing with kernel arguments on RHCOS . You have installed the Butane utility. 
Procedure Create a Butane config with information similar to the following: Example multipath-config.bu variant: openshift version: 4.12.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-containers.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target 1 The configuration must be set before launching the multipath daemon. 2 Starts the mpathconf utility. 3 This field must be set to the value true . 4 Creates the filesystem and directory /var/lib/containers . 5 The device must be mounted before starting any nodes. 6 Mounts the device to the /var/lib/containers mount point. This location cannot be a symlink. Create the Ignition configuration by running the following command: USD butane --pretty --strict multipath-config.bu > multipath-config.ign Continue with the rest of the first boot RHCOS installation process. Important Do not add the rd.multipath or root kernel arguments on the command line during installation unless the primary disk is also multipathed. 4.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines.
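If the wait-for command times out instead of succeeding, gathering logs from the bootstrap host usually reveals the cause. The following is a sketch only; it assumes SSH access with the key you provided at installation time, and the addresses are placeholders:

# Collect bootstrap and control plane logs for troubleshooting a stalled bootstrap
./openshift-install gather bootstrap --dir <installation_directory> \
  --bootstrap <bootstrap_address> \
  --master <control_plane_1_address> \
  --master <control_plane_2_address> \
  --master <control_plane_3_address>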
After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. Additional resources See Monitoring installation progress for more information about monitoring the installation logs and retrieving diagnostic data if installation issues arise. 4.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: $ watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Configure the Operators that are not available. Additional resources See Gathering logs from a failed installation for details about gathering data in the event of a failed OpenShift Container Platform installation. See Troubleshooting Operator issues for steps to check Operator pod health across the cluster and gather Operator logs for diagnosis. 4.15.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: $ oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 4.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 4.15.2.1.
Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed . Procedure Change the managementState of the Image Registry Operator configuration from Removed to Managed . For example: $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 4.15.2.2. Configuring registry storage for bare metal and other manual installations As a cluster administrator, you must configure your registry to use storage after installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The provisioned storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: $ oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: $ oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: $ oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: $ oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 4.15.2.3. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 4.15.2.4. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters.
An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and run with only one ( 1 ) replica: $ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: $ oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: $ oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 4.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: $ watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials: $ ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods.
To view a list of all pods, use the following command: $ oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: $ oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. 4.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service . 4.18. Next steps Validating an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster .
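To complement the Operator verification steps earlier in this chapter, the following is a small shell sketch, not part of the official procedure, that flags any cluster Operator that is not yet Available or that is still Progressing or Degraded. It assumes the column order shown in the example output above (NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE) and that every Operator already reports a version; if those assumptions do not hold, the column positions used by awk would be wrong.

#!/usr/bin/env bash
# List cluster Operators that still need attention (sketch).
# Columns assumed: NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
not_ready=$(oc get clusteroperators --no-headers | awk '$3 != "True" || $4 == "True" || $5 == "True" {print $1}')
if [ -z "${not_ready}" ]; then
  echo "All cluster Operators are Available and stable."
else
  echo "Operators that are not yet Available, or are Progressing or Degraded:"
  echo "${not_ready}"
fi

A check such as this can be run after the watch command in the procedure to confirm that it is safe to continue with the remaining installation steps.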
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64", "networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
โ”œโ”€โ”€ auth โ”‚ โ”œโ”€โ”€ kubeadmin-password โ”‚ โ””โ”€โ”€ kubeconfig โ”œโ”€โ”€ bootstrap.ign โ”œโ”€โ”€ master.ign โ”œโ”€โ”€ metadata.json โ””โ”€โ”€ worker.ign", "variant: openshift version: 4.12.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-chrony.bu -o 99-worker-chrony.yaml", "oc apply -f ./99-worker-chrony.yaml", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.12-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" 
\"<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/sda", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". 
โ”œโ”€โ”€ auth โ”‚ โ”œโ”€โ”€ kubeadmin-password โ”‚ โ””โ”€โ”€ kubeconfig โ”œโ”€โ”€ bootstrap.ign โ”œโ”€โ”€ master.ign โ”œโ”€โ”€ metadata.json โ””โ”€โ”€ worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/sda", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/sda", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/sda", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/sda", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/sda 2", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/sda 4", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/sda \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/sda \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", 
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit", "variant: openshift version: 4.12.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target", "butane --pretty --strict multipath-config.bu > multipath-config.ign", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get 
csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE 
PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_bare_metal/installing-restricted-networks-bare-metal
Chapter 7. Using Red Hat Subscription Manager
Chapter 7. Using Red Hat Subscription Manager 7.1. Understanding Red Hat Subscription Management Red Hat Subscription Manager tracks the Red Hat products that your organization has purchased and the systems that the products are installed on. Subscription Manager establishes the relationship between the product subscriptions that are available to the system and the elements of infrastructure of your business where those subscriptions are allocated. Important Red Hat subscription services have moved from the Red Hat Customer Portal to the Red Hat Hybrid Cloud Console ; however, your technical environment might require you to perform some tasks in the Customer Portal. For example, a user with a Red Hat Satellite Server on a disconnected network will continue to use the Customer Portal to create and manage subscription manifests. Also, a connected user without a Satellite Server will use the Customer Portal to enable simple content access for their organization. Note If simple content access mode is enabled for your Red Hat organization, then you do not need to attach subscriptions or manage entitlements. The simple content access mode is enabled at the organization level for new accounts by default. For information about enabling simple content access mode for an existing organization, see Enabling simple content access with Red Hat Subscription Management . While Red Hat products are available through the GNU General Public License, Red Hat supports its products through a subscription-based license. Support includes: Downloadable content and updates Access to the knowledge base Support for your product Red Hat Subscription Management provides administrators with the following information: Which products are available to your organization Which products are installed on your systems The status of your subscriptions Red Hat Subscription Management allows administrators to identify the relationship between their systems and the subscriptions used by those systems from two different perspectives: All active subscriptions for an account and which systems are consuming them All systems profiled within the inventory and which subscriptions they are consuming Additional resources For information about the changes and improvements to the subscription management platform, see Transition of Red Hat's subscription services to console.redhat.com . For more information about simple content access, see Getting Started with Simple Content Access . For information about how to register your RHEL system, see Getting Started with RHEL System Registration . For information about managing user roles for services hosted on the Hybrid Cloud Console, see User Access Configuration Guide for Role-based Access Control (RBAC) . 7.2. Understanding your workflow for subscribing with Red Hat products Before you can register your system to Red Hat, you need an active subscription. Subscriptions can be purchased through the Red Hat Store or by contacting Sales directly. With a registered system and an active subscription, you can do the following tasks: View or manage any systems for your account in the Systems inventory on the Red Hat Hybrid Cloud Console View or manage any subscriptions for your account in the Subscription Inventory on the Red Hat Hybrid Cloud Console Download software packages and updates from the content delivery network for as long as the subscription is active Each element in the subscription service must be uniquely identified.
This allows true relationships to be established between the system, the products, and the subscriptions. The subscription service generates and installs these certificates on the local system: An identity certificate for the system. This certificate is created when the system is registered. The system uses it to authenticate to the subscription service and periodically check for updates. A product certificate for each Red Hat product installed on the system. This certificate is installed on the system along with the product. It identifies the product, but it is not unique to the system. A subscription certificate for each subscription associated with the system. This certificate includes information about the subscription from the inventory. Subscription management delivers better information and offers administrators better control over their infrastructures. 7.3. Tools and applications available for Red Hat Subscription Management Important Red Hat Subscription services have moved from the Customer Portal to Red Hat Hybrid Cloud Console , however, your technical environment might require you to perform some tasks in the Customer Portal. For example, a user with a Red Hat Satellite Server on a disconnected network will continue to use the Customer Portal to create and manage subscription manifests. All Red Hat Enterprise Linux subscriptions automatically include the following tools for managing the subscription configuration: Red Hat Subscription Manager client tools to manage local systems on the command line Subscription services on the Red Hat Hybrid Cloud Console to manage systems and subscriptions for your account Red Hat Satellite as an on-premise solution for systems that may not regularly check in The diversity of tools allows administrators to create a workflow that fits both the business and infrastructure demands of their organization. 7.3.1. Red Hat Subscription Manager Red Hat Subscription Manager tracks and displays what subscriptions are available to the local system and what subscriptions have been consumed by the local system. It works as a conduit back to the subscription service to synchronize changes, such as available product quantities or subscription expiration dates. The Subscription Manager includes the following components: A UI-based client to manage the local machine A CLI client, which can be used with other applications or in automation scripts These tools allow authorized users to perform tasks directly related to managing subscriptions, such as registering a system to Red Hat and updating the certificates required for authentication. Some minor operations, such as updating system facts, are available to help show and track available subscriptions. Note You must have root privileges to run the Subscription Manager CLI tool because of the nature of the changes to the system. However, Subscription Manager connects to the subscription service as a user account for the subscription service. The Subscription Manager is part of the firstboot process for configuring content and updates, but you can register the system at any time through the Subscription Manager UI or CLI. New subscriptions, new products, and updates can be viewed and applied to a system through the Subscription Manager tools. Additional resources For information about how to register your RHEL system, see Getting Started with RHEL System Registration . 
For information about how to view and manage your subscriptions and their details, see Viewing and managing your subscription inventory on the Hybrid Cloud Console 7.3.1.1. Launching Red Hat Subscription Manager You can run Red Hat Subscription Manager from the Red Hat Enterprise Linux UI. The following instructions show you how to run Subscription Manager from the RHEL UI based on the release version of your system: In RHEL 9, click Activities > Show Applications . In RHEL 8, click Activities > Show All Programs . In RHEL 7, click System Tools > Administration . 7.4. Viewing subscriptions with Red Hat Subscription Manager To manage subscriptions, administrators need to know the following information: What subscriptions are available to the system What subscriptions are being used by the system You can view your subscriptions and their details in the following ways: From the command line interface (CLI) using the subscription-manager command From the Subscription Inventory page on the Hybrid Cloud Console. The following table shows options that you can use to manage your subscriptions with the subscription-manager command. Table 7.1. subscription-manager list Options Command Description --installed (or nothing) Lists all of the installed products on the system. If no option is given with 'list', it is the same as using the '--installed' argument. --consumed Lists all of the subscriptions associated with the system. --available[-all] Using '--available' alone lists all of the compatible, active subscriptions for the system. Using '--available --all' lists all options, even ones not compatible with the system. --ondate=YYYY-MM-DD Shows subscriptions which are active and available on the specified date. This is only used with the '--available' option. If this is not used, then the command uses the current date. --installed Lists all of the products that are installed on the system (and whether they have a subscription) and it lists all of the product subscriptions which are associated with the system (and whether those products are installed). Example 'list' showing subscriptions consumed Example 'list' showing all available subscriptions Additional resources For information about viewing your subscription inventory with the Hybrid Cloud Console GUI, see Viewing and managing your subscription inventory on the Hybrid Cloud Console 7.5. Using system purpose with Red Hat Subscription Manager You use system purpose to record the intended use of a Red Hat Enterprise Linux (RHEL) system. Setting system purpose allows you to specify system attributes, such as the role, service level agreement, and usage. The following values are available for each system purpose attribute by default. 
Role Red Hat Enterprise Linux Server Red Hat Enterprise Linux Workstation Red Hat Enterprise Linux Compute Node Service Level Agreement Premium Standard Self-Support Usage Production Development/Test Disaster Recovery Configuring system purpose offers the following benefits: In-depth system-level information for system administrators and business operations Reduced overhead when determining why a system was procured and its intended purpose You can set system purpose data in any of the following ways: During activation key creation During image creation During installation using the Connect to Red Hat screen to register your system During installation using the syspurpose Kickstart command After installation using the subscription-manager CLI tool Additional resources To configure system purpose with an activation key, see Creating an activation key . To configure system purpose with the Subscription Manager CLI tool, see Configuring System Purpose using the subscription-manager command-line tool 7.5.1. Listing available values for system purpose attributes As the root user, you can enter the subscription-manager syspurpose command and the role , usage , service-level , or addons subcommand with the --list option to list available values for all system purpose attributes. Listing system purpose values for an unregistered system requires you to enter additional information on the command line. The following examples show how to list the available system purposes values for the role attribute for registered and unregistered systems. When the system is registered, enter the following command: When the system is unregistered, enter the following command with the --username , --password , --organization , and --token authentication options, as required: where: The --username option specifies the name of a user with organization administrator authority in your Red Hat account. The --password option specifies the associated password. The --organization option specifies the organization ID number. The --token option specifies the token of the virt-who service account. Note Specifying the organization ID is only required if you have multiple organizations and need to specify a particular organization. Note Specifying the token is only required if you have configured virt-who to connect to OpenShift Virtualization. When you enter the command on a registered system or on an unregistered system with authentication options, the expected output is the list of available values for the role attribute: System purpose addons are specific to your organization and do not appear in the list of available values. If you try to list available system purpose addons with the --list option, then subscription-manager displays a warning message. For example: 7.5.2. Setting custom values for system purpose attributes If the value you want to set is not included in the list of valid values for the account, you can enter a custom system purpose value with the --set option. To set a custom value, you must enter the command on a registered system or enter the command with authentication options on an unregistered system. The following examples show how to set a custom value of "foo" for the system purpose role attribute on registered and unregistered systems. 
When the system is registered, enter the following command: When the system is unregistered, enter the following command with the --username , --password , --org , and --token authentication options, as required: where: The --username option specifies the name of a user with organization administrator authority in your Red Hat account. The --password option specifies the associated password. The --org option specifies the organization ID number. The --token option specifies the token of the virt-who service account. Note Specifying the organization ID is only required if you have multiple organizations and need to specify a particular organization. Note Specifying the token is only required if you have configured virt-who to connect to OpenShift Virtualization. When you set a custom value on a registered system or on an unregistered system with authentication options, the expected output displays a warning message because the custom value is considered invalid. However, the output also displays a confirmation message because subscription-manager sets the custom value despite the warning. Important Subscription Manager only outputs the warning message if the system is registered or if you enter authentication credentials on an unregistered system. If your system is unregistered and you do not enter authentication options, Subscription Manager sets the custom value without displaying the warning message. 7.6. Enabling simple content access with Red Hat Subscription Management If you use a Red Hat Satellite Server, then you can enable simple content access in the following ways: On a subscription manifest on the Red Hat Hybrid Cloud Console Manifests page. On a Satellite organization using the Satellite graphical user interface. Note The simple content access setting on the Satellite organization supersedes the settings on the manifest. If you do not use a Satellite Server, then you can enable simple content access through the Red Hat Customer Portal. After simple content access is enabled, you can complete additional post-enablement steps related to activation key, host group, and host configuration through the Hybrid Cloud Console. 7.6.1. Enabling simple content access without a Red Hat Satellite Server When you enable simple content access, you change the content access mode. You stop using the traditional mode, where you must attach a subscription to a system as a prerequisite of gaining access to content. You start using a new mode, where you can consume content regardless of the presence of an attached subscription. Prerequisites The Organization Administrator role for the organization Procedure To enable simple content access for the directly connected systems in Red Hat Subscription Manager without a Satellite Server, complete the following steps: Log in to the Red Hat Customer Portal. On the Overview page, set the Simple content access for Red Hat switch to Enabled . After you complete these steps, simple content access is enabled for all current and newly registered systems. Current systems will download the required simple content access certification information the next time that they check in to the subscription management services. Additional resources For information about how to enable simple content access for a Satellite-supported system, see Setting the simple content access mode from Red Hat Hybrid Cloud Console . 7.7. Understanding errata Part of subscription management is tracking updates and new releases of software.
Whenever an update is available - from a bug fix to a new release - a notification email can be sent to you. The notifications are only sent for registered systems that have subscriptions for that product associated with them. 7.7.1. Managing errata notification settings Errata notifications are set as a preference for the user account, not for an individual system. When Red Hat Subscription Management checks for potential errata updates, it checks the entire inventory, not specific systems. An errata notification is sent if any registered system is affected, but the email does not list what systems are actually affected. Procedure From the Overview page, click the account name. Click Account Settings . Click Errata Notifications . Select the types of errata you want to receive. Security errata relate to critical security issues. Bug fixes and enhancement notifications relate to incremental updates to the product. Select the notification frequency. Click Save . 7.7.2. Troubleshooting errata applicability If you see applicable errata displayed in Red Hat Subscription Management but have no yum updates available, it can mean that one or more settings are not correct. Procedure Verify that you have the proper permissions to install all available updates on the system. If you do not have the necessary permissions, contact your organization administrator. If you are running RHEL 5 or RHEL 6.4 or earlier, consider upgrading your system so that you can have the most up-to-date errata and system updates. Force a check-in and run yum update again, as shown in the command sequence after this section. If the system has not been checked in recently, you may see a discrepancy between what you see in the Customer Portal and what is actually installed on your system. Note After forcing your system to check in again, wait up to four hours for the errata data on Red Hat Subscription Management to update with the correct data.
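The force check-in step referenced above corresponds to the following command sequence, assembled here as a sketch from the command listing at the end of this chapter. Run the commands as root; the ordering and comments are illustrative.

# Force the system to check in with the subscription service, then re-check for updates (sketch).
rm -f /var/lib/rhsm/packages/packages.json   # clear the cached package profile
service rhsmcertd stop                       # stop the subscription manager certificate daemon
rhsmcertd --now                              # restart the daemon and trigger an immediate check-in
yum update                                   # re-evaluate the updates that are now applicable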
[ "subscription-manager list --consumed +-------------------------------------------+ Consumed Product Subscriptions +-------------------------------------------+ ProductName: Red Hat Enterprise Linux Server ContractNumber: 1458961 SerialNumber: 171286550006020205 Active: True Begins: 2009-01-01 Expires: 2011-12-31", "subscription-manager list --available --all +-------------------------------------------+ Available Subscriptions +-------------------------------------------+ ProductName: RHEL for Physical Servers ProductId: MKT-rhel-server PoolId: ff8080812bc382e3012bc3845ca000cb Quantity: 10 Expires: 2011-09-20 ProductName: RHEL Workstation ProductId: MKT-rhel-workstation-mkt PoolId: 5e09a31f95885cc4 Quantity: 10 Expires: 2011-09-20", "subscription-manager syspurpose role --list", "subscription-manager syspurpose role --list --username=<username> --password=<password> --organization=<organization_ID> --token=<token>", "+-------------------------------------------+ Available role +-------------------------------------------+ - Red Hat Enterprise Linux Workstation - Red Hat Enterprise Linux Server - Red Hat Enterprise Linux Compute Node", "subscription-manager syspurpose addons --list There are no available values for the system purpose \"addons\" from the available subscriptions in this organization.", "subscription-manager syspurpose role --set=\"foo\"", "subscription-manager syspurpose role --set=\"foo\" --username=<username> --password=<password> --organization=<organization_ID> --token=<token>", "Warning: Provided value \"foo\" is not included in the list of valid values - Red Hat Enterprise Linux Workstation - Red Hat Enterprise Linux Server - Red Hat Enterprise Linux Compute Node role set to \"foo\".", "rm -f /var/lib/rhsm/packages/packages.json service rhsmcertd stop rhsmcertd --now yum update" ]
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_rhel_system_registration/adv-reg-rhel-using-rhsm_
Using external Red Hat utilities with Identity Management
Using external Red Hat utilities with Identity Management Red Hat Enterprise Linux 9 Integrating services and Red Hat products in IdM Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_external_red_hat_utilities_with_identity_management/index
Chapter 18. Node [config.openshift.io/v1]
Chapter 18. Node [config.openshift.io/v1] Description Node holds cluster-wide information about node specific features. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 18.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values. 18.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description cgroupMode string CgroupMode determines the cgroups version on the node workerLatencyProfile string WorkerLatencyProfile determins the how fast the kubelet is updating the status and corresponding reaction of the cluster 18.1.2. .status Description status holds observed values. Type object 18.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/nodes DELETE : delete collection of Node GET : list objects of kind Node POST : create a Node /apis/config.openshift.io/v1/nodes/{name} DELETE : delete a Node GET : read the specified Node PATCH : partially update the specified Node PUT : replace the specified Node /apis/config.openshift.io/v1/nodes/{name}/status GET : read status of the specified Node PATCH : partially update status of the specified Node PUT : replace status of the specified Node 18.2.1. /apis/config.openshift.io/v1/nodes Table 18.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Node Table 18.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 18.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Node Table 18.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 18.5. HTTP responses HTTP code Reponse body 200 - OK NodeList schema 401 - Unauthorized Empty HTTP method POST Description create a Node Table 18.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.7. Body parameters Parameter Type Description body Node schema Table 18.8. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 202 - Accepted Node schema 401 - Unauthorized Empty 18.2.2. /apis/config.openshift.io/v1/nodes/{name} Table 18.9. Global path parameters Parameter Type Description name string name of the Node Table 18.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Node Table 18.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 18.12. Body parameters Parameter Type Description body DeleteOptions schema Table 18.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Node Table 18.14. 
Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 18.15. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Node Table 18.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 18.17. Body parameters Parameter Type Description body Patch schema Table 18.18. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Node Table 18.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.20. Body parameters Parameter Type Description body Node schema Table 18.21. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty 18.2.3. /apis/config.openshift.io/v1/nodes/{name}/status Table 18.22. Global path parameters Parameter Type Description name string name of the Node Table 18.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Node Table 18.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 18.25. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Node Table 18.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 18.27. Body parameters Parameter Type Description body Patch schema Table 18.28. 
HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Node Table 18.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.30. Body parameters Parameter Type Description body Node schema Table 18.31. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty
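As a usage sketch for the spec fields described above, the cluster-scoped Node configuration can be modified with a standard oc patch request. The resource is assumed to be the singleton named cluster, and the example values for cgroupMode and workerLatencyProfile are common OpenShift settings rather than values taken from this reference; check the product documentation for the values supported by your release:

# Set the cgroup version used on the nodes (spec.cgroupMode)
oc patch nodes.config.openshift.io cluster --type merge -p '{"spec":{"cgroupMode":"v2"}}'

# Select a worker latency profile (spec.workerLatencyProfile)
oc patch nodes.config.openshift.io cluster --type merge -p '{"spec":{"workerLatencyProfile":"MediumUpdateAverageReaction"}}'

# Review the resulting object, including status
oc get nodes.config.openshift.io cluster -o yaml

Because the PATCH endpoint accepts merge patches, only the fields being changed need to appear in the request body.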
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/config_apis/node-config-openshift-io-v1
Chapter 10. Specifying link cost
Chapter 10. Specifying link cost When linking sites, you can assign a cost to each link to influence the traffic flow. By default, link cost is set to 1 for a new link. In a service network, the routing algorithm attempts to use the path with the lowest total cost from client to target server. If you have services distributed across different sites, you might want a client to favor a particular target or link. In this case, you can specify a cost of greater than 1 on the alternative links to reduce the usage of those links. Note The distribution of open connections is statistical, that is, not a round robin system. If a connection only traverses one link, then the path cost is equal to the link cost. If the connection traverses more than one link, the path cost is the sum of all the links involved in the path. Cost acts as a threshold for using a path from client to server in the network. When there is only one path, traffic flows on that path regardless of cost. Note If you start with two targets for a service, and one of the targets is no longer available, traffic flows on the remaining path regardless of cost. When there are a number of paths from a client to server instances or a service, traffic flows on the lowest cost path until the number of connections exceeds the cost of an alternative path. After this threshold of open connections is reached, new connections are spread across the alternative path and the lowest cost path. Prerequisite You have set your Kubernetes context to a site that you want to link from . A token for the site that you want to link to . Procedure Create a link to the service network: USD skupper link create <filename> --cost <integer-cost> where <integer-cost> is an integer greater than 1 and traffic favors lower cost links. Note If a service can be called without traversing a link, that service is considered local, with an implicit cost of 0 . For example, create a link with cost set to 2 using a token file named token.yaml : USD skupper link create token.yaml --cost 2 Check the link cost: USD skupper link status link1 --verbose The output is similar to the following: Cost: 2 Created: 2022-11-17 15:02:01 +0000 GMT Name: link1 Namespace: default Site: default-0d99d031-cee2-4cc6-a761-697fe0f76275 Status: Connected Observe traffic using the console. If you have a console on a site, log in and navigate to the processes for each server. You can view the traffic levels corresponding to each client. Note If there are multiple clients on different sites, filter the view to each client to determine the effect of cost on traffic. For example, in a two site network linked with a high cost with servers and clients on both sites, you can see that a client is served by the local servers while a local server is available. 10.1. Exposing services on the service network from a Linux host After creating a service network, exposed services can communicate across that network. The general flow for working with services is the same for Kubernetes and Podman sites. The skupper CLI has two options for exposing services that already exist on a host: expose supports simple use cases, for example, a host with a single service. See Section 10.1.1, "Exposing simple services on the service network" for instructions. service create and service bind is a more flexible method of exposing services, for example, if you have multiple services for a host. See Section 10.1.2, "Exposing complex services on the service network" for instructions. 10.1.1. 
Exposing simple services on the service network This section describes how services can be enabled for a service network for simple use cases. Prerequisites A Skupper Podman site Procedure Run a server, for example: USD podman run --name backend-target --network skupper --detach --rm -p 8080:8080 quay.io/skupper/hello-world-backend This step is not Skupper-specific, that is, this process is unchanged from standard processes for your host, for example you might have a native process you want to expose. Create a service that can communicate on the service network: USD skupper expose [host <hostname|ip>] where <host> is the name of the host where the server is running. For example, the name of the container if you run the server as a container. <ip> is the IP address where the server is running For the example deployment in step 1, you create a service using the following command: Options for this command include: --port <port-number> :: Specify the port number that this service is available on the service network. NOTE: You can specify more than one port by repeating this option. --target-port <port-number> :: Specify the port number of pods that you want to expose. --protocol <protocol> allows you specify the protocol you want to use, tcp , http or http2 If you are exposing a service that is running on the same host as your site that is not a podman container, do not use localhost . Instead, use host.containers.internal when exposing local services: skupper expose host host.containers.internal --address backend --port 8080 Create the service on another site in the service network: USD skupper service create backend 8080 10.1.2. Exposing complex services on the service network This section describes how services can be enabled for a service network for more complex use cases. Prerequisites A Skupper Podman site Procedure Run a server, for example: USD podman run --name backend-target --network skupper --detach --rm -p 8080:8080 quay.io/skupper/hello-world-backend This step is not Skupper-specific, that is, this process is unchanged from standard processes for your host. Create a service that can communicate on the service network: USD skupper service create <name> <port> where <name> is the name of the service you want to create <port> is the port the service uses For the example deployment in step 1, you create a service using the following command: USD skupper service create hello-world-backend 8080 Bind the service to a cluster service: USD skupper service bind <service-name> <target-type> <target-name> where <service-name> is the name of the service on the service network <target-type> is the object you want to expose, host is the only current valid value. <target-name> is the name of the cluster service For the example deployment in step 1, you bind the service using the following command: USD skupper service bind hello-world-backend host hello-world-backend 10.1.3. Consuming simple services from the service network Services exposed on Podman sites are not automatically available to other sites. This is the equivalent to Kubernetes sites created using skupper init --enable-service-sync false . Prerequisites A remote site where a service is exposed on the service network A Podman site Procedure Log into the host as the user associated with the Skupper site. Create the local service: USD skupper service create <service-name> <port number> 10.2. Deleting a Podman site When you no longer want the Linux host to be part of the service network, you can delete the site. 
Note This procedure removes all containers, volumes and networks labeled application=skupper . To check the labels associated with running containers: USD podman ps -a --format "{{.ID}} {{.Image}} {{.Labels}}" Procedure Make sure you are logged in as the user that created the site: USD skupper status Skupper is enabled for "<username>" with site name "<machine-name>-<username>". Delete the site and all podman resources (containers, volumes and networks) labeled with "application=skupper": USD skupper delete Skupper is now removed for user "<username>".
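A hedged pre-delete and post-delete check can make the removal less surprising. The label filter matches the application=skupper label mentioned in the note above; the podman --filter options used here are standard, but verify them against your podman version:

# List the containers, volumes, and networks that the delete will remove
podman ps -a --filter "label=application=skupper"
podman volume ls --filter "label=application=skupper"
podman network ls --filter "label=application=skupper"

# Remove the site as the user that created it
skupper delete

# Confirm that nothing labeled application=skupper remains
podman ps -a --filter "label=application=skupper"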
[ "skupper link create <filename> --cost <integer-cost>", "skupper link create token.yaml --cost 2", "skupper link status link1 --verbose", "Cost: 2 Created: 2022-11-17 15:02:01 +0000 GMT Name: link1 Namespace: default Site: default-0d99d031-cee2-4cc6-a761-697fe0f76275 Status: Connected", "podman run --name backend-target --network skupper --detach --rm -p 8080:8080 quay.io/skupper/hello-world-backend", "skupper expose [host <hostname|ip>]", "skupper expose host backend-target --address backend --port 8080", "skupper expose host host.containers.internal --address backend --port 8080", "skupper service create backend 8080", "podman run --name backend-target --network skupper --detach --rm -p 8080:8080 quay.io/skupper/hello-world-backend", "skupper service create <name> <port>", "skupper service create hello-world-backend 8080", "skupper service bind <service-name> <target-type> <target-name>", "skupper service bind hello-world-backend host hello-world-backend", "skupper service create <service-name> <port number>", "podman ps -a --format \"{{.ID}} {{.Image}} {{.Labels}}\"", "skupper status Skupper is enabled for \"<username>\" with site name \"<machine-name>-<username>\".", "skupper delete Skupper is now removed for user \"<username>\"." ]
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/using_service_interconnect/podmanspecifying-link-cost
Chapter 23. Managing replication topology
Chapter 23. Managing replication topology You can manage replication between servers in an Identity Management (IdM) domain. When you create a replica, Identity Management (IdM) creates a replication agreement between the initial server and the replica. The data that is replicated is then stored in topology suffixes and when two replicas have a replication agreement between their suffixes, the suffixes form a topology segment. 23.1. Replication agreements between IdM replicas When an administrator creates a replica based on an existing server, Identity Management (IdM) creates a replication agreement between the initial server and the replica. The replication agreement ensures that the data and configuration is continuously replicated between the two servers. IdM uses multiple read/write replica replication . In this configuration, all replicas joined in a replication agreement receive and provide updates, and are therefore considered suppliers and consumers. Replication agreements are always bilateral. Figure 23.1. Server and replica agreements IdM uses two types of replication agreements: Domain replication agreements replicate the identity information. Certificate replication agreements replicate the certificate information. Both replication channels are independent. Two servers can have one or both types of replication agreements configured between them. For example, when server A and server B have only domain replication agreement configured, only identity information is replicated between them, not the certificate information. 23.2. Topology suffixes Topology suffixes store the data that is replicated. IdM supports two types of topology suffixes: domain and ca . Each suffix represents a separate server, a separate replication topology. When a replication agreement is configured, it joins two topology suffixes of the same type on two different servers. The domain suffix: dc= example ,dc= com The domain suffix contains all domain-related data. When two replicas have a replication agreement between their domain suffixes, they share directory data, such as users, groups, and policies. The ca suffix: o=ipaca The ca suffix contains data for the Certificate System component. It is only present on servers with a certificate authority (CA) installed. When two replicas have a replication agreement between their ca suffixes, they share certificate data. Figure 23.2. Topology suffixes An initial topology replication agreement is set up between two servers by the ipa-replica-install script when installing a new replica. 23.3. Topology segments When two replicas have a replication agreement between their suffixes, the suffixes form a topology segment . Each topology segment consists of a left node and a right node . The nodes represent the servers joined in the replication agreement. Topology segments in IdM are always bidirectional. Each segment represents two replication agreements: from server A to server B, and from server B to server A. The data is therefore replicated in both directions. Figure 23.3. Topology segments 23.4. Viewing and modifying the visual representation of the replication topology using the WebUI Using the Web UI, you can view, manipulate, and transform the representation of the replication topology. The topology graph in the web UI shows the relationships between the servers in the domain. You can move individual topology nodes by holding and dragging the mouse. 
Interpreting the topology graph Servers joined in a domain replication agreement are connected by an orange arrow. Servers joined in a CA replication agreement are connected by a blue arrow. Topology graph example: recommended topology The recommended topology example below shows one of the possible recommended topologies for four servers: each server is connected to at least two other servers, and more than one server is a CA server. Figure 23.4. Recommended topology example Topology graph example: discouraged topology In the discouraged topology example below, server1 is a single point of failure. All the other servers have replication agreements with this server, but not with any of the other servers. Therefore, if server1 fails, all the other servers will become isolated. Avoid creating topologies like this. Figure 23.5. Discouraged topology example: Single Point of Failure Prerequisites You are logged in as an IdM administrator. Procedure Select IPA Server Topology Topology Graph . Make changes to the topology: You can move the topology graph nodes using the left mouse button: You can zoom in and zoom out the topology graph using the mouse wheel: You can move the canvas of the topology graph by holding the left mouse button: If you make any changes to the topology that are not immediately reflected in the graph, click Refresh . 23.5. Viewing topology suffixes using the CLI In a replication agreement, topology suffixes store the data that is replicated. You can view topology suffixes using the CLI. Procedure Enter the ipa topologysuffix-find command to display a list of topology suffixes: Additional resources Topology suffixes 23.6. Viewing topology segments using the CLI In a replication agreement, when two replicas have a replication agreement between their suffixes, the suffixes form a topology segments. You can view topology segments using the CLI. Procedure Enter the ipa topologysegment-find command to show the current topology segments configured for the domain or CA suffixes. For example, for the domain suffix: In this example, domain-related data is only replicated between two servers: server1.example.com and server2.example.com . (Optional) To display details for a particular segment only, enter the ipa topologysegment-show command: Additional resources Topology segments 23.7. Setting up replication between two servers using the Web UI Using the Identity Management (IdM) Web UI, you can choose two servers and create a new replication agreement between them. Prerequisites You are logged in as an IdM administrator. Procedure In the topology graph, hover your mouse over one of the server nodes. Figure 23.6. Domain or CA options Click on the domain or the ca part of the circle depending on what type of topology segment you want to create. A new arrow representing the new replication agreement appears under your mouse pointer. Move your mouse to the other server node, and click on it. Figure 23.7. Creating a new segment In the Add topology segment window, click Add to confirm the properties of the new segment. The new topology segment between the two servers joins them in a replication agreement. The topology graph now shows the updated replication topology: Figure 23.8. New segment created 23.8. Stopping replication between two servers using the Web UI Using the Identity Management (IdM) Web UI, you can remove a replication agreement from servers. Prerequisites You are logged in as an IdM administrator. 
Procedure Click on an arrow representing the replication agreement you want to remove. This highlights the arrow. Figure 23.9. Topology segment highlighted Click Delete . In the Confirmation window, click OK . IdM removes the topology segment between the two servers, which deletes their replication agreement. The topology graph now shows the updated replication topology: Figure 23.10. Topology segment deleted 23.9. Setting up replication between two servers using the CLI You can configure replication agreements between two servers using the ipa topologysegment-add command. Prerequisites You have the IdM administrator credentials. Procedure Create a topology segment for the two servers. When prompted, provide: The required topology suffix: domain or ca The left node and the right node, representing the two servers [Optional] A custom name for the segment For example: Adding the new segment joins the servers in a replication agreement. Verification Verify that the new segment is configured: 23.10. Stopping replication between two servers using the CLI You can terminate replication agreements from command line using the ipa topology segment-del command. Prerequisites You have the IdM administrator credentials. Procedure Optional. If you do not know the name of the specific replication segment that you want to remove, display all segments available. Use the ipa topologysegment-find command. When prompted, provide the required topology suffix: domain or ca . For example: Locate the required segment in the output. Remove the topology segment joining the two servers: Deleting the segment removes the replication agreement. Verification Verify that the segment is no longer listed: 23.11. Removing server from topology using the Web UI You can use Identity Management (IdM) web interface to remove a server from the topology. This action does not uninstall the server components from the host. Prerequisites You are logged in as an IdM administrator. The server you want to remove is not the only server connecting other servers with the rest of the topology; this would cause the other servers to become isolated, which is not allowed. The server you want to remove is not your last CA or DNS server. Warning Removing a server is an irreversible action. If you remove a server, the only way to introduce it back into the topology is to install a new replica on the machine. Procedure Select IPA Server Topology IPA Servers . Click on the name of the server you want to delete. Figure 23.11. Selecting a server Click Delete Server . Additional resources Uninstalling an IdM server 23.12. Removing server from topology using the CLI You can use the command line to remove an Identity Management (IdM) server from the topology. Prerequisites You have the IdM administrator credentials. The server you want to remove is not the only server connecting other servers with the rest of the topology; this would cause the other servers to become isolated, which is not allowed. The server you want to remove is not your last CA or DNS server. Important Removing a server is an irreversible action. If you remove a server, the only way to introduce it back into the topology is to install a new replica on the machine. Procedure To remove server1.example.com : On another server, run the ipa server-del command to remove server1.example.com . The command removes all topology segments pointing to the server: [Optional] On server1.example.com , run the ipa server-install --uninstall command to uninstall the server components from the machine. 
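After the removal, it is worth confirming that no topology segments still reference the deleted server. A minimal verification sketch, run from any remaining server; passing the suffix name as a positional argument is assumed here to avoid the interactive prompt shown earlier, so enter it at the prompt instead if your version asks for it:

# The removed server should no longer be listed
ipa server-find

# No domain or ca segments should reference the removed server
ipa topologysegment-find domain
ipa topologysegment-find ca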
23.13. Removing obsolete RUV records If you remove a server from the IdM topology without properly removing its replication agreements, obsolete replica update vector (RUV) records will remain on one or more remaining servers in the topology. This can happen, for example, due to automation. These servers will then expect to receive updates from the now removed server. In this case, you need to clean the obsolete RUV records from the remaining servers. Prerequisites You have the IdM administrator credentials. You know which replicas are corrupted or have been improperly removed. Procedure List the details about RUVs using the ipa-replica-manage list-ruv command. The command displays the replica IDs: Important The ipa-replica-manage list-ruv command lists ALL replicas in the topology, not only the malfunctioning or improperly removed ones. Remove obsolete RUVs associated with a specified replica using the ipa-replica-manage clean-ruv command. Repeat the command for every replica ID with obsolete RUVs. For example, if you know server1.example.com and server2.example.com are the malfunctioning or improperly removed replicas: Warning Proceed with extreme caution when using ipa-replica-manage clean-ruv . Running the command against a valid replica ID will corrupt all the data associated with that replica in the replication database. If this happens, re-initialize the replica from another replica using USD ipa-replica-manage re-initialize --from server1.example.com . Verification Run ipa-replica-manage list-ruv again. If the command no longer displays any corrupt RUVs, the records have been successfully cleaned. If the command still displays corrupt RUVs, clear them manually using this task: 23.14. Viewing available server roles in the IdM topology using the IdM Web UI Based on the services installed on an IdM server, it can perform various server roles . For example: CA server DNS server Key recovery authority (KRA) server. Procedure For a complete list of the supported server roles, see IPA Server Topology Server Roles . Note Role status absent means that no server in the topology is performing the role. Role status enabled means that one or more servers in the topology are performing the role. Figure 23.12. Server roles in the web UI 23.15. Viewing available server roles in the IdM topology using the IdM CLI Based on the services installed on an IdM server, it can perform various server roles . For example: CA server DNS server Key recovery authority (KRA) server. Procedure To display all CA servers in the topology and the current CA renewal server: Alternatively, to display a list of roles enabled on a particular server, for example server.example.com : Alternatively, use the ipa server-find --servrole command to search for all servers with a particular server role enabled. For example, to search for all CA servers: 23.16. Promoting a replica to a CA renewal server and CRL publisher server If your IdM deployment uses an embedded certificate authority (CA), one of the IdM CA servers acts as the CA renewal server, a server that manages the renewal of CA subsystem certificates. One of the IdM CA servers also acts as the IdM CRL publisher server, a server that generates certificate revocation lists. By default, the CA renewal server and CRL publisher server roles are installed on the first server on which the system administrator installed the CA role using the ipa-server-install or ipa-ca-install command. 
You can, however, transfer either of the two roles to any other IdM server on which the CA role is enabled. Prerequisites You have the IdM administrator credentials. Procedure Change the current CA renewal server. Configure a replica to generate CRLs. 23.17. Demoting or promoting hidden replicas After a replica has been installed, you can configure whether the replica is hidden or visible. For details about hidden replicas, see The hidden replica mode . Prerequisites Ensure that the replica is not the DNSSEC key master. If it is, move the service to another replica before making this replica hidden. Ensure that the replica is not a CA renewal server. If it is, move the service to another replica before making this replica hidden. For details, see Changing and resetting IdM CA renewal server Procedure To hide a replica: To make a replica visible again: To view a list of all the hidden replicas in your topology: If all of your replicas are enabled, the command output does not mention hidden replicas. Additional resources Planning the replica topology Uninstalling an IdM server Failover, load-balancing, and high-availability in IdM
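Following up on sections 23.16 and 23.17, recent IdM releases expose the renewal and CRL roles through ipa config-mod and the ipa-crlgen-manage utility. Treat the exact option and utility names below as assumptions and confirm them with ipa help config-mod on your version:

# Check which server currently holds the CA renewal server role
ipa config-show | grep -i "renewal"

# Move the CA renewal server role to another CA-enabled replica
ipa config-mod --ca-renewal-master-server server2.example.com

# On the replica that should publish CRLs, check and enable CRL generation
ipa-crlgen-manage status
ipa-crlgen-manage enable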
[ "ipa topologysuffix-find --------------------------- 2 topology suffixes matched --------------------------- Suffix name: ca Managed LDAP suffix DN: o=ipaca Suffix name: domain Managed LDAP suffix DN: dc=example,dc=com ---------------------------- Number of entries returned 2 ----------------------------", "ipa topologysegment-find Suffix name: domain ----------------- 1 segment matched ----------------- Segment name: server1.example.com-to-server2.example.com Left node: server1.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 1 ----------------------------", "ipa topologysegment-show Suffix name: domain Segment name: server1.example.com-to-server2.example.com Segment name: server1.example.com-to-server2.example.com Left node: server1.example.com Right node: server2.example.com Connectivity: both", "ipa topologysegment-add Suffix name: domain Left node: server1.example.com Right node: server2.example.com Segment name [server1.example.com-to-server2.example.com]: new_segment --------------------------- Added segment \"new_segment\" --------------------------- Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both", "ipa topologysegment-show Suffix name: domain Segment name: new_segment Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both", "ipa topologysegment-find Suffix name: domain ------------------ 8 segments matched ------------------ Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 8 ----------------------------", "ipa topologysegment-del Suffix name: domain Segment name: new_segment ----------------------------- Deleted segment \"new_segment\" -----------------------------", "ipa topologysegment-find Suffix name: domain ------------------ 7 segments matched ------------------ Segment name: server2.example.com-to-server3.example.com Left node: server2.example.com Right node: server3.example.com Connectivity: both ---------------------------- Number of entries returned 7 ----------------------------", "[user@server2 ~]USD ipa server-del Server name: server1.example.com Removing server1.example.com from replication topology, please wait ---------------------------------------------------------- Deleted IPA server \"server1.example.com\" ----------------------------------------------------------", "ipa server-install --uninstall", "ipa-replica-manage list-ruv server1.example.com:389: 6 server2.example.com:389: 5 server3.example.com:389: 4 server4.example.com:389: 12", "ipa-replica-manage clean-ruv 6 ipa-replica-manage clean-ruv 5", "dn: cn=clean replica_ID, cn=cleanallruv, cn=tasks, cn=config objectclass: extensibleObject replica-base-dn: dc=example,dc=com replica-id: replica_ID replica-force-cleaning: no cn: clean replica_ID", "ipa config-show IPA masters: server1.example.com, server2.example.com, server3.example.com IPA CA servers: server1.example.com, server2.example.com IPA CA renewal master: server1.example.com", "ipa server-show Server name: server.example.com Enabled server roles: CA server, DNS server, KRA server", "ipa server-find --servrole \"CA server\" --------------------- 2 IPA servers matched --------------------- Server name: server1.example.com Server name: server2.example.com ---------------------------- Number of entries returned 2 ----------------------------", "ipa 
server-state replica.idm.example.com --state=hidden", "ipa server-state replica.idm.example.com --state=enabled", "ipa config-show" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/assembly_managing-replication-topology_installing-identity-management
24.3. SPICE Log Files
24.3. SPICE Log Files SPICE log files are useful when troubleshooting SPICE connection issues. To start SPICE debugging, change the log level to debugging . Then, identify the log location. Both the clients used to access the guest machines and the guest machines themselves have SPICE log files. For client-side logs, if a SPICE client was launched using the native client, for which a console.vv file is downloaded, use the remote-viewer command to enable debugging and generate log output. 24.3.1. SPICE Logs for Hypervisor SPICE Servers Table 24.3. SPICE Logs for Hypervisor SPICE Servers Log Type Log Location To Change Log Level: Host/Hypervisor SPICE Server /var/log/libvirt/qemu/(guest_name).log Run export SPICE_DEBUG_LEVEL=5 on the host/hypervisor prior to launching the guest. This variable is parsed by QEMU, and if run system-wide will print the debugging information of all virtual machines on the system. This command must be run on each host in the cluster. This command works only on a per-host/hypervisor basis, not a per-cluster basis. 24.3.2. SPICE Logs for Guest Machines Table 24.4. spice-vdagent Logs for Guest Machines Log Type Log Location To Change Log Level: Windows Guest C:\Windows\Temp\vdagent.log C:\Windows\Temp\vdservice.log Not applicable Red Hat Enterprise Linux Guest Use journalctl as the root user. To run the spice-vdagentd service in debug mode, as the root user create a /etc/sysconfig/spice-vdagentd file with this entry: SPICE_VDAGENTD_EXTRA_ARGS="-d -d" To run spice-vdagent in debug mode, from the command line: 24.3.3. SPICE Logs for SPICE Clients Launched Using console.vv Files For Linux client machines: Enable SPICE debugging by running the remote-viewer command with the --spice-debug option. When prompted, enter the connection URL, for example, spice:// virtual_machine_IP : port . To run SPICE client with the debug parameter and to pass a .vv file to it, download the console.vv file and run the remote-viewer command with the --spice-debug option and specify the full path to the console.vv file. For Windows client machines: In versions of virt-viewer 2.0-11.el7ev and later, virt-viewer.msi installs virt-viewer and debug-viewer.exe . Run the remote-viewer command with the spice-debug argument and direct the command at the path to the console: To view logs, connect to the virtual machine, and you will see a command prompt running GDB that prints standard output and standard error of remote-viewer .
[ "killall - u USDUSER spice-vdagent spice-vdagent -x -d [-d] [ |& tee spice-vdagent.log ]", "remote-viewer --spice-debug", "remote-viewer --spice-debug /path/to/ console.vv", "remote-viewer --spice-debug path\\to\\ console.vv" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-spice_log_files
Chapter 9. Configuring and maintaining a Dovecot IMAP and POP3 server
Chapter 9. Configuring and maintaining a Dovecot IMAP and POP3 server Dovecot is a high-performance mail delivery agent (MDA) with a focus on security. You can use IMAP or POP3-compatible email clients to connect to a Dovecot server and read or download emails. Key features of Dovecot: The design and implementation focuses on security Two-way replication support for high availability to improve the performance in large environments Supports the high-performance dbox mailbox format, but also mbox and Maildir for compatibility reasons Self-healing features, such as fixing broken index files Compliance with the IMAP standards Workaround support to bypass bugs in IMAP and POP3 clients 9.1. Setting up a Dovecot server with PAM authentication Dovecot supports the Name Service Switch (NSS) interface as a user database and the Pluggable Authentication Modules (PAM) framework as an authentication backend. With this configuration, Dovecot can provide services to users who are available locally on the server through NSS. Use PAM authentication if accounts: Are defined locally in the /etc/passwd file Are stored in a remote database but they are available locally through the System Security Services Daemon (SSSD) or other NSS plugins. 9.1.1. Installing Dovecot The dovecot package provides: The dovecot service and the utilities to maintain it Services that Dovecot starts on demand, such as for authentication Plugins, such as server-side mail filtering Configuration files in the /etc/dovecot/ directory Documentation in the /usr/share/doc/dovecot/ directory Procedure Install the dovecot package: Note If Dovecot is already installed and you require clean configuration files, rename or remove the /etc/dovecot/ directory. Afterwards, reinstall the package. Without removing the configuration files, the yum reinstall dovecot command does not reset the configuration files in /etc/dovecot/ . step Configuring TLS encryption on a Dovecot server . 9.1.2. Configuring TLS encryption on a Dovecot server Dovecot provides a secure default configuration. For example, TLS is enabled by default to transmit credentials and data encrypted over networks. To configure TLS on a Dovecot server, you only need to set the paths to the certificate and private key files. Additionally, you can increase the security of TLS connections by generating and using Diffie-Hellman parameters to provide perfect forward secrecy (PFS). Prerequisites Dovecot is installed. The following files have been copied to the listed locations on the server: The server certificate: /etc/pki/dovecot/certs/server.example.com.crt The private key: /etc/pki/dovecot/private/server.example.com.key The Certificate Authority (CA) certificate: /etc/pki/dovecot/certs/ca.crt The hostname in the Subject DN field of the server certificate matches the server's Fully-qualified Domain Name (FQDN). Procedure Set secure permissions on the private key file: Generate a file with Diffie-Hellman parameters: Depending on the hardware and entropy on the server, generating Diffie-Hellman parameters with 4096 bits can take several minutes. 
Set the paths to the certificate and private key files in the /etc/dovecot/conf.d/10-ssl.conf file: Update the ssl_cert and ssl_key parameters, and set them to use the paths of the server's certificate and private key: Uncomment the ssl_ca parameter, and set it to use the path to the CA certificate: Uncomment the ssl_dh parameter, and set it to use the path to the Diffie-Hellman parameters file: Important To ensure that Dovecot reads the value of a parameter from a file, the path must start with a leading < character. step Preparing Dovecot to use virtual users Additional resources /usr/share/doc/dovecot/wiki/SSL.DovecotConfiguration.txt 9.1.3. Preparing Dovecot to use virtual users By default, Dovecot performs many actions on the file system as the user who uses the service. However, configuring the Dovecot back end to use one local user to perform these actions has several benefits: Dovecot performs file system actions as a specific local user instead of using the user's ID (UID). Users do not need to be available locally on the server. You can store all mailboxes and user-specific files in one root directory. Users do not require a UID and group ID (GID), which reduces administration efforts. Users who have access to the file system on the server cannot compromise their mailboxes or indexes because they cannot access these files. Setting up replication is easier. Prerequisites Dovecot is installed. Procedure Create the vmail user: Dovecot will later use this user to manage the mailboxes. For security reasons, do not use the dovecot or dovenull system users for this purpose. If you use a different path than /var/mail/ , set the mail_spool_t SELinux context on it, for example: Grant write permissions on /var/mail/ only to the vmail user: Uncomment the mail_location parameter in the /etc/dovecot/conf.d/10-mail.conf file, and set it to the mailbox format and location: With this setting: Dovecot uses the high-performant dbox mailbox format in single mode. In this mode, the service stores each mail in a separate file, similar to the maildir format. Dovecot resolves the %n variable in the path to the username. This is required to ensure that each user has a separate directory for its mailbox. step Using PAM as the Dovecot authentication backend . Additional resources /usr/share/doc/dovecot/wiki/VirtualUsers.txt /usr/share/doc/dovecot/wiki/MailLocation.txt /usr/share/doc/dovecot/wiki/MailboxFormat.dbox.txt /usr/share/doc/dovecot/wiki/Variables.txt 9.1.4. Using PAM as the Dovecot authentication backend By default, Dovecot uses the Name Service Switch (NSS) interface as the user database and the Pluggable Authentication Modules (PAM) framework as the authentication backend. Customize the settings to adapt Dovecot to your environment and to simplify administration by using the virtual users feature. Prerequisites Dovecot is installed. The virtual users feature is configured. Procedure Update the first_valid_uid parameter in the /etc/dovecot/conf.d/10-mail.conf file to define the lowest user ID (UID) that can authenticate to Dovecot: By default, users with a UID greater than or equal to 1000 can authenticate. If required, you can also set the last_valid_uid parameter to define the highest UID that Dovecot allows to log in. In the /etc/dovecot/conf.d/auth-system.conf.ext file, add the override_fields parameter to the userdb section as follows: Due to the fixed values, Dovecot does not query these settings from the /etc/passwd file. 
As a result, the home directory defined in /etc/passwd does not need to exist. step Complete the Dovecot configuration . Additional resources /usr/share/doc/dovecot/wiki/PasswordDatabase.PAM.txt /usr/share/doc/dovecot/wiki/VirtualUsers.Home.txt 9.1.5. Completing the Dovecot configuration Once you have installed and configured Dovecot, open the required ports in the firewalld service, and enable and start the service. Afterwards, you can test the server. Prerequisites The following has been configured in Dovecot: TLS encryption An authentication backend Clients trust the Certificate Authority (CA) certificate. Procedure If you want to provide only an IMAP or POP3 service to users, uncomment the protocols parameter in the /etc/dovecot/dovecot.conf file, and set it to the required protocols. For example, if you do not require POP3, set: By default, the imap , pop3 , and lmtp protocols are enabled. Open the ports in the local firewall. For example, to open the ports for the IMAPS, IMAP, POP3S, and POP3 protocols, enter: Enable and start the dovecot service: Verification Use a mail client, such as Mozilla Thunderbird, to connect to Dovecot and read emails. The settings for the mail client depend on the protocol you want to use: Table 9.1. Connection settings to the Dovecot server Protocol Port Connection security Authentication method IMAP 143 STARTTLS PLAIN [a] IMAPS 993 SSL/TLS PLAIN [a] POP3 110 STARTTLS PLAIN [a] POP3S 995 SSL/TLS PLAIN [a] [a] The client transmits data encrypted through the TLS connection. Consequently, credentials are not disclosed. Note that this table does not list settings for unencrypted connections because, by default, Dovecot does not accept plain text authentication on connections without TLS. Display configuration settings with non-default values: Additional resources firewall-cmd(1) man page on your system 9.2. Setting up a Dovecot server with LDAP authentication If your infrastructure uses an LDAP server to store accounts, you can authenticate Dovecot users against it. In this case, you manage accounts centrally in the directory and, users do not required local access to the file system on the Dovecot server. Centrally-managed accounts are also a benefit if you plan to set up multiple Dovecot servers with replication to make your mailboxes high available. 9.2.1. Installing Dovecot The dovecot package provides: The dovecot service and the utilities to maintain it Services that Dovecot starts on demand, such as for authentication Plugins, such as server-side mail filtering Configuration files in the /etc/dovecot/ directory Documentation in the /usr/share/doc/dovecot/ directory Procedure Install the dovecot package: Note If Dovecot is already installed and you require clean configuration files, rename or remove the /etc/dovecot/ directory. Afterwards, reinstall the package. Without removing the configuration files, the yum reinstall dovecot command does not reset the configuration files in /etc/dovecot/ . step Configuring TLS encryption on a Dovecot server . 9.2.2. Configuring TLS encryption on a Dovecot server Dovecot provides a secure default configuration. For example, TLS is enabled by default to transmit credentials and data encrypted over networks. To configure TLS on a Dovecot server, you only need to set the paths to the certificate and private key files. Additionally, you can increase the security of TLS connections by generating and using Diffie-Hellman parameters to provide perfect forward secrecy (PFS). Prerequisites Dovecot is installed. 
The following files have been copied to the listed locations on the server: The server certificate: /etc/pki/dovecot/certs/server.example.com.crt The private key: /etc/pki/dovecot/private/server.example.com.key The Certificate Authority (CA) certificate: /etc/pki/dovecot/certs/ca.crt The hostname in the Subject DN field of the server certificate matches the server's Fully-qualified Domain Name (FQDN). Procedure Set secure permissions on the private key file: Generate a file with Diffie-Hellman parameters: Depending on the hardware and entropy on the server, generating Diffie-Hellman parameters with 4096 bits can take several minutes. Set the paths to the certificate and private key files in the /etc/dovecot/conf.d/10-ssl.conf file: Update the ssl_cert and ssl_key parameters, and set them to use the paths of the server's certificate and private key: Uncomment the ssl_ca parameter, and set it to use the path to the CA certificate: Uncomment the ssl_dh parameter, and set it to use the path to the Diffie-Hellman parameters file: Important To ensure that Dovecot reads the value of a parameter from a file, the path must start with a leading < character. step Preparing Dovecot to use virtual users Additional resources /usr/share/doc/dovecot/wiki/SSL.DovecotConfiguration.txt 9.2.3. Preparing Dovecot to use virtual users By default, Dovecot performs many actions on the file system as the user who uses the service. However, configuring the Dovecot back end to use one local user to perform these actions has several benefits: Dovecot performs file system actions as a specific local user instead of using the user's ID (UID). Users do not need to be available locally on the server. You can store all mailboxes and user-specific files in one root directory. Users do not require a UID and group ID (GID), which reduces administration efforts. Users who have access to the file system on the server cannot compromise their mailboxes or indexes because they cannot access these files. Setting up replication is easier. Prerequisites Dovecot is installed. Procedure Create the vmail user: Dovecot will later use this user to manage the mailboxes. For security reasons, do not use the dovecot or dovenull system users for this purpose. If you use a different path than /var/mail/ , set the mail_spool_t SELinux context on it, for example: Grant write permissions on /var/mail/ only to the vmail user: Uncomment the mail_location parameter in the /etc/dovecot/conf.d/10-mail.conf file, and set it to the mailbox format and location: With this setting: Dovecot uses the high-performant dbox mailbox format in single mode. In this mode, the service stores each mail in a separate file, similar to the maildir format. Dovecot resolves the %n variable in the path to the username. This is required to ensure that each user has a separate directory for its mailbox. step Using LDAP as the Dovecot authentication backend . Additional resources /usr/share/doc/dovecot/wiki/VirtualUsers.txt /usr/share/doc/dovecot/wiki/MailLocation.txt /usr/share/doc/dovecot/wiki/MailboxFormat.dbox.txt /usr/share/doc/dovecot/wiki/Variables.txt 9.2.4. Using LDAP as the Dovecot authentication backend Users in an LDAP directory can usually authenticate themselves to the directory service. Dovecot can use this to authenticate users when they log in to the IMAP and POP3 services. This authentication method has a number of benefits, such as: Administrators can manage users centrally in the directory. The LDAP accounts do not require any special attributes. 
They only need to be able to authenticate to the LDAP server. Consequently, this method is independent from the password storage scheme used on the LDAP server. Users do not need to be available locally on the server through the Name Service Switch (NSS) interface and the Pluggable Authentication Modules (PAM) framework. Prerequisites Dovecot is installed. The virtual users feature is configured. Connections to the LDAP server support TLS encryption. RHEL on the Dovecot server trusts the Certificate Authority (CA) certificate of the LDAP server. If users are stored in different trees in the LDAP directory, a dedicated LDAP account for Dovecot exists to search the directory. This account requires permissions to search for Distinguished Names (DNs) of other users. Procedure Configure the authentication backends in the /etc/dovecot/conf.d/10-auth.conf file: Comment out include statements for auth-*.conf.ext authentication backend configuration files that you do not require, for example: Enable LDAP authentication by uncommenting the following line: Edit the /etc/dovecot/conf.d/auth-ldap.conf.ext file, and add the override_fields parameter as follows to the userdb section: Due to the fixed values, Dovecot does not query these settings from the LDAP server. Consequently, these attributes also do not have to be present. Create the /etc/dovecot/dovecot-ldap.conf.ext file with the following settings: Depending on the LDAP structure, configure one of the following: If users are stored in different trees in the LDAP directory, configure dynamic DN lookups: Dovecot uses the specified DN, password, and filter to search the DN of the authenticating user in the directory. In this search, Dovecot replaces %n in the filter with the username. Note that the LDAP search must return only one result. If all users are stored under a specific entry, configure a DN template: Enable authentication binds to the LDAP server to verify Dovecot users: Set the URL to the LDAP server: For security reasons, only use encrypted connections using LDAPS or the STARTTLS command over the LDAP protocol. For the latter, additionally add tls = yes to the settings. For a working certificate validation, the hostname of the LDAP server must match the hostname used in its TLS certificate. Enable the verification of the LDAP server's TLS certificate: Set the base DN to the DN where to start searching for users: Set the search scope: Dovecot searches with the onelevel scope only in the specified base DN and with the subtree scope also in subtrees. Set secure permissions on the /etc/dovecot/dovecot-ldap.conf.ext file: step Complete the Dovecot configuration . Additional resources /usr/share/doc/dovecot/example-config/dovecot-ldap.conf.ext /usr/share/doc/dovecot/wiki/UserDatabase.Static.txt /usr/share/doc/dovecot/wiki/AuthDatabase.LDAP.txt /usr/share/doc/dovecot/wiki/AuthDatabase.LDAP.AuthBinds.txt /usr/share/doc/dovecot/wiki/AuthDatabase.LDAP.PasswordLookups.txt 9.2.5. Completing the Dovecot configuration Once you have installed and configured Dovecot, open the required ports in the firewalld service, and enable and start the service. Afterwards, you can test the server. Prerequisites The following has been configured in Dovecot: TLS encryption An authentication backend Clients trust the Certificate Authority (CA) certificate. Procedure If you want to provide only an IMAP or POP3 service to users, uncomment the protocols parameter in the /etc/dovecot/dovecot.conf file, and set it to the required protocols. 
For example, if you do not require POP3, set: By default, the imap , pop3 , and lmtp protocols are enabled. Open the ports in the local firewall. For example, to open the ports for the IMAPS, IMAP, POP3S, and POP3 protocols, enter: Enable and start the dovecot service: Verification Use a mail client, such as Mozilla Thunderbird, to connect to Dovecot and read emails. The settings for the mail client depend on the protocol you want to use: Table 9.2. Connection settings to the Dovecot server Protocol Port Connection security Authentication method IMAP 143 STARTTLS PLAIN [a] IMAPS 993 SSL/TLS PLAIN [a] POP3 110 STARTTLS PLAIN [a] POP3S 995 SSL/TLS PLAIN [a] [a] The client transmits data encrypted through the TLS connection. Consequently, credentials are not disclosed. Note that this table does not list settings for unencrypted connections because, by default, Dovecot does not accept plain text authentication on connections without TLS. Display configuration settings with non-default values: Additional resources firewall-cmd(1) man page on your system 9.3. Setting up a Dovecot server with MariaDB SQL authentication If you store users and passwords in a MariaDB SQL server, you can configure Dovecot to use it as the user database and authentication backend. With this configuration, you manage accounts centrally in a database, and users have no local access to the file system on the Dovecot server. Centrally managed accounts are also a benefit if you plan to set up multiple Dovecot servers with replication to make your mailboxes highly available. 9.3.1. Installing Dovecot The dovecot package provides: The dovecot service and the utilities to maintain it Services that Dovecot starts on demand, such as for authentication Plugins, such as server-side mail filtering Configuration files in the /etc/dovecot/ directory Documentation in the /usr/share/doc/dovecot/ directory Procedure Install the dovecot package: Note If Dovecot is already installed and you require clean configuration files, rename or remove the /etc/dovecot/ directory. Afterwards, reinstall the package. Without removing the configuration files, the yum reinstall dovecot command does not reset the configuration files in /etc/dovecot/ . step Configuring TLS encryption on a Dovecot server . 9.3.2. Configuring TLS encryption on a Dovecot server Dovecot provides a secure default configuration. For example, TLS is enabled by default to transmit credentials and data encrypted over networks. To configure TLS on a Dovecot server, you only need to set the paths to the certificate and private key files. Additionally, you can increase the security of TLS connections by generating and using Diffie-Hellman parameters to provide perfect forward secrecy (PFS). Prerequisites Dovecot is installed. The following files have been copied to the listed locations on the server: The server certificate: /etc/pki/dovecot/certs/server.example.com.crt The private key: /etc/pki/dovecot/private/server.example.com.key The Certificate Authority (CA) certificate: /etc/pki/dovecot/certs/ca.crt The hostname in the Subject DN field of the server certificate matches the server's Fully-qualified Domain Name (FQDN). Procedure Set secure permissions on the private key file: Generate a file with Diffie-Hellman parameters: Depending on the hardware and entropy on the server, generating Diffie-Hellman parameters with 4096 bits can take several minutes. 
Set the paths to the certificate and private key files in the /etc/dovecot/conf.d/10-ssl.conf file: Update the ssl_cert and ssl_key parameters, and set them to use the paths of the server's certificate and private key: Uncomment the ssl_ca parameter, and set it to use the path to the CA certificate: Uncomment the ssl_dh parameter, and set it to use the path to the Diffie-Hellman parameters file: Important To ensure that Dovecot reads the value of a parameter from a file, the path must start with a leading < character. step Preparing Dovecot to use virtual users Additional resources /usr/share/doc/dovecot/wiki/SSL.DovecotConfiguration.txt 9.3.3. Preparing Dovecot to use virtual users By default, Dovecot performs many actions on the file system as the user who uses the service. However, configuring the Dovecot back end to use one local user to perform these actions has several benefits: Dovecot performs file system actions as a specific local user instead of using the user's ID (UID). Users do not need to be available locally on the server. You can store all mailboxes and user-specific files in one root directory. Users do not require a UID and group ID (GID), which reduces administration efforts. Users who have access to the file system on the server cannot compromise their mailboxes or indexes because they cannot access these files. Setting up replication is easier. Prerequisites Dovecot is installed. Procedure Create the vmail user: Dovecot will later use this user to manage the mailboxes. For security reasons, do not use the dovecot or dovenull system users for this purpose. If you use a different path than /var/mail/ , set the mail_spool_t SELinux context on it, for example: Grant write permissions on /var/mail/ only to the vmail user: Uncomment the mail_location parameter in the /etc/dovecot/conf.d/10-mail.conf file, and set it to the mailbox format and location: With this setting: Dovecot uses the high-performant dbox mailbox format in single mode. In this mode, the service stores each mail in a separate file, similar to the maildir format. Dovecot resolves the %n variable in the path to the username. This is required to ensure that each user has a separate directory for its mailbox. step Using a MariaDB SQL database as the Dovecot authentication backend Additional resources /usr/share/doc/dovecot/wiki/VirtualUsers.txt /usr/share/doc/dovecot/wiki/MailLocation.txt /usr/share/doc/dovecot/wiki/MailboxFormat.dbox.txt /usr/share/doc/dovecot/wiki/Variables.txt 9.3.4. Using a MariaDB SQL database as the Dovecot authentication backend Dovecot can read accounts and passwords from a MariaDB database and use it to authenticate users when they log in to the IMAP or POP3 service. The benefits of this authentication method include: Administrators can manage users centrally in a database. Users have no access locally on the server. Prerequisites Dovecot is installed. The virtual users feature is configured. Connections to the MariaDB server support TLS encryption. The dovecotDB database exists in MariaDB, and the users table contains at least a username and password column. The password column contains passwords encrypted with a scheme that Dovecot supports. The passwords either use the same scheme or have a { pw-storage-scheme } prefix. The dovecot MariaDB user has read permission on the users table in the dovecotDB database. The certificate of the Certificate Authority (CA) that issued the MariaDB server's TLS certificate is stored on the Dovecot server in the /etc/pki/tls/certs/ca.crt file. 
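The prerequisites above require a dovecotDB database with a users table and a dovecot account that can read it. The following is a minimal sketch of how such a database could be prepared; the database, table, column, and account names simply mirror the examples in this section, the inserted password hash is only a placeholder, and a production setup would additionally restrict the dovecot account to the Dovecot server's host and enforce TLS on the connection.

# On the Dovecot server, generate a SHA512-CRYPT hash for a mailbox user's password:
doveadm pw -s SHA512-CRYPT

# On the MariaDB server, create the database, the users table, and a read-only account.
# The quoted heredoc keeps the pasted hash (which contains dollar signs) from being
# expanded by the shell.
mysql -u root -p <<'EOF'
CREATE DATABASE dovecotDB;
CREATE TABLE dovecotDB.users (
    username VARCHAR(255) PRIMARY KEY,
    password VARCHAR(255) NOT NULL
);
INSERT INTO dovecotDB.users (username, password)
    VALUES ('example_user', '{SHA512-CRYPT}replace_with_generated_hash');
CREATE USER 'dovecot'@'%' IDENTIFIED BY 'dovecotPW';
GRANT SELECT ON dovecotDB.users TO 'dovecot'@'%';
EOF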
Procedure Install the dovecot-mysql package: Configure the authentication backends in the /etc/dovecot/conf.d/10-auth.conf file: Comment out include statements for auth-*.conf.ext authentication backend configuration files that you do not require, for example: Enable SQL authentication by uncommenting the following line: Edit the /etc/dovecot/conf.d/auth-sql.conf.ext file, and add the override_fields parameter to the userdb section as follows: Due to the fixed values, Dovecot does not query these settings from the SQL server. Create the /etc/dovecot/dovecot-sql.conf.ext file with the following settings: To use TLS encryption to the database server, set the ssl_ca option to the path of the certificate of the CA that issued the MariaDB server certificate. For a working certificate validation, the hostname of the MariaDB server must match the hostname used in its TLS certificate. If the password values in the database contain a { pw-storage-scheme } prefix, you can omit the default_pass_scheme setting. The queries in the file must be set as follows: For the user_query parameter, the query must return the username of the Dovecot user. The query must also return only one result. For the password_query parameter, the query must return the username and the password, and Dovecot must use these values in the user and password variables. Therefore, if the database uses different column names, use the AS SQL command to rename a column in the result. For the iterate_query parameter, the query must return a list of all users. Set secure permissions on the /etc/dovecot/dovecot-sql.conf.ext file: step Complete the Dovecot configuration . Additional resources /usr/share/doc/dovecot/example-config/dovecot-sql.conf.ext /usr/share/doc/dovecot/wiki/Authentication.PasswordSchemes.txt 9.3.5. Completing the Dovecot configuration Once you have installed and configured Dovecot, open the required ports in the firewalld service, and enable and start the service. Afterwards, you can test the server. Prerequisites The following has been configured in Dovecot: TLS encryption An authentication backend Clients trust the Certificate Authority (CA) certificate. Procedure If you want to provide only an IMAP or POP3 service to users, uncomment the protocols parameter in the /etc/dovecot/dovecot.conf file, and set it to the required protocols. For example, if you do not require POP3, set: By default, the imap , pop3 , and lmtp protocols are enabled. Open the ports in the local firewall. For example, to open the ports for the IMAPS, IMAP, POP3S, and POP3 protocols, enter: Enable and start the dovecot service: Verification Use a mail client, such as Mozilla Thunderbird, to connect to Dovecot and read emails. The settings for the mail client depend on the protocol you want to use: Table 9.3. Connection settings to the Dovecot server Protocol Port Connection security Authentication method IMAP 143 STARTTLS PLAIN [a] IMAPS 993 SSL/TLS PLAIN [a] POP3 110 STARTTLS PLAIN [a] POP3S 995 SSL/TLS PLAIN [a] [a] The client transmits data encrypted through the TLS connection. Consequently, credentials are not disclosed. Note that this table does not list settings for unencrypted connections because, by default, Dovecot does not accept plain text authentication on connections without TLS. Display configuration settings with non-default values: Additional resources firewall-cmd(1) man page on your system 9.4. 
Configuring replication between two Dovecot servers With two-way replication, you can make your Dovecot server high-available, and IMAP and POP3 clients can access a mailbox on both servers. Dovecot keeps track of changes in the index logs of each mailbox and solves conflicts in a safe way. Perform this procedure on both replication partners. Note Replication works only between server pairs. Consequently, in a large cluster, you need multiple independent backend pairs. Prerequisites Both servers use the same authentication backend. Preferably, use LDAP or SQL to maintain accounts centrally. The Dovecot user database configuration supports user listing. Use the doveadm user '*' command to verify this. Dovecot accesses mailboxes on the file system as the vmail user instead of the user's ID (UID). Procedure Create the /etc/dovecot/conf.d/10-replication.conf file and perform the following steps in it: Enable the notify and replication plug-ins: Add a service replicator section: With these settings, Dovecot starts at least one replicator process when the dovecot service starts. Additionally, this section defines the settings on the replicator-doveadm socket. Add a service aggregator section to configure the replication-notify-fifo pipe and replication-notify socket: Add a service doveadm section to define the port of the replication service: Set the password of the doveadm replication service: The password must be the same on both servers. Configure the replication partner: Optional: Define the maximum number of parallel dsync processes: The default value of replication_max_conns is 10 . Set secure permissions on the /etc/dovecot/conf.d/10-replication.conf file: Enable the nis_enabled SELinux Boolean to allow Dovecot to open the doveadm replication port: Configure firewalld rules to allow only the replication partner to access the replication port, for example: The subnet masks /32 for the IPv4 and /128 for the IPv6 address limit the access to the specified addresses. Perform this procedure also on the other replication partner. Reload Dovecot: Verification Perform an action in a mailbox on one server and then verify if Dovecot has replicated the change to the other server. Display the replicator status: Display the replicator status of a specific user: Additional resources dsync(1) man page on your system /usr/share/doc/dovecot/wiki/Replication.txt 9.5. Automatically subscribing users to IMAP mailboxes Typically, IMAP server administrators want Dovecot to automatically create certain mailboxes, such as Sent and Trash , and subscribe the users to them. You can set this in the configuration files. Additionally, you can define special-use mailboxes . IMAP clients often support defining mailboxes for special purposes, such as for sent emails. To avoid that the user has to manually select and set the correct mailboxes, IMAP servers can send a special-use attribute in the IMAP LIST command. Clients can then use this attribute to identify and set, for example, the mailbox for sent emails. Prerequisites Dovecot is configured. Procedure Update the inbox namespace section in the /etc/dovecot/conf.d/15-mailboxes.conf file: Add the auto = subscribe setting to each special-use mailbox that should be available to users, for example: If your mail clients support more special-use mailboxes, you can add similar entries. The special_use parameter defines the value that Dovecot sends in the special-use attribute to the clients. 
Optional: If you want to define other mailboxes that have no special purpose, add mailbox sections for them in the user's inbox, for example: You can set the auto parameter to one of the following values: subscribe : Automatically creates the mailbox and subscribes the user to it. create : Automatically creates the mailbox without subscribing the user to it. no (default): Dovecot neither creates the mailbox nor does it subscribe the user to it. Reload Dovecot: Verification Use an IMAP client and access your mailbox. Mailboxes with the setting auto = subscribe are automatically visible. If the client supports special-use mailboxes and the defined purposes, the client automatically uses them. Additional resources RFC 6154: IMAP LIST Extension for Special-Use Mailboxes /usr/share/doc/dovecot/wiki/MailboxSettings.txt 9.6. Configuring an LMTP socket and LMTPS listener SMTP servers, such as Postfix, use the Local Mail Transfer Protocol (LMTP) to deliver emails to Dovecot. If the SMTP server runs: On the same host as Dovecot, use an LMTP socket On a different host, use an LMTP service By default, the LMTP protocol is not encrypted. However, if you configured TLS encryption, Dovecot uses the same settings automatically for the LMTP service. SMTP servers can then connect to it using the LMTPS protocol or the STARTTLS command over LMTP. Prerequisites Dovecot is installed. If you want to configure an LMTP service, TLS encryption is configured in Dovecot. Procedure Verify that the LMTP protocol is enabled: The protocol is enabled, if the output contains lmtp . If the lmtp protocol is disabled, edit the /etc/dovecot/dovecot.conf file, and append lmtp to the values in the protocols parameter: Depending on whether you need an LMTP socket or service, make the following changes in the service lmtp section in the /etc/dovecot/conf.d/10-master.conf file: LMTP socket: By default, Dovecot automatically creates the /var/run/dovecot/lmtp socket. Optional: Customize the ownership and permissions: LMTP service: Add a inet_listener sub-section: Configure firewalld rules to allow only the SMTP server to access the LMTP port, for example: The subnet masks /32 for the IPv4 and /128 for the IPv6 address limit the access to the specified addresses. Reload Dovecot: Verification If you configured the LMTP socket, verify that Dovecot has created the socket and that the permissions are correct: Configure the SMTP server to submit emails to Dovecot using the LMTP socket or service. When you use the LMTP service, ensure that the SMTP server uses the LMTPS protocol or sends the STARTTLS command to use an encrypted connection. Additional resources /usr/share/doc/dovecot/wiki/LMTP.txt 9.7. Disabling the IMAP or POP3 service in Dovecot By default, Dovecot provides IMAP and POP3 services. If you require only one of them, you can disable the other to reduce the surface for attack. Prerequisites Dovecot is installed. Procedure Uncomment the protocols parameter in the /etc/dovecot/dovecot.conf file, and set it to use the required protocols. For example, if you do not require POP3, set: By default, the imap , pop3 , and lmtp protocols are enabled. Reload Dovecot: Close the ports that are no longer required in the local firewall. For example, to close the ports for the POP3S and POP3 protocols, enter: Verification Display all ports in LISTEN mode opened by the dovecot process: In this example, Dovecot listens only on the TCP ports 993 (IMAPS) and 143 (IMAP). 
Note that Dovecot only opens a port for the LMTP protocol if you configure the service to listen on a port instead of using a socket. Additional resources firewall-cmd(1) man page on your system 9.8. Enabling server-side email filtering using Sieve on a Dovecot IMAP server You can upload Sieve scripts to a server using the ManageSieve protocol. Sieve scripts define rules and actions that a server should validate and perform on incoming emails. For example, users can use Sieve to forward emails from a specific sender, and administrators can create a global filter to move mails flagged by a spam filter into a separate IMAP folder. The ManageSieve plugin adds support for Sieve scripts and the ManageSieve protocol to a Dovecot IMAP server. Warning Use only clients that support using the ManageSieve protocol over TLS connections. Disabling TLS for this protocol causes clients to send credentials in plain text over the network. Prerequisites Dovecot is configured and provides IMAP mailboxes. TLS encryption is configured in Dovecot. The mail clients support the ManageSieve protocol over TLS connections. Procedure Install the dovecot-pigeonhole package: Uncomment the following line in /etc/dovecot/conf.d/20-managesieve.conf to enable the sieve protocol: This setting activates Sieve in addition to the other protocols that are already enabled. Open the ManageSieve port in firewalld : Reload Dovecot: Verification Use a client and upload a Sieve script. Use the following connection settings: Port: 4190 Connection security: SSL/TLS Authentication method: PLAIN Send an email to the user who has the Sieve script uploaded. If the email matches the rules in the script, verify that the server performs the defined actions. Additional resources /usr/share/doc/dovecot/wiki/Pigeonhole.Sieve.Plugins.IMAPSieve.txt /usr/share/doc/dovecot/wiki/Pigeonhole.Sieve.Troubleshooting.txt firewall-cmd(1) man page on your system 9.9. How Dovecot processes configuration files The dovecot package provides the main configuration file /etc/dovecot/dovecot.conf and multiple configuration files in the /etc/dovecot/conf.d/ directory. Dovecot combines the files to build the configuration when you start the service. The main benefit of multiple config files is to group settings and increase readability. If you prefer a single configuration file, you can instead maintain all settings in /etc/dovecot/dovecot.conf and remove all include and include_try statements from that file. Additional resources /usr/share/doc/dovecot/wiki/ConfigFile.txt /usr/share/doc/dovecot/wiki/Variables.txt
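Beyond connecting with a mail client, a few command-line checks can confirm that the authentication backend and TLS settings from this chapter work as intended. This is a minimal sketch: server.example.com and example_user are placeholders, and doveadm auth test must be run on the Dovecot server itself.

# Review the settings that differ from the Dovecot defaults:
doveconf -n

# Verify that the configured passdb accepts a user's credentials
# (the command prompts for the password and reports whether authentication succeeded):
doveadm auth test example_user

# Check the TLS handshake and the certificate presented on the IMAPS port:
openssl s_client -connect server.example.com:993

# Check STARTTLS on the plain IMAP port:
openssl s_client -connect server.example.com:143 -starttls imap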
[ "yum install dovecot", "chown root:root /etc/pki/dovecot/private/server.example.com.key chmod 600 /etc/pki/dovecot/private/server.example.com.key", "openssl dhparam -out /etc/dovecot/dh.pem 4096", "ssl_cert = < /etc/pki/dovecot/certs/server.example.com.crt ssl_key = < /etc/pki/dovecot/private/server.example.com.key", "ssl_ca = < /etc/pki/dovecot/certs/ca.crt", "ssl_dh = < /etc/dovecot/dh.pem", "useradd --home-dir /var/mail/ --shell /usr/sbin/nologin vmail", "semanage fcontext -a -t mail_spool_t \" <path> (/.*)?\" restorecon -Rv <path>", "chown vmail:vmail /var/mail/ chmod 700 /var/mail/", "mail_location = sdbox : /var/mail/%n/", "first_valid_uid = 1000", "userdb { driver = passwd override_fields = uid= vmail gid= vmail home= /var/mail/%n/ }", "protocols = imap lmtp", "firewall-cmd --permanent --add-service=imaps --add-service=imap --add-service=pop3s --add-service=pop3 firewall-cmd --reload", "systemctl enable --now dovecot", "doveconf -n", "yum install dovecot", "chown root:root /etc/pki/dovecot/private/server.example.com.key chmod 600 /etc/pki/dovecot/private/server.example.com.key", "openssl dhparam -out /etc/dovecot/dh.pem 4096", "ssl_cert = < /etc/pki/dovecot/certs/server.example.com.crt ssl_key = < /etc/pki/dovecot/private/server.example.com.key", "ssl_ca = < /etc/pki/dovecot/certs/ca.crt", "ssl_dh = < /etc/dovecot/dh.pem", "useradd --home-dir /var/mail/ --shell /usr/sbin/nologin vmail", "semanage fcontext -a -t mail_spool_t \" <path> (/.*)?\" restorecon -Rv <path>", "chown vmail:vmail /var/mail/ chmod 700 /var/mail/", "mail_location = sdbox : /var/mail/%n/", "#!include auth-system.conf.ext", "!include auth-ldap.conf.ext", "userdb { driver = ldap args = /etc/dovecot/dovecot-ldap.conf.ext override_fields = uid= vmail gid= vmail home= /var/mail/%n/ }", "dn = cn= dovecot_LDAP ,dc=example,dc=com dnpass = password pass_filter = (&(objectClass=posixAccount)(uid=%n))", "auth_bind_userdn = cn=%n,ou=People,dc=example,dc=com", "auth_bind = yes", "uris = ldaps://LDAP-srv.example.com", "tls_require_cert = hard", "base = ou=People,dc=example,dc=com", "scope = onelevel", "chown root:root /etc/dovecot/dovecot-ldap.conf.ext chmod 600 /etc/dovecot/dovecot-ldap.conf.ext", "protocols = imap lmtp", "firewall-cmd --permanent --add-service=imaps --add-service=imap --add-service=pop3s --add-service=pop3 firewall-cmd --reload", "systemctl enable --now dovecot", "doveconf -n", "yum install dovecot", "chown root:root /etc/pki/dovecot/private/server.example.com.key chmod 600 /etc/pki/dovecot/private/server.example.com.key", "openssl dhparam -out /etc/dovecot/dh.pem 4096", "ssl_cert = < /etc/pki/dovecot/certs/server.example.com.crt ssl_key = < /etc/pki/dovecot/private/server.example.com.key", "ssl_ca = < /etc/pki/dovecot/certs/ca.crt", "ssl_dh = < /etc/dovecot/dh.pem", "useradd --home-dir /var/mail/ --shell /usr/sbin/nologin vmail", "semanage fcontext -a -t mail_spool_t \" <path> (/.*)?\" restorecon -Rv <path>", "chown vmail:vmail /var/mail/ chmod 700 /var/mail/", "mail_location = sdbox : /var/mail/%n/", "yum install dovecot-mysql", "#!include auth-system.conf.ext", "!include auth-sql.conf.ext", "userdb { driver = sql args = /etc/dovecot/dovecot-sql.conf.ext override_fields = uid= vmail gid= vmail home= /var/mail/%n/ }", "driver = mysql connect = host= mariadb_srv.example.com dbname= dovecotDB user= dovecot password= dovecotPW ssl_ca= /etc/pki/tls/certs/ca.crt default_pass_scheme = SHA512-CRYPT user_query = SELECT username FROM users WHERE username ='%u'; password_query = SELECT username AS user, password 
FROM users WHERE username ='%u'; iterate_query = SELECT username FROM users ;", "chown root:root /etc/dovecot/dovecot-sql.conf.ext chmod 600 /etc/dovecot/dovecot-sql.conf.ext", "protocols = imap lmtp", "firewall-cmd --permanent --add-service=imaps --add-service=imap --add-service=pop3s --add-service=pop3 firewall-cmd --reload", "systemctl enable --now dovecot", "doveconf -n", "mail_plugins = USDmail_plugins notify replication", "service replicator { process_min_avail = 1 unix_listener replicator-doveadm { mode = 0600 user = vmail } }", "service aggregator { fifo_listener replication-notify-fifo { user = vmail } unix_listener replication-notify { user = vmail } }", "service doveadm { inet_listener { port = 12345 } }", "doveadm_password = replication_password", "plugin { mail_replica = tcp: server2.example.com : 12345 }", "replication_max_conns = 20", "chown root:root /etc/dovecot/conf.d/10-replication.conf chmod 600 /etc/dovecot/conf.d/10-replication.conf", "setsebool -P nis_enabled on", "firewall-cmd --permanent --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\" 192.0.2.1/32 \" port protocol=\"tcp\" port=\" 12345 \" accept\" firewall-cmd --permanent --zone=public --add-rich-rule=\"rule family=\"ipv6\" source address=\" 2001:db8:2::1/128 \" port protocol=\"tcp\" port=\" 12345 \" accept\" firewall-cmd --reload", "systemctl reload dovecot", "doveadm replicator status Queued 'sync' requests 0 Queued 'high' requests 0 Queued 'low' requests 0 Queued 'failed' requests 0 Queued 'full resync' requests 30 Waiting 'failed' requests 0 Total number of known users 75", "doveadm replicator status example_user username priority fast sync full sync success sync failed example_user none 02:05:28 04:19:07 02:05:28 -", "namespace inbox { mailbox Drafts { special_use = \\Drafts auto = subscribe } mailbox Junk { special_use = \\Junk auto = subscribe } mailbox Trash { special_use = \\Trash auto = subscribe } mailbox Sent { special_use = \\Sent auto = subscribe } }", "namespace inbox { mailbox \" Important Emails \" { auto = <value> } }", "systemctl reload dovecot", "doveconf -a | egrep \"^protocols\" protocols = imap pop3 lmtp", "protocols = ... lmtp", "service lmtp { unix_listener lmtp { mode = 0600 user = postfix group = postfix } }", "service lmtp { inet_listener lmtp { port = 24 } }", "firewall-cmd --permanent --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\" 192.0.2.1/32 \" port protocol=\"tcp\" port=\" 24 \" accept\" firewall-cmd --permanent --zone=public --add-rich-rule=\"rule family=\"ipv6\" source address=\" 2001:db8:2::1/128 \" port protocol=\"tcp\" port=\" 24 \" accept\" firewall-cmd --reload", "systemctl reload dovecot", "ls -l /var/run/dovecot/lmtp s rw------- . 1 postfix postfix 0 Nov 22 17:17 /var/run/dovecot/lmtp", "protocols = imap lmtp", "systemctl reload dovecot", "firewall-cmd --remove-service=pop3s --remove-service=pop3 firewall-cmd --reload", "ss -tulp | grep dovecot tcp LISTEN 0 100 0.0.0.0:993 0.0.0.0:* users:((\"dovecot\",pid= 1405 ,fd= 44 )) tcp LISTEN 0 100 0.0.0.0:143 0.0.0.0:* users:((\"dovecot\",pid= 1405 ,fd= 42 )) tcp LISTEN 0 100 [::]:993 [::]:* users:((\"dovecot\",pid= 1405 ,fd= 45 )) tcp LISTEN 0 100 [::]:143 [::]:* users:((\"dovecot\",pid= 1405 ,fd= 43 ))", "yum install dovecot-pigeonhole", "protocols = USDprotocols sieve", "firewall-cmd --permanent --add-service=managesieve firewall-cmd --reload", "systemctl reload dovecot" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_different_types_of_servers/configuring-and-maintaining-a-dovecot-imap-and-pop3-server_deploying-different-types-of-servers
Providing feedback on JBoss EAP documentation
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you will be prompted to create one. Procedure Click the following link to create a ticket . Include the Document URL , the section number, and a description of the issue. Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_set_up_sso_with_kerberos/proc_providing-feedback-on-red-hat-documentation_default
8.2. Connecting
8.2. Connecting An Admin API connection, which is represented by the org.teiid.adminapi.Admin interface, is obtained through the org.teiid.adminapi.AdminFactory.createAdmin methods. AdminFactory is a singleton; see AdminFactory.getInstance() . The Admin instance automatically tests its connection and reconnects to a server in the event of a failure. Call the close method to terminate the connection. See your Red Hat JBoss Data Virtualization installation for the appropriate admin port - the default is 9999.
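A minimal Java sketch of this pattern is shown below. The host, port, and credentials are placeholders, and the exact createAdmin overloads differ between Teiid releases, so verify the method signature against the AdminFactory Javadoc that ships with your installation.

import org.teiid.adminapi.Admin;
import org.teiid.adminapi.AdminException;
import org.teiid.adminapi.AdminFactory;

public class AdminConnectionExample {
    public static void main(String[] args) throws AdminException {
        // AdminFactory is a singleton; obtain it through getInstance().
        Admin admin = AdminFactory.getInstance().createAdmin(
                "localhost", 9999, "admin-user", "admin-password".toCharArray());
        try {
            // The Admin instance tests its connection and reconnects automatically,
            // so administrative calls can be issued here.
        } finally {
            // Terminate the connection when finished.
            admin.close();
        }
    }
}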
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/connecting
4.6. Dell Drac 5
4.6. Dell Drac 5 Table 4.7, "Dell DRAC 5" lists the fence device parameters used by fence_drac5 , the fence agent for Dell DRAC 5. Table 4.7. Dell DRAC 5 luci Field cluster.conf Attribute Description Name name The name assigned to the DRAC. IP Address or Hostname ipaddr The IP address or host name assigned to the DRAC. IP Port (optional) ipport The TCP port to use to connect to the device. Login login The login name used to access the DRAC. Password passwd The password used to authenticate the connection to the DRAC. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Module Name module_name (optional) The module name for the DRAC when you have multiple DRAC modules. Force Command Prompt cmd_prompt The command prompt to use. The default value is '\USD'. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Figure 4.6, "Dell Drac 5" shows the configuration screen for adding a Dell Drac 5 device Figure 4.6. Dell Drac 5 The following command creates a fence device instance for a Dell Drac 5 device: The following is the cluster.conf entry for the fence_drac5 device:
[ "ccs -f cluster.conf --addfencedev delldrac5test1 agent=fence_drac5 ipaddr=192.168.0.1 login=root passwd=password123 module_name=drac1 power_wait=60", "<fencedevices> <fencedevice agent=\"fence_drac5\" cmd_prompt=\"\\USD\" ipaddr=\"192.168.0.1\" login=\"root\" module_name=\"drac1\" name=\"delldrac5test1\" passwd=\"password123\" power_wait=\"60\"/> </fencedevices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-drac5-CA
Chapter 12. Using a service account as an OAuth client
Chapter 12. Using a service account as an OAuth client 12.1. Service accounts as OAuth clients You can use a service account as a constrained form of OAuth client. Service accounts can request only a subset of scopes that allow access to some basic user information and role-based power inside of the service account's own namespace: user:info user:check-access role:<any_role>:<service_account_namespace> role:<any_role>:<service_account_namespace>:! When using a service account as an OAuth client: client_id is system:serviceaccount:<service_account_namespace>:<service_account_name> . client_secret can be any of the API tokens for that service account. For example: USD oc sa get-token <service_account_name> To get WWW-Authenticate challenges, set an serviceaccounts.openshift.io/oauth-want-challenges annotation on the service account to true . redirect_uri must match an annotation on the service account. 12.1.1. Redirect URIs for service accounts as OAuth clients Annotation keys must have the prefix serviceaccounts.openshift.io/oauth-redirecturi. or serviceaccounts.openshift.io/oauth-redirectreference. such as: In its simplest form, the annotation can be used to directly specify valid redirect URIs. For example: The first and second postfixes in the above example are used to separate the two valid redirect URIs. In more complex configurations, static redirect URIs may not be enough. For example, perhaps you want all Ingresses for a route to be considered valid. This is where dynamic redirect URIs via the serviceaccounts.openshift.io/oauth-redirectreference. prefix come into play. For example: Since the value for this annotation contains serialized JSON data, it is easier to see in an expanded format: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": "Route", "name": "jenkins" } } Now you can see that an OAuthRedirectReference allows us to reference the route named jenkins . Thus, all Ingresses for that route will now be considered valid. The full specification for an OAuthRedirectReference is: { "kind": "OAuthRedirectReference", "apiVersion": "v1", "reference": { "kind": ..., 1 "name": ..., 2 "group": ... 3 } } 1 kind refers to the type of the object being referenced. Currently, only route is supported. 2 name refers to the name of the object. The object must be in the same namespace as the service account. 3 group refers to the group of the object. Leave this blank, as the group for a route is the empty string. Both annotation prefixes can be combined to override the data provided by the reference object. For example: The first postfix is used to tie the annotations together. Assuming that the jenkins route had an Ingress of https://example.com , now https://example.com/custompath is considered valid, but https://example.com is not. The format for partially supplying override data is as follows: Type Syntax Scheme "https://" Hostname "//website.com" Port "//:8000" Path "examplepath" Note Specifying a hostname override will replace the hostname data from the referenced object, which is not likely to be desired behavior. Any combination of the above syntax can be combined using the following format: <scheme:>//<hostname><:port>/<path> The same object can be referenced more than once for more flexibility: Assuming that the route named jenkins has an Ingress of https://example.com , then both https://example.com:8000 and https://example.com/custompath are considered valid. Static and dynamic annotations can be used at the same time to achieve the desired behavior:
[ "oc sa get-token <service_account_name>", "serviceaccounts.openshift.io/oauth-redirecturi.<name>", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"https://example.com\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"", "\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": \"Route\", \"name\": \"jenkins\" } }", "{ \"kind\": \"OAuthRedirectReference\", \"apiVersion\": \"v1\", \"reference\": { \"kind\": ..., 1 \"name\": ..., 2 \"group\": ... 3 } }", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "\"serviceaccounts.openshift.io/oauth-redirecturi.first\": \"custompath\" \"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"//:8000\" \"serviceaccounts.openshift.io/oauth-redirectreference.second\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\"", "\"serviceaccounts.openshift.io/oauth-redirectreference.first\": \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"jenkins\\\"}}\" \"serviceaccounts.openshift.io/oauth-redirecturi.second\": \"https://other.com\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/authentication_and_authorization/using-service-accounts-as-oauth-client
9.3.7. Tuning Domain Process Memory Policy with virsh
9.3.7. Tuning Domain Process Memory Policy with virsh Domain process memory can be dynamically tuned. Refer to the following example command. More examples of these commands can be found in the virsh man page.
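To make the example easier to follow, here is a short sketch: the domain name rhel6u4 and the node set 0-10 are taken from the example command, and the --mode option is assumed to be available in your libvirt version.

% virsh numatune rhel6u4                               # query the current memory policy for the domain
% virsh numatune rhel6u4 --mode strict --nodeset 0-10  # restrict allocations strictly to NUMA nodes 0-10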
[ "% virsh numatune rhel6u4 --nodeset 0-10" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-numa-numa_and_libvirt-memory_policy_with_virsh
Chapter 3. Creating and managing remediation playbooks in Insights
Chapter 3. Creating and managing remediation playbooks in Insights The workflow to create playbooks is similar in each of the services in Insights for Red Hat Enterprise Linux. In general, you will fix one or more issues on a system or group of systems. Playbooks focus on issues identified by Insights services. A recommended practice for playbooks is to include systems of the same RHEL major/minor versions because the resolutions will be compatible. 3.1. Creating a playbook to remediate a CVE vulnerability on RHEL systems Create a remediation playbook in the Red Hat Insights vulnerability service. The workflow to create a playbook is similar for other services in Insights for Red Hat Enterprise Linux. Prerequisites You are logged into the Red Hat Hybrid Cloud Console. Note No enhanced User Access permissions are required to create remediation playbooks. Procedure Navigate to the Security > Vulnerability > CVEs page. Set filters as needed and click on a CVE. Scroll down to view affected systems. Select systems to include in a remediation playbook by clicking the box to the left of the system ID. Note Include systems of the same RHEL major/minor version, which you can do by filtering the list of affected systems. Click the Remediate button. Select whether to add the remediations to an existing or new playbook and take the following action: Click Add to existing playbook and select the desired playbook from the dropdown list, OR Click Create new playbook and add a playbook name. Click . Review the systems to include in the playbook, then click . Review the information in the Remediation review summary. By default, autoreboot is enabled. You can disable this option by clicking Turn off autoreboot . Click Submit . Verification step Navigate to Automation Toolkit > Remediations . Search for your playbook. You should see your playbook. 3.1.1. Creating playbooks to remediate CVEs with security rules when recommended and alternate resolution options exist Most CVEs in Red Hat Insights for RHEL will have one remediation option for you to use to resolve an issue. Remediating a CVE with security rules might include more than one resolution a recommended and one or more alternate resolutions. The workflow to create playbooks for CVEs that have one or more resolution options is similar to the remediation steps in the advisor service. For more information about security rules, see Security rules , and Filtering lists of systems exposed to security rules in Assessing and Monitoring Security Vulnerabilities on RHEL Systems . Prerequisites You are logged into the Red Hat Hybrid Cloud Console. Note You do not need enhanced User Access permissions to create remediation playbooks. Procedure Navigate to Security > Vulnerability > CVEs . Set filters if needed (for example, filter to see CVEs with security rules to focus on issues that have elevated risk associated with them). Or, click the CVEs with security rules tile on the dashbar. Both options show in the example image. Click a CVE in the list. Scroll to view affected systems, and select systems you want to include in a remediation playbook by clicking the box to the left of the system ID on the Review systems page. (Selecting one or more systems activates the Remediate button.) Note Recommended: Include systems of the same RHEL major or minor version by filtering the list of affected systems. Click Remediate . 
Decide whether to add the remediations to an existing or new playbook by taking one of the following actions: Choose Add to existing playbook and select the desired playbook from the dropdown list, OR Choose Create new playbook , and add a playbook name. For this example, HCCDOC-392. Click . A list of systems shows on the screen. Review the systems to include in the playbook (deselect any systems that you do not want to include). Click to see the Review and edit actions page, which shows you options to remediate the CVE. The number of items to remediate can vary. You will also see additional information (that you can expand and collapse) about the CVE, such as: Action: Shows the CVE ID. Resolution: Displays the recommended resolution for the CVE. Shows if you have alternate resolution options. Reboot required: Shows whether you must reboot your systems. Systems: Shows the number of systems you are remediating. On the Review and edit actions page, choose one of two options to finish creating your playbook: Option 1: To review all of the recommended and alternative remediation options available (and choose one of those options): Select Review and/or change the resolution steps for this 1 action or similar based on your actual options. Click . On the Choose action: <CVE information> page, click a tile to select your preferred remediation option. The bottom edge of the tile highlights when you select it. The recommended solution is highlighted by default. Click . Option 2: To accept all recommended remediations: Choose Accept all recommended resolutions steps for all actions . Review information about your selections and change options for autoreboot of systems on the Remediations review page. The page shows you the: Issues you are adding to your playbook. Options for changing system autoreboot requirements. Summary about CVEs and resolution options to fix them. Optional. Change autoreboot options on the Remediation review page, if needed. (Autoreboot is enabled by default, but your settings might vary based on your remediation options.) Click Submit . A notification displays that shows the number of remediation actions added to your playbook, and other information about your playbook. Verification step Navigate to Automation Toolkit > Remediations . Search for your playbook. To run (execute) your playbook, see Executing remediation playbooks from Insights for Red Hat Enterprise Linux . 3.2. Managing remediation playbooks in Insights for Red Hat Enterprise Linux You can download, archive, and delete existing remediation playbooks for your organization. The following procedures describe how to perform common playbook-management tasks. Prerequisites You are logged into the Red Hat Hybrid Cloud Console. Note No enhanced permissions are required to view, edit, or download information about existing playbooks. 3.2.1. Downloading a remediation playbook Use the following procedure to download a remediation playbook from the Insights for Red Hat Enterprise Linux application. Procedure Navigate to Automation Toolkit > Remediations . Locate the playbook you want to manage and click on the name of the playbook. The playbook details are visible. Click the Download playbook button to download the playbook YAML file to your local drive. 3.2.2. Archiving a remediation playbook You can archive a remediation playbook that is no longer needed, but the details of which you want to preserve. Procedure Navigate to Automation Toolkit > Remediations . Locate the playbook you want to archive. 
Click on the options icon (...) and select Archive playbook . The playbook is archived. 3.2.3. Viewing archived remediation playbooks You can view archived remediation playbooks in Insights for Red Hat Enterprise Linux. Procedure Navigate to Automation Toolkit > Remediations . Click the More options icon that is to the right of the Download playbook button and select Show archived playbooks. 3.2.4. Deleting a remediation playbook You can delete a playbook that is no longer needed. Procedure Navigate to Automation Toolkit > Remediations . Locate and click on the name of the playbook you want to delete. On the playbook details page, click the More options icon and select Delete . 3.2.5. Monitoring remediation status You can view the remediation status for each playbook that you execute from the Insights for Red Hat Enterprise Linux remediations service. The status information tells you the results of the latest activity and provides a summary of all activity for playbook execution. You can also view log information for playbook execution. Prerequisites You are logged into the Red Hat Hybrid Cloud Console. Procedure Navigate to Automation Toolkit > Remediations . The page displays a list of remediation playbooks. Click on the name of a playbook. From the Actions tab, click any item in the Status column to view a pop-up box with the status of the resolution. To monitor the status of a playbook in the Satellite web UI, see Monitoring Remote Jobs in the Red Hat Satellite Managing Hosts guide.
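For orientation, the playbook you download in the procedure above is a standard Ansible playbook. The following sketch is purely illustrative: the host group, tasks, and module choices are assumptions, and the structure of the playbooks that Insights generates for a given CVE may differ.

- name: Apply package update for the selected CVE      # hypothetical remediation play
  hosts: remediation_targets                           # hypothetical host group
  become: true
  tasks:
    - name: Update the affected package
      ansible.builtin.yum:
        name: openssl
        state: latest

- name: Reboot systems if required                     # corresponds to the autoreboot option described above
  hosts: remediation_targets
  become: true
  tasks:
    - name: Reboot the host
      ansible.builtin.reboot: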
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/red_hat_insights_remediations_guide/creating-managing-playbooks_red-hat-insights-remediation-guide
Chapter 5. Exposing the registry
Chapter 5. Exposing the registry By default, the OpenShift Container Platform registry is secured during cluster installation so that it serves traffic through TLS. Unlike previous versions of OpenShift Container Platform, the registry is not exposed outside of the cluster at the time of installation. 5.1. Exposing a default registry manually Instead of logging in to the default OpenShift Container Platform registry from within the cluster, you can gain external access to it by exposing it with a route. This external access enables you to log in to the registry from outside the cluster using the route address and to tag and push images to an existing project by using the route host. Prerequisites: The following prerequisites are automatically performed: Deploy the Registry Operator. Deploy the Ingress Operator. Procedure You can expose the route by using the defaultRoute parameter in the configs.imageregistry.operator.openshift.io resource. To expose the registry using the defaultRoute : Set defaultRoute to true : $ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge Get the default registry route: $ HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}') Get the certificate of the Ingress Operator: $ oc get secret -n openshift-ingress router-certs-default -o go-template='{{index .data "tls.crt"}}' | base64 -d | sudo tee /etc/pki/ca-trust/source/anchors/${HOST}.crt > /dev/null Enable the cluster's default certificate to trust the route using the following command: $ sudo update-ca-trust enable Log in with podman using the default route: $ sudo podman login -u kubeadmin -p $(oc whoami -t) $HOST 5.2. Exposing a secure registry manually Instead of logging in to the OpenShift Container Platform registry from within the cluster, you can gain external access to it by exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images to an existing project by using the route host. Prerequisites: The following prerequisites are automatically performed: Deploy the Registry Operator. Deploy the Ingress Operator. Procedure You can expose the route by using the defaultRoute parameter in the configs.imageregistry.operator.openshift.io resource or by using custom routes. To expose the registry using the defaultRoute : Set defaultRoute to true : $ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge Log in with podman : $ HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}') $ podman login -u kubeadmin -p $(oc whoami -t) --tls-verify=false $HOST 1 1 --tls-verify=false is needed if the cluster's default certificate for routes is untrusted. You can set a custom, trusted certificate as the default certificate with the Ingress Operator. To expose the registry using custom routes: Create a secret with your route's TLS keys: $ oc create secret tls public-route-tls \ -n openshift-image-registry \ --cert=</path/to/tls.crt> \ --key=</path/to/tls.key> This step is optional. If you do not create a secret, the route uses the default TLS configuration from the Ingress Operator. On the Registry Operator: spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls ... Note Only set secretName if you are providing a custom TLS configuration for the registry's route.
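To illustrate the tag-and-push workflow that the route enables, the following is a sketch; the project name test-project and the source image are assumptions, and $HOST is the route host retrieved in the procedure above.

$ podman pull registry.access.redhat.com/ubi8/ubi-minimal
$ podman tag registry.access.redhat.com/ubi8/ubi-minimal $HOST/test-project/ubi-minimal:latest
$ podman push $HOST/test-project/ubi-minimal:latest

The push succeeds only if you are already logged in to the route host and the target project exists and allows you to push images.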
[ "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "oc get secret -n openshift-ingress router-certs-default -o go-template='{{index .data \"tls.crt\"}}' | base64 -d | sudo tee /etc/pki/ca-trust/source/anchors/USD{HOST}.crt > /dev/null", "sudo update-ca-trust enable", "sudo podman login -u kubeadmin -p USD(oc whoami -t) USDHOST", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1", "oc create secret tls public-route-tls -n openshift-image-registry --cert=</path/to/tls.crt> --key=</path/to/tls.key>", "spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/registry/securing-exposing-registry
2.2. Web Server Configuration
2.2. Web Server Configuration The following procedure configures an Apache HTTP server. Ensure that the Apache HTTP server is installed on each node in the cluster. You also need the wget tool installed on the cluster to be able to check the status of the Apache HTTP server. On each node, execute the following command. In order for the Apache resource agent to get the status of the Apache HTTP server, ensure that the following text is present in the /etc/httpd/conf/httpd.conf file on each node in the cluster, and ensure that it has not been commented out. If this text is not already present, add the text to the end of the file. When you use the apache resource agent to manage Apache, it does not use systemd . Because of this, you must edit the logrotate script supplied with Apache so that it does not use systemctl to reload Apache. Remove the following line in the /etc/logrotate.d/httpd file on each node in the cluster. Replace the line you removed with the following three lines. Create a web page for Apache to serve up. On one node in the cluster, mount the file system you created in Section 2.1, "Configuring an LVM Volume with an ext4 File System" , create the file index.html on that file system, then unmount the file system.
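As a quick manual check (a sketch; it assumes httpd is currently running on the node, either directly or under cluster control), you can use the wget tool mentioned above to confirm that the server-status handler answers locally:

$ wget -O - http://127.0.0.1/server-status

Because the Location block uses Require local, this check works only from the node itself.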
[ "yum install -y httpd wget", "<Location /server-status> SetHandler server-status Require local </Location>", "/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true", "/usr/bin/test -f /run/httpd.pid >/dev/null 2>/dev/null && /usr/bin/ps -q USD(/usr/bin/cat /run/httpd.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c \"PidFile /run/httpd.pid\" -k graceful > /dev/null 2>/dev/null || true", "mount /dev/my_vg/my_lv /var/www/ mkdir /var/www/html mkdir /var/www/cgi-bin mkdir /var/www/error restorecon -R /var/www cat <<-END >/var/www/html/index.html <html> <body>Hello</body> </html> END umount /var/www" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/s1-webserversetup-HAAA
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/standalone_deployment_guide/making-open-source-more-inclusive
Chapter 8. Troubleshooting the Bare Metal Service
Chapter 8. Troubleshooting the Bare Metal Service The following sections contain information and steps that may be useful for diagnosing issues in a setup with the Bare Metal service enabled. 8.1. PXE Boot Errors Permission Denied Errors If you get a permission denied error on the console of your Bare Metal service node, ensure that you have applied the appropriate SELinux context to the /httpboot and /tftpboot directories as follows: Boot Process Freezes at /pxelinux.cfg/XX-XX-XX-XX-XX-XX On the console of your node, if it looks like you are getting an IP address and then the process stops as shown below: This indicates that you might be using the wrong PXE boot template in your ironic.conf file. The default template is pxe_config.template , so it is easy to omit the i and inadvertently turn this into ipxe_config.template . 8.2. Login Errors After the Bare Metal Node Boots When you try to log in at the login prompt on the console of the node with the root password that you set in the configurations steps, but are not able to, it indicates you are not booted in to the deployed image. You are probably stuck in the deploy-kernel/deploy-ramdisk image and the system has yet to get the correct image. To fix this issue, verify the PXE Boot Configuration file in the /httpboot/pxelinux.cfg/MAC_ADDRESS on the Compute or Bare Metal service node and ensure that all the IP addresses listed in this file correspond to IP addresses on the Bare Metal network. Note The only network the Bare Metal service node knows about is the Bare Metal network. If one of the endpoints is not on the network, the endpoint cannot reach the Bare Metal service node as a part of the boot process. For example, the kernel line in your file is as follows: Value in the above example kernel line Corresponding information http://192.168.200.2:8088 Parameter http_url in /etc/ironic/ironic.conf file. This IP address must be on the Bare Metal network. 5a6cdbe3-2c90-4a90-b3c6-85b449b30512 UUID of the baremetal node in openstack baremetal node list . deploy_kernel This is the deploy kernel image in the Image service that is copied down as /httpboot/<NODE_UUID>/deploy_kernel . http://192.168.200.2:6385 Parameter api_url in /etc/ironic/ironic.conf file. This IP address must be on the Bare Metal network. ipmi The IPMI Driver in use by the Bare Metal service for this node. deploy_ramdisk This is the deploy ramdisk image in the Image service that is copied down as /httpboot/<NODE_UUID>/deploy_ramdisk . If a value does not correspond between the /httpboot/pxelinux.cfg/MAC_ADDRESS and the ironic.conf file: Update the value in the ironic.conf file Restart the Bare Metal service Re-deploy the Bare Metal instance 8.3. The Bare Metal Service Is Not Getting the Right Hostname If the Bare Metal service is not getting the right hostname, it means that cloud-init is failing. To fix this, connect the Bare Metal subnet to a router in the OpenStack Networking service. The requests to the meta-data agent should now be routed correctly. 8.4. Invalid OpenStack Identity Service Credentials When Executing Bare Metal Service Commands If you are having trouble authenticating to the Identity service, check the identity_uri parameter in the ironic.conf file and ensure that you remove the /v2.0 from the keystone AdminURL. For example, set the identity_uri to http://IP:PORT . 8.5. Hardware Enrollment Issues with enrolled hardware can be caused by incorrect node registration details. Ensure that property names and values have been entered correctly. 
Incorrect or mistyped property names will be successfully added to the node's details, but will be ignored. Update a node's details. This example updates the amount of memory the node is registered to use to 2 GB: 8.6. No Valid Host Errors If the Compute scheduler cannot find a suitable Bare Metal node on which to boot an instance, a NoValidHost error can be seen in /var/log/nova/nova-conductor.log or immediately upon launch failure in the dashboard. This is usually caused by a mismatch between the resources Compute expects and the resources the Bare Metal node provides. Check the hypervisor resources that are available: The resources reported here should match the resources that the Bare Metal nodes provide. Check that Compute recognizes the Bare Metal nodes as hypervisors: The nodes, identified by UUID, should appear in the list. Check the details for a Bare Metal node: Verify that the node's details match those reported by Compute. Check that the selected flavor does not exceed the available resources of the Bare Metal nodes: Check the output of openstack baremetal node list to ensure that Bare Metal nodes are not in maintenance mode. Remove maintenance mode if necessary: Check the output of openstack baremetal node list to ensure that Bare Metal nodes are in an available state. Move the node to available if necessary: 8.7. Troubleshooting iDRAC issues Redfish management interface fails to set boot device When you use the idrac-redfish management interface with certain iDRAC firmware versions and attempt to set the boot device on a bare metal server with UEFI boot, iDRAC returns the following error: If you encounter this issue, set the force_persistent_boot_device parameter in the driver-info on the node to Never : Timeout when powering off Some servers can be too slow when powering off, and time out. The default retry count is 6 , which results in a 30 second timeout. To increase the timeout duration to 90 seconds, set the ironic::agent::rpc_response_timeout value to 18 in the undercloud hieradata overrides file and re-run the openstack undercloud install command: Vendor passthrough timeout When iDRAC is not available to execute vendor passthrough commands, these commands take too long and time out: To increase the timeout duration for messaging, increase the value of the ironic::default::rpc_response_timeout parameter in the undercloud hieradata overrides file and re-run the openstack undercloud install command:
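As a compact way to perform the resource comparison described above (a sketch; the node UUID and flavor name are placeholders, and the --fields and -c column selectors are assumed to be supported by your OpenStack client version):

$ openstack baremetal node show <node_uuid> --fields properties provision_state maintenance
$ openstack flavor show <flavor_name> -c ram -c vcpus -c disk

If the flavor requests more RAM, vCPUs, or disk than the node's properties report, the scheduler returns the NoValidHost error described above.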
[ "semanage fcontext -a -t httpd_sys_content_t \"/httpboot(/.*)?\" restorecon -r -v /httpboot semanage fcontext -a -t tftpdir_t \"/tftpboot(/.*)?\" restorecon -r -v /tftpboot", "grep ^pxe_config_template ironic.conf pxe_config_template=USDpybasedir/drivers/modules/ipxe_config.template", "kernel http://192.168.200.2:8088/5a6cdbe3-2c90-4a90-b3c6-85b449b30512/deploy_kernel selinux=0 disk=cciss/c0d0,sda,hda,vda iscsi_target_iqn=iqn.2008-10.org.openstack:5a6cdbe3-2c90-4a90-b3c6-85b449b30512 deployment_id= 5a6cdbe3-2c90-4a90-b3c6-85b449b30512 deployment_key=VWDYDVVEFCQJNOSTO9R67HKUXUGP77CK ironic_api_url= http://192.168.200.2:6385 troubleshoot=0 text nofb nomodeset vga=normal boot_option=netboot ip=USD{ip}:USD{next-server}:USD{gateway}:USD{netmask} BOOTIF=USD{mac} ipa-api-url= http://192.168.200.2:6385 ipa-driver-name= ipmi boot_mode=bios initrd= deploy_ramdisk coreos.configdrive=0 || goto deploy", "openstack baremetal node set --property memory_mb=2048 NODE_UUID", "openstack hypervisor stats show", "openstack hypervisor list", "openstack baremetal node list openstack baremetal node show NODE_UUID", "openstack flavor show FLAVOR_NAME", "openstack baremetal node maintenance unset NODE_UUID", "openstack baremetal node provide NODE_UUID", "Unable to Process the request because the value entered for the parameter Continuous is not supported by the implementation.", "openstack baremetal node set --driver-info force_persistent_boot_device=Never USD{node_uuid}", "ironic::agent::rpc_response_timeout: 18", "openstack baremetal node passthru call --http-method GET aed58dca-1b25-409a-a32f-3a817d59e1e0 list_unfinished_jobs Timed out waiting for a reply to message ID 547ce7995342418c99ef1ea4a0054572 (HTTP 500)", "ironic::default::rpc_response_timeout: 600" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/bare_metal_provisioning/sect-troubleshoot
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/api_guide/providing-feedback-on-red-hat-documentation_rest-api
Part II. Red Hat OpenStack Platform Bare Metal Hardware Certification
Part II. Red Hat OpenStack Platform Bare Metal Hardware Certification Note This chapter is applicable only for Red Hat OpenStack Platform bare metal hardware certification.
null
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_bare_metal_hardware_certification_workflow_guide/con_red-hat-openstack-platform-hardware-baremetal-certification_onboarding-certification-partners
Chapter 4. Configuring user access for your private automation hub
Chapter 4. Configuring user access for your private automation hub You can manage user access to content and features in automation hub by creating groups of users that have specific permissions. 4.1. Implementing user access User access is based on managing permissions to system objects (users, groups, namespaces) rather than by assigning permissions individually to specific users. You assign permissions to the groups that you create. You can then assign users to these groups. This means that each user in a group has the permissions assigned to that group. Groups created in private automation hub can range from system administrators responsible for governing internal collections, configuring user access, and managing repositories, to groups with access to organize and upload internally developed content to the private automation hub. Additional resources See Automation Hub permissions for information on system permissions. 4.1.1. Default user access for private automation hub When you install automation hub, the system automatically creates the default admin user in the Admin group. The Admin group is assigned all permissions in the system. The following sections describe the workflows associated with organizing your users who will access private automation hub and providing them with the required permissions to reach their goals. See the permissions reference table for a full list and description of all permissions available. 4.1.2. Creating a new group in private automation hub You can create and assign permissions to a group in private automation hub that enables users to access specified features in the system. By default, the Admin group in the automation hub has all permissions assigned and is available on initial login. Use the credentials created when installing private automation hub. For more information, see Creating a new group in private automation hub in the Getting started with automation hub guide. 4.1.3. Assigning permissions to groups By default, new groups do not have any assigned permissions. You can assign permissions to groups in private automation hub that enable users to access specific features in the system. You can add permissions when first creating a group or edit an existing group to add or remove permissions. For more information, see Assigning permissions to groups in the Getting started with automation hub guide. 4.1.4. Creating new users and giving them permissions After you create a user in private automation hub, you can give them permissions by adding them to groups. Each group can access the features in the system that are associated with the group's assigned permissions. Prerequisites You have user permissions and can create users in private automation hub. Procedure Log in to your private automation hub. From the navigation panel, select User Access Users . Click Create user . Enter information in the fields. Username and Password are required. Optional: To assign the user to a group, click the Groups field and select from the list of groups. Click Save . The new user is now displayed in the list on the Users page. 4.1.5. Creating a super user If you want to spread administration across your team, you can create a super user in private automation hub. Prerequisites You must be a Super user . Procedure Log in to your private automation hub. From the navigation panel, select User Access Users . Select the user that you want to make a super user. The User details for that user are displayed. Under User type , select Super User . The user now has Super user permissions.
4.1.6. Adding users to existing groups You can add users to groups when you create a group. But, you can also manually add users to existing groups. For more information, see Adding users to existing groups in the Getting started with automation hub guide. 4.1.7. Creating a new group for content curators You can create a new group in private automation hub designed to support content curation in your organization. This group can contribute internally developed collections for publication in private automation hub. To help content developers create a namespace and upload their internally developed collections to private automation hub, you must first create and edit a group and assign the required permissions. Prerequisites You have administrative permissions in private automation hub and can create groups. Procedure Log in to your private automation hub. From the navigation panel, select User Access Groups and click Create . Enter Content Engineering as a Name for the group in the modal and click Create . You have created the new group and the Groups page opens. On the Permissions tab, click Edit . Under Namespaces , add permissions for Add Namespace , Upload to Namespace , and Change Namespace . Click Save . The new group is created with the permissions that you assigned. You can then add users to the group. Click the Users tab on the Groups page. Click Add . Select users and click Add . 4.1.8. Automation hub permissions Permissions provide a defined set of actions each group can perform on a given object. Determine the required level of access for your groups based on the permissions described in this table. Table 4.1. Permissions Reference Table Object Permission Description collection namespaces Add namespace Upload to namespace Change namespace Delete namespace Groups with these permissions can create, upload collections, and delete a namespace. collections Modify Ansible repo content Delete collections Groups with this permission can perform these actions: Move content between repositories by using the Approval feature. Certify or reject features to move content from the staging to published or rejected repositories. Delete collections. users View user Delete user Add user Change user Groups with these permissions can manage user configuration and access in private automation hub. groups View group Delete group Add group Change group Groups with these permissions can manage group configuration and access in private automation hub. collection remotes Change collection remote View collection remote Groups with these permissions can configure a remote repository by navigating to Collection Repositories . containers Change container namespace permissions Change containers Change image tags Create new containers Push to existing containers Delete container repository Groups with these permissions can manage container repositories in private automation hub. remote registries Add remote registry Change remote registry Delete remote registry Groups with these permissions can add, change, or delete remote registries added to private automation hub. task management Change task Delete task View all tasks Groups with these permissions can manage tasks added to Task Management in private automation hub. 4.1.9. Deleting a user from private automation hub When you delete a user account, the name and email of the user are permanently removed from private automation hub. Prerequisites You have user permissions in private automation hub. Procedure Log in to private automation hub. 
From the navigation panel, select User Access Users . Click Users to display a list of the current users. Click the More Actions icon (...) beside the user that you want to remove, then click Delete . Click Delete in the warning message to permanently delete the user. 4.2. Enable view-only access for your private automation hub By enabling view-only access, you can grant users access to view collections or namespaces on your private automation hub without requiring them to log in. View-only access allows you to share content with unauthorized users, restricting them to viewing or downloading source code. They will not have permissions to edit anything on your private automation hub. To enable view-only access for your private automation hub, you must edit the inventory file on your Red Hat Ansible Automation Platform installer. If you are installing a new instance of Ansible Automation Platform, add the automationhub_enable_unauthenticated_collection_access and automationhub_enable_unauthenticated_collection_download parameters to your inventory file along with your other installation configurations: If you are updating an existing Ansible Automation Platform installation to include view-only access, add the automationhub_enable_unauthenticated_collection_access and automationhub_enable_unauthenticated_collection_download parameters to your inventory file and then run the setup.sh script to apply the updates: Procedure Navigate to the installer. Bundled installer $ cd ansible-automation-platform-setup-bundle-<latest-version> Online installer $ cd ansible-automation-platform-setup-<latest-version> Open the inventory file with a text editor. Add the automationhub_enable_unauthenticated_collection_access and automationhub_enable_unauthenticated_collection_download parameters to the inventory file and set both to True , following the example below: [all:vars] automationhub_enable_unauthenticated_collection_access = True 1 automationhub_enable_unauthenticated_collection_download = True 2 1 Allows unauthorized users to view collections 2 Allows unauthorized users to download collections Run the setup.sh script. The installer enables view-only access to your private automation hub. Verification After the installation is complete, verify that you have view-only access on your private automation hub by attempting to view content on your private automation hub without logging in. Navigate to your private automation hub. On the login screen, click View only mode . Verify that you are able to view content on your automation hub, such as namespaces or collections, without having to log in.
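For reference, applying the change might look like the following sketch; the directory name depends on the installer bundle you downloaded, and setup.sh is assumed to read the inventory file in the installer directory by default.

$ cd ansible-automation-platform-setup-bundle-<latest-version>
$ sudo ./setup.sh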
[ "cd ansible-automation-platform-setup-bundle-<latest-version>", "cd ansible-automation-platform-setup-<latest-version>", "[all:vars] automationhub_enable_unauthenticated_collection_access = True 1 automationhub_enable_unauthenticated_collection_download = True 2" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_automation_hub/assembly-user-access
Configuring basic system settings
Configuring basic system settings Red Hat Enterprise Linux 9 Set up the essential functions of your system and customize your system environment Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_basic_system_settings/index
Chapter 4. Serving and chatting with the models
Chapter 4. Serving and chatting with the models To interact with various models on Red Hat Enterprise Linux AI, you must first serve the model, which hosts it on a server; you can then chat with the model. 4.1. Serving the model To interact with the models, you must first activate the model in a machine through serving. The ilab model serve command starts a vLLM server that allows you to chat with the model. Prerequisites You installed RHEL AI with the bootable container image. You initialized InstructLab. You installed your preferred Granite LLMs. You have root user access on your machine. Procedure If you do not specify a model, you can serve the default model, granite-7b-redhat-lab , by running the following command: $ ilab model serve To serve a specific model, run the following command: $ ilab model serve --model-path <model-path> Example command $ ilab model serve --model-path ~/.cache/instructlab/models/granite-8b-code-instruct Example output when the model is served and ready INFO 2024-03-02 02:21:11,352 lab.py:201 Using model 'models/granite-8b-code-instruct' with -1 gpu-layers and 4096 max context size. Starting server process After application startup complete see http://127.0.0.1:8000/docs for API. Press CTRL+C to shut down the server. 4.1.1. Optional: Running ilab model serve as a service You can set up a systemd service so that the ilab model serve command runs as a service. The systemd service runs the ilab model serve command in the background and restarts it if it crashes or fails. You can configure the service to start upon system boot. Prerequisites You installed the Red Hat Enterprise Linux AI image on bare metal. You initialized InstructLab. You downloaded your preferred Granite LLMs. You have root user access on your machine. Procedure Create a directory for your systemd user service by running the following command: $ mkdir -p $HOME/.config/systemd/user Create your systemd service file with the following example configurations: $ cat << EOF > $HOME/.config/systemd/user/ilab-serve.service [Unit] Description=ilab model serve service [Install] WantedBy=multi-user.target default.target 1 [Service] ExecStart=ilab model serve --model-family granite Restart=always EOF 1 Specifies to start by default on boot. Reload the systemd manager configuration by running the following command: $ systemctl --user daemon-reload Start the ilab model serve systemd service by running the following command: $ systemctl --user start ilab-serve.service You can check that the service is running with the following command: $ systemctl --user status ilab-serve.service You can check the service logs by running the following command: $ journalctl --user-unit ilab-serve.service To allow the service to start on boot, run the following command: $ sudo loginctl enable-linger Optional: There are a few optional commands you can run for maintaining your systemd service. You can stop the ilab-serve system service by running the following command: $ systemctl --user stop ilab-serve.service You can prevent the service from starting on boot by removing the "WantedBy=multi-user.target default.target" line from the $HOME/.config/systemd/user/ilab-serve.service file. 4.1.2. Optional: Allowing access to a model from a secure endpoint You can serve an inference endpoint and allow others to interact with models provided with Red Hat Enterprise Linux AI over secure connections by creating a systemd service and setting up an nginx reverse proxy that exposes a secure endpoint.
This allows you to share the secure endpoint with others so they can chat with the model over a network. The following procedure uses self-signed certifications, but it is recommended to use certificates issued by a trusted Certificate Authority (CA). Note The following procedure is supported only on bare metal platforms. Prerequisites You installed the Red Hat Enterprise Linux AI image on bare-metal. You initialized InstructLab You downloaded your preferred Granite LLMs. You have root user access on your machine. Procedure Create a directory for your certificate file and key by running the following command: USD mkdir -p `pwd`/nginx/ssl/ Create an OpenSSL configuration file with the proper configurations by running the following command: USD cat > openssl.cnf <<EOL [ req ] default_bits = 2048 distinguished_name = <req-distinguished-name> 1 x509_extensions = v3_req prompt = no [ req_distinguished_name ] C = US ST = California L = San Francisco O = My Company OU = My Division CN = rhelai.redhat.com [ v3_req ] subjectAltName = <alt-names> 2 basicConstraints = critical, CA:true subjectKeyIdentifier = hash authorityKeyIdentifier = keyid:always,issuer [ alt_names ] DNS.1 = rhelai.redhat.com 3 DNS.2 = www.rhelai.redhat.com 4 1 Specify the distinguished name for your requirements. 2 Specify the alternate name for your requirements. 3 4 Specify the server common name for RHEL AI. In the example, the server name is rhelai.redhat.com . Generate a self signed certificate with a Subject Alternative Name (SAN) enabled with the following commands: USD openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout `pwd`/nginx/ssl/rhelai.redhat.com.key -out `pwd`/nginx/ssl/rhelai.redhat.com.crt -config openssl.cnf USD openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout Create the Nginx Configuration file and add it to the `pwd /nginx/conf.d` by running the following command: mkdir -p `pwd`/nginx/conf.d echo 'server { listen 8443 ssl; server_name <rhelai.redhat.com> 1 ssl_certificate /etc/nginx/ssl/rhelai.redhat.com.crt; ssl_certificate_key /etc/nginx/ssl/rhelai.redhat.com.key; location / { proxy_pass http://127.0.0.1:8000; proxy_set_header Host USDhost; proxy_set_header X-Real-IP USDremote_addr; proxy_set_header X-Forwarded-For USDproxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto USDscheme; } } ' > `pwd`/nginx/conf.d/rhelai.redhat.com.conf 1 Specify the name of your server. In the example, the server name is rhelai.redhat.com Run the Nginx container with the new configurations by running the following command: USD podman run --net host -v `pwd`/nginx/conf.d:/etc/nginx/conf.d:ro,Z -v `pwd`/nginx/ssl:/etc/nginx/ssl:ro,Z nginx If you want to use port 443, you must run the podman run command as a root user.. You can now connect to a serving ilab machine using a secure endpoint URL. Example command: USD ilab model chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url You can also connect to the serving RHEL AI machine with the following command: USD curl --location 'https://rhelai.redhat.com:8443/v1' \ --header 'Content-Type: application/json' \ --header 'Authorization: Bearer <api-key>' \ --data '{ "model": "/var/home/cloud-user/.cache/instructlab/models/granite-7b-redhat-lab", "messages": [ { "role": "system", "content": "You are a helpful assistant." }, { "role": "user", "content": "Hello!" } ] }' | jq . where <api-key> Specify your API key. You can create your own API key by following the procedure in "Creating an API key for chatting with a model". 
Optional: You can also get the server certificate and append it to the Certifi CA bundle. Get the server certificate by running the following command: $ openssl s_client -connect rhelai.redhat.com:8443 </dev/null 2>/dev/null | openssl x509 -outform PEM > server.crt Copy the certificate to your system's trusted CA storage directory and update the CA trust store with the following commands: $ sudo cp server.crt /etc/pki/ca-trust/source/anchors/ $ sudo update-ca-trust You can append your certificate to the Certifi CA bundle by running the following command: $ cat server.crt >> $(python -m certifi) You can now run ilab model chat with a self-signed certificate. Example command: $ ilab model chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url https://rhelai.redhat.com:8443/v1 4.2. Chatting with the model Once you serve your model, you can chat with the model. Important The model you are chatting with must match the model you are serving. With the default config.yaml file, the granite-7b-redhat-lab model is the default for serving and chatting. Prerequisites You installed RHEL AI with the bootable container image. You initialized InstructLab. You downloaded your preferred Granite LLMs. You are serving a model. You have root user access on your machine. Procedure Since you are serving the model in one terminal window, you must open another terminal to chat with the model. To chat with the default model, run the following command: $ ilab model chat To chat with a specific model, run the following command: $ ilab model chat --model <model-path> Example command $ ilab model chat --model ~/.cache/instructlab/models/granite-8b-code-instruct Example output of the chatbot $ ilab model chat โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ system โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ Welcome to InstructLab Chat w/ GRANITE-8B-CODE-INSTRUCT (type /h for help) โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ >>> [S][default] + Type exit to leave the chatbot. 4.2.1. Optional: Creating an API key for chatting with a model By default, the ilab CLI does not use authentication. If you want to expose your server to the internet, you can create an API key that connects to your server with the following procedures.
Prerequisites You installed the Red Hat Enterprise Linux AI image on bare metal. You initialized InstructLab You downloaded your preferred Granite LLMs. You have root user access on your machine. Procedure Create a API key that is held in USDVLLM_API_KEY parameter by running the following command: USD export VLLM_API_KEY=USD(python -c 'import secrets; print(secrets.token_urlsafe())') You can view the API key by running the following command: USD echo USDVLLM_API_KEY Update the config.yaml by running the following command: USD ilab config edit Add the following parameters to the vllm_args section of your config.yaml file. serve: vllm: vllm_args: - --api-key - <api-key-string> where <api-key-string> Specify your API key string. You can verify that the server is using API key authentication by running the following command: USD ilab model chat Then, seeing the following error that shows an unauthorized user. openai.AuthenticationError: Error code: 401 - {'error': 'Unauthorized'} Verify that your API key is working by running the following command: USD ilab model chat -m granite-7b-redhat-lab --endpoint-url https://inference.rhelai.com/v1 --api-key USDVLLM_API_KEY Example output USD ilab model chat โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ system โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ Welcome to InstructLab Chat w/ GRANITE-7B-LAB (type /h for help) โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ >>> [S][default]
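As an additional check (a sketch; it assumes the vLLM server exposes the standard OpenAI-compatible /v1/models endpoint and reuses the endpoint URL from the example above), you can verify the API key from any client with curl:

$ curl -s https://inference.rhelai.com/v1/models \
    -H "Authorization: Bearer $VLLM_API_KEY" | jq .

A request without the Authorization header should fail with the same 401 Unauthorized error shown earlier.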
[ "ilab model serve", "ilab model serve --model-path <model-path>", "ilab model serve --model-path ~/.cache/instructlab/models/granite-8b-code-instruct", "INFO 2024-03-02 02:21:11,352 lab.py:201 Using model 'models/granite-8b-code-instruct' with -1 gpu-layers and 4096 max context size. Starting server process After application startup complete see http://127.0.0.1:8000/docs for API. Press CTRL+C to shut down the server.", "mkdir -p USDHOME/.config/systemd/user", "cat << EOF > USDHOME/.config/systemd/user/ilab-serve.service [Unit] Description=ilab model serve service [Install] WantedBy=multi-user.target default.target 1 [Service] ExecStart=ilab model serve --model-family granite Restart=always EOF", "systemctl --user daemon-reload", "systemctl --user start ilab-serve.service", "systemctl --user status ilab-serve.service", "journalctl --user-unit ilab-serve.service", "sudo loginctl enable-linger", "systemctl --user stop ilab-serve.service", "mkdir -p `pwd`/nginx/ssl/", "cat > openssl.cnf <<EOL [ req ] default_bits = 2048 distinguished_name = <req-distinguished-name> 1 x509_extensions = v3_req prompt = no [ req_distinguished_name ] C = US ST = California L = San Francisco O = My Company OU = My Division CN = rhelai.redhat.com [ v3_req ] subjectAltName = <alt-names> 2 basicConstraints = critical, CA:true subjectKeyIdentifier = hash authorityKeyIdentifier = keyid:always,issuer [ alt_names ] DNS.1 = rhelai.redhat.com 3 DNS.2 = www.rhelai.redhat.com 4", "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout `pwd`/nginx/ssl/rhelai.redhat.com.key -out `pwd`/nginx/ssl/rhelai.redhat.com.crt -config openssl.cnf", "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout", "mkdir -p `pwd`/nginx/conf.d echo 'server { listen 8443 ssl; server_name <rhelai.redhat.com> 1 ssl_certificate /etc/nginx/ssl/rhelai.redhat.com.crt; ssl_certificate_key /etc/nginx/ssl/rhelai.redhat.com.key; location / { proxy_pass http://127.0.0.1:8000; proxy_set_header Host USDhost; proxy_set_header X-Real-IP USDremote_addr; proxy_set_header X-Forwarded-For USDproxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto USDscheme; } } ' > `pwd`/nginx/conf.d/rhelai.redhat.com.conf", "podman run --net host -v `pwd`/nginx/conf.d:/etc/nginx/conf.d:ro,Z -v `pwd`/nginx/ssl:/etc/nginx/ssl:ro,Z nginx", "ilab model chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url", "curl --location 'https://rhelai.redhat.com:8443/v1' --header 'Content-Type: application/json' --header 'Authorization: Bearer <api-key>' --data '{ \"model\": \"/var/home/cloud-user/.cache/instructlab/models/granite-7b-redhat-lab\", \"messages\": [ { \"role\": \"system\", \"content\": \"You are a helpful assistant.\" }, { \"role\": \"user\", \"content\": \"Hello!\" } ] }' | jq .", "openssl s_client -connect rhelai.redhat.com:8443 </dev/null 2>/dev/null | openssl x509 -outform PEM > server.crt", "sudo cp server.crt /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust", "cat server.crt >> USD(python -m certifi)", "ilab model chat -m /instructlab/instructlab/granite-7b-redhat-lab --endpoint-url https://rhelai.redhat.com:8443/v1", "ilab model chat", "ilab model chat --model <model-path>", "ilab model chat --model ~/.cache/instructlab/models/granite-8b-code-instruct", "ilab model chat 
โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ system โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ Welcome to InstructLab Chat w/ GRANITE-8B-CODE-INSTRUCT (type /h for help) โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ >>> [S][default]", "export VLLM_API_KEY=USD(python -c 'import secrets; print(secrets.token_urlsafe())')", "echo USDVLLM_API_KEY", "ilab config edit", "serve: vllm: vllm_args: - --api-key - <api-key-string>", "ilab model chat", "openai.AuthenticationError: Error code: 401 - {'error': 'Unauthorized'}", "ilab model chat -m granite-7b-redhat-lab --endpoint-url https://inference.rhelai.com/v1 --api-key USDVLLM_API_KEY", "ilab model chat โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ system โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ Welcome to InstructLab Chat w/ GRANITE-7B-LAB (type /h for help) โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ >>> [S][default]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/building_your_rhel_ai_environment/serving_and_chatting
Getting Started with Debezium
Getting Started with Debezium Red Hat build of Debezium 2.7.3 For use with Red Hat build of Debezium 2.7.3 Red Hat build of Debezium Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/getting_started_with_debezium/index
Chapter 9. ConsoleYAMLSample [console.openshift.io/v1]
Chapter 9. ConsoleYAMLSample [console.openshift.io/v1] Description ConsoleYAMLSample is an extension for customizing OpenShift web console YAML samples. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required metadata spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleYAMLSampleSpec is the desired YAML sample configuration. Samples will appear with their descriptions in a samples sidebar when creating a resource in the web console. 9.1.1. .spec Description ConsoleYAMLSampleSpec is the desired YAML sample configuration. Samples will appear with their descriptions in a samples sidebar when creating a resource in the web console. Type object Required description targetResource title yaml Property Type Description description string description of the YAML sample. snippet boolean snippet indicates that the YAML sample is not the full YAML resource definition, but a fragment that can be inserted into the existing YAML document at the user's cursor. targetResource object targetResource contains the apiVersion and kind of the resource that the YAML sample is representing. title string title of the YAML sample. yaml string yaml is the YAML sample to display. 9.1.2. .spec.targetResource Description targetResource contains the apiVersion and kind of the resource that the YAML sample is representing. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 9.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleyamlsamples DELETE : delete collection of ConsoleYAMLSample GET : list objects of kind ConsoleYAMLSample POST : create a ConsoleYAMLSample /apis/console.openshift.io/v1/consoleyamlsamples/{name} DELETE : delete a ConsoleYAMLSample GET : read the specified ConsoleYAMLSample PATCH : partially update the specified ConsoleYAMLSample PUT : replace the specified ConsoleYAMLSample 9.2.1. /apis/console.openshift.io/v1/consoleyamlsamples HTTP method DELETE Description delete collection of ConsoleYAMLSample Table 9.1.
HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleYAMLSample Table 9.2. HTTP responses HTTP code Response body 200 - OK ConsoleYAMLSampleList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleYAMLSample Table 9.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.4. Body parameters Parameter Type Description body ConsoleYAMLSample schema Table 9.5. HTTP responses HTTP code Response body 200 - OK ConsoleYAMLSample schema 201 - Created ConsoleYAMLSample schema 202 - Accepted ConsoleYAMLSample schema 401 - Unauthorized Empty 9.2.2. /apis/console.openshift.io/v1/consoleyamlsamples/{name} Table 9.6. Global path parameters Parameter Type Description name string name of the ConsoleYAMLSample HTTP method DELETE Description delete a ConsoleYAMLSample Table 9.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleYAMLSample Table 9.9. HTTP responses HTTP code Response body 200 - OK ConsoleYAMLSample schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleYAMLSample Table 9.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.11. HTTP responses HTTP code Response body 200 - OK ConsoleYAMLSample schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleYAMLSample Table 9.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.13. Body parameters Parameter Type Description body ConsoleYAMLSample schema Table 9.14. HTTP responses HTTP code Response body 200 - OK ConsoleYAMLSample schema 201 - Created ConsoleYAMLSample schema 401 - Unauthorized Empty
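To make the spec fields above more concrete, the following is a minimal sketch of a ConsoleYAMLSample manifest applied from a heredoc; the resource name, title, description, and the embedded Job sample are illustrative assumptions rather than values taken from this reference.

$ oc apply -f - <<'EOF'
apiVersion: console.openshift.io/v1
kind: ConsoleYAMLSample
metadata:
  name: example-job-sample       # illustrative name, not defined by the API reference
spec:
  targetResource:                # the sample is offered when creating this resource kind
    apiVersion: batch/v1
    kind: Job
  title: Example Job
  description: A minimal Job that runs one pod to completion.
  snippet: false                 # full resource definition, not a cursor-insertable fragment
  yaml: |
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: example-job
    spec:
      template:
        spec:
          containers:
          - name: example
            image: registry.access.redhat.com/ubi9/ubi
            command: ["sleep", "30"]
          restartPolicy: Never
EOF

Equivalently, the same object could be sent as a POST to the /apis/console.openshift.io/v1/consoleyamlsamples endpoint listed above; the endpoints show that ConsoleYAMLSample is cluster-scoped, so no namespace is set in metadata.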
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/console_apis/consoleyamlsample-console-openshift-io-v1
Chapter 6. Uninstalling a cluster on Nutanix
Chapter 6. Uninstalling a cluster on Nutanix You can remove a cluster that you deployed to Nutanix. 6.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: $ ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info where <installation_directory> is the path to the directory that you stored the installation files in, and where you can specify warn, debug, or error instead of info to view different details. Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
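As a worked sketch of this procedure, the following assumes the installation artifacts live in a directory named nutanix-cluster (an illustrative path, not one defined by this document):

$ ls nutanix-cluster/metadata.json     # confirm the cluster definition files are present
$ ./openshift-install destroy cluster \
    --dir nutanix-cluster \
    --log-level debug                  # or warn, error, or info for a different level of detail
$ rm -rf nutanix-cluster               # optional: remove the installation directory afterwards

The destroy command reads metadata.json from the specified directory to identify the cluster to delete, so keep that directory intact until the command completes.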
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_nutanix/uninstalling-cluster-nutanix