title | content | commands | url
---|---|---|---
Chapter 1. Preparing to deploy OpenShift Data Foundation | Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using local storage devices, you can create internal cluster resources. This approach internally provisions base services so that all the applications can access additional storage classes. Before you begin the deployment of Red Hat OpenShift Data Foundation using local storage, ensure that you meet the resource requirements. See Requirements for installing OpenShift Data Foundation using local storage devices. Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS), follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To learn how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions. If the Token authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Token authentication using KMS. If the Kubernetes authentication method is selected for encryption, refer to Enabling cluster-wide encryption with KMS using the Kubernetes authentication method. Ensure that you are using signed certificates on your vault servers. After you have addressed the above, perform the following steps: Install the Local Storage Operator. Install the Red Hat OpenShift Data Foundation Operator. Create the OpenShift Data Foundation cluster on bare metal. 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached storage devices on each of them. Each of the three selected nodes must have at least one raw block device available; OpenShift Data Foundation uses one or more of these available raw block devices. The devices that you use must be empty; the disks must not contain any remaining Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs). For more information, see the Resource requirements section in the Planning guide. Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To learn in detail how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions. For detailed disaster recovery solution requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in the Red Hat Advanced Cluster Management for Kubernetes documentation. Arbiter stretch cluster requirements In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This solution is currently intended for on-premises OpenShift Container Platform deployments within a single data center. It is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for a no-data-loss DR solution deployed over multiple data centers with low-latency networks. 
To learn in detail how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions. Note You cannot enable Flexible scaling and Arbiter at the same time because they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. In an Arbiter cluster, you must add at least one node in each of the two data zones. Compact mode requirements You can install OpenShift Data Foundation on a three-node OpenShift compact bare-metal cluster, where all the workloads run on three master nodes; there are no worker or storage nodes. To configure OpenShift Container Platform in compact mode, see the Configuring a three-node cluster section of the Installing guide in the OpenShift Container Platform documentation, and Delivering a Three-node Architecture for Edge Deployments. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/preparing_to_deploy_openshift_data_foundation |
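Before installing the operators, it is worth confirming on each selected node that the intended disks really are empty. The following is a minimal sketch, not part of the official procedure; /dev/sdb is a placeholder for your raw block device:

```bash
# List block devices and any filesystem signatures they carry
lsblk -f

# Confirm that no LVM metadata (PVs, VGs, or LVs) remains
pvs && vgs && lvs

# Show, without erasing, any signatures left on the candidate device
wipefs --no-act /dev/sdb
```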
Chapter 5. Performing the upgrade | Chapter 5. Performing the upgrade After you have completed the preparatory steps and reviewed and resolved the problems found in the pre-upgrade report, you can perform the in-place upgrade on your system. 5.1. Performing the upgrade from RHEL 8.10 to RHEL 9.4 and 9.5 This procedure lists the steps required to perform the upgrade by using the Leapp utility. Prerequisites The steps listed in Preparing for the upgrade have been completed, including a full system backup. The steps listed in Reviewing the pre-upgrade report have been completed and all reported issues resolved. Procedure On your RHEL 8 system, start the upgrade process: leapp upgrade If you are using custom repositories from the /etc/yum.repos.d/ directory for the upgrade, enable the selected repositories as follows: leapp upgrade --enablerepo <repository_id1> --enablerepo <repository_id2> If you are upgrading without RHSM or using RHUI, add the --no-rhsm option. If you are upgrading by using an ISO image, add the --no-rhsm and --iso <file_path> options. Replace <file_path> with the file path to the saved ISO image, for example /home/rhel9.iso. If you have an Extended Upgrade Support (EUS), Advanced Update Support (AUS), or Update Services for SAP Solutions (E4S) (Red Hat Knowledgebase) subscription, add the --channel channel option. Replace channel with the value you used with the leapp preupgrade command, for example, eus, aus, or e4s. Note that you must use the same value with the --channel option in both the leapp preupgrade and leapp upgrade commands. If you are using RHEL for Real Time or Real Time for Network Functions Virtualization (NFV) in your Red Hat OpenStack Platform, enable the required repository by using the --enablerepo option. For example: leapp upgrade --enablerepo rhel-9-for-x86_64-rt-rpms For more information, see Configuring Real-Time Compute. At the beginning of the upgrade process, Leapp performs the pre-upgrade phase described in Reviewing the pre-upgrade report. If the system is upgradable, Leapp downloads the necessary data and prepares an RPM transaction for the upgrade. If your system does not meet the parameters for a reliable upgrade, Leapp terminates the upgrade process and provides a record describing the issue and a recommended solution in the /var/log/leapp/leapp-report.txt file. For more information, see Troubleshooting. Manually reboot the system: reboot In this phase, the system boots into a RHEL 9-based initial RAM disk image, initramfs. Leapp upgrades all packages and automatically reboots to the RHEL 9 system. Alternatively, you can run the leapp upgrade command with the --reboot option and skip this manual step. If a failure occurs, investigate logs and known issues as described in Troubleshooting. Log in to the RHEL 9 system and verify its state as described in Verifying the post-upgrade state. Perform all post-upgrade tasks described in the upgrade report and in Performing post-upgrade tasks. 5.2. Performing the upgrade from RHEL 8.8 to RHEL 9.2 This procedure lists the steps required to perform the upgrade from RHEL 8.8 to RHEL 9.2 by using the Leapp utility. Prerequisites The steps listed in Preparing for the upgrade have been completed, including a full system backup. The steps listed in Reviewing the pre-upgrade report have been completed and all reported issues resolved. Procedure On your RHEL 8 system, start the upgrade process: leapp upgrade If you are using custom repositories from the /etc/yum.repos.d/ directory for the upgrade, enable the selected repositories as follows: leapp upgrade --enablerepo <repository_id1> --enablerepo <repository_id2> If you are upgrading without RHSM or using RHUI, add the --no-rhsm option. 
If you are upgrading by using an ISO image, add the --no-rhsm and --iso <file_path> options. Replace <file_path> with the file path to the saved ISO image, for example /home/rhel9.iso. If you have an Extended Upgrade Support (EUS), Advanced Update Support (AUS), or Update Services for SAP Solutions (E4S) (Red Hat Knowledgebase) subscription, add the --channel channel option. Replace channel with the value you used with the leapp preupgrade command, for example, eus, aus, or e4s. Note that you must use the same value with the --channel option in both the leapp preupgrade and leapp upgrade commands. At the beginning of the upgrade process, Leapp performs the pre-upgrade phase described in Reviewing the pre-upgrade report. If the system is upgradable, Leapp downloads the necessary data and prepares an RPM transaction for the upgrade. If your system does not meet the parameters for a reliable upgrade, Leapp terminates the upgrade process and provides a record describing the issue and a recommended solution in the /var/log/leapp/leapp-report.txt file. For more information, see Troubleshooting. Manually reboot the system: reboot In this phase, the system boots into a RHEL 9-based initial RAM disk image, initramfs. Leapp upgrades all packages and automatically reboots to the RHEL 9 system. Alternatively, you can run the leapp upgrade command with the --reboot option and skip this manual step. If a failure occurs, investigate logs and known issues as described in Troubleshooting. Log in to the RHEL 9 system and verify its state as described in Verifying the post-upgrade state. Perform all post-upgrade tasks described in the upgrade report and in Performing post-upgrade tasks. | [
"leapp upgrade",
"leapp upgrade --enablerepo <repository_id1> --enablerepo <repository_id2>",
"leapp upgrade --enablerepo rhel-9-for-x86_64-rt-rpms",
"reboot",
"leapp upgrade",
"leapp upgrade --enablerepo <repository_id1> --enablerepo <repository_id2>",
"reboot"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/upgrading_from_rhel_8_to_rhel_9/performing-the-upgrade_upgrading-from-rhel-8-to-rhel-9 |
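Taken together, the procedure above reduces to a short command sequence. The following is a minimal sketch for a system on an EUS channel; the --channel value is an example, and options such as --no-rhsm, --iso, or --enablerepo are added only as your setup requires:

```bash
# Review and resolve everything in the pre-upgrade report first
less /var/log/leapp/leapp-report.txt

# Start the in-place upgrade (same --channel value as leapp preupgrade)
leapp upgrade --channel eus

# Boot into the RHEL 9-based initramfs that performs the package transaction
reboot
```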
Chapter 9. Using the web console for managing virtual machines | Chapter 9. Using the web console for managing virtual machines To manage virtual machines in a graphical interface, you can use the Virtual Machines pane in the web console. The following sections describe the web console's virtualization management capabilities and provide instructions for using them. 9.1. Overview of virtual machine management using the web console The web console is a web-based interface for system administration. With the installation of a web console plug-in, the web console can be used to manage virtual machines (VMs) on the servers to which it can connect. It provides a graphical view of the VMs on a connected host system, and makes it easy to monitor system resources and adjust configurations. Using the web console for VM management, you can do the following: Create and delete VMs Install operating systems on VMs Run and shut down VMs View information about VMs Create and attach disks to VMs Configure virtual CPU settings for VMs Manage virtual network interfaces Interact with VMs using VM consoles 9.2. Setting up the web console to manage virtual machines Before using the web console to manage VMs, you must install the web console virtual machine plug-in. Prerequisites Ensure that the web console is installed on your machine. Procedure Install the cockpit-machines plug-in. If the installation is successful, Virtual Machines appears in the web console side menu. 9.3. Creating virtual machines and installing guest operating systems using the web console The following sections provide information on how to use the web console to create virtual machines (VMs) and install operating systems on them. 9.3.1. Creating virtual machines using the web console To create a VM on the host machine to which the web console is connected, follow the instructions below. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Before creating VMs, consider the amount of system resources you need to allocate to your VMs, such as disk space, RAM, or CPUs. The recommended values can vary significantly depending on the intended tasks and workload of the VMs. A locally available operating system (OS) installation source, which can be one of the following: An ISO image of an installation medium A disk image of an existing guest installation Procedure Click Create VM in the Virtual Machines interface of the web console. The Create New Virtual Machine dialog appears. Enter the basic configuration of the virtual machine you want to create. Connection - The connection to the host to be used by the virtual machine. Name - The name of the virtual machine. Installation Source Type - The type of the installation source: Filesystem, URL Installation Source - The path or URL that points to the installation source. OS Vendor - The vendor of the virtual machine's operating system. Operating System - The virtual machine's operating system. Memory - The amount of memory with which to configure the virtual machine. Storage Size - The amount of storage space with which to configure the virtual machine. Immediately Start VM - Whether or not the virtual machine starts immediately after it is created. Click Create. The virtual machine is created. If the Immediately Start VM checkbox is selected, the VM immediately starts and begins installing the guest operating system. 
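The Create New Virtual Machine dialog drives libvirt on the host. If you prefer the command line, a roughly equivalent virt-install invocation is sketched below; the VM name, ISO path, resource sizes, and OS variant are placeholder values, not what the web console runs verbatim:

```bash
# Create and start a VM from a local ISO; all values are examples
virt-install \
  --name example-vm \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/install.iso \
  --os-variant rhel7.0
```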
You must install the operating system the first time the virtual machine is run. Additional resources For information on installing an operating system on a virtual machine, see Section 9.3.2, "Installing operating systems using the web console". 9.3.2. Installing operating systems using the web console The first time a virtual machine loads, you must install an operating system on it. Prerequisites Before using the web console to manage virtual machines, you must install the web console virtual machine plug-in. A VM on which to install an operating system. Procedure Click Install. The installation routine of the operating system runs in the virtual machine console. Note If the Immediately Start VM checkbox in the Create New Virtual Machine dialog is checked, the installation routine of the operating system starts automatically when the virtual machine is created. Note If the installation routine fails, the virtual machine must be deleted and recreated. 9.4. Deleting virtual machines using the web console You can delete a virtual machine and its associated storage files from the host to which the web console is connected. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure In the Virtual Machines interface of the web console, click the name of the VM you want to delete. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Delete. A confirmation dialog appears. Optional: To delete all or some of the storage files associated with the virtual machine, select the checkboxes next to the storage files you want to delete. Click Delete. The virtual machine and any selected associated storage files are deleted. 9.5. Powering up and powering down virtual machines using the web console Using the web console, you can run, shut down, and restart virtual machines. You can also send a non-maskable interrupt to a virtual machine that is unresponsive. 9.5.1. Powering up virtual machines in the web console If a VM is in the shut off state, you can start it using the web console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure Click a row with the name of the virtual machine you want to start. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Run. The virtual machine starts. Additional resources For information on shutting down a virtual machine, see Section 9.5.2, "Powering down virtual machines in the web console". For information on restarting a virtual machine, see Section 9.5.3, "Restarting virtual machines using the web console". For information on sending a non-maskable interrupt to a virtual machine, see Section 9.5.4, "Sending non-maskable interrupts to VMs using the web console". 9.5.2. Powering down virtual machines in the web console If a virtual machine is in the running state, you can shut it down using the web console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure Click a row with the name of the virtual machine you want to shut down. 
The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Shut Down. The virtual machine shuts down. Note If the virtual machine does not shut down, click the arrow next to the Shut Down button and select Force Shut Down. Additional resources For information on starting a virtual machine, see Section 9.5.1, "Powering up virtual machines in the web console". For information on restarting a virtual machine, see Section 9.5.3, "Restarting virtual machines using the web console". For information on sending a non-maskable interrupt to a virtual machine, see Section 9.5.4, "Sending non-maskable interrupts to VMs using the web console". 9.5.3. Restarting virtual machines using the web console If a virtual machine is in the running state, you can restart it using the web console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure Click a row with the name of the virtual machine you want to restart. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Restart. The virtual machine shuts down and restarts. Note If the virtual machine does not restart, click the arrow next to the Restart button and select Force Restart. Additional resources For information on starting a virtual machine, see Section 9.5.1, "Powering up virtual machines in the web console". For information on shutting down a virtual machine, see Section 9.5.2, "Powering down virtual machines in the web console". For information on sending a non-maskable interrupt to a virtual machine, see Section 9.5.4, "Sending non-maskable interrupts to VMs using the web console". 9.5.4. Sending non-maskable interrupts to VMs using the web console Sending a non-maskable interrupt (NMI) can cause an unresponsive running VM to respond or shut down. For example, you can send the Ctrl + Alt + Del NMI to a VM that is not responding. Prerequisites Before using the web console to manage VMs, you must install the web console virtual machine plug-in. Procedure Click a row with the name of the virtual machine to which you want to send an NMI. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click the arrow next to the Shut Down button and select Send Non-Maskable Interrupt. An NMI is sent to the virtual machine. Additional resources For information on starting a virtual machine, see Section 9.5.1, "Powering up virtual machines in the web console". For information on restarting a virtual machine, see Section 9.5.3, "Restarting virtual machines using the web console". For information on shutting down a virtual machine, see Section 9.5.2, "Powering down virtual machines in the web console". 
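All of the power operations above go through libvirt, so they have direct virsh equivalents on the host. A minimal sketch, assuming a VM named example-vm (a placeholder):

```bash
virsh start example-vm       # Run
virsh shutdown example-vm    # Shut Down (graceful, via the guest)
virsh destroy example-vm     # Force Shut Down
virsh reboot example-vm      # Restart
virsh inject-nmi example-vm  # Send Non-Maskable Interrupt
```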
9.6. Viewing virtual machine information using the web console Using the web console, you can view information about the virtual storage and VMs to which the web console is connected. 9.6.1. Viewing a virtualization overview in the web console The following describes how to view an overview of the available virtual storage and the VMs to which the web console session is connected. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure To view information about the available storage and the virtual machines to which the web console is attached: Click Virtual Machines in the web console's side menu. Information about the available storage and the virtual machines to which the web console session is connected appears. The information includes the following: Storage Pools - The number of storage pools that can be accessed by the web console and their state. Networks - The number of networks that can be accessed by the web console and their state. Name - The name of the virtual machine. Connection - The type of libvirt connection, system or session. State - The state of the virtual machine. Additional resources For information on viewing detailed information about the storage pools the web console session can access, see Section 9.6.2, "Viewing storage pool information using the web console". For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the web console". For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the web console". For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the web console". For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the web console". 9.6.2. Viewing storage pool information using the web console The following describes how to view detailed information about the storage pools that the web console session can access. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure To view storage pool information: Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools. The information includes the following: Name - The name of the storage pool. Size - The size of the storage pool. Connection - The connection used to access the storage pool. State - The state of the storage pool. Click a row with the name of the storage pool whose information you want to see. The row expands to reveal the Overview pane with the following information about the selected storage pool: Path - The path to the storage pool. Persistent - Whether or not the storage pool is persistent. Autostart - Whether or not the storage pool starts automatically. Type - The storage pool type. To view a list of storage volumes created from the storage pool, click Storage Volumes. The Storage Volumes pane appears, showing a list of configured storage volumes with their sizes and the amount of space used. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the web console". 
For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the web console". For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the web console". For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the web console". For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the web console". 9.6.3. Viewing basic virtual machine information in the web console The following describes how to view basic information about a selected virtual machine to which the web console session is connected. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure To view basic information about a selected virtual machine: Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Note If another tab is selected, click Overview. The information includes the following: Memory - The amount of memory assigned to the virtual machine. Emulated Machine - The machine type emulated by the virtual machine. vCPUs - The number of virtual CPUs configured for the virtual machine. Note To see more detailed virtual CPU information and to configure the virtual CPUs for a virtual machine, see Section 9.7, "Managing virtual CPUs using the web console". Boot Order - The boot order configured for the virtual machine. CPU Type - The architecture of the virtual CPUs configured for the virtual machine. Autostart - Whether or not autostart is enabled for the virtual machine. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the web console". For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the web console". For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the web console". For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the web console". For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the web console". 
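The same basic details that the Overview pane shows can also be read from the host command line. A minimal sketch, assuming a VM named example-vm:

```bash
virsh dominfo example-vm    # state, memory, vCPUs, autostart
virsh vcpucount example-vm  # current and maximum vCPU counts
```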
9.6.4. Viewing virtual machine resource usage in the web console The following describes how to view resource usage information about a selected virtual machine to which the web console session is connected. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure To view information about the memory and virtual CPU usage of a selected virtual machine: Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Usage. The Usage pane appears with information about the memory and virtual CPU usage of the virtual machine. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the web console". For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the web console". For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the web console". For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the web console". For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the web console". 9.6.5. Viewing virtual machine disk information in the web console The following describes how to view disk information about a virtual machine to which the web console session is connected. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure To view disk information about a selected virtual machine: Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks. The Disks pane appears with information about the disks assigned to the virtual machine. The information includes the following: Device - The device type of the disk. Target - The controller type of the disk. Used - The amount of the disk that is used. Capacity - The size of the disk. Bus - The bus type of the disk. Readonly - Whether or not the disk is read-only. Source - The disk device or file. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the web console". For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the web console". For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the web console". 
For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the web console". For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the web console". 9.6.6. Viewing virtual NIC information in the web console The following describes how to view information about the virtual network interface cards (vNICs) on a selected virtual machine: Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure To view information about the virtual network interface cards (NICs) on a selected virtual machine: Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Networks. The Networks pane appears with information about the virtual NICs configured for the virtual machine. The information includes the following: Type - The type of network interface for the virtual machine. Types include direct, network, bridge, ethernet, hostdev, mcast, user, and server. Model type - The model of the virtual NIC. MAC Address - The MAC address of the virtual NIC. Source - The source of the network interface. This is dependent on the network type. State - The state of the virtual NIC. To edit the virtual network settings, click Edit. The Virtual Network Interface Settings dialog appears. Change the network type and model. Click Save. The network interface is modified. Note When the virtual machine is running, changes to the virtual network interface settings take effect only after the virtual machine is stopped and restarted. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the web console". For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the web console". For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the web console". For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the web console". For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the web console". 
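The Networks pane data has a command-line counterpart as well. A minimal sketch, assuming a VM named example-vm:

```bash
virsh domiflist example-vm  # interface type, source, model, and MAC address
virsh domifaddr example-vm  # addresses, where the guest reports them
```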
9.7. Managing virtual CPUs using the web console Using the web console, you can manage the virtual CPUs configured for the virtual machines to which the web console is connected. You can view information about the virtual machines, and you can also configure the virtual CPUs for virtual machines. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure Click a row with the name of the virtual machine for which you want to view and configure virtual CPU parameters. The row expands to reveal the Overview pane with basic information about the selected virtual machine, including the number of virtual CPUs, and controls for shutting down and deleting the virtual machine. Click the number of vCPUs in the Overview pane. The vCPU Details dialog appears. Note The warning in the vCPU Details dialog appears only after the virtual CPU settings are changed. Configure the virtual CPUs for the selected virtual machine. vCPU Count - Enter the number of virtual CPUs for the virtual machine. Note The vCPU count cannot be greater than the vCPU Maximum. vCPU Maximum - Enter the maximum number of virtual CPUs that can be configured for the virtual machine. Sockets - Select the number of sockets to expose to the virtual machine. Cores per socket - Select the number of cores for each socket to expose to the virtual machine. Threads per core - Select the number of threads for each core to expose to the virtual machine. Click Apply. The virtual CPUs for the virtual machine are configured. Note When the virtual machine is running, changes to the virtual CPU settings take effect only after the virtual machine is stopped and restarted. 
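The vCPU Details dialog corresponds to libvirt's vCPU controls. A minimal command-line sketch, assuming a VM named example-vm; as in the console, persistent changes take effect after the virtual machine is stopped and restarted:

```bash
# Raise the maximum that the count can later be set to (applied on next start)
virsh setvcpus example-vm 8 --maximum --config

# Set the persistent vCPU count
virsh setvcpus example-vm 4 --config
```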
9.8. Managing virtual machine disks using the web console Using the web console, you can manage the disks configured for the virtual machines to which the web console is connected. You can: View information about disks. Create and attach new virtual disks to virtual machines. Attach existing virtual disks to virtual machines. Detach virtual disks from virtual machines. 9.8.1. Viewing virtual machine disk information in the web console The following describes how to view disk information about a virtual machine to which the web console session is connected. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure To view disk information about a selected virtual machine: Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks. The Disks pane appears with information about the disks assigned to the virtual machine. The information includes the following: Device - The device type of the disk. Target - The controller type of the disk. Used - The amount of the disk that is used. Capacity - The size of the disk. Bus - The bus type of the disk. Readonly - Whether or not the disk is read-only. Source - The disk device or file. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the web console". For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the web console". For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the web console". For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the web console". For information on viewing virtual network interface card information about a selected virtual machine to which the web console session is connected, see Section 9.6.6, "Viewing virtual NIC information in the web console". 9.8.2. Adding new disks to virtual machines using the web console You can add new disks to virtual machines by creating a new disk in a storage pool and attaching it to a virtual machine using the web console. Note You can only use directory-type storage pools when creating new disks for virtual machines using the web console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure Click a row with the name of the virtual machine for which you want to create and attach a new disk. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks. The Disks pane appears with information about the disks configured for the virtual machine. Click Add Disk. The Add Disk dialog appears. Ensure that the Create New option button is selected. Configure the new disk. Pool - Select the storage pool from which the virtual disk will be created. Target - Select a target for the virtual disk that will be created. Name - Enter a name for the virtual disk that will be created. Size - Enter the size and select the unit (MiB or GiB) of the virtual disk that will be created. Format - Select the format for the virtual disk that will be created. Supported types: qcow2, raw Persistence - Whether or not the virtual disk will be persistent. If checked, the virtual disk is persistent. If not checked, the virtual disk is transient. Note Transient disks can only be added to VMs that are running. Click Add. The virtual disk is created and connected to the virtual machine. Additional resources For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.8.1, "Viewing virtual machine disk information in the web console". For information on attaching existing disks to virtual machines, see Section 9.8.3, "Attaching existing disks to virtual machines using the web console". For information on detaching disks from virtual machines, see Section 9.8.4, "Detaching disks from virtual machines". 9.8.3. Attaching existing disks to virtual machines using the web console The following describes how to attach existing disks to a virtual machine using the web console. Note You can only attach directory-type storage pools to virtual machines using the web console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure Click a row with the name of the virtual machine to which you want to attach an existing disk. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks. The Disks pane appears with information about the disks configured for the virtual machine. Click Add Disk. The Add Disk dialog appears. Click the Use Existing option button. 
The appropriate configuration fields appear in the Add Disk dialog. Configure the disk for the virtual machine. Pool - Select the storage pool from which the virtual disk will be attached. Target - Select a target for the virtual disk that will be attached. Volume - Select the storage volume that will be attached. Persistence - Check to make the virtual disk persistent. Clear to make the virtual disk transient. Click Add. The selected virtual disk is attached to the virtual machine. Additional resources For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.8.1, "Viewing virtual machine disk information in the web console". For information on creating new disks and attaching them to virtual machines, see Section 9.8.2, "Adding new disks to virtual machines using the web console". For information on detaching disks from virtual machines, see Section 9.8.4, "Detaching disks from virtual machines". 9.8.4. Detaching disks from virtual machines The following describes how to detach disks from virtual machines using the web console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure Click a row with the name of the virtual machine from which you want to detach an existing disk. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Disks. The Disks pane appears with information about the disks configured for the virtual machine. Click the detach control next to the disk you want to detach from the virtual machine. The virtual disk is detached from the virtual machine. Caution There is no confirmation before detaching the disk from the virtual machine. Additional resources For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.8.1, "Viewing virtual machine disk information in the web console". For information on creating new disks and attaching them to virtual machines, see Section 9.8.2, "Adding new disks to virtual machines using the web console". For information on attaching existing disks to virtual machines, see Section 9.8.3, "Attaching existing disks to virtual machines using the web console". 
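The Add Disk and detach operations above also have virsh counterparts, an alternative path rather than what the web console runs verbatim. A minimal sketch; the VM name, image path, and target are placeholders:

```bash
# Attach an existing qcow2 image as target vdb, persisting across restarts
virsh attach-disk example-vm /var/lib/libvirt/images/data.qcow2 vdb \
  --driver qemu --subdriver qcow2 --persistent

# Detach the same disk; as in the console, there is no confirmation prompt
virsh detach-disk example-vm vdb --persistent
```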
9.9. Using the web console for managing virtual machine vNICs Using the web console, you can manage the virtual network interface cards (vNICs) configured for the virtual machines to which the web console is connected. You can view information about vNICs, and you can also connect and disconnect vNICs from virtual machines. 9.9.1. Viewing virtual NIC information in the web console The following describes how to view information about the virtual network interface cards (vNICs) on a selected virtual machine: Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure To view information about the virtual network interface cards (NICs) on a selected virtual machine: Click a row with the name of the virtual machine whose information you want to see. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Networks. The Networks pane appears with information about the virtual NICs configured for the virtual machine. The information includes the following: Type - The type of network interface for the virtual machine. Types include direct, network, bridge, ethernet, hostdev, mcast, user, and server. Model type - The model of the virtual NIC. MAC Address - The MAC address of the virtual NIC. Source - The source of the network interface. This is dependent on the network type. State - The state of the virtual NIC. To edit the virtual network settings, click Edit. The Virtual Network Interface Settings dialog appears. Change the network type and model. Click Save. The network interface is modified. Note When the virtual machine is running, changes to the virtual network interface settings take effect only after the virtual machine is stopped and restarted. Additional resources For information on viewing information about all of the virtual machines to which the web console session is connected, see Section 9.6.1, "Viewing a virtualization overview in the web console". For information on viewing information about the storage pools to which the web console session is connected, see Section 9.6.2, "Viewing storage pool information using the web console". For information on viewing basic information about a selected virtual machine to which the web console session is connected, see Section 9.6.3, "Viewing basic virtual machine information in the web console". For information on viewing resource usage for a selected virtual machine to which the web console session is connected, see Section 9.6.4, "Viewing virtual machine resource usage in the web console". For information on viewing disk information about a selected virtual machine to which the web console session is connected, see Section 9.6.5, "Viewing virtual machine disk information in the web console". 9.9.2. Connecting virtual NICs in the web console Using the web console, you can reconnect disconnected virtual network interface cards (NICs) configured for a selected virtual machine. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure Click a row with the name of the virtual machine whose virtual NIC you want to connect. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Networks. The Networks pane appears with information about the virtual NICs configured for the virtual machine. Click Plug in the row of the virtual NIC you want to connect. The selected virtual NIC connects to the virtual machine. 9.9.3. Disconnecting virtual NICs in the web console Using the web console, you can disconnect the virtual network interface cards (NICs) connected to a selected virtual machine. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure Click a row with the name of the virtual machine whose virtual NIC you want to disconnect. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Networks. The Networks pane appears with information about the virtual NICs configured for the virtual machine. Click Unplug in the row of the virtual NIC you want to disconnect. 
The selected virtual NIC disconnects from the virtual machine. 9.10. Interacting with virtual machines using the web console To interact with a VM in the web console, you need to connect to the VM's console. Using the web console, you can view a virtual machine's consoles, which include both graphical and serial consoles. To interact with the VM's graphical interface in the web console, use the graphical console in the web console. To interact with the VM's graphical interface in a remote viewer, use the graphical console in remote viewers. To interact with the VM's CLI in the web console, use the serial console in the web console. 9.10.1. Viewing the virtual machine graphical console in the web console You can view the graphical console of a selected virtual machine in the web console. The virtual machine console shows the graphical output of the virtual machine. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Ensure that both the host and the VM support a graphical interface. Procedure Click a row with the name of the virtual machine whose graphical console you want to view. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Consoles. The graphical console appears in the web interface. You can interact with the virtual machine console using the mouse and keyboard in the same manner you interact with a real machine. The display in the virtual machine console reflects the activities being performed on the virtual machine. Note The server on which the web console is running can intercept specific key combinations, such as Ctrl + Alt + F1 , preventing them from being sent to the virtual machine. To send such key combinations, click the Send key menu and select the key sequence to send. For example, to send the Ctrl + Alt + F1 combination to the virtual machine, click the Send key menu and select the Ctrl+Alt+F1 menu entry. Additional Resources For details on viewing the graphical console in a remote viewer, see Section 9.10.2, "Viewing virtual machine consoles in remote viewers using the web console". For details on viewing the serial console in the web console, see Section 9.10.3, "Viewing the virtual machine serial console in the web console". 9.10.2. Viewing virtual machine consoles in remote viewers using the web console You can view the virtual machine's consoles in a remote viewer. The connection can be made by the web console or manually. 9.10.2.1. Viewing the graphical console in a remote viewer You can view the graphical console of a selected virtual machine in a remote viewer. The virtual machine console shows the graphical output of the virtual machine. Note You can launch Virt Viewer from within the web console. Other VNC and SPICE remote viewers can be launched manually. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Ensure that both the host and the VM support a graphical interface. Before you can view the graphical console in Virt Viewer, Virt Viewer must be installed on the machine to which the web console is connected. To view information on installing Virt Viewer, select the Graphics Console in Desktop Viewer Console Type and click More Information in the Consoles window. 
Note Some browser extensions and plug-ins do not allow the web console to open Virt Viewer. Procedure Click a row with the name of the virtual machine whose graphical console you want to view. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Consoles. The graphical console appears in the web interface. Select the Graphics Console in Desktop Viewer Console Type. Click Launch Remote Viewer. The graphical console appears in Virt Viewer. You can interact with the virtual machine console using the mouse and keyboard in the same manner you interact with a real machine. The display in the virtual machine console reflects the activities being performed on the virtual machine. Note The server on which the web console is running can intercept specific key combinations, such as Ctrl + Alt + F1 , preventing them from being sent to the virtual machine. To send such key combinations, click the Send key menu and select the key sequence to send. For example, to send the Ctrl + Alt + F1 combination to the virtual machine, click the Send key menu and select the Ctrl+Alt+F1 menu entry. Additional Resources For details on viewing the graphical console in a remote viewer using a manual connection, see Section 9.10.2.2, "Viewing the graphical console in a remote viewer connecting manually". For details on viewing the graphical console in the web console, see Section 9.10.1, "Viewing the virtual machine graphical console in the web console". For details on viewing the serial console in the web console, see Section 9.10.3, "Viewing the virtual machine serial console in the web console". 9.10.2.2. Viewing the graphical console in a remote viewer connecting manually You can view the graphical console of a selected virtual machine in a remote viewer. The virtual machine console shows the graphical output of the virtual machine. The web interface provides the information necessary to launch any SPICE or VNC viewer to view the virtual machine console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Before you can view the graphical console in a remote viewer, ensure that a SPICE or VNC viewer application is installed on the machine to which the web console is connected. To view information on installing Virt Viewer, select the Graphics Console in Desktop Viewer Console Type and click More Information in the Consoles window. Procedure You can view the virtual machine graphics console in any SPICE or VNC viewer application. Click a row with the name of the virtual machine whose graphical console you want to view. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Consoles. The graphical console appears in the web interface. Select the Graphics Console in Desktop Viewer Console Type. The following Manual Connection information appears on the right side of the pane. Enter the information in the SPICE or VNC viewer. For more information, see the documentation for the SPICE or VNC viewer. Additional Resources For details on viewing the graphical console in a remote viewer using the web console to make the connection, see Section 9.10.2.1, "Viewing the graphical console in a remote viewer". 
For details on viewing the graphical console in the web console, see Section 9.10.1, "Viewing the virtual machine graphical console in the web console". For details on viewing the serial console in the web console, see Section 9.10.3, "Viewing the virtual machine serial console in the web console". 9.10.3. Viewing the virtual machine serial console in the web console You can view the serial console of a selected virtual machine in the web console. This is useful when the host machine or the VM is not configured with a graphical interface. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. Procedure Click a row with the name of the virtual machine whose serial console you want to view. The row expands to reveal the Overview pane with basic information about the selected virtual machine and controls for shutting down and deleting the virtual machine. Click Consoles. The graphical console appears in the web interface. Select the Serial Console Console Type. The serial console appears in the web interface. You can disconnect and reconnect the serial console from the virtual machine. To disconnect the serial console from the virtual machine, click Disconnect. To reconnect the serial console to the virtual machine, click Reconnect. Additional Resources For details on viewing the graphical console in the web console, see Section 9.10.1, "Viewing the virtual machine graphical console in the web console". For details on viewing the graphical console in a remote viewer, see Section 9.10.2, "Viewing virtual machine consoles in remote viewers using the web console". 9.11. Creating storage pools using the web console You can create storage pools using the web console. Prerequisites To be able to use the web console to manage virtual machines, you must install the web console virtual machine plug-in. If the web console plug-in is not installed, see Section 9.2, "Setting up the web console to manage virtual machines" for information about installing the web console virtual machine plug-in. Procedure Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools. Click Create Storage Pool. The Create Storage Pool dialog appears. Enter the following information in the Create Storage Pool dialog: Connection - The connection to the host to be used by the storage pool. Name - The name of the storage pool. Type - The type of the storage pool: Filesystem Directory, Network File System Target Path - The storage pool path on the host's file system. Startup - Whether or not the storage pool starts when the host boots. Click Create. The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool appears in the list of storage pools. Related information For information on viewing information about storage pools using the web console, see Section 9.6.2, "Viewing storage pool information using the web console". | [
"yum info cockpit Installed Packages Name : cockpit [...]",
"yum install cockpit-machines"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/managing_systems_using_the_rhel_7_web_console/using-the-rhel-8-web-console-for-managing-vms_system-management-using-the-rhel-7-web-console |
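The same kind of directory-backed storage pool can also be created from the command line with virsh; this is a hedged sketch rather than a step from this chapter, and the pool name and target path are illustrative assumptions:

# virsh pool-define-as pool1 dir --target /var/lib/libvirt/images/pool1
# virsh pool-build pool1
# virsh pool-start pool1
# virsh pool-autostart pool1
# virsh pool-list --all

pool-define-as records the persistent pool definition, pool-build creates the target directory, pool-start activates the pool, and pool-autostart makes it start when the host boots; pool-list --all confirms the result.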
Chapter 3. Red Hat build of OpenJDK 11.0.14.1 release notes Review the following release notes to understand changes from this Red Hat build of OpenJDK 11.0.14 patch release: 3.1. Resolved https://google.com connection issue The Red Hat build of OpenJDK 11.0.14.1 release resolves an issue that was identified when using the Java HTTP client, java.net.http.HttpClient , to connect to the https://google.com URL. This issue affected the Red Hat build of OpenJDK builds for both Microsoft Windows and RHEL. The initial Red Hat build of OpenJDK 11.0.14 release contained a regression that was introduced by improvements to the HTTP client. This regression caused both the :authority and the Host header fields to be sent in HTTP/2 requests, which some HTTP servers, such as Google's, reject. When you attempted to establish this connection, you would receive an exception message indicating that the Java HTTP client could not successfully communicate by using the HTTP/2 protocol. Example of an exception message when attempting to connect to https://google.com with java.net.http.HttpClient java.util.concurrent.ExecutionException: java.io.IOException: Received RST_STREAM: Protocol error The Red Hat build of OpenJDK 11.0.14.1 release resolves the issue by reverting to the original behavior of sending only the :authority header field in an HTTP/2 request. For more information about this issue and how it was resolved, see JDK-8218546 and the advisories related to the Red Hat build of OpenJDK 11.0.14.1 release. 3.2. Advisories related to the Red Hat build of OpenJDK 11.0.14.1 release The following advisories have been issued for the bug fixes and CVE fixes included in this release: RHBA-2022:0732 RHBA-2022:0733
"java.util.concurrent.ExecutionException: java.io.IOException: Received RST_STREAM: Protocol error"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.14/rn-openjdk-11-0-14-1-release-notes |
10.3. Red Hat Enterprise Linux hosts You can use a Red Hat Enterprise Linux 7 installation on capable hardware as a host. Red Hat Virtualization supports hosts running Red Hat Enterprise Linux 7 Server AMD64/Intel 64 version with Intel VT or AMD-V extensions. To use your Red Hat Enterprise Linux machine as a host, you must also attach the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions. Adding a host can take some time, as the platform completes the following steps: virtualization checks, installation of packages, and the creation of a bridge. Use the details view to monitor the process as the host and management system establish a connection. Optionally, you can install a Cockpit web interface for monitoring the host's resources and performing administrative tasks. The Cockpit web interface provides a graphical user interface for tasks that are performed before the host is added to the Red Hat Virtualization Manager, such as configuring networking and deploying a self-hosted engine, and can also be used to run terminal commands via the Terminal sub-tab. Important Third-party watchdogs should not be installed on Red Hat Enterprise Linux hosts, as they can interfere with the watchdog daemon provided by VDSM.
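As a hedged sketch of attaching the required subscriptions on the host before adding it (the pool ID is a placeholder you must look up, and the repository name is an assumption for illustration):

# subscription-manager register
# subscription-manager list --available
# subscription-manager attach --pool=<pool_id>
# subscription-manager repos --enable=rhel-7-server-rpms

Register the system, find a pool that provides the Red Hat Enterprise Linux Server and Red Hat Virtualization entitlements in the list --available output, attach it, and enable the base repository. The exact repositories to enable depend on your Red Hat Virtualization version; consult the installation guide for the authoritative list.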
B.8. Log tracepoints The tracepoints in this subsystem track blocks being added to and removed from the journal ( gfs2_pin ), as well as the time taken to commit the transactions to the log ( gfs2_log_flush ). This can be very useful when you are trying to debug journaling performance issues. The gfs2_log_blocks tracepoint keeps track of the reserved blocks in the log, which can help show, for example, if the log is too small for the workload. The gfs2_ail_flush tracepoint is similar to the gfs2_log_flush tracepoint in that it keeps track of the start and end of flushes of the AIL list. The AIL list contains buffers which have been through the log, but have not yet been written back in place, and it is periodically flushed in order to release more log space for use by the file system, or when a process requests a sync or fsync .
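These tracepoints are exposed through the standard kernel tracing interface. A minimal sketch of capturing log flush activity, assuming debugfs is mounted at /sys/kernel/debug and you are running as root:

# echo 1 > /sys/kernel/debug/tracing/events/gfs2/gfs2_log_flush/enable
# echo 1 > /sys/kernel/debug/tracing/events/gfs2/gfs2_pin/enable
# cat /sys/kernel/debug/tracing/trace_pipe

Each line emitted on trace_pipe records one event, so you can correlate gfs2_pin activity with the start and end times of gfs2_log_flush to estimate commit latency. Echo 0 into the same enable files to switch the tracepoints off again.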
Chapter 1. Understanding Red Hat OpenStack Platform Red Hat OpenStack Platform (RHOSP) provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud on top of Red Hat Enterprise Linux. It is a scalable, fault-tolerant platform for the development of cloud-enabled workloads. RHOSP delivers an integrated foundation to create, deploy, and scale a secure and reliable public or private OpenStack cloud. RHOSP is packaged so that you can create private, public, or hybrid cloud platforms from your available physical hardware. RHOSP clouds include the following components: Fully distributed object storage Persistent block-level storage Virtual machine provisioning engine and image storage Authentication and authorization mechanisms Integrated networking Web browser-based interface accessible to users and administrators The RHOSP IaaS cloud is implemented by a collection of interacting services that control its computing, storage, and networking resources. You can manage the cloud with a web-based interface to control, provision, and automate RHOSP resources. Additionally, an extensive API controls the RHOSP infrastructure, and this API is also available to end users of the cloud. 1.1. Advantages of using Red Hat OpenStack Platform You can use Red Hat OpenStack Platform to combine virtualization, networking, and storage based on your requirements. The following capabilities are some of the advantages of the Red Hat OpenStack Platform: You can create public, private, or hybrid clouds that you can scale up or down based on your requirements. You can deploy cloud-enabled workloads based on your needs. You can address customer demands in hours or minutes instead of weeks or days, without sacrificing security, performance, or budget. You can implement stability and agility for your cloud environments, using hybrid cloud management, monitoring, and reporting with Red Hat CloudForms. 1.2. Relationship between RDO and OpenStack Foundation OpenStack Foundation promotes the global development, distribution, and adoption of the OpenStack cloud operating system. The goal of the OpenStack Foundation is to serve developers, users, and the entire ecosystem globally by providing a set of shared resources to grow the footprint of public and private OpenStack clouds, enable technology vendors targeting the platform, and assist developers to produce the best cloud software in the industry. RPM Distribution of OpenStack (RDO) is a free, community-supported distribution of the Red Hat version of OpenStack that runs on Red Hat Enterprise Linux (RHEL) and its derivatives, such as CentOS. RDO also makes the latest OpenStack development release available for Fedora. In addition to providing a set of software packages, RDO is a community of users of cloud computing platforms on Red Hat-based operating systems who help each other and compare notes on running OpenStack. For enterprise-level support or information on partner certification, Red Hat offers Red Hat OpenStack Platform. For more information, see Red Hat OpenStack Platform .
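For example, the same API that backs the web interface is commonly driven with the openstack command-line client; the credentials file name below is an assumption for illustration, not a value from this guide:

$ source overcloudrc
$ openstack server list
$ openstack network list

After sourcing a credentials file, the client authenticates against the Identity service, and each subcommand calls the corresponding REST API of the relevant RHOSP service.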
Chapter 16. Using Precision Time Protocol hardware You can configure linuxptp services and use PTP-capable hardware in OpenShift Container Platform cluster nodes. 16.1. About PTP hardware You can use the OpenShift Container Platform console or OpenShift CLI ( oc ) to install PTP by deploying the PTP Operator. The PTP Operator creates and manages the linuxptp services and provides the following features: Discovery of the PTP-capable devices in the cluster. Management of the configuration of linuxptp services. Notification of PTP clock events that negatively affect the performance and reliability of your application with the PTP Operator cloud-event-proxy sidecar. Note The PTP Operator works with PTP-capable devices on clusters provisioned only on bare-metal infrastructure. 16.2. About PTP Precision Time Protocol (PTP) is used to synchronize clocks in a network. When used in conjunction with hardware support, PTP is capable of sub-microsecond accuracy, and is more accurate than Network Time Protocol (NTP). The linuxptp package includes the ptp4l and phc2sys programs for clock synchronization. ptp4l implements the PTP boundary clock and ordinary clock. ptp4l synchronizes the PTP hardware clock to the source clock with hardware time stamping and synchronizes the system clock to the source clock with software time stamping. phc2sys is used for hardware time stamping to synchronize the system clock to the PTP hardware clock on the network interface controller (NIC). 16.2.1. Elements of a PTP domain PTP is used to synchronize multiple nodes connected in a network, with clocks for each node. The clocks synchronized by PTP are organized in a source-destination hierarchy. The hierarchy is created and updated automatically by the best master clock (BMC) algorithm, which runs on every clock. Destination clocks are synchronized to source clocks, and destination clocks can themselves be the source for other downstream clocks. The following types of clocks can be included in configurations: Grandmaster clock The grandmaster clock provides standard time information to other clocks across the network and ensures accurate and stable synchronization. It writes time stamps and responds to time requests from other clocks. Grandmaster clocks can be synchronized to a Global Positioning System (GPS) time source. Ordinary clock The ordinary clock has a single port connection that can play the role of source or destination clock, depending on its position in the network. The ordinary clock can read and write time stamps. Boundary clock The boundary clock has ports in two or more communication paths and can be a source and a destination to other destination clocks at the same time. The boundary clock works as a destination clock upstream. The destination clock receives the timing message, adjusts for delay, and then creates a new source time signal to pass down the network. The boundary clock produces a new timing packet that is still correctly synced with the source clock and can reduce the number of connected devices reporting directly to the source clock. 16.2.2. Advantages of PTP over NTP One of the main advantages that PTP has over NTP is the hardware support present in various network interface controllers (NIC) and network switches. The specialized hardware allows PTP to account for delays in message transfer and improves the accuracy of time synchronization.
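You can quickly confirm whether a given NIC exposes these hardware time stamping capabilities by running ethtool on the node. This is a general Linux check rather than a step from this procedure, and the interface name is an example:

$ ethtool -T eno1

The output lists the supported time stamping modes (for example, hardware-transmit and hardware-receive) and the index of the PTP hardware clock (for example, PTP Hardware Clock: 0) that ptp4l and phc2sys operate on.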
To achieve the best possible accuracy, it is recommended that all networking components between PTP clocks are PTP hardware enabled. Hardware-based PTP provides optimal accuracy, since the NIC can time stamp the PTP packets at the exact moment they are sent and received. Compare this to software-based PTP, which requires additional processing of the PTP packets by the operating system. Important Before enabling PTP, ensure that NTP is disabled for the required nodes. You can disable the chrony time service ( chronyd ) using a MachineConfig custom resource. For more information, see Disabling chrony time service . 16.2.3. Using PTP with dual NIC hardware OpenShift Container Platform supports single and dual NIC hardware for precision PTP timing in the cluster. For 5G telco networks that deliver mid-band spectrum coverage, each virtual distributed unit (vDU) requires connections to 6 radio units (RUs). To make these connections, each vDU host requires 2 NICs configured as boundary clocks. Dual NIC hardware allows you to connect each NIC to the same upstream leader clock with separate ptp4l instances for each NIC feeding the downstream clocks. 16.3. Installing the PTP Operator using the CLI As a cluster administrator, you can install the Operator by using the CLI. Prerequisites A cluster installed on bare-metal hardware with nodes that have hardware that supports PTP. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the PTP Operator. Save the following YAML in the ptp-namespace.yaml file:

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-ptp
  annotations:
    workload.openshift.io/allowed: management
  labels:
    name: openshift-ptp
    openshift.io/cluster-monitoring: "true"

Create the Namespace CR: $ oc create -f ptp-namespace.yaml Create an Operator group for the PTP Operator. Save the following YAML in the ptp-operatorgroup.yaml file:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ptp-operators
  namespace: openshift-ptp
spec:
  targetNamespaces:
  - openshift-ptp

Create the OperatorGroup CR: $ oc create -f ptp-operatorgroup.yaml Subscribe to the PTP Operator. Save the following YAML in the ptp-sub.yaml file:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ptp-operator-subscription
  namespace: openshift-ptp
spec:
  channel: "stable"
  name: ptp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace

Create the Subscription CR: $ oc create -f ptp-sub.yaml To verify that the Operator is installed, enter the following command: $ oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase 4.12.0-202301261535 Succeeded 16.4. Installing the PTP Operator using the web console As a cluster administrator, you can install the PTP Operator using the web console. Note You have to create the namespace and Operator group as mentioned in the previous section. Procedure Install the PTP Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, click Operators → OperatorHub . Choose PTP Operator from the list of available Operators, and then click Install . On the Install Operator page, under A specific namespace on the cluster select openshift-ptp . Then, click Install . Optional: Verify that the PTP Operator installed successfully: Switch to the Operators → Installed Operators page. Ensure that PTP Operator is listed in the openshift-ptp project with a Status of InstallSucceeded .
Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, troubleshoot further as follows: Go to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Go to the Workloads → Pods page and check the logs for pods in the openshift-ptp project. 16.5. Configuring PTP devices The PTP Operator adds the NodePtpDevice.ptp.openshift.io custom resource definition (CRD) to OpenShift Container Platform. When installed, the PTP Operator searches your cluster for PTP-capable network devices on each node. It creates and updates a NodePtpDevice custom resource (CR) object for each node that provides a compatible PTP-capable network device. 16.5.1. Discovering PTP-capable network devices in your cluster Identify the PTP-capable network devices that exist in your cluster so that you can configure them. Prerequisites You installed the PTP Operator. Procedure To return a complete list of PTP-capable network devices in your cluster, run the following command: $ oc get NodePtpDevice -n openshift-ptp -o yaml Example output

apiVersion: v1
items:
- apiVersion: ptp.openshift.io/v1
  kind: NodePtpDevice
  metadata:
    creationTimestamp: "2022-01-27T15:16:28Z"
    generation: 1
    name: dev-worker-0 1
    namespace: openshift-ptp
    resourceVersion: "6538103"
    uid: d42fc9ad-bcbf-4590-b6d8-b676c642781a
  spec: {}
  status:
    devices: 2
    - name: eno1
    - name: eno2
    - name: eno3
    - name: eno4
    - name: enp5s0f0
    - name: enp5s0f1
...

1 The value for the name parameter is the same as the name of the parent node. 2 The devices collection includes a list of the PTP-capable devices that the PTP Operator discovers for the node. 16.5.2. Configuring linuxptp services as a grandmaster clock You can configure the linuxptp services ( ptp4l , phc2sys , ts2phc ) as grandmaster clock by creating a PtpConfig custom resource (CR) that configures the host NIC. The ts2phc utility allows you to synchronize the system clock with the PTP grandmaster clock so that the node can stream precision clock signal to downstream PTP ordinary clocks and boundary clocks. Note Use the following example PtpConfig CR as the basis to configure linuxptp services as the grandmaster clock for your particular hardware and environment. This example CR does not configure PTP fast events. To configure PTP fast events, set appropriate values for ptp4lOpts , ptp4lConf , and ptpClockThreshold . ptpClockThreshold is used only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information. Prerequisites Install an Intel Westport Channel network interface in the bare-metal cluster host. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator. Procedure Create the PtpConfig resource.
For example: Save the following YAML in the grandmaster-clock-ptp-config.yaml file: Example PTP grandmaster clock configuration apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster-clock namespace: openshift-ptp annotations: {} spec: profile: - name: grandmaster-clock # The interface name is hardware-specific interface: USDinterface ptp4lOpts: "-2" phc2sysOpts: "-a -r -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: grandmaster-clock priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" Create the CR by running the following command: USD oc create -f grandmaster-clock-ptp-config.yaml Verification Check that the PtpConfig profile is applied to the node. Get the list of pods in the openshift-ptp namespace by running the following command: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-74m2g 3/3 Running 3 4d15h 10.16.230.7 compute-1.example.com ptp-operator-5f4f48d7c-x7zkf 1/1 Running 1 4d15h 10.128.1.145 compute-1.example.com Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. 
Run the following command: USD oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container Example output ts2phc[94980.334]: [ts2phc.0.config] nmea delay: 98690975 ns ts2phc[94980.334]: [ts2phc.0.config] ens3f0 extts index 0 at 1676577329.999999999 corr 0 src 1676577330.901342528 diff -1 ts2phc[94980.334]: [ts2phc.0.config] ens3f0 master offset -1 s2 freq -1 ts2phc[94980.441]: [ts2phc.0.config] nmea sentence: GNRMC,195453.00,A,4233.24427,N,07126.64420,W,0.008,,160223,,,A,V phc2sys[94980.450]: [ptp4l.0.config] CLOCK_REALTIME phc offset 943 s2 freq -89604 delay 504 phc2sys[94980.512]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1000 s2 freq -89264 delay 474 16.5.3. Configuring linuxptp services as an ordinary clock You can configure linuxptp services ( ptp4l , phc2sys ) as ordinary clock by creating a PtpConfig custom resource (CR) object. Note Use the following example PtpConfig CR as the basis to configure linuxptp services as an ordinary clock for your particular hardware and environment. This example CR does not configure PTP fast events. To configure PTP fast events, set appropriate values for ptp4lOpts , ptp4lConf , and ptpClockThreshold . ptpClockThreshold is required only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator. Procedure Create the following PtpConfig CR, and then save the YAML in the ordinary-clock-ptp-config.yaml file. Example PTP ordinary clock configuration apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary-clock namespace: openshift-ptp annotations: {} spec: profile: - name: ordinary-clock # The interface name is hardware-specific interface: USDinterface ptp4lOpts: "-2 -s" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode 
filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: ordinary-clock priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" Table 16.1. PTP ordinary clock CR configuration options Custom resource field Description name The name of the PtpConfig CR. profile Specify an array of one or more profile objects. Each profile must be uniquely named. interface Specify the network interface to be used by the ptp4l service, for example ens787f1 . ptp4lOpts Specify system config options for the ptp4l service, for example -2 to select the IEEE 802.3 network transport. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended. Append --summary_interval -4 to use PTP fast events with this interface. phc2sysOpts Specify system config options for the phc2sys service. If this field is empty, the PTP Operator does not start the phc2sys service. For Intel Columbiaville 800 Series NICs, set phc2sysOpts options to -a -r -m -n 24 -N 8 -R 16 . -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. ptp4lConf Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty. tx_timestamp_timeout For Intel Columbiaville 800 Series NICs, set tx_timestamp_timeout to 50 . boundary_clock_jbod For Intel Columbiaville 800 Series NICs, set boundary_clock_jbod to 0 . ptpSchedulingPolicy Scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER . Use SCHED_FIFO on systems that support FIFO scheduling. ptpSchedulingPriority Integer value from 1-65 used to set FIFO priority for ptp4l and phc2sys processes when ptpSchedulingPolicy is set to SCHED_FIFO . The ptpSchedulingPriority field is not used when ptpSchedulingPolicy is set to SCHED_OTHER . ptpClockThreshold Optional. If ptpClockThreshold is not present, default values are used for the ptpClockThreshold fields. ptpClockThreshold configures how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . recommend Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes. .recommend.profile Specify the .recommend.profile object name defined in the profile section. .recommend.priority Set .recommend.priority to 0 for ordinary clock. .recommend.match Specify .recommend.match rules with nodeLabel or nodeName values. .recommend.match.nodeLabel Set nodeLabel with the key of the node.Labels field from the node object by using the oc get nodes --show-labels command. For example, node-role.kubernetes.io/worker . 
.recommend.match.nodeName Set nodeName with the value of the node.Name field from the node object by using the oc get nodes command. For example, compute-1.example.com . Create the PtpConfig CR by running the following command: USD oc create -f ordinary-clock-ptp-config.yaml Verification Check that the PtpConfig profile is applied to the node. Get the list of pods in the openshift-ptp namespace by running the following command: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command: USD oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container Example output I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1 I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 -s I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------ Additional resources For more information about FIFO priority scheduling on PTP hardware, see Configuring FIFO priority scheduling for PTP hardware . For more information about configuring PTP fast events, see Configuring the PTP fast event notifications publisher . 16.5.4. Configuring linuxptp services as a boundary clock You can configure the linuxptp services ( ptp4l , phc2sys ) as boundary clock by creating a PtpConfig custom resource (CR) object. Note Use the following example PtpConfig CR as the basis to configure linuxptp services as the boundary clock for your particular hardware and environment. This example CR does not configure PTP fast events. To configure PTP fast events, set appropriate values for ptp4lOpts , ptp4lConf , and ptpClockThreshold . ptpClockThreshold is used only when events are enabled. See "Configuring the PTP fast event notifications publisher" for more information. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator. Procedure Create the following PtpConfig CR, and then save the YAML in the boundary-clock-ptp-config.yaml file. 
Example PTP boundary clock configuration apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock namespace: openshift-ptp annotations: {} spec: profile: - name: boundary-clock ptp4lOpts: "-2" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: boundary-clock priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" Table 16.2. PTP boundary clock CR configuration options Custom resource field Description name The name of the PtpConfig CR. profile Specify an array of one or more profile objects. name Specify the name of a profile object which uniquely identifies a profile object. ptp4lOpts Specify system config options for the ptp4l service. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended. ptp4lConf Specify the required configuration to start ptp4l as boundary clock. For example, ens1f0 synchronizes from a grandmaster clock and ens1f3 synchronizes connected devices. <interface_1> The interface that receives the synchronization clock. <interface_2> The interface that sends the synchronization clock. tx_timestamp_timeout For Intel Columbiaville 800 Series NICs, set tx_timestamp_timeout to 50 . boundary_clock_jbod For Intel Columbiaville 800 Series NICs, ensure boundary_clock_jbod is set to 0 . 
For Intel Fortville X710 Series NICs, ensure boundary_clock_jbod is set to 1 . phc2sysOpts Specify system config options for the phc2sys service. If this field is empty, the PTP Operator does not start the phc2sys service. ptpSchedulingPolicy Scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER . Use SCHED_FIFO on systems that support FIFO scheduling. ptpSchedulingPriority Integer value from 1-65 used to set FIFO priority for ptp4l and phc2sys processes when ptpSchedulingPolicy is set to SCHED_FIFO . The ptpSchedulingPriority field is not used when ptpSchedulingPolicy is set to SCHED_OTHER . ptpClockThreshold Optional. If ptpClockThreshold is not present, default values are used for the ptpClockThreshold fields. ptpClockThreshold configures how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME ( phc2sys ) or master offset ( ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . recommend Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes. .recommend.profile Specify the .recommend.profile object name defined in the profile section. .recommend.priority Specify the priority with an integer value between 0 and 99 . A larger number gets lower priority, so a priority of 99 is lower than a priority of 10 . If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority is applied to that node. .recommend.match Specify .recommend.match rules with nodeLabel or nodeName values. .recommend.match.nodeLabel Set nodeLabel with the key of the node.Labels field from the node object by using the oc get nodes --show-labels command. For example, node-role.kubernetes.io/worker . .recommend.match.nodeName Set nodeName with the value of the node.Name field from the node object by using the oc get nodes command. For example, compute-1.example.com . Create the CR by running the following command: USD oc create -f boundary-clock-ptp-config.yaml Verification Check that the PtpConfig profile is applied to the node. Get the list of pods in the openshift-ptp namespace by running the following command: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. 
Run the following command: USD oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container Example output I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------ Additional resources For more information about FIFO priority scheduling on PTP hardware, see Configuring FIFO priority scheduling for PTP hardware . For more information about configuring PTP fast events, see Configuring the PTP fast event notifications publisher . 16.5.5. Configuring linuxptp services as boundary clocks for dual NIC hardware Important Precision Time Protocol (PTP) hardware with dual NIC configured as boundary clocks is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can configure the linuxptp services ( ptp4l , phc2sys ) as boundary clocks for dual NIC hardware by creating a PtpConfig custom resource (CR) object for each NIC. Dual NIC hardware allows you to connect each NIC to the same upstream leader clock with separate ptp4l instances for each NIC feeding the downstream clocks. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator. Procedure Create two separate PtpConfig CRs, one for each NIC, using the reference CR in "Configuring linuxptp services as a boundary clock" as the basis for each CR. For example: Create boundary-clock-ptp-config-nic1.yaml , specifying values for phc2sysOpts : apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config-nic1 namespace: openshift-ptp spec: profile: - name: "profile1" ptp4lOpts: "-2 --summary_interval -4" ptp4lConf: | 1 [ens5f1] masterOnly 1 [ens5f0] masterOnly 0 ... phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" 2 1 Specify the required interfaces to start ptp4l as a boundary clock. For example, ens5f0 synchronizes from a grandmaster clock and ens5f1 synchronizes connected devices. 2 Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. Create boundary-clock-ptp-config-nic2.yaml , removing the phc2sysOpts field altogether to disable the phc2sys service for the second NIC: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config-nic2 namespace: openshift-ptp spec: profile: - name: "profile2" ptp4lOpts: "-2 --summary_interval -4" ptp4lConf: | 1 [ens7f1] masterOnly 1 [ens7f0] masterOnly 0 ... 1 Specify the required interfaces to start ptp4l as a boundary clock on the second NIC. 
Note You must completely remove the phc2sysOpts field from the second PtpConfig CR to disable the phc2sys service on the second NIC. Create the dual NIC PtpConfig CRs by running the following commands: Create the CR that configures PTP for the first NIC: USD oc create -f boundary-clock-ptp-config-nic1.yaml Create the CR that configures PTP for the second NIC: USD oc create -f boundary-clock-ptp-config-nic2.yaml Verification Check that the PTP Operator has applied the PtpConfig CRs for both NICs. Examine the logs for the linuxptp daemon corresponding to the node that has the dual NIC hardware installed. For example, run the following command: USD oc logs linuxptp-daemon-cvgr6 -n openshift-ptp -c linuxptp-daemon-container Example output ptp4l[80828.335]: [ptp4l.1.config] master offset 5 s2 freq -5727 path delay 519 ptp4l[80828.343]: [ptp4l.0.config] master offset -5 s2 freq -10607 path delay 533 phc2sys[80828.390]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1 s2 freq -87239 delay 539 16.5.6. Intel Columbiaville E800 series NIC as PTP ordinary clock reference The following table describes the changes that you must make to the reference PTP configuration in order to use Intel Columbiaville E800 series NICs as ordinary clocks. Make the changes in a PtpConfig custom resource (CR) that you apply to the cluster. Table 16.3. Recommended PTP settings for Intel Columbiaville NIC PTP configuration Recommended setting phc2sysOpts -a -r -m -n 24 -N 8 -R 16 tx_timestamp_timeout 50 boundary_clock_jbod 0 Note For phc2sysOpts , -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. Additional resources For a complete example CR that configures linuxptp services as an ordinary clock with PTP fast events, see Configuring linuxptp services as ordinary clock . 16.5.7. Configuring FIFO priority scheduling for PTP hardware In telco or other deployment configurations that require low latency performance, PTP daemon threads run in a constrained CPU footprint alongside the rest of the infrastructure components. By default, PTP threads run with the SCHED_OTHER policy. Under high load, these threads might not get the scheduling latency they require for error-free operation. To mitigate against potential scheduling latency errors, you can configure the PTP Operator linuxptp services to allow threads to run with a SCHED_FIFO policy. If SCHED_FIFO is set for a PtpConfig CR, then ptp4l and phc2sys will run in the parent container under chrt with a priority set by the ptpSchedulingPriority field of the PtpConfig CR. Note Setting ptpSchedulingPolicy is optional, and is only required if you are experiencing latency errors. Procedure Edit the PtpConfig CR profile: USD oc edit PtpConfig -n openshift-ptp Change the ptpSchedulingPolicy and ptpSchedulingPriority fields: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp ... spec: profile: - name: "profile1" ... ptpSchedulingPolicy: SCHED_FIFO 1 ptpSchedulingPriority: 10 2 1 Scheduling policy for ptp4l and phc2sys processes. Use SCHED_FIFO on systems that support FIFO scheduling. 2 Required. Sets the integer value 1-65 used to configure FIFO priority for ptp4l and phc2sys processes. Save and exit to apply the changes to the PtpConfig CR. 
Verification Get the name of the linuxptp-daemon pod and corresponding node where the PtpConfig CR has been applied: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com Check that the ptp4l process is running with the updated chrt FIFO priority: USD oc -n openshift-ptp logs linuxptp-daemon-lgm55 -c linuxptp-daemon-container|grep chrt Example output I1216 19:24:57.091872 1600715 daemon.go:285] /bin/chrt -f 65 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -2 --summary_interval -4 -m 16.5.8. Configuring log filtering for linuxptp services The linuxptp daemon generates logs that you can use for debugging purposes. In telco or other deployment configurations that feature a limited storage capacity, these logs can add to the storage demand. To reduce the number log messages, you can configure the PtpConfig custom resource (CR) to exclude log messages that report the master offset value. The master offset log message reports the difference between the current node's clock and the master clock in nanoseconds. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator. Procedure Edit the PtpConfig CR: USD oc edit PtpConfig -n openshift-ptp In spec.profile , add the ptpSettings.logReduce specification and set the value to true : apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp ... spec: profile: - name: "profile1" ... ptpSettings: logReduce: "true" Note For debugging purposes, you can revert this specification to False to include the master offset messages. Save and exit to apply the changes to the PtpConfig CR. Verification Get the name of the linuxptp-daemon pod and corresponding node where the PtpConfig CR has been applied: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com Verify that master offset messages are excluded from the logs by running the following command: USD oc -n openshift-ptp logs <linux_daemon_container> -c linuxptp-daemon-container | grep "master offset" 1 1 <linux_daemon_container> is the name of the linuxptp-daemon pod, for example linuxptp-daemon-gmv2n . When you configure the logReduce specification, this command does not report any instances of master offset in the logs of the linuxptp daemon. 16.6. Troubleshooting common PTP Operator issues Troubleshoot common problems with the PTP Operator by performing the following steps. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install the PTP Operator on a bare-metal cluster with hosts that support PTP. Procedure Check the Operator and operands are successfully deployed in the cluster for the configured nodes. 
USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com Note When the PTP fast event bus is enabled, the number of ready linuxptp-daemon pods is 3/3 . If the PTP fast event bus is not enabled, 2/2 is displayed. Check that supported hardware is found in the cluster. USD oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io Example output NAME AGE control-plane-0.example.com 10d control-plane-1.example.com 10d compute-0.example.com 10d compute-1.example.com 10d compute-2.example.com 10d Check the available PTP network interfaces for a node: USD oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml where: <node_name> Specifies the node you want to query, for example, compute-0.example.com . Example output apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: "2021-09-14T16:52:33Z" generation: 1 name: compute-0.example.com namespace: openshift-ptp resourceVersion: "177400" uid: 30413db0-4d8d-46da-9bef-737bacd548fd spec: {} status: devices: - name: eno1 - name: eno2 - name: eno3 - name: eno4 - name: enp5s0f0 - name: enp5s0f1 Check that the PTP interface is successfully synchronized to the primary clock by accessing the linuxptp-daemon pod for the corresponding node. Get the name of the linuxptp-daemon pod and corresponding node you want to troubleshoot by running the following command: USD oc get pods -n openshift-ptp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com Remote shell into the required linuxptp-daemon container: USD oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container> where: <linux_daemon_container> is the container you want to diagnose, for example linuxptp-daemon-lmvgn . In the remote shell connection to the linuxptp-daemon container, use the PTP Management Client ( pmc ) tool to diagnose the network interface. Run the following pmc command to check the sync status of the PTP device, for example ptp4l . # pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET' Example output when the node is successfully synced to the primary clock sending: GET PORT_DATA_SET 40a6b7.fffe.166ef0-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 40a6b7.fffe.166ef0-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval -3 announceReceiptTimeout 3 logSyncInterval -4 delayMechanism 1 logMinPdelayReqInterval -4 versionNumber 2 16.6.1. Collecting Precision Time Protocol (PTP) Operator data You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with Precision Time Protocol (PTP) Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have installed the PTP Operator. Procedure To collect PTP Operator data with must-gather , you must specify the PTP Operator must-gather image. USD oc adm must-gather --image=registry.redhat.io/openshift4/ptp-must-gather-rhel8:v4.12 16.7. 
PTP hardware fast event notifications framework Cloud native applications such as virtual RAN (vRAN) require access to notifications about hardware timing events that are critical to the functioning of the overall network. PTP clock synchronization errors can negatively affect the performance and reliability of your low-latency application, for example, a vRAN application running in a distributed unit (DU). 16.7.1. About PTP and clock synchronization error events Loss of PTP synchronization is a critical error for a RAN network. If synchronization is lost on a node, the radio might be shut down and the network Over the Air (OTA) traffic might be shifted to another node in the wireless network. Fast event notifications mitigate against workload errors by allowing cluster nodes to communicate PTP clock sync status to the vRAN application running in the DU. Event notifications are available to vRAN applications running on the same DU node. A publish-subscribe REST API passes events notifications to the messaging bus. Publish-subscribe messaging, or pub-sub messaging, is an asynchronous service-to-service communication architecture where any message published to a topic is immediately received by all of the subscribers to the topic. The PTP Operator generates fast event notifications for every PTP-capable network interface. You can access the events by using a cloud-event-proxy sidecar container over an HTTP or Advanced Message Queuing Protocol (AMQP) message bus. Note PTP fast event notifications are available for network interfaces configured to use PTP ordinary clocks or PTP boundary clocks. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . 16.7.2. About the PTP fast event notifications framework Use the Precision Time Protocol (PTP) fast event notifications framework to subscribe cluster applications to PTP events that the bare-metal cluster node generates. Note The fast events notifications framework uses a REST API for communication. The REST API is based on the O-RAN O-Cloud Notification API Specification for Event Consumers 3.0 that is available from O-RAN ALLIANCE Specifications . The framework consists of a publisher, subscriber, and an AMQ or HTTP messaging protocol to handle communications between the publisher and subscriber applications. Applications run the cloud-event-proxy container in a sidecar pattern to subscribe to PTP events. The cloud-event-proxy sidecar container can access the same resources as the primary application container without using any of the resources of the primary application and with no significant latency. Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information see, Red Hat AMQ Interconnect support status . Figure 16.1. Overview of PTP fast events Event is generated on the cluster host linuxptp-daemon in the PTP Operator-managed pod runs as a Kubernetes DaemonSet and manages the various linuxptp processes ( ptp4l , phc2sys , and optionally for grandmaster clocks, ts2phc ). 
The linuxptp-daemon passes the event to the UNIX domain socket. Event is passed to the cloud-event-proxy sidecar The PTP plugin reads the event from the UNIX domain socket and passes it to the cloud-event-proxy sidecar in the PTP Operator-managed pod. cloud-event-proxy delivers the event from the Kubernetes infrastructure to Cloud-Native Network Functions (CNFs) with low latency. Event is persisted The cloud-event-proxy sidecar in the PTP Operator-managed pod processes the event and publishes the cloud-native event by using a REST API. Message is transported The message transporter transports the event to the cloud-event-proxy sidecar in the application pod over HTTP or AMQP 1.0 QPID. Event is available from the REST API The cloud-event-proxy sidecar in the Application pod processes the event and makes it available by using the REST API. Consumer application requests a subscription and receives the subscribed event The consumer application sends an API request to the cloud-event-proxy sidecar in the application pod to create a PTP events subscription. The cloud-event-proxy sidecar creates an AMQ or HTTP messaging listener protocol for the resource specified in the subscription. The cloud-event-proxy sidecar in the application pod receives the event from the PTP Operator-managed pod, unwraps the cloud events object to retrieve the data, and posts the event to the consumer application. The consumer application listens to the address specified in the resource qualifier and receives and processes the PTP event. 16.7.3. Configuring the PTP fast event notifications publisher To start using PTP fast event notifications for a network interface in your cluster, you must enable the fast event publisher in the PTP Operator PtpOperatorConfig custom resource (CR) and configure ptpClockThreshold values in a PtpConfig CR that you create. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have installed the PTP Operator. Procedure Modify the default PTP Operator config to enable PTP fast events. Save the following YAML in the ptp-operatorconfig.yaml file: apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp spec: daemonNodeSelector: node-role.kubernetes.io/worker: "" ptpEventConfig: enableEventPublisher: true 1 1 Set enableEventPublisher to true to enable PTP fast event notifications. Note In OpenShift Container Platform 4.12 or later, you do not need to set the spec.ptpEventConfig.transportHost field in the PtpOperatorConfig resource when you use HTTP transport for PTP events. Set transportHost only when you use AMQP transport for PTP events. Update the PtpOperatorConfig CR: USD oc apply -f ptp-operatorconfig.yaml Create a PtpConfig custom resource (CR) for the PTP enabled interface, and set the required values for ptpClockThreshold and ptp4lOpts . The following YAML illustrates the required values that you must set in the PtpConfig CR: spec: profile: - name: "profile1" interface: "enp5s0f0" ptp4lOpts: "-2 -s --summary_interval -4" 1 phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" 2 ptp4lConf: "" 3 ptpClockThreshold: 4 holdOverTimeout: 5 maxOffsetThreshold: 100 minOffsetThreshold: -100 1 Append --summary_interval -4 to use PTP fast events. 2 Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. 
3 Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty.
4 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows the default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys) or master offset (ptp4l). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN. When the offset value is within this range, the PTP clock state is set to LOCKED.

Additional resources

For a complete example CR that configures linuxptp services as an ordinary clock with PTP fast events, see Configuring linuxptp services as ordinary clock.

16.7.4. Migrating consumer applications to use HTTP transport for PTP or bare-metal events

If you have previously deployed PTP or bare-metal events consumer applications, you need to update the applications to use HTTP message transport.

Prerequisites

You have installed the OpenShift CLI (oc).
You have logged in as a user with cluster-admin privileges.
You have updated the PTP Operator or Bare Metal Event Relay to version 4.12 or later, which uses HTTP transport by default.

Procedure

Update your events consumer application to use HTTP transport. Set the http-event-publishers variable for the cloud event sidecar deployment. For example, in a cluster with PTP events configured, the following YAML snippet illustrates a cloud event sidecar deployment:

containers:
  - name: cloud-event-sidecar
    image: cloud-event-sidecar
    args:
      - "--metrics-addr=127.0.0.1:9091"
      - "--store-path=/store"
      - "--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043"
      - "--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043" 1
      - "--api-port=8089"

1 The PTP Operator automatically resolves NODE_NAME to the host that is generating the PTP events. For example, compute-1.example.com.

In a cluster with bare-metal events configured, set the http-event-publishers field to hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043 in the cloud event sidecar deployment CR.

Deploy the consumer-events-subscription-service service alongside the events consumer application. For example:

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"
    service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret
  name: consumer-events-subscription-service
  namespace: cloud-events
  labels:
    app: consumer-service
spec:
  ports:
    - name: sub-port
      port: 9043
  selector:
    app: consumer
  clusterIP: None
  sessionAffinity: None
  type: ClusterIP

16.7.5. Installing the AMQ messaging bus

To pass PTP fast event notifications between publisher and subscriber on a node, you can install and configure an AMQ messaging bus to run locally on the node. To use AMQ messaging, you must install the AMQ Interconnect Operator.

Note HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024.
Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information, see Red Hat AMQ Interconnect support status.

Prerequisites

Install the OpenShift Container Platform CLI (oc).
Log in as a user with cluster-admin privileges.

Procedure

Install the AMQ Interconnect Operator to its own amq-interconnect namespace. See Adding the Red Hat Integration - AMQ Interconnect Operator.

Verification

Check that the AMQ Interconnect Operator is available and the required pods are running:

$ oc get pods -n amq-interconnect

Example output

NAME                                    READY   STATUS    RESTARTS   AGE
amq-interconnect-645db76c76-k8ghs       1/1     Running   0          23h
interconnect-operator-5cb5fc7cc-4v7qm   1/1     Running   0          23h

Check that the required linuxptp-daemon PTP event producer pods are running in the openshift-ptp namespace:

$ oc get pods -n openshift-ptp

Example output

NAME                    READY   STATUS    RESTARTS   AGE
linuxptp-daemon-2t78p   3/3     Running   0          12h
linuxptp-daemon-k8n88   3/3     Running   0          12h

16.7.6. Subscribing DU applications to PTP events REST API reference

Use the PTP event notifications REST API to subscribe a distributed unit (DU) application to the PTP events that are generated on the parent node. Subscribe applications to PTP events by using the resource address /cluster/node/<node_name>/ptp, where <node_name> is the cluster node running the DU application.

Deploy your cloud-event-consumer DU application container and cloud-event-proxy sidecar container in a separate DU application pod. The cloud-event-consumer DU application subscribes to the cloud-event-proxy container in the application pod.

Use the following API endpoints to subscribe the cloud-event-consumer DU application to PTP events posted by the cloud-event-proxy container at http://localhost:8089/api/ocloudNotifications/v1/ in the DU application pod:

/api/ocloudNotifications/v1/subscriptions
- POST: Creates a new subscription
- GET: Retrieves a list of subscriptions
- DELETE: Deletes all subscriptions

/api/ocloudNotifications/v1/subscriptions/<subscription_id>
- GET: Returns details for the specified subscription ID
- DELETE: Deletes the subscription associated with the specified subscription ID

/api/ocloudNotifications/v1/health
- GET: Returns the health status of the ocloudNotifications API

/api/ocloudNotifications/v1/publishers
- GET: Returns an array of os-clock-sync-state, ptp-clock-class-change, and lock-state messages for the cluster node

/api/ocloudnotifications/v1/{resource_address}/CurrentState
- GET: Returns the current state of one of the following event types: os-clock-sync-state, ptp-clock-class-change, or lock-state events

Note 9089 is the default port for the cloud-event-consumer container deployed in the application pod. You can configure a different port for your DU application as required.

16.7.6.1. api/ocloudNotifications/v1/subscriptions

HTTP method

GET api/ocloudNotifications/v1/subscriptions

Description

Returns a list of subscriptions. If subscriptions exist, a 200 OK status code is returned along with the list of subscriptions.

Example API response

[
  {
    "id": "75b1ad8f-c807-4c23-acf5-56f4b7ee3826",
    "endpointUri": "http://localhost:9089/event",
    "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions/75b1ad8f-c807-4c23-acf5-56f4b7ee3826",
    "resource": "/cluster/node/compute-1.example.com/ptp"
  }
]

HTTP method

POST api/ocloudNotifications/v1/subscriptions

Description

Creates a new subscription. If a subscription is successfully created, or if it already exists, a 201 Created status code is returned.

Table 16.4.
Query parameters

Parameter: subscription
Type: data

Example payload

{
  "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions",
  "resource": "/cluster/node/compute-1.example.com/ptp"
}

HTTP method

DELETE api/ocloudNotifications/v1/subscriptions

Description

Deletes all subscriptions.

Example API response

{
  "status": "deleted all subscriptions"
}

16.7.6.2. api/ocloudNotifications/v1/subscriptions/{subscription_id}

HTTP method

GET api/ocloudNotifications/v1/subscriptions/{subscription_id}

Description

Returns details for the subscription with ID subscription_id.

Table 16.5. Global path parameters

Parameter: subscription_id
Type: string

Example API response

{
  "id": "48210fb3-45be-4ce0-aa9b-41a0e58730ab",
  "endpointUri": "http://localhost:9089/event",
  "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab",
  "resource": "/cluster/node/compute-1.example.com/ptp"
}

HTTP method

DELETE api/ocloudNotifications/v1/subscriptions/{subscription_id}

Description

Deletes the subscription with ID subscription_id.

Table 16.6. Global path parameters

Parameter: subscription_id
Type: string

Example API response

{
  "status": "OK"
}

16.7.6.3. api/ocloudNotifications/v1/health

HTTP method

GET api/ocloudNotifications/v1/health

Description

Returns the health status for the ocloudNotifications REST API.

Example API response

OK

16.7.6.4. api/ocloudNotifications/v1/publishers

HTTP method

GET api/ocloudNotifications/v1/publishers

Description

Returns an array of os-clock-sync-state, ptp-clock-class-change, and lock-state details for the cluster node. The system generates notifications when the relevant equipment state changes.

os-clock-sync-state notifications describe the host operating system clock synchronization state. The state can be LOCKED or FREERUN.
ptp-clock-class-change notifications describe the current state of the PTP clock class.
lock-state notifications describe the current status of the PTP equipment lock state. The state can be LOCKED, HOLDOVER, or FREERUN.

Example API response

[
  {
    "id": "0fa415ae-a3cf-4299-876a-589438bacf75",
    "endpointUri": "http://localhost:9085/api/ocloudNotifications/v1/dummy",
    "uriLocation": "http://localhost:9085/api/ocloudNotifications/v1/publishers/0fa415ae-a3cf-4299-876a-589438bacf75",
    "resource": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state"
  },
  {
    "id": "28cd82df-8436-4f50-bbd9-7a9742828a71",
    "endpointUri": "http://localhost:9085/api/ocloudNotifications/v1/dummy",
    "uriLocation": "http://localhost:9085/api/ocloudNotifications/v1/publishers/28cd82df-8436-4f50-bbd9-7a9742828a71",
    "resource": "/cluster/node/compute-1.example.com/sync/ptp-status/ptp-clock-class-change"
  },
  {
    "id": "44aa480d-7347-48b0-a5b0-e0af01fa9677",
    "endpointUri": "http://localhost:9085/api/ocloudNotifications/v1/dummy",
    "uriLocation": "http://localhost:9085/api/ocloudNotifications/v1/publishers/44aa480d-7347-48b0-a5b0-e0af01fa9677",
    "resource": "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state"
  }
]

You can find os-clock-sync-state, ptp-clock-class-change, and lock-state events in the logs for the cloud-event-proxy container.
For example:

$ oc logs -f linuxptp-daemon-cvgr6 -n openshift-ptp -c cloud-event-proxy

Example os-clock-sync-state event

{
  "id": "c8a784d1-5f4a-4c16-9a81-a3b4313affe5",
  "type": "event.sync.sync-status.os-clock-sync-state-change",
  "source": "/cluster/compute-1.example.com/ptp/CLOCK_REALTIME",
  "dataContentType": "application/json",
  "time": "2022-05-06T15:31:23.906277159Z",
  "data": {
    "version": "v1",
    "values": [
      {
        "resource": "/sync/sync-status/os-clock-sync-state",
        "dataType": "notification",
        "valueType": "enumeration",
        "value": "LOCKED"
      },
      {
        "resource": "/sync/sync-status/os-clock-sync-state",
        "dataType": "metric",
        "valueType": "decimal64.3",
        "value": "-53"
      }
    ]
  }
}

Example ptp-clock-class-change event

{
  "id": "69eddb52-1650-4e56-b325-86d44688d02b",
  "type": "event.sync.ptp-status.ptp-clock-class-change",
  "source": "/cluster/compute-1.example.com/ptp/ens2fx/master",
  "dataContentType": "application/json",
  "time": "2022-05-06T15:31:23.147100033Z",
  "data": {
    "version": "v1",
    "values": [
      {
        "resource": "/sync/ptp-status/ptp-clock-class-change",
        "dataType": "metric",
        "valueType": "decimal64.3",
        "value": "135"
      }
    ]
  }
}

Example lock-state event

{
  "id": "305ec18b-1472-47b3-aadd-8f37933249a9",
  "type": "event.sync.ptp-status.ptp-state-change",
  "source": "/cluster/compute-1.example.com/ptp/ens2fx/master",
  "dataContentType": "application/json",
  "time": "2022-05-06T15:31:23.467684081Z",
  "data": {
    "version": "v1",
    "values": [
      {
        "resource": "/sync/ptp-status/lock-state",
        "dataType": "notification",
        "valueType": "enumeration",
        "value": "LOCKED"
      },
      {
        "resource": "/sync/ptp-status/lock-state",
        "dataType": "metric",
        "valueType": "decimal64.3",
        "value": "62"
      }
    ]
  }
}

16.7.6.5. /api/ocloudnotifications/v1/{resource_address}/CurrentState

HTTP method

GET api/ocloudNotifications/v1/cluster/node/<node_name>/sync/ptp-status/lock-state/CurrentState
GET api/ocloudNotifications/v1/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state/CurrentState
GET api/ocloudNotifications/v1/cluster/node/<node_name>/sync/ptp-status/ptp-clock-class-change/CurrentState

Description

Use the CurrentState API endpoint to return the current state of the os-clock-sync-state, ptp-clock-class-change, or lock-state event for the cluster node.

os-clock-sync-state notifications describe the host operating system clock synchronization state. The state can be LOCKED or FREERUN.
ptp-clock-class-change notifications describe the current state of the PTP clock class.
lock-state notifications describe the current status of the PTP equipment lock state. The state can be LOCKED, HOLDOVER, or FREERUN.

Table 16.7.
Global path parameters

Parameter: resource_address
Type: string

Example lock-state API response

{
  "id": "c1ac3aa5-1195-4786-84f8-da0ea4462921",
  "type": "event.sync.ptp-status.ptp-state-change",
  "source": "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state",
  "dataContentType": "application/json",
  "time": "2023-01-10T02:41:57.094981478Z",
  "data": {
    "version": "v1",
    "values": [
      {
        "resource": "/cluster/node/compute-1.example.com/ens5fx/master",
        "dataType": "notification",
        "valueType": "enumeration",
        "value": "LOCKED"
      },
      {
        "resource": "/cluster/node/compute-1.example.com/ens5fx/master",
        "dataType": "metric",
        "valueType": "decimal64.3",
        "value": "29"
      }
    ]
  }
}

Example os-clock-sync-state API response

{
  "specversion": "0.3",
  "id": "4f51fe99-feaa-4e66-9112-66c5c9b9afcb",
  "source": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state",
  "type": "event.sync.sync-status.os-clock-sync-state-change",
  "subject": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state",
  "datacontenttype": "application/json",
  "time": "2022-11-29T17:44:22.202Z",
  "data": {
    "version": "v1",
    "values": [
      {
        "resource": "/cluster/node/compute-1.example.com/CLOCK_REALTIME",
        "dataType": "notification",
        "valueType": "enumeration",
        "value": "LOCKED"
      },
      {
        "resource": "/cluster/node/compute-1.example.com/CLOCK_REALTIME",
        "dataType": "metric",
        "valueType": "decimal64.3",
        "value": "27"
      }
    ]
  }
}

Example ptp-clock-class-change API response

{
  "id": "064c9e67-5ad4-4afb-98ff-189c6aa9c205",
  "type": "event.sync.ptp-status.ptp-clock-class-change",
  "source": "/cluster/node/compute-1.example.com/sync/ptp-status/ptp-clock-class-change",
  "dataContentType": "application/json",
  "time": "2023-01-10T02:41:56.785673989Z",
  "data": {
    "version": "v1",
    "values": [
      {
        "resource": "/cluster/node/compute-1.example.com/ens5fx/master",
        "dataType": "metric",
        "valueType": "decimal64.3",
        "value": "165"
      }
    ]
  }
}

16.7.7. Monitoring PTP fast event metrics

You can monitor PTP fast event metrics from cluster nodes where the linuxptp-daemon is running. You can also monitor PTP fast event metrics in the OpenShift Container Platform web console by using the preconfigured and self-updating Prometheus monitoring stack.

Prerequisites

Install the OpenShift Container Platform CLI (oc).
Log in as a user with cluster-admin privileges.
Install and configure the PTP Operator on a node with PTP-capable hardware.

Procedure

Check for exposed PTP metrics on any node where the linuxptp-daemon is running. For example, run the following command:

$ curl http://<node_name>:9091/metrics

Example output

# HELP openshift_ptp_clock_state 0 = FREERUN, 1 = LOCKED, 2 = HOLDOVER
# TYPE openshift_ptp_clock_state gauge
openshift_ptp_clock_state{iface="ens1fx",node="compute-1.example.com",process="ptp4l"} 1
openshift_ptp_clock_state{iface="ens3fx",node="compute-1.example.com",process="ptp4l"} 1
# HELP openshift_ptp_delay_ns
# TYPE openshift_ptp_delay_ns gauge
openshift_ptp_delay_ns{from="master",iface="ens1fx",node="compute-1.example.com",process="ptp4l"} 842
openshift_ptp_delay_ns{from="phc",iface="CLOCK_REALTIME",node="compute-1.example.com",process="phc2sys"} 547
# HELP openshift_ptp_offset_ns
# TYPE openshift_ptp_offset_ns gauge
openshift_ptp_offset_ns{from="master",iface="ens1fx",node="compute-1.example.com",process="ptp4l"} -2
openshift_ptp_offset_ns{from="phc",iface="CLOCK_REALTIME",node="compute-1.example.com",process="phc2sys"} 12

To view the PTP event in the OpenShift Container Platform web console, copy the name of the PTP metric you want to query, for example, openshift_ptp_offset_ns.

In the OpenShift Container Platform web console, click Observe → Metrics.

Paste the PTP metric name into the Expression field, and click Run queries.

Additional resources

Managing metrics
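The following ties the REST API reference above together as a minimal, illustrative sketch rather than a supported procedure. It assumes the documented defaults: the cloud-event-proxy sidecar listening on port 8089 inside the DU application pod, and compute-1.example.com as a placeholder node name.

# Create a subscription; the sidecar then posts events to the consumer's endpointUri.
$ curl -s -X POST http://localhost:8089/api/ocloudNotifications/v1/subscriptions \
    -H "Content-Type: application/json" \
    -d '{"uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions", "resource": "/cluster/node/compute-1.example.com/ptp"}'

# Verify that the API is healthy and list active subscriptions.
$ curl -s http://localhost:8089/api/ocloudNotifications/v1/health
$ curl -s http://localhost:8089/api/ocloudNotifications/v1/subscriptions

# Poll the current PTP lock state for the node.
$ curl -s http://localhost:8089/api/ocloudNotifications/v1/cluster/node/compute-1.example.com/sync/ptp-status/lock-state/CurrentState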
"apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: name: openshift-ptp openshift.io/cluster-monitoring: \"true\"",
"oc create -f ptp-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp",
"oc create -f ptp-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: \"stable\" name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f ptp-sub.yaml",
"oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase 4.12.0-202301261535 Succeeded",
"oc get NodePtpDevice -n openshift-ptp -o yaml",
"apiVersion: v1 items: - apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: \"2022-01-27T15:16:28Z\" generation: 1 name: dev-worker-0 1 namespace: openshift-ptp resourceVersion: \"6538103\" uid: d42fc9ad-bcbf-4590-b6d8-b676c642781a spec: {} status: devices: 2 - name: eno1 - name: eno2 - name: eno3 - name: eno4 - name: enp5s0f0 - name: enp5s0f1",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster-clock namespace: openshift-ptp annotations: {} spec: profile: - name: grandmaster-clock # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2\" phc2sysOpts: \"-a -r -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: grandmaster-clock priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"oc create -f grandmaster-clock-ptp-config.yaml",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-74m2g 3/3 Running 3 4d15h 10.16.230.7 compute-1.example.com ptp-operator-5f4f48d7c-x7zkf 1/1 Running 1 4d15h 10.128.1.145 compute-1.example.com",
"oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container",
"ts2phc[94980.334]: [ts2phc.0.config] nmea delay: 98690975 ns ts2phc[94980.334]: [ts2phc.0.config] ens3f0 extts index 0 at 1676577329.999999999 corr 0 src 1676577330.901342528 diff -1 ts2phc[94980.334]: [ts2phc.0.config] ens3f0 master offset -1 s2 freq -1 ts2phc[94980.441]: [ts2phc.0.config] nmea sentence: GNRMC,195453.00,A,4233.24427,N,07126.64420,W,0.008,,160223,,,A,V phc2sys[94980.450]: [ptp4l.0.config] CLOCK_REALTIME phc offset 943 s2 freq -89604 delay 504 phc2sys[94980.512]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1000 s2 freq -89264 delay 474",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary-clock namespace: openshift-ptp annotations: {} spec: profile: - name: ordinary-clock # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: ordinary-clock priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"oc create -f ordinary-clock-ptp-config.yaml",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com",
"oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container",
"I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1 I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 -s I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock namespace: openshift-ptp annotations: {} spec: profile: - name: boundary-clock ptp4lOpts: \"-2\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: boundary-clock priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"oc create -f boundary-clock-ptp-config.yaml",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com",
"oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container",
"I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config-nic1 namespace: openshift-ptp spec: profile: - name: \"profile1\" ptp4lOpts: \"-2 --summary_interval -4\" ptp4lConf: | 1 [ens5f1] masterOnly 1 [ens5f0] masterOnly 0 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 2",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config-nic2 namespace: openshift-ptp spec: profile: - name: \"profile2\" ptp4lOpts: \"-2 --summary_interval -4\" ptp4lConf: | 1 [ens7f1] masterOnly 1 [ens7f0] masterOnly 0",
"oc create -f boundary-clock-ptp-config-nic1.yaml",
"oc create -f boundary-clock-ptp-config-nic2.yaml",
"oc logs linuxptp-daemon-cvgr6 -n openshift-ptp -c linuxptp-daemon-container",
"ptp4l[80828.335]: [ptp4l.1.config] master offset 5 s2 freq -5727 path delay 519 ptp4l[80828.343]: [ptp4l.0.config] master offset -5 s2 freq -10607 path delay 533 phc2sys[80828.390]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1 s2 freq -87239 delay 539",
"oc edit PtpConfig -n openshift-ptp",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp spec: profile: - name: \"profile1\" ptpSchedulingPolicy: SCHED_FIFO 1 ptpSchedulingPriority: 10 2",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com",
"oc -n openshift-ptp logs linuxptp-daemon-lgm55 -c linuxptp-daemon-container|grep chrt",
"I1216 19:24:57.091872 1600715 daemon.go:285] /bin/chrt -f 65 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -2 --summary_interval -4 -m",
"oc edit PtpConfig -n openshift-ptp",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp spec: profile: - name: \"profile1\" ptpSettings: logReduce: \"true\"",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com",
"oc -n openshift-ptp logs <linux_daemon_container> -c linuxptp-daemon-container | grep \"master offset\" 1",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com",
"oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io",
"NAME AGE control-plane-0.example.com 10d control-plane-1.example.com 10d compute-0.example.com 10d compute-1.example.com 10d compute-2.example.com 10d",
"oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml",
"apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: \"2021-09-14T16:52:33Z\" generation: 1 name: compute-0.example.com namespace: openshift-ptp resourceVersion: \"177400\" uid: 30413db0-4d8d-46da-9bef-737bacd548fd spec: {} status: devices: - name: eno1 - name: eno2 - name: eno3 - name: eno4 - name: enp5s0f0 - name: enp5s0f1",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com",
"oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container>",
"pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'",
"sending: GET PORT_DATA_SET 40a6b7.fffe.166ef0-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 40a6b7.fffe.166ef0-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval -3 announceReceiptTimeout 3 logSyncInterval -4 delayMechanism 1 logMinPdelayReqInterval -4 versionNumber 2",
"oc adm must-gather --image=registry.redhat.io/openshift4/ptp-must-gather-rhel8:v4.12",
"apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp spec: daemonNodeSelector: node-role.kubernetes.io/worker: \"\" ptpEventConfig: enableEventPublisher: true 1",
"oc apply -f ptp-operatorconfig.yaml",
"spec: profile: - name: \"profile1\" interface: \"enp5s0f0\" ptp4lOpts: \"-2 -s --summary_interval -4\" 1 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 2 ptp4lConf: \"\" 3 ptpClockThreshold: 4 holdOverTimeout: 5 maxOffsetThreshold: 100 minOffsetThreshold: -100",
"containers: - name: cloud-event-sidecar image: cloud-event-sidecar args: - \"--metrics-addr=127.0.0.1:9091\" - \"--store-path=/store\" - \"--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043\" - \"--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\" 1 - \"--api-port=8089\"",
"apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: \"true\" service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret name: consumer-events-subscription-service namespace: cloud-events labels: app: consumer-service spec: ports: - name: sub-port port: 9043 selector: app: consumer clusterIP: None sessionAffinity: None type: ClusterIP",
"oc get pods -n amq-interconnect",
"NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h",
"oc get pods -n openshift-ptp",
"NAME READY STATUS RESTARTS AGE linuxptp-daemon-2t78p 3/3 Running 0 12h linuxptp-daemon-k8n88 3/3 Running 0 12h",
"[ { \"id\": \"75b1ad8f-c807-4c23-acf5-56f4b7ee3826\", \"endpointUri\": \"http://localhost:9089/event\", \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/75b1ad8f-c807-4c23-acf5-56f4b7ee3826\", \"resource\": \"/cluster/node/compute-1.example.com/ptp\" } ]",
"{ \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions\", \"resource\": \"/cluster/node/compute-1.example.com/ptp\" }",
"{ \"status\": \"deleted all subscriptions\" }",
"{ \"id\":\"48210fb3-45be-4ce0-aa9b-41a0e58730ab\", \"endpointUri\": \"http://localhost:9089/event\", \"uriLocation\":\"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab\", \"resource\":\"/cluster/node/compute-1.example.com/ptp\" }",
"{ \"status\": \"OK\" }",
"OK",
"[ { \"id\": \"0fa415ae-a3cf-4299-876a-589438bacf75\", \"endpointUri\": \"http://localhost:9085/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:9085/api/ocloudNotifications/v1/publishers/0fa415ae-a3cf-4299-876a-589438bacf75\", \"resource\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\" }, { \"id\": \"28cd82df-8436-4f50-bbd9-7a9742828a71\", \"endpointUri\": \"http://localhost:9085/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:9085/api/ocloudNotifications/v1/publishers/28cd82df-8436-4f50-bbd9-7a9742828a71\", \"resource\": \"/cluster/node/compute-1.example.com/sync/ptp-status/ptp-clock-class-change\" }, { \"id\": \"44aa480d-7347-48b0-a5b0-e0af01fa9677\", \"endpointUri\": \"http://localhost:9085/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:9085/api/ocloudNotifications/v1/publishers/44aa480d-7347-48b0-a5b0-e0af01fa9677\", \"resource\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\" } ]",
"oc logs -f linuxptp-daemon-cvgr6 -n openshift-ptp -c cloud-event-proxy",
"{ \"id\":\"c8a784d1-5f4a-4c16-9a81-a3b4313affe5\", \"type\":\"event.sync.sync-status.os-clock-sync-state-change\", \"source\":\"/cluster/compute-1.example.com/ptp/CLOCK_REALTIME\", \"dataContentType\":\"application/json\", \"time\":\"2022-05-06T15:31:23.906277159Z\", \"data\":{ \"version\":\"v1\", \"values\":[ { \"resource\":\"/sync/sync-status/os-clock-sync-state\", \"dataType\":\"notification\", \"valueType\":\"enumeration\", \"value\":\"LOCKED\" }, { \"resource\":\"/sync/sync-status/os-clock-sync-state\", \"dataType\":\"metric\", \"valueType\":\"decimal64.3\", \"value\":\"-53\" } ] } }",
"{ \"id\":\"69eddb52-1650-4e56-b325-86d44688d02b\", \"type\":\"event.sync.ptp-status.ptp-clock-class-change\", \"source\":\"/cluster/compute-1.example.com/ptp/ens2fx/master\", \"dataContentType\":\"application/json\", \"time\":\"2022-05-06T15:31:23.147100033Z\", \"data\":{ \"version\":\"v1\", \"values\":[ { \"resource\":\"/sync/ptp-status/ptp-clock-class-change\", \"dataType\":\"metric\", \"valueType\":\"decimal64.3\", \"value\":\"135\" } ] } }",
"{ \"id\":\"305ec18b-1472-47b3-aadd-8f37933249a9\", \"type\":\"event.sync.ptp-status.ptp-state-change\", \"source\":\"/cluster/compute-1.example.com/ptp/ens2fx/master\", \"dataContentType\":\"application/json\", \"time\":\"2022-05-06T15:31:23.467684081Z\", \"data\":{ \"version\":\"v1\", \"values\":[ { \"resource\":\"/sync/ptp-status/lock-state\", \"dataType\":\"notification\", \"valueType\":\"enumeration\", \"value\":\"LOCKED\" }, { \"resource\":\"/sync/ptp-status/lock-state\", \"dataType\":\"metric\", \"valueType\":\"decimal64.3\", \"value\":\"62\" } ] } }",
"{ \"id\": \"c1ac3aa5-1195-4786-84f8-da0ea4462921\", \"type\": \"event.sync.ptp-status.ptp-state-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"dataContentType\": \"application/json\", \"time\": \"2023-01-10T02:41:57.094981478Z\", \"data\": { \"version\": \"v1\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"resource\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"29\" } ] } }",
"{ \"specversion\": \"0.3\", \"id\": \"4f51fe99-feaa-4e66-9112-66c5c9b9afcb\", \"source\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"type\": \"event.sync.sync-status.os-clock-sync-state-change\", \"subject\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"datacontenttype\": \"application/json\", \"time\": \"2022-11-29T17:44:22.202Z\", \"data\": { \"version\": \"v1\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/CLOCK_REALTIME\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"resource\": \"/cluster/node/compute-1.example.com/CLOCK_REALTIME\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"27\" } ] } }",
"{ \"id\": \"064c9e67-5ad4-4afb-98ff-189c6aa9c205\", \"type\": \"event.sync.ptp-status.ptp-clock-class-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/ptp-status/ptp-clock-class-change\", \"dataContentType\": \"application/json\", \"time\": \"2023-01-10T02:41:56.785673989Z\", \"data\": { \"version\": \"v1\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"165\" } ] } }",
"curl http://<node_name>:9091/metrics",
"HELP openshift_ptp_clock_state 0 = FREERUN, 1 = LOCKED, 2 = HOLDOVER TYPE openshift_ptp_clock_state gauge openshift_ptp_clock_state{iface=\"ens1fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 1 openshift_ptp_clock_state{iface=\"ens3fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 1 openshift_ptp_clock_state{iface=\"ens5fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 1 openshift_ptp_clock_state{iface=\"ens7fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 1 HELP openshift_ptp_delay_ns TYPE openshift_ptp_delay_ns gauge openshift_ptp_delay_ns{from=\"master\",iface=\"ens1fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 842 openshift_ptp_delay_ns{from=\"master\",iface=\"ens3fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 480 openshift_ptp_delay_ns{from=\"master\",iface=\"ens5fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 584 openshift_ptp_delay_ns{from=\"master\",iface=\"ens7fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 482 openshift_ptp_delay_ns{from=\"phc\",iface=\"CLOCK_REALTIME\",node=\"compute-1.example.com\",process=\"phc2sys\"} 547 HELP openshift_ptp_offset_ns TYPE openshift_ptp_offset_ns gauge openshift_ptp_offset_ns{from=\"master\",iface=\"ens1fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} -2 openshift_ptp_offset_ns{from=\"master\",iface=\"ens3fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} -44 openshift_ptp_offset_ns{from=\"master\",iface=\"ens5fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} -8 openshift_ptp_offset_ns{from=\"master\",iface=\"ens7fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 3 openshift_ptp_offset_ns{from=\"phc\",iface=\"CLOCK_REALTIME\",node=\"compute-1.example.com\",process=\"phc2sys\"} 12"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/using-ptp |
3.10. Network Labels

Network labels can be used to greatly simplify several administrative tasks associated with creating and administering logical networks and associating those logical networks with physical host network interfaces and bonds. A network label is a plain-text, human-readable label that can be attached to a logical network or a physical host network interface. There is no strict limit on the length of a label, but a label can contain only lowercase and uppercase letters, underscores, and hyphens; spaces and special characters are not allowed.

Attaching a label to a logical network or physical host network interface creates an association with other logical networks or physical host network interfaces to which the same label has been attached, as follows:

Network Label Associations

When you attach a label to a logical network, that logical network is automatically associated with any physical host network interfaces with the given label.
When you attach a label to a physical host network interface, any logical networks with the given label are automatically associated with that physical host network interface.
Changing the label attached to a logical network or physical host network interface acts in the same way as removing a label and adding a new label. The association between related logical networks or physical host network interfaces is updated.

Network Labels and Clusters

When a labeled logical network is added to a cluster and there is a physical host network interface in that cluster with the same label, the logical network is automatically added to that physical host network interface.
When a labeled logical network is detached from a cluster and there is a physical host network interface in that cluster with the same label, the logical network is automatically detached from that physical host network interface.

Network Labels and Logical Networks With Roles

When a labeled logical network is assigned to act as a display network or migration network, that logical network is then configured on the physical host network interface using DHCP so that the logical network can be assigned an IP address. Setting a label on a role network (for instance, "a migration network" or "a display network") causes a mass deployment of that network on all hosts. Such mass additions of networks are achieved through the use of DHCP. DHCP was chosen over static addressing because manually assigning many static IP addresses does not scale.
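As an illustrative sketch only: labels can also be attached through the REST API. The endpoint path and payload shape below are assumptions based on the oVirt/RHV version 4 REST API rather than statements from this reference, and the engine host, credentials, and network ID are placeholders.

# Hypothetical example: attach the label "migration" to a logical network.
# Host interfaces that carry the same label then pick up the network as described above.
$ curl -s -X POST \
    -u admin@internal:<password> \
    -H "Content-Type: application/xml" \
    -d '<network_label id="migration"/>' \
    "https://rhvm.example.com/ovirt-engine/api/networks/<network_id>/networklabels"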
Chapter 1. OpenShift Container Platform installation overview

1.1. About OpenShift Container Platform installation

The OpenShift Container Platform installation program offers four methods for deploying a cluster, which are detailed in the following list:

Interactive: You can deploy a cluster with the web-based Assisted Installer. This is an ideal approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform: it provides smart defaults and performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios.

Local Agent-based: You can deploy a cluster locally with the Agent-based Installer for disconnected environments or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first. Configuration is done with a command-line interface. This approach is ideal for disconnected environments.

Automated: You can deploy a cluster on installer-provisioned infrastructure. The installation program uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters in connected or disconnected environments.

Full control: You can deploy a cluster on infrastructure that you prepare and maintain, which provides maximum customizability. You can deploy clusters in connected or disconnected environments.

Each method deploys a cluster with the following characteristics:

Highly available infrastructure with no single points of failure, which is available by default.
Administrators can control what updates are applied and when.

1.1.1. About the installation program

You can use the installation program to deploy each type of cluster. The installation program generates the main assets, such as Ignition config files for the bootstrap, control plane, and compute machines. You can start an OpenShift Container Platform cluster with these three machine configurations, provided you correctly configured the infrastructure.

The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installations. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel, with the ultimate target being a running cluster. Because the program satisfies dependencies by recognizing and using existing components, it does not run commands to create them again.

Figure 1.1. OpenShift Container Platform installation targets and dependencies

1.1.2. About Red Hat Enterprise Linux CoreOS (RHCOS)

Post-installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. RHCOS includes the kubelet, which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes.

Every control plane machine in an OpenShift Container Platform 4.17 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines.
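For example, for user-provisioned installations you can generate these Ignition configs directly from the installation program. The following is a minimal sketch of the target-by-target workflow; the asset directory name is a placeholder, and each target consumes the assets produced by the previous one.

$ openshift-install create install-config --dir <installation_directory>     # writes install-config.yaml
$ openshift-install create manifests --dir <installation_directory>          # transforms the config into Kubernetes manifests
$ openshift-install create ignition-configs --dir <installation_directory>   # wraps the manifests into bootstrap.ign, master.ign, and worker.ign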
Operating system updates are delivered as a bootable container image, using OSTree as a backend, that is deployed across the cluster by the Machine Config Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree. Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, by in-place upgrades that keep the entire platform up to date. These in-place updates can reduce the burden on operations teams.

If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation.

1.1.3. Glossary of common terms for OpenShift Container Platform installing

The glossary defines common terms that relate to the installation content. Read the following list of terms to better understand the installation process.

Assisted Installer
An installer hosted at console.redhat.com that provides a web-based user interface or a RESTful API for creating a cluster configuration. The Assisted Installer generates a discovery image. Cluster machines boot with the discovery image, which installs RHCOS and an agent. Together, the Assisted Installer and agent provide preinstallation validation and installation for the cluster.

Agent-based Installer
An installer similar to the Assisted Installer, but you must download the Agent-based Installer first. The Agent-based Installer is ideal for disconnected environments.

Bootstrap node
A temporary machine that runs a minimal Kubernetes configuration required to deploy the OpenShift Container Platform control plane.

Control plane
A container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers. Also known as control plane machines.

Compute node
Nodes that are responsible for executing workloads for cluster users. Also known as worker nodes.

Disconnected installation
In some situations, parts of a data center might not have access to the internet, even through proxy servers. You can still install the OpenShift Container Platform in these environments, but you must download the required software and images and make them available to the disconnected environment.

The OpenShift Container Platform installation program
A program that provisions the infrastructure and deploys a cluster.

Installer-provisioned infrastructure
The installation program deploys and configures the infrastructure that the cluster runs on.

Ignition config files
A file that the Ignition tool uses to configure Red Hat Enterprise Linux CoreOS (RHCOS) during operating system initialization. The installation program generates different Ignition configuration files to initialize bootstrap, control plane, and worker nodes.

Kubernetes manifests
Specifications of a Kubernetes API object in a JSON or YAML format. A configuration file can include deployments, config maps, secrets, daemonsets, and so on.

Kubelet
A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod.

Load balancers
A load balancer serves as the single point of contact for clients.
Load balancers for the API distribute incoming traffic across control plane nodes.

Machine Config Operator
An Operator that manages and applies configurations and updates of the base operating system and container runtime, including everything between the kernel and kubelet, for the nodes in the cluster.

Operators
The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An Operator takes human operational knowledge and encodes it into software that is easily packaged and shared with customers.

User-provisioned infrastructure
You can install OpenShift Container Platform on infrastructure that you provide. You can use the installation program to generate the assets required to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided.

1.1.4. Installation process

Except for the Assisted Installer, when you install an OpenShift Container Platform cluster, you must download the installation program from the appropriate Cluster Type page on the OpenShift Cluster Manager Hybrid Cloud Console. This console manages:

REST API for accounts.
Registry tokens, which are the pull secrets that you use to obtain the required components.
Cluster registration, which associates the cluster identity with your Red Hat account to facilitate the gathering of usage metrics.

In OpenShift Container Platform 4.17, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type. Consider the following installation use cases:

To deploy a cluster with the Assisted Installer, you must configure the cluster settings by using the Assisted Installer. There is no installation program to download and configure. After you finish setting the cluster configuration, you download a discovery ISO and then boot cluster machines with that image. You can install clusters with the Assisted Installer on Nutanix, vSphere, and bare metal with full integration, and on other platforms without integration. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines.

To deploy clusters with the Agent-based Installer, you download the Agent-based Installer first. You then configure the cluster and generate a discovery image. You boot cluster machines with the discovery image, which installs an agent that communicates with the installation program and handles the provisioning for you, so that you do not need to interact with the installation program or set up a provisioner machine yourself. You must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. This approach is ideal for disconnected environments.

For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster, except when you install on bare metal. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines.
If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines.

The installation program uses three sets of files during installation: an installation configuration file that is named install-config.yaml, Kubernetes manifests, and Ignition config files for your machine types.

Important You can modify Kubernetes and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. Because of this risk, modifying Kubernetes and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support.

The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster. The installation configuration files are all pruned when you run the installation program, so be sure to back up all the configuration files that you want to use again.

Important You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation.

The installation process with the Assisted Installer

Installation with the Assisted Installer involves creating a cluster configuration interactively by using the web-based user interface or the RESTful API. The Assisted Installer user interface prompts you for required values and provides reasonable default values for the remaining parameters, unless you change them in the user interface or with the API. The Assisted Installer generates a discovery image, which you download and use to boot the cluster machines. The image installs RHCOS and an agent, and the agent handles the provisioning for you. You can install OpenShift Container Platform with the Assisted Installer and full integration on Nutanix, vSphere, and bare metal. Additionally, you can install OpenShift Container Platform with the Assisted Installer on other platforms without integration.

OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. If possible, use the Assisted Installer feature to avoid having to download and configure the Agent-based Installer.

The installation process with Agent-based infrastructure

Agent-based installation is similar to using the Assisted Installer, except that you must initially download and install the Agent-based Installer. An Agent-based installation is useful when you want the convenience of the Assisted Installer, but you need to install a cluster in a disconnected environment. If possible, use the Agent-based installation feature to avoid having to create a provisioner machine with a bootstrap VM, and then having to provision and maintain the cluster infrastructure.

The installation process with installer-provisioned infrastructure

The default installation type uses installer-provisioned infrastructure.
By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster. You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. The installation process with user-provisioned infrastructure You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself. The following list details some of these self-managed resources: The underlying infrastructure for the control plane and compute machines that make up the cluster Load balancers Cluster networking, including the DNS records and required subnets Storage for the cluster infrastructure and applications If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster. Installation process details When a cluster is provisioned, each machine in the cluster requires information about the cluster. OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. The temporary bootstrap machine boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: Figure 1.2. Creating the bootstrap, control plane, and compute machines After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Consider using Ignition config files within 12 hours after they are generated, because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Bootstrapping a cluster involves the following steps: The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. If you provision the infrastructure, this step requires manual intervention. The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. If you provision the infrastructure, this step requires manual intervention. The temporary control plane schedules the production control plane to the production control plane machines. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine injects OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. If you provision the infrastructure, this step requires manual intervention. The control plane sets up the compute nodes. The control plane installs additional services in the form of a set of Operators. The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operations, including the creation of compute machines in supported environments. Additional resources Red Hat OpenShift Network Calculator 1.1.5. Verifying node state after installation The OpenShift Container Platform installation completes when the following installation health checks are successful: The provisioner can access the OpenShift Container Platform web console. All control plane nodes are ready. All cluster Operators are available. Note After the installation completes, the specific cluster Operators responsible for the worker nodes continuously attempt to provision all worker nodes. Some time is required before all worker nodes report as READY . For installations on bare metal, wait a minimum of 60 minutes before troubleshooting a worker node. For installations on all other platforms, wait a minimum of 40 minutes before troubleshooting a worker node. A DEGRADED state for the cluster Operators responsible for the worker nodes depends on the Operators' own resources and not on the state of the nodes. After your installation completes, you can continue to monitor the condition of the nodes in your cluster. Prerequisites The installation program resolves successfully in the terminal. 
Procedure Show the status of all worker nodes: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a Show the phase of all worker machine nodes: $ oc get machines -A Example output NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m Additional resources Getting the BareMetalHost resource Following the progress of the installation Validating an installation Agent-based Installer Assisted Installer for OpenShift Container Platform Installation scope The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes. Additional resources See Available cluster customizations for details about OpenShift Container Platform configuration resources. 1.1.6. OpenShift Local overview OpenShift Local supports rapid application development to get started building OpenShift Container Platform clusters. OpenShift Local is designed to run on a local computer to simplify setup and testing, and to emulate the cloud development environment locally with all of the tools needed to develop container-based applications. Regardless of the programming language you use, OpenShift Local hosts your application and brings a minimal, preconfigured Red Hat OpenShift Container Platform cluster to your local PC without the need for a server-based infrastructure. On a hosted environment, OpenShift Local can create microservices, convert them into images, and run them in Kubernetes-hosted containers directly on your laptop or desktop running Linux, macOS, or Windows 10 or later. For more information about OpenShift Local, see Red Hat OpenShift Local Overview . 1.2. Supported platforms for OpenShift Container Platform clusters In OpenShift Container Platform 4.17, you can install a cluster that uses installer-provisioned infrastructure on the following platforms: Amazon Web Services (AWS) Bare metal Google Cloud Platform (GCP) IBM Cloud(R) Microsoft Azure Microsoft Azure Stack Hub Nutanix Red Hat OpenStack Platform (RHOSP) The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware vSphere For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat. Important After installation, the following changes are not supported: Mixing cloud provider platforms. Mixing cloud provider components.
For example, using a persistent storage framework from another platform on the platform where you installed the cluster. In OpenShift Container Platform 4.17, you can install a cluster that uses user-provisioned infrastructure on the following platforms: AWS Azure Azure Stack Hub Bare metal GCP IBM Power(R) IBM Z(R) or IBM(R) LinuxONE RHOSP The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware Cloud on AWS VMware vSphere Depending on the supported cases for the platform, you can perform installations on user-provisioned infrastructure, so that you can run machines with full internet access, place your cluster behind a proxy, or perform a disconnected installation. In a disconnected installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a disconnected installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access. The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms. Additional resources See Supported installation methods for different platforms for more information about the types of installations that are available for each supported platform. See Selecting a cluster installation method and preparing it for users for information about choosing an installation method and preparing the required resources. Red Hat OpenShift Network Calculator can help you design your cluster network during both the deployment and expansion phases. It addresses common questions related to the cluster network and provides output in a convenient JSON format. | [
"oc get nodes",
"NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a",
"oc get machines -A",
"NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installation_overview/ocp-installation-overview |
Chapter 14. Geo-replication | Chapter 14. Geo-replication Geo-replication allows multiple, geographically distributed Red Hat Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Red Hat Quay setup. Image data is asynchronously replicated in the background with transparent failover and redirect for clients. Deployments of Red Hat Quay with geo-replication are supported on standalone and Operator deployments. 14.1. Geo-replication features When geo-replication is configured, container image pushes will be written to the preferred storage engine for that Red Hat Quay instance. This is typically the nearest storage backend within the region. After the initial push, image data will be replicated in the background to other storage engines. The list of replication locations is configurable and those can be different storage backends. An image pull will always use the closest available storage engine to maximize pull performance. If replication has not been completed yet, the pull will use the source storage backend instead. 14.2. Geo-replication requirements and constraints In geo-replicated setups, Red Hat Quay requires that all regions are able to read and write to all other regions' object storage. Object storage must be geographically accessible by all other regions. In case of an object storage system failure of one geo-replicating site, that site's Red Hat Quay deployment must be shut down so that clients are redirected to the remaining site with intact storage systems by a global load balancer. Otherwise, clients will experience pull and push failures. Red Hat Quay has no internal awareness of the health or availability of the connected object storage system. You must configure a global load balancer (LB) to monitor the health of your distributed system and to route traffic to different sites based on their storage status. To check the status of your geo-replication deployment, you must use the /health/endtoend endpoint, which is used for global health monitoring. You must configure the redirect manually using the /health/endtoend endpoint. The /health/instance endpoint only checks local instance health. If the object storage system of one site becomes unavailable, there will be no automatic redirect to the remaining storage system, or systems, of the remaining site, or sites. Geo-replication is asynchronous. The permanent loss of a site incurs the loss of the data that has been saved in that site's object storage system but has not yet been replicated to the remaining sites at the time of failure. A single database, and therefore all metadata and Red Hat Quay configuration, is shared across all regions. Geo-replication does not replicate the database. In the event of an outage, Red Hat Quay with geo-replication enabled will not failover to another database. A single Redis cache is shared across the entire Red Hat Quay setup and needs to be accessible by all Red Hat Quay pods. The exact same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable. Geo-replication requires object storage in each region. It does not work with local storage. Each region must be able to access every storage engine in each region, which requires a network path. Alternatively, the storage proxy option can be used.
The entire storage backend, for example, all blobs, is replicated. Repository mirroring, by contrast, can be limited to a repository, or an image. All Red Hat Quay instances must share the same entrypoint, typically through a load balancer. All Red Hat Quay instances must have the same set of superusers, as they are defined inside the common configuration file. Geo-replication requires your Clair configuration to be set to unmanaged . An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Red Hat Quay Operator must communicate with the same database. For more information, see Advanced Clair configuration . Geo-replication requires SSL/TLS certificates and keys. For more information, see Using SSL/TLS to protect connections to Red Hat Quay . If the above requirements cannot be met, you should instead use two or more distinct Red Hat Quay deployments and take advantage of repository mirroring functions. 14.3. Geo-replication using standalone Red Hat Quay In the following image, Red Hat Quay is running standalone in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Red Hat Quay instance, and will then be replicated, in the background, to the other storage engines. Note If Clair fails in one cluster, for example, the US cluster, US users would not see vulnerability reports in Red Hat Quay for the second cluster (EU). This is because all Clair instances have the same state. When Clair fails, it is usually because of a problem within the cluster. Geo-replication architecture 14.3.1. Enable storage replication - standalone Quay Use the following procedure to enable storage replication on Red Hat Quay. Procedure In your Red Hat Quay config editor, locate the Registry Storage section. Click Enable Storage Replication . Add each of the storage engines to which data will be replicated. All storage engines to be used must be listed. If complete replication of all images to all storage engines is required, click Replicate to storage engine by default under each storage engine configuration. This ensures that all images are replicated to that storage engine. Note To enable per-namespace replication, contact Red Hat Quay support. When finished, click Save Configuration Changes . The configuration changes will take effect after Red Hat Quay restarts. After adding storage and enabling Replicate to storage engine by default for geo-replication, you must sync existing image data across all storage. To do this, you must oc exec (alternatively, docker exec or kubectl exec ) into the container and enter the following commands: # scl enable python27 bash # python -m util.backfillreplication Note This is a one-time operation to sync content after adding new storage. 14.3.2. Run Red Hat Quay with storage preferences Copy the config.yaml to all machines running Red Hat Quay For each machine in each region, add a QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable with the preferred storage engine for the region in which the machine is running. For example, for a machine running in Europe with the config directory on the host available from $QUAY/config : Note The value of the environment variable specified must match the name of a Location ID as defined in the config panel.
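The invocation for this example, where europestorage matches a Location ID defined in the config panel, is:

$ sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay \
   -v $QUAY/config:/conf/stack:Z \
   -e QUAY_DISTRIBUTED_STORAGE_PREFERENCE=europestorage \
   registry.redhat.io/quay/quay-rhel8:v3.9.10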
Restart all Red Hat Quay containers. 14.3.3. Removing a geo-replicated site from your standalone Red Hat Quay deployment By using the following procedure, Red Hat Quay administrators can remove sites in a geo-replicated setup. Prerequisites You have configured Red Hat Quay geo-replication with at least two sites, for example, usstorage and eustorage . Each site has its own Organization, Repository, and image tags. Procedure Sync the blobs between all of your defined sites by running the following command: $ python -m util.backfillreplication Warning Prior to removing storage engines from your Red Hat Quay config.yaml file, you must ensure that all blobs are synced between all defined sites. Complete this step before proceeding. In your Red Hat Quay config.yaml file for site usstorage , remove the DISTRIBUTED_STORAGE_CONFIG entry for the eustorage site. Enter the following command to obtain a list of running containers: $ podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 92c5321cde38 registry.redhat.io/rhel8/redis-5:1 run-redis 11 days ago Up 11 days ago 0.0.0.0:6379->6379/tcp redis 4e6d1ecd3811 registry.redhat.io/rhel8/postgresql-13:1-109 run-postgresql 33 seconds ago Up 34 seconds ago 0.0.0.0:5432->5432/tcp postgresql-quay d2eadac74fda registry-proxy.engineering.redhat.com/rh-osbs/quay-quay-rhel8:v3.9.0-131 registry 4 seconds ago Up 4 seconds ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay Enter the following command to execute a shell inside of the PostgreSQL container: $ podman exec -it postgresql-quay -- /bin/bash Enter psql by running the following command: bash-4.4$ psql Enter the following command to reveal a list of sites in your geo-replicated deployment: quay=# select * from imagestoragelocation; Example output id | name ----+------------------- 1 | usstorage 2 | eustorage Enter the following command to exit the postgres CLI to re-enter bash-4.4: \q Enter the following command to permanently remove the eustorage site: Important The following action cannot be undone. Use with caution. bash-4.4$ python -m util.removelocation eustorage Example output WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage 14.4. Geo-replication using the Red Hat Quay Operator In the example shown above, the Red Hat Quay Operator is deployed in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Quay instance, and will then be replicated, in the background, to the other storage engines. Because the Operator now manages the Clair security scanner and its database separately, geo-replication setups can be leveraged so that they do not manage the Clair database. Instead, an external shared database would be used. Red Hat Quay and Clair support several providers and vendors of PostgreSQL, which can be found in the Red Hat Quay 3.x test matrix . Additionally, the Operator also supports custom Clair configurations that can be injected into the deployment, which allows users to configure Clair with the connection credentials for the external database. 14.4.1.
Setting up geo-replication on OpenShift Container Platform Use the following procedure to set up geo-replication on OpenShift Container Platform. Procedure Deploy a postgres instance for Red Hat Quay. Log in to the database by entering the following command: psql -U <username> -h <hostname> -p <port> -d <database_name> Create a database for Red Hat Quay named quay . For example: CREATE DATABASE quay; Enable the pg_trgm extension inside the database: \c quay; CREATE EXTENSION IF NOT EXISTS pg_trgm; Deploy a Redis instance: Note Deploying a Redis instance might be unnecessary if your cloud provider has its own service. Deploying a Redis instance is required if you are leveraging Builders. Deploy a VM for Redis. Verify that it is accessible from the clusters where Red Hat Quay is running. Port 6379/TCP must be open. Run Redis inside the instance: sudo dnf install -y podman podman run -d --name redis -p 6379:6379 redis Create two object storage backends, one for each cluster. Ideally, one object storage bucket will be close to the first, or primary, cluster, and the other will run closer to the second, or secondary, cluster. Deploy the clusters with the same config bundle, using environment variable overrides to select the appropriate storage backend for an individual cluster. Configure a load balancer to provide a single entry point to the clusters. 14.4.1.1. Configuring geo-replication for the Red Hat Quay Operator on OpenShift Container Platform Use the following procedure to configure geo-replication for the Red Hat Quay Operator. Procedure Create a config.yaml file that is shared between clusters. This config.yaml file contains the details for the common PostgreSQL, Redis and storage backends: Geo-replication config.yaml file SERVER_HOSTNAME: <georep.quayteam.org or any other name> 1 DB_CONNECTION_ARGS: autorollback: true threadlocals: true DB_URI: postgresql://postgres:[email protected]:5432/quay 2 BUILDLOGS_REDIS: host: 10.19.0.2 port: 6379 USER_EVENTS_REDIS: host: 10.19.0.2 port: 6379 DISTRIBUTED_STORAGE_CONFIG: usstorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQABCDEFG bucket_name: georep-test-bucket-0 secret_key: AYWfEaxX/u84XRA2vUX5C987654321 storage_path: /quaygcp eustorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQWERTYUIOP bucket_name: georep-test-bucket-1 secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678 storage_path: /quaygcp DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - usstorage - eustorage DISTRIBUTED_STORAGE_PREFERENCE: - usstorage - eustorage FEATURE_STORAGE_REPLICATION: true 1 A proper SERVER_HOSTNAME must be used for the route and must match the hostname of the global load balancer. 2 To retrieve the configuration file for a Clair instance deployed using the OpenShift Container Platform Operator, see Retrieving the Clair config . Create the configBundleSecret by entering the following command: $ oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle In each of the clusters, set the configBundleSecret and use the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable override to configure the appropriate storage for that cluster. For example: Note The config.yaml file between both deployments must match. If making a change to one cluster, it must also be changed in the other.
US cluster QuayRegistry example apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage Note Because SSL/TLS is unmanaged, and the route is managed, you must supply the certificates either with the config tool or directly in the config bundle. For more information, see Configuring TLS and routes . European cluster apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage Note Because SSL/TLS is unmanaged, and the route is managed, you must supply the certificates either with the config tool or directly in the config bundle. For more information, see Configuring TLS and routes . 14.4.2. Removing a geo-replicated site from your Red Hat Quay Operator deployment By using the following procedure, Red Hat Quay administrators can remove sites in a geo-replicated setup. Prerequisites You are logged into OpenShift Container Platform. You have configured Red Hat Quay geo-replication with at least two sites, for example, usstorage and eustorage . Each site has its own Organization, Repository, and image tags. Procedure Sync the blobs between all of your defined sites by running the following command: $ python -m util.backfillreplication Warning Prior to removing storage engines from your Red Hat Quay config.yaml file, you must ensure that all blobs are synced between all defined sites. When running this command, replication jobs are created which are picked up by the replication worker. If there are blobs that need to be replicated, the script returns UUIDs of blobs that will be replicated. If you run this command multiple times, and the output from the script is empty, it does not mean that the replication process is done; it means that there are no more blobs to be queued for replication. Customers should use appropriate judgment before proceeding, as the time replication takes depends on the number of blobs detected. Alternatively, you could use a third-party cloud tool, such as Microsoft Azure, to check the synchronization status. This step must be completed before proceeding. In your Red Hat Quay config.yaml file for site usstorage , remove the DISTRIBUTED_STORAGE_CONFIG entry for the eustorage site.
Enter the following command to identify your Quay application pods: $ oc get pod -n <quay_namespace> Example output quay390usstorage-quay-app-5779ddc886-2drh2 quay390eustorage-quay-app-66969cd859-n2ssm Enter the following command to open an interactive shell session in the usstorage pod: $ oc rsh quay390usstorage-quay-app-5779ddc886-2drh2 Enter the following command to permanently remove the eustorage site: Important The following action cannot be undone. Use with caution. sh-4.4$ python -m util.removelocation eustorage Example output WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage 14.5. Mixed storage for geo-replication Red Hat Quay geo-replication supports the use of different and multiple replication targets, for example, using AWS S3 storage on public cloud and using Ceph storage on-premises. This complicates the key requirement of granting access to all storage backends from all Red Hat Quay pods and cluster nodes. As a result, it is recommended that you use the following: A VPN to prevent visibility of the internal storage, or A token pair that only allows access to the specified bucket used by Red Hat Quay This results in the public cloud instance of Red Hat Quay having access to on-premises storage, but the network will be encrypted, protected, and will use ACLs, thereby meeting security requirements. If you cannot implement these security measures, it might be preferable to deploy two distinct Red Hat Quay registries and to use repository mirroring as an alternative to geo-replication. | [
"scl enable python27 bash python -m util.backfillreplication",
"sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -e QUAY_DISTRIBUTED_STORAGE_PREFERENCE=europestorage registry.redhat.io/quay/quay-rhel8:v3.9.10",
"python -m util.backfillreplication",
"podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 92c5321cde38 registry.redhat.io/rhel8/redis-5:1 run-redis 11 days ago Up 11 days ago 0.0.0.0:6379->6379/tcp redis 4e6d1ecd3811 registry.redhat.io/rhel8/postgresql-13:1-109 run-postgresql 33 seconds ago Up 34 seconds ago 0.0.0.0:5432->5432/tcp postgresql-quay d2eadac74fda registry-proxy.engineering.redhat.com/rh-osbs/quay-quay-rhel8:v3.9.0-131 registry 4 seconds ago Up 4 seconds ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay",
"podman exec -it postgresql-quay -- /bin/bash",
"bash-4.4USD psql",
"quay=# select * from imagestoragelocation;",
"id | name ----+------------------- 1 | usstorage 2 | eustorage",
"\\q",
"bash-4.4USD python -m util.removelocation eustorage",
"WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage",
"psql -U <username> -h <hostname> -p <port> -d <database_name>",
"CREATE DATABASE quay;",
"\\c quay; CREATE EXTENSION IF NOT EXISTS pg_trgm;",
"sudo dnf install -y podman run -d --name redis -p 6379:6379 redis",
"SERVER_HOSTNAME: <georep.quayteam.org or any other name> 1 DB_CONNECTION_ARGS: autorollback: true threadlocals: true DB_URI: postgresql://postgres:[email protected]:5432/quay 2 BUILDLOGS_REDIS: host: 10.19.0.2 port: 6379 USER_EVENTS_REDIS: host: 10.19.0.2 port: 6379 DISTRIBUTED_STORAGE_CONFIG: usstorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQABCDEFG bucket_name: georep-test-bucket-0 secret_key: AYWfEaxX/u84XRA2vUX5C987654321 storage_path: /quaygcp eustorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQWERTYUIOP bucket_name: georep-test-bucket-1 secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678 storage_path: /quaygcp DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - usstorage - eustorage DISTRIBUTED_STORAGE_PREFERENCE: - usstorage - eustorage FEATURE_STORAGE_REPLICATION: true",
"oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage",
"python -m util.backfillreplication",
"oc get pod -n <quay_namespace>",
"quay390usstorage-quay-app-5779ddc886-2drh2 quay390eustorage-quay-app-66969cd859-n2ssm",
"oc rsh quay390usstorage-quay-app-5779ddc886-2drh2",
"sh-4.4USD python -m util.removelocation eustorage",
"WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/manage_red_hat_quay/georepl-intro |
Chapter 12. Linux Containers | Chapter 12. Linux Containers 12.1. Linux Containers Using Docker Technology Red Hat Enterprise Linux Atomic Host 7.1.4 includes the following updates: The docker packages have been upgraded to upstream version 1.7.1, which contains various improvements over version 1.7, which, in turn, contains significant changes from version 1.6 included in Red Hat Enterprise Linux Atomic Host 7.1.3. See the following change log for the full list of fixes and features between version 1.6 and 1.7.1: https://github.com/docker/docker/blob/master/CHANGELOG.md . Additionally, Red Hat Enterprise Linux Atomic Host 7.1.4 includes the following changes: Firewalld is now supported for docker containers. If firewalld is running on the system, the rules will be added via the firewalld passthrough. If firewalld is reloaded, the configuration will be re-applied. Docker now mounts the cgroup information specific to a container under the /sys/fs/cgroup directory. Some applications make decisions based on the amount of resources available to them. For example, a Java Virtual Machine (JVM) might check how much memory is available to it so it can allocate a large enough pool to improve its performance. This allows applications to discover the maximum amount of memory available to the container, by reading /sys/fs/cgroup/memory . The docker run command now emits a warning message if you are using a device mapper on a loopback device. It is strongly recommended to use the dm.thinpooldev option as a storage option for a production environment. Do not use loopback in a production environment. You can now run containers in systemd mode with the --init=systemd flag. If you are running a container with systemd as PID 1, this flag will turn on all systemd features to allow it to run in a non-privileged container. Set container_uuid as an environment variable to pass to systemd what to store in the /etc/machine-id file. This file links the journald within the container to the external log. Mount host directories into a container so systemd will not require privileges, then mount the journal directory from the host into the container. If you run journald within the container, the host journalctl utility will be able to display the content. Mount the /run directory as a tmpfs. Then automatically mount the /sys/fs/cgroup directory as read-only into a container if --systemd is specified. Send the proper signal to systemd when running in systemd mode. The search experience with the docker search command has been improved: You can now prepend indices to search results. You can prefix a remote name with a registry name. You can shorten the index name if it is not an IP address. The --no-index option has been added to avoid listing index names. The sorting of entries when the index is preserved has been changed: You can sort by index_name , star_count , registry_name , name and description . The sorting of entries when the index is omitted has been changed: You can sort by registry_name , star_count , name and description . You can now expose the configured registry list using the Docker info API. Red Hat Enterprise Linux Atomic Host 7.1.3 includes the following updates: docker-storage-setup docker-storage-setup now relies on the Logical Volume Manager (LVM) to extend thin pools automatically. By default, 60% of free space in the volume group is used for a thin pool and it is grown automatically by LVM. When the thin pool is 60% full, it will be grown by 20%.
A default configuration file for docker-storage-setup is now in /usr/lib/docker-storage-setup/docker-storage-setup . You can override the settings in this file by editing the /etc/sysconfig/docker-storage-setup file. Support for passing raw block devices to the docker service for creating a thin pool has been removed. Now the docker-storage-setup service creates an LVM thin pool and passes it to docker. The chunk size for thin pools has been increased from 64K to 512K. By default, the partition table for the root user is not grown. You can change this behavior by setting the GROWPART=true option in the /etc/sysconfig/docker-storage-setup file. A thin pool is now set up with the skip_block_zeroing feature. This means that when a new block is provisioned in the pool, it will not be zeroed. This is done for performance reasons. One can change this behavior by using the --zero option: lvchange --zero y thin-pool By default, docker storage using the devicemapper graphdriver runs on loopback devices. It is strongly recommended to not use this setup, as it is not production ready. A warning message is displayed to warn the user about this. The user has the option to suppress this warning by passing this storage flag dm.no_warn_on_loop_devices=true . Updates related to handling storage on Docker-formatted containers: NFS Volume Plugins validated with SELinux have been added. This includes using the NFS Volume Plugin to NFS Mount GlusterFS. Persistent volume support validated for the NFS volume plugin only has been added. Local storage (HostPath volume plugin) validated with SELinux has been added. (requires workaround described in the docs) iSCSI Volume Plugins validated with SELinux have been added. GCEPersistentDisk Volume Plugins validated with SELinux have been added. (requires workaround described in the docs) Red Hat Enterprise Linux Atomic Host 7.1.2 includes the following updates: docker-1.6.0-11.el7 A completely re-architected Registry and a new Registry API supported by Docker 1.6 that significantly enhance image pull performance and reliability. A new logging driver API which allows you to send container logs to other systems has been added to the docker utility. The --log-driver option has been added to the docker run command and it takes three sub-options: a JSON file, syslog, or none. The none option can be used with applications with verbose logs that are non-essential. Dockerfile instructions can now be used when committing and importing. This also adds the ability to make changes to running images without having to re-build the entire image. The commit --change and import --change options allow you to specify standard changes to be applied to the new image. These are expressed in the Dockerfile syntax and used to modify the image. This release adds support for custom cgroups. Using the --cgroup-parent flag, you can pass a specific cgroup to run a container in. This allows you to create and manage cgroups on your own. You can define custom resources for those cgroups and put containers under a common parent group. With this update, you can now specify the default ulimit settings for all containers, when configuring the Docker daemon. For example: docker -d --default-ulimit nproc=1024:2048 This command sets a soft limit of 1024 and a hard limit of 2048 child processes for all containers. You can set this option multiple times for different ulimit values, for example: --default-ulimit nproc=1024:2408 --default-ulimit nofile=100:200 These settings can be overwritten when creating a container as such: docker run -d --ulimit nproc=2048:4096 httpd This will overwrite the default nproc value passed into the daemon.
The ability to block registries with the --block-registry flag. Support for searching multiple registries at once. Pushing local images to a public registry requires confirmation. Short names are resolved locally against a list of registries configured in an order, with the docker.io registry last. This way, pulling is always done with a fully qualified name. Red Hat Enterprise Linux Atomic Host 7.1.1 includes the following updates: docker-1.5.0-28.el7 IPv6 support: Support is available for globally routed and link-local addresses. Read-only containers: This option is used to restrict applications in a container from being able to write to the entire file system. Statistics API and endpoint: Statistics on live CPU, memory, network IO and block IO can now be streamed from containers. The docker build -f docker_file command to specify a file other than Dockerfile to be used by docker build. The ability to specify additional registries to use for unqualified pulls and searches. Prior to this, an unqualified name was only searched in the public Docker Hub. The ability to block communication with certain registries with the --block-registry= <registry> flag. This includes the ability to block the public Docker Hub and the ability to block all but specified registries. Confirmation is required to push to a public registry. All repositories are now fully qualified when listed. The output of docker images lists the source registry name for all images pulled. The output of docker search shows the source registry name for all results. For more information, see Get Started with Docker Formatted Container Images on Red Hat Systems . | [
"lvchange --zero y thin-pool",
"docker -d --default-ulimit nproc=1024:2048",
"--default-ulimit nproc=1024:2408 --default-ulimit nofile=100:200",
"docker run -d --ulimit nproc=2048:4096 httpd"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-linux_containers_with_docker_format |
18.12.11.4. Pre-existing network filters | 18.12.11.4. Pre-existing network filters The following is a list of example network filters that are automatically installed with libvirt: Table 18.15. Pre-existing network filters Filter Name Description no-arp-spoofing Prevents a guest virtual machine from spoofing ARP traffic; this filter only allows ARP request and reply messages and enforces that those packets contain the MAC and IP addresses of the guest virtual machine. allow-dhcp Allows a guest virtual machine to request an IP address via DHCP (from any DHCP server). allow-dhcp-server Allows a guest virtual machine to request an IP address from a specified DHCP server. The dotted decimal IP address of the DHCP server must be provided in a reference to this filter. The name of the variable must be DHCPSERVER . no-ip-spoofing Prevents a guest virtual machine from sending IP packets with a source IP address different from the one inside the packet. no-ip-multicast Prevents a guest virtual machine from sending IP multicast packets. clean-traffic Prevents MAC, IP and ARP spoofing. This filter references several other filters as building blocks. These filters are only building blocks and require a combination with other filters to provide useful network traffic filtering. The most commonly used filter in the list above is the clean-traffic filter. This filter can, for example, be combined with the no-ip-multicast filter to prevent virtual machines from sending IP multicast traffic on top of the prevention of packet spoofing. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-pre-exist-net-filter
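As a sketch of how these filters are applied, a guest's interface definition references a filter by name; the bridge name and DHCP server address below are illustrative:

<interface type='bridge'>
  <source bridge='br0'/>
  <filterref filter='clean-traffic'/>
</interface>

A variable such as DHCPSERVER is passed as a parameter on the reference:

<interface type='bridge'>
  <source bridge='br0'/>
  <filterref filter='allow-dhcp-server'>
    <parameter name='DHCPSERVER' value='10.0.0.1'/>
  </filterref>
</interface>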
Chapter 5. Managing images | Chapter 5. Managing images 5.1. Managing images overview With OpenShift Container Platform you can interact with images and set up image streams, depending on where the registries of the images are located, any authentication requirements around those registries, and how you want your builds and deployments to behave. 5.1.1. Images overview An image stream comprises any number of container images identified by tags. It presents a single virtual view of related images, similar to a container image repository. By watching an image stream, builds and deployments can receive notifications when new images are added or modified and react by performing a build or deployment, respectively. 5.2. Tagging images The following sections provide an overview and instructions for using image tags in the context of container images for working with OpenShift Container Platform image streams and their tags. 5.2.1. Image tags An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag: registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest . OpenShift Container Platform provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images. 5.2.2. Image tag conventions Images evolve over time and their tags reflect this. Generally, an image tag always points to the latest image built. If there is too much information embedded in a tag name, like v2.0.1-may-2019 , the tag points to just one revision of an image and is never updated. Using default image pruning options, such an image is never removed. In very large clusters, the schema of creating new tags for every revised image could eventually fill up the etcd datastore with excess tag metadata for images that are long outdated. If the tag is named v2.0 , image revisions are more likely. This results in longer tag history and, therefore, the image pruner is more likely to remove old and unused images. Although tag naming convention is up to you, here are a few examples in the format <image_name>:<image_tag> : Table 5.1. Image tag naming conventions Description Example Revision myimage:v2.0.1 Architecture myimage:v2.0-x86_64 Base image myimage:v1.2-centos7 Latest (potentially unstable) myimage:latest Latest stable myimage:stable If you require dates in tag names, periodically inspect old and unsupported images and istags and remove them. Otherwise, you can experience increasing resource usage caused by retaining old images. 5.2.3. Adding tags to image streams An image stream in OpenShift Container Platform comprises zero or more container images identified by tags. There are different types of tags available. The default behavior uses a permanent tag, which points to a specific image in time. If the permanent tag is in use and the source changes, the tag does not change for the destination. A tracking tag means the destination tag's metadata is updated during the import of the source tag. 
Procedure You can add tags to an image stream using the oc tag command: $ oc tag <source> <destination> For example, to configure the ruby image stream static-2.0 tag to always refer to the current image for the ruby image stream 2.0 tag: $ oc tag ruby:2.0 ruby:static-2.0 This creates a new image stream tag named static-2.0 in the ruby image stream. The new tag directly references the image id that the ruby:2.0 image stream tag pointed to at the time oc tag was run, and the image it points to never changes. To ensure the destination tag is updated when the source tag changes, use the --alias=true flag: $ oc tag --alias=true <source> <destination> Note Use a tracking tag for creating permanent aliases, for example, latest or stable . The tag only works correctly within a single image stream. Trying to create a cross-image stream alias produces an error. You can also add the --scheduled=true flag to have the destination tag be refreshed, or re-imported, periodically. The period is configured globally at the system level. The --reference flag creates an image stream tag that is not imported. The tag points to the source location, permanently. If you want to instruct OpenShift Container Platform to always fetch the tagged image from the integrated registry, use --reference-policy=local . The registry uses the pull-through feature to serve the image to the client. By default, the image blobs are mirrored locally by the registry. As a result, they can be pulled more quickly the next time they are needed. The flag also allows for pulling from insecure registries without a need to supply --insecure-registry to the container runtime as long as the image stream has an insecure annotation or the tag has an insecure import policy. 5.2.4. Removing tags from image streams You can remove tags from an image stream. Procedure To remove a tag completely from an image stream, run: $ oc delete istag/ruby:latest or: $ oc tag -d ruby:latest 5.2.5. Referencing images in imagestreams You can use tags to reference images in image streams using the following reference types. Table 5.2. Imagestream reference types Reference type Description ImageStreamTag An ImageStreamTag is used to reference or retrieve an image for a given image stream and tag. ImageStreamImage An ImageStreamImage is used to reference or retrieve an image for a given image stream and image sha ID. DockerImage A DockerImage is used to reference or retrieve an image for a given external registry. It uses standard Docker pull specification for its name. When viewing example image stream definitions you may notice they contain definitions of ImageStreamTag and references to DockerImage , but nothing related to ImageStreamImage . This is because the ImageStreamImage objects are automatically created in OpenShift Container Platform when you import or tag an image into the image stream. You should never have to explicitly define an ImageStreamImage object in any image stream definition that you use to create image streams. Procedure To reference an image for a given image stream and tag, use ImageStreamTag : <image_stream_name>:<tag> To reference an image for a given image stream and image sha ID, use ImageStreamImage : <image_stream_name>@<id> The <id> is an immutable identifier for a specific image, also called a digest. To reference or retrieve an image for a given external registry, use DockerImage : <repository> Note When no tag is specified, it is assumed the latest tag is used. You can also reference a third-party registry: <registry_host>/<repository>:<tag> Or an image with a digest: <repository>@sha256:<digest> 5.3.
Image pull policy Each container in a pod has a container image. After you have created an image and pushed it to a registry, you can then refer to it in the pod. 5.3.1. Image pull policy overview When OpenShift Container Platform creates containers, it uses the container imagePullPolicy to determine if the image should be pulled prior to starting the container. There are three possible values for imagePullPolicy : Table 5.3. imagePullPolicy values Value Description Always Always pull the image. IfNotPresent Only pull the image if it does not already exist on the node. Never Never pull the image. If a container imagePullPolicy parameter is not specified, OpenShift Container Platform sets it based on the image tag: If the tag is latest , OpenShift Container Platform defaults imagePullPolicy to Always . Otherwise, OpenShift Container Platform defaults imagePullPolicy to IfNotPresent . 5.4. Using image pull secrets If you are using the OpenShift image registry and are pulling from image streams located in the same project, then your pod service account should already have the correct permissions and no additional action should be required. However, for other scenarios, such as referencing images across OpenShift Container Platform projects or from secured registries, additional configuration steps are required. You can obtain the image pull secret from the Red Hat OpenShift Cluster Manager . This pull secret is called pullSecret . You use this pull secret to authenticate with the services that are provided by the included authorities, Quay.io and registry.redhat.io , which serve the container images for OpenShift Container Platform components. 5.4.1. Allowing pods to reference images across projects When using the OpenShift image registry, to allow pods in project-a to reference images in project-b , a service account in project-a must be bound to the system:image-puller role in project-b . Note When you create a pod service account or a namespace, wait until the service account is provisioned with a docker pull secret; if you create a pod before its service account is fully provisioned, the pod fails to access the OpenShift image registry. Procedure To allow pods in project-a to reference images in project-b , bind a service account in project-a to the system:image-puller role in project-b : $ oc policy add-role-to-user \ system:image-puller system:serviceaccount:project-a:default \ --namespace=project-b After adding that role, the pods in project-a that reference the default service account are able to pull images from project-b . To allow access for any service account in project-a , use the group: $ oc policy add-role-to-group \ system:image-puller system:serviceaccounts:project-a \ --namespace=project-b 5.4.2. Allowing pods to reference images from other secured registries To pull a secured container from other private or secured registries, you must create a pull secret from your container client credentials, such as Docker or Podman, and add it to your service account. Both Docker and Podman use a configuration file to store authentication details to log in to a secured or insecure registry: Docker : By default, Docker uses $HOME/.docker/config.json . Podman : By default, Podman uses $HOME/.config/containers/auth.json . These files store your authentication information if you have previously logged in to a secured or insecure registry.
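For example, logging in with Podman populates this file; the registry shown here is one of the included authorities mentioned above:

$ podman login registry.redhat.io

After a successful login, the credentials are stored as a base64-encoded entry for that registry in $HOME/.config/containers/auth.json .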
Note Both Docker and Podman credential files and the associated pull secret can contain multiple references to the same registry if they have unique paths, for example, quay.io and quay.io/<example_repository> . However, neither Docker nor Podman support multiple entries for the exact same registry path. Example config.json file { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io/repository-main":{ "auth":"b3Blb=", "email":"[email protected]" } } } Example pull secret apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: "2021-09-09T19:10:11Z" name: pull-secret namespace: default resourceVersion: "37676" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque Procedure Create a secret from an existing authentication file: For Docker clients using .docker/config.json , enter the following command: USD oc create secret generic <pull_secret_name> \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson For Podman clients using .config/containers/auth.json , enter the following command: USD oc create secret generic <pull_secret_name> \ --from-file=<path/to/.config/containers/auth.json> \ --type=kubernetes.io/podmanconfigjson If you do not already have a Docker credentials file for the secured registry, you can create a secret by running: USD oc create secret docker-registry <pull_secret_name> \ --docker-server=<registry_server> \ --docker-username=<user_name> \ --docker-password=<password> \ --docker-email=<email> To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match the name of the service account the pod uses. The default service account is default : USD oc secrets link default <pull_secret_name> --for=pull 5.4.2.1. Pulling from private registries with delegated authentication A private registry can delegate authentication to a separate service. In these cases, image pull secrets must be defined for both the authentication and registry endpoints. Procedure Create a secret for the delegated authentication server: USD oc create secret docker-registry \ --docker-server=sso.redhat.com \ [email protected] \ --docker-password=******** \ --docker-email=unused \ redhat-connect-sso secret/redhat-connect-sso Create a secret for the private registry: USD oc create secret docker-registry \ --docker-server=privateregistry.example.com \ [email protected] \ --docker-password=******** \ --docker-email=unused \ private-registry secret/private-registry 5.4.3. Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. Important To transfer your cluster to another owner, you must first initiate the transfer in OpenShift Cluster Manager Hybrid Cloud Console , and then update the pull secret on the cluster. Updating a cluster's pull secret without initiating the transfer in OpenShift Cluster Manager causes the cluster to stop reporting Telemetry metrics in OpenShift Cluster Manager. For more information about transferring cluster ownership , see "Transferring cluster ownership" in the Red Hat OpenShift Cluster Manager documentation. 
Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. | [
"registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2",
"oc tag <source> <destination>",
"oc tag ruby:2.0 ruby:static-2.0",
"oc tag --alias=true <source> <destination>",
"oc delete istag/ruby:latest",
"oc tag -d ruby:latest",
"<image_stream_name>:<tag>",
"<image_stream_name>@<id>",
"openshift/ruby-20-centos7:2.0",
"registry.redhat.io/rhel7:latest",
"centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e",
"oc policy add-role-to-user system:image-puller system:serviceaccount:project-a:default --namespace=project-b",
"oc policy add-role-to-group system:image-puller system:serviceaccounts:project-a --namespace=project-b",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io/repository-main\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: \"2021-09-09T19:10:11Z\" name: pull-secret namespace: default resourceVersion: \"37676\" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque",
"oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"oc create secret generic <pull_secret_name> --from-file=<path/to/.config/containers/auth.json> --type=kubernetes.io/podmanconfigjson",
"oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>",
"oc secrets link default <pull_secret_name> --for=pull",
"oc create secret docker-registry --docker-server=sso.redhat.com [email protected] --docker-password=******** --docker-email=unused redhat-connect-sso secret/redhat-connect-sso",
"oc create secret docker-registry --docker-server=privateregistry.example.com [email protected] --docker-password=******** --docker-email=unused private-registry secret/private-registry",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/images/managing-images |
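As a further sketch of the --scheduled=true flag described above (the destination image stream tag rhel7:latest is a placeholder; the source reuses the registry.redhat.io/rhel7:latest example): USD oc tag --scheduled=true registry.redhat.io/rhel7:latest rhel7:latest With this in place, OpenShift Container Platform periodically re-imports the source image, so the destination image stream tag tracks updates published to the external registry.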
Chapter 16. Deleting applications | Chapter 16. Deleting applications You can delete applications created in your project. 16.1. Deleting applications using the Developer perspective You can delete an application and all of its associated components using the Topology view in the Developer perspective: Click the application you want to delete to see the side panel with the resource details of the application. Click the Actions drop-down menu displayed on the upper right of the panel, and select Delete Application to see a confirmation dialog box. Enter the name of the application and click Delete to delete it. You can also right-click the application you want to delete and click Delete Application to delete it. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/building_applications/odc-deleting-applications |
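A hedged CLI counterpart to the console flow above: assuming the application's resources carry the app.kubernetes.io/part-of grouping label that the Developer perspective applies, and treating myapp and my-project as placeholders, you can delete the matching resources with: USD oc delete all -l app.kubernetes.io/part-of=myapp -n my-project Verify what the selector matches first with oc get all -l app.kubernetes.io/part-of=myapp -n my-project , because oc delete all covers only the common resource types.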
Chapter 5. Installing a cluster on vSphere using the Agent-based Installer | Chapter 5. Installing a cluster on vSphere using the Agent-based Installer The Agent-based installation method provides the flexibility to boot your on-premises servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. Agent-based installation is a subcommand of the OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an OpenShift Container Platform cluster with an available release image. 5.1. Additional resources Preparing to install with the Agent-based Installer | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_vmware_vsphere/installing-vsphere-agent-based-installer |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/migrating_red_hat_build_of_openjdk_8_to_red_hat_build_of_openjdk_11/making-open-source-more-inclusive |
Chapter 86. DockerOutput schema reference | Chapter 86. DockerOutput schema reference Used in: Build The type property is a discriminator that distinguishes use of the DockerOutput type from ImageStreamOutput . It must have the value docker for the type DockerOutput . Property Description image The full name which should be used for tagging and pushing the newly built image. For example quay.io/my-organization/my-custom-connect:latest . Required. string pushSecret Container Registry Secret with the credentials for pushing the newly built image. string additionalKanikoOptions Configures additional options which will be passed to the Kaniko executor when building the new Connect image. Allowed options are: --customPlatform, --insecure, --insecure-pull, --insecure-registry, --log-format, --log-timestamp, --registry-mirror, --reproducible, --single-snapshot, --skip-tls-verify, --skip-tls-verify-pull, --skip-tls-verify-registry, --verbosity, --snapshotMode, --use-new-run. These options will be used only on Kubernetes, where the Kaniko executor is used. They will be ignored on OpenShift. The options are described in the Kaniko GitHub repository . Changing this field does not trigger a new build of the Kafka Connect image. string array type Must be docker . string | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-DockerOutput-reference
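To show how these properties fit together, the following is a minimal sketch of a KafkaConnect build specification that pushes the built image to an external registry. The image name reuses the example above; the secret name my-registry-credentials and the omitted plugin list are assumptions for illustration:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ... other Kafka Connect configuration ...
  build:
    output:
      type: docker                                            # discriminator selecting DockerOutput
      image: quay.io/my-organization/my-custom-connect:latest # full name for tagging and pushing
      pushSecret: my-registry-credentials                     # Secret with registry push credentials
    # A real build specification also requires a plugins list naming
    # the connector artifacts to add to the new image.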
Appendix A. Component Versions | Appendix A. Component Versions This appendix is a list of components and their versions in the Red Hat Enterprise Linux 6.9 release. Table A.1. Component Versions Component Version Kernel 2.6.32-696 QLogic qla2xxx driver 8.07.00.26.06.8-k QLogic ql2xxx firmware ql2100-firmware-1.19.38-3.1 ql2200-firmware-2.02.08-3.1 ql23xx-firmware-3.03.27-3.1 ql2400-firmware-7.03.00-1 ql2500-firmware-7.03.00-1 Emulex lpfc driver 0:11.0.0.5 iSCSI initiator utils iscsi-initiator-utils-6.2.0.873-26 DM-Multipath device-mapper-multipath-0.4.9-100 LVM lvm2-2.02.143-12 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_release_notes/appe-red_hat_enterprise_linux-6.9_release_notes-component_versions |
Chapter 4. GFS2 quota management | Chapter 4. GFS2 quota management File system quotas are used to limit the amount of file system space a user or group can use. A user or group does not have a quota limit until one is set. When a GFS2 file system is mounted with the quota=on or quota=account option, GFS2 keeps track of the space used by each user and group even when there are no limits in place. GFS2 updates quota information in a transactional way so system crashes do not require quota usages to be reconstructed. To prevent a performance slowdown, a GFS2 node synchronizes updates to the quota file only periodically. The fuzzy quota accounting can allow users or groups to slightly exceed the set limit. To minimize this, GFS2 dynamically reduces the synchronization period as a hard quota limit is approached. Note GFS2 supports the standard Linux quota facilities. In order to use this, you must install the quota RPM. This is the preferred way to administer quotas on GFS2 and should be used for all new deployments of GFS2 using quotas. For more information about disk quotas, see the man pages of the following commands: quotacheck edquota repquota quota 4.1. Configuring GFS2 disk quotas To implement disk quotas for GFS2 file systems, there are three steps to perform. The steps are as follows: Set up quotas in enforcement or accounting mode. Initialize the quota database file with current block usage information. Assign quota policies. (In accounting mode, these policies are not enforced.) Each of these steps is discussed in detail in the following sections. 4.1.1. Setting up quotas in enforcement or accounting mode In GFS2 file systems, quotas are disabled by default. To enable quotas for a file system, mount the file system with the quota=on option specified. To mount a file system with quotas enabled, specify quota=on for the options argument when creating the GFS2 file system resource in a cluster. For example, the following command specifies that the GFS2 Filesystem resource being created will be mounted with quotas enabled. It is possible to keep track of disk usage and maintain quota accounting for every user and group without enforcing the limit and warn values. To do this, mount the file system with the quota=account option specified. To mount a file system with quotas disabled, specify quota=off for the options argument when creating the GFS2 file system resource in a cluster. 4.1.2. Creating the quota database files After each quota-enabled file system is mounted, the system is capable of working with disk quotas. However, the file system itself is not yet ready to support quotas. The next step is to run the quotacheck command. The quotacheck command examines quota-enabled file systems and builds a table of the current disk usage per file system. The table is then used to update the operating system's copy of disk usage. In addition, the file system's disk quota files are updated. To create the quota files on the file system, use the -u and the -g options of the quotacheck command; both of these options must be specified for user and group quotas to be initialized. For example, if quotas are enabled for the /home file system, create the files in the /home directory:
4.1.3. Assigning quotas per user The last step is assigning the disk quotas with the edquota command. Note that if you have mounted your file system in accounting mode (with the quota=account option specified), the quotas are not enforced. To configure the quota for a user, as root in a shell prompt, execute the command: Perform this step for each user who needs a quota. For example, if a quota is enabled for the /home partition ( /dev/VolGroup00/LogVol02 in the example below) and the command edquota testuser is executed, the following is shown in the editor configured as the default for the system: Note The text editor defined by the EDITOR environment variable is used by edquota . To change the editor, set the EDITOR environment variable in your ~/.bash_profile file to the full path of the editor of your choice. The first column is the name of the file system that has a quota enabled for it. The second column shows how many blocks the user is currently using. The next two columns are used to set soft and hard block limits for the user on the file system. The soft block limit defines the maximum amount of disk space that can be used. The hard block limit is the absolute maximum amount of disk space that a user or group can use. Once this limit is reached, no further disk space can be used. The GFS2 file system does not maintain quotas for inodes, so these columns do not apply to GFS2 file systems and will be blank. If any of the values are set to 0, that limit is not set. In the text editor, change the limits. For example: To verify that the quota for the user has been set, use the following command: You can also set quotas from the command line with the setquota command. For information about the setquota command, see the setquota (8) man page. 4.1.4. Assigning quotas per group Quotas can also be assigned on a per-group basis. Note that if you have mounted your file system in accounting mode (with the quota=account option specified), the quotas are not enforced. To set a group quota for the devel group (the group must exist prior to setting the group quota), use the following command: This command displays the existing quota for the group in the text editor: The GFS2 file system does not maintain quotas for inodes, so these columns do not apply to GFS2 file systems and will be blank. Modify the limits, then save the file. To verify that the group quota has been set, use the following command: 4.2. Managing GFS2 disk Quotas If quotas are implemented, they need some maintenance, mostly in the form of watching to see if the quotas are exceeded and making sure the quotas are accurate. If users repeatedly exceed their quotas or consistently reach their soft limits, a system administrator has a few choices to make depending on what type of users they are and how much disk space impacts their work. The administrator can either help the user determine how to use less disk space or increase the user's disk quota. You can create a disk usage report by running the repquota utility. For example, the command repquota /home produces this output: To view the disk usage report for all (option -a ) quota-enabled file systems, use the command: The -- displayed after each user is a quick way to determine whether the block limits have been exceeded. If the block soft limit is exceeded, a + appears in place of the first - in the output. The second - indicates the inode limit, but GFS2 file systems do not support inode limits so that character will remain as - . GFS2 file systems do not support a grace period, so the grace column will remain blank. Note that the repquota command is not supported over NFS, irrespective of the underlying file system. 4.3.
Keeping GFS2 disk quotas accurate with the quotacheck command If you enable quotas on your file system after a period of time when you have been running with quotas disabled, you should run the quotacheck command to create, check, and repair quota files. Additionally, you may want to run the quotacheck command if you think your quota files may not be accurate, as may occur when a file system is not unmounted cleanly after a system crash. For more information about the quotacheck command, see the quotacheck(8) man page. Note Run quotacheck when the file system is relatively idle on all nodes because disk activity may affect the computed quota values. 4.4. Synchronizing quotas with the quotasync Command GFS2 stores all quota information in its own internal file on disk. A GFS2 node does not update this quota file for every file system write; rather, by default it updates the quota file once every 60 seconds. This is necessary to avoid contention among nodes writing to the quota file, which would cause a slowdown in performance. As a user or group approaches their quota limit, GFS2 dynamically reduces the time between its quota-file updates to prevent the limit from being exceeded. The normal time period between quota synchronizations is a tunable parameter, quota_quantum . You can change this from its default value of 60 seconds using the quota_quantum= mount option, as described in the "GFS2-Specific Mount Options" table in Mounting a GFS2 file system that specifies mount options . The quota_quantum parameter must be set on each node and each time the file system is mounted. Changes to the quota_quantum parameter are not persistent across unmounts. You can update the quota_quantum value with the mount -o remount command. You can use the quotasync command to synchronize the quota information from a node to the on-disk quota file between the automatic updates performed by GFS2. Usage Synchronizing Quota Information u Sync the user quota files. g Sync the group quota files. a Sync all file systems that are currently quota-enabled and support sync. When -a is absent, a file system mountpoint should be specified. mountpoint Specifies the GFS2 file system to which the actions apply. You can tune the time between synchronizations by specifying a quota-quantum mount option. MountPoint Specifies the GFS2 file system to which the actions apply. secs Specifies the new time period between regular quota-file synchronizations by GFS2. Smaller values may increase contention and slow down performance. The following example synchronizes all the cached dirty quotas from the node it is run on to the on-disk quota file for the file system /mnt/mygfs2 . The following example changes the default time period between regular quota-file updates to one hour (3600 seconds) for file system /mnt/mygfs2 when remounting that file system on logical volume /dev/volgroup/logical_volume . | [
"pcs resource create gfs2mount Filesystem options=\"quota=on\" device=BLOCKDEVICE directory=MOUNTPOINT fstype=gfs2 clone",
"quotacheck -ug /home",
"edquota username",
"Disk quotas for user testuser (uid 501): Filesystem blocks soft hard inodes soft hard /dev/VolGroup00/LogVol02 440436 0 0",
"Disk quotas for user testuser (uid 501): Filesystem blocks soft hard inodes soft hard /dev/VolGroup00/LogVol02 440436 500000 550000",
"quota testuser",
"edquota -g devel",
"Disk quotas for group devel (gid 505): Filesystem blocks soft hard inodes soft hard /dev/VolGroup00/LogVol02 440400 0 0",
"quota -g devel",
"*** Report for user quotas on device /dev/mapper/VolGroup00-LogVol02 Block grace time: 7days; Inode grace time: 7days Block limits File limits User used soft hard grace used soft hard grace ---------------------------------------------------------------------- root -- 36 0 0 4 0 0 kristin -- 540 0 0 125 0 0 testuser -- 440400 500000 550000 37418 0 0",
"repquota -a",
"quotasync [-ug] -a| mountpoint",
"mount -o quota_quantum= secs ,remount BlockDevice MountPoint",
"quotasync -ug /mnt/mygfs2",
"mount -o quota_quantum=3600,remount /dev/volgroup/logical_volume /mnt/mygfs2"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_gfs2_file_systems/assembly_gfs2-disk-quota-administration-configuring-gfs2-file-systems |
Node APIs | Node APIs OpenShift Container Platform 4.12 Reference guide for node APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/node_apis/index |
Chapter 4. Configuring Satellite Server with external services | Chapter 4. Configuring Satellite Server with external services If you do not want to configure the DNS, DHCP, and TFTP services on Satellite Server, use this section to configure your Satellite Server to work with external DNS, DHCP, and TFTP services. 4.1. Configuring Satellite Server with external DNS You can configure Satellite Server with external DNS. Satellite Server uses the nsupdate utility to update DNS records on the remote server. To make any changes persistent, you must enter the satellite-installer command with the options appropriate for your environment. Prerequisites You must have a configured external DNS server. This guide assumes you have an existing installation. Procedure Copy the /etc/rndc.key file from the external DNS server to Satellite Server: Configure the ownership, permissions, and SELinux context: To test the nsupdate utility, add a host remotely: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dns.yml file: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Satellite Server and select Refresh from the list in the Actions column. Associate the DNS service with the appropriate subnets and domain. 4.2. Configuring Satellite Server with external DHCP To configure Satellite Server with external DHCP, you must complete the following procedures: Section 4.2.1, "Configuring an external DHCP server to use with Satellite Server" Section 4.2.2, "Configuring Satellite Server with an external DHCP server" 4.2.1. Configuring an external DHCP server to use with Satellite Server To configure an external DHCP server running Red Hat Enterprise Linux to use with Satellite Server, you must install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages. You must also share the DHCP configuration and lease files with Satellite Server. The example in this procedure uses the distributed Network File System (NFS) protocol to share the DHCP configuration and lease files. Note If you use dnsmasq as an external DHCP server, enable the dhcp-no-override setting. This is required because Satellite creates configuration files on the TFTP server under the grub2/ subdirectory. If the dhcp-no-override setting is disabled, hosts fetch the bootloader and its configuration from the root directory, which might cause an error. Procedure On your Red Hat Enterprise Linux host, install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages: Generate a security token: Edit the dhcpd configuration file for all subnets and add the key generated by tsig-keygen . The following is an example: Note that the option routers value is the IP address of your Satellite Server or Capsule Server that you want to use with an external DHCP service. On Satellite Server, define each subnet. Do not set DHCP Capsule for the defined Subnet yet. To prevent conflicts, set up the lease and reservation ranges separately. For example, if the lease range is 192.168.38.10 to 192.168.38.100, in the Satellite web UI define the reservation range as 192.168.38.101 to 192.168.38.250. 
Configure the firewall for external access to the DHCP server: Make the changes persistent: On Satellite Server, determine the UID and GID of the foreman user: On the DHCP server, create the foreman user and group with the same IDs as determined in a previous step: To ensure that the configuration files are accessible, restore the read and execute flags: Enable and start the DHCP service: Export the DHCP configuration and lease files using NFS: Create directories for the DHCP configuration and lease files that you want to export using NFS: To create mount points for the created directories, add the following line to the /etc/fstab file: Mount the file systems in /etc/fstab : Ensure the following lines are present in /etc/exports : Note that the IP address that you enter is the Satellite or Capsule IP address that you want to use with an external DHCP service. Reload the NFS server: Configure the firewall for DHCP omapi port 7911: Optional: Configure the firewall for external access to NFS. Clients are configured using NFSv3. Make the changes persistent: 4.2.2. Configuring Satellite Server with an external DHCP server You can configure Satellite Server with an external DHCP server. Prerequisites Ensure that you have configured an external DHCP server and that you have shared the DHCP configuration and lease files with Satellite Server. For more information, see Section 4.2.1, "Configuring an external DHCP server to use with Satellite Server" . Procedure Install the nfs-utils package: Create the DHCP directories for NFS: Change the file owner: Verify communication with the NFS server and the Remote Procedure Call (RPC) communication paths: Add the following lines to the /etc/fstab file: Mount the file systems on /etc/fstab : To verify that the foreman-proxy user can access the files that are shared over the network, display the DHCP configuration and lease files: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dhcp.yml file: Associate the DHCP service with the appropriate subnets and domain. 4.3. Configuring Satellite Server with external TFTP You can configure Satellite Server with external TFTP services. Procedure Create the TFTP directory for NFS: In the /etc/fstab file, add the following line: Mount the file systems in /etc/fstab : Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/tftp.yml file: If the TFTP service is running on a different server than the DHCP service, update the tftp_servername setting with the FQDN or IP address of the server that the TFTP service is running on: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Satellite Server and select Refresh from the list in the Actions column. Associate the TFTP service with the appropriate subnets and domain. 4.4. Configuring Satellite Server with external IdM DNS When Satellite Server adds a DNS record for a host, it first determines which Capsule is providing DNS for that domain. It then communicates with the Capsule that is configured to provide DNS service for your deployment and adds the record. The hosts are not involved in this process. Therefore, you must install and configure the IdM client on the Satellite or Capsule that is currently configured to provide a DNS service for the domain you want to manage using the IdM server. Satellite Server can be configured to use a Red Hat Identity Management (IdM) server to provide DNS service.
For more information about Red Hat Identity Management, see the Linux Domain Identity, Authentication, and Policy Guide . To configure Satellite Server to use a Red Hat Identity Management (IdM) server to provide DNS service, use one of the following procedures: Section 4.4.1, "Configuring dynamic DNS update with GSS-TSIG authentication" Section 4.4.2, "Configuring dynamic DNS update with TSIG authentication" To revert to internal DNS service, use the following procedure: Section 4.4.3, "Reverting to internal DNS service" Note You are not required to use Satellite Server to manage DNS. When you are using the realm enrollment feature of Satellite, where provisioned hosts are enrolled automatically to IdM, the ipa-client-install script creates DNS records for the client. Configuring Satellite Server with external IdM DNS and realm enrollment are mutually exclusive. For more information about configuring realm enrollment, see External Authentication for Provisioned Hosts in Installing Satellite Server in a connected network environment . 4.4.1. Configuring dynamic DNS update with GSS-TSIG authentication You can configure the IdM server to use the generic security service algorithm for secret key transaction (GSS-TSIG) technology defined in RFC3645 . To configure the IdM server to use the GSS-TSIG technology, you must install the IdM client on the Satellite Server base operating system. Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements for IdM in the Installing Identity Management Guide . You must contact the IdM server administrator to ensure that you obtain an account on the IdM server with permissions to create zones on the IdM server. You should create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with GSS-TSIG authentication, complete the following steps: Creating a Kerberos principal on the IdM server Obtain a Kerberos ticket for the account obtained from the IdM administrator: Create a new Kerberos principal for Satellite Server to use to authenticate on the IdM server: Installing and configuring the IdM client On the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment, install the ipa-client package: Configure the IdM client by running the installation script and following the on-screen prompts: Obtain a Kerberos ticket: Remove any preexisting keytab : Obtain the keytab for this system: Note When adding a keytab to a standby system with the same host name as the original system in service, add the -r option to prevent generating new credentials and rendering the credentials on the original system invalid. For the dns.keytab file, set the group and owner to foreman-proxy : Optional: To verify that the keytab file is valid, enter the following command: Configuring DNS zones in the IdM web UI Create and configure the zone that you want to manage: Navigate to Network Services > DNS > DNS Zones . Select Add and enter the zone name. For example, example.com . Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Enable Allow PTR sync . Click Save to save the changes. Create and configure the reverse zone: Navigate to Network Services > DNS > DNS Zones .
Click Add . Select Reverse zone IP network and add the network address in CIDR format to enable reverse lookups. Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Click Save to save the changes. Configuring the Satellite or Capsule Server that manages the DNS service for the domain Configure your Satellite Server or Capsule Server to connect to your DNS service: For each affected Capsule, update the configuration of that Capsule in the Satellite web UI: In the Satellite web UI, navigate to Infrastructure > Capsules , locate the Satellite Server, and from the list in the Actions column, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and select the domain name. In the Domain tab, ensure DNS Capsule is set to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to None . In the Domains tab, select the domain that you want to manage using the IdM server. In the Capsules tab, ensure Reverse DNS Capsule is set to the Capsule where the subnet is connected. Click Submit to save the changes. 4.4.2. Configuring dynamic DNS update with TSIG authentication You can configure an IdM server to use the secret key transaction authentication for DNS (TSIG) technology that uses the rndc.key key file for authentication. The TSIG protocol is defined in RFC2845 . Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements in the Linux Domain Identity, Authentication, and Policy Guide . You must obtain root user access on the IdM server. You must confirm whether Satellite Server or Capsule Server is configured to provide DNS service for your deployment. You must configure DNS, DHCP and TFTP services on the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment. You must create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with TSIG authentication, complete the following steps: Enabling external updates to the DNS zone in the IdM server On the IdM Server, add the following to the top of the /etc/named.conf file: Reload the named service to make the changes take effect: In the IdM web UI, navigate to Network Services > DNS > DNS Zones and click the name of the zone. In the Settings tab, apply the following changes: Add the following in the BIND update policy box: Set Dynamic update to True . Click Update to save the changes. Copy the /etc/rndc.key file from the IdM server to the base operating system of your Satellite Server. Enter the following command: To set the correct ownership, permissions, and SELinux context for the rndc.key file, enter the following command: Assign the foreman-proxy user to the named group manually. Normally, satellite-installer ensures that the foreman-proxy user belongs to the named UNIX group, however, in this scenario Satellite does not manage users and groups, therefore you need to assign the foreman-proxy user to the named group manually. 
On Satellite Server, enter the following satellite-installer command to configure Satellite to use the external DNS server: Testing external updates to the DNS zone in the IdM server Ensure that the key in the /etc/rndc.key file on Satellite Server is the same key file that is used on the IdM server: On Satellite Server, create a test DNS entry for a host. For example, host test.example.com with an A record of 192.168.25.20 on the IdM server at 192.168.25.1 . On Satellite Server, test the DNS entry: To view the entry in the IdM web UI, navigate to Network Services > DNS > DNS Zones . Click the name of the zone and search for the host by name. If resolved successfully, remove the test DNS entry: Confirm that the DNS entry was removed: The above nslookup command fails and returns the SERVFAIL error message if the record was successfully deleted. 4.4.3. Reverting to internal DNS service You can revert to using Satellite Server and Capsule Server as your DNS providers. You can use a backup of the answer file that was created before configuring external DNS, or you can create a backup of the answer file. For more information about answer files, see Configuring Satellite Server . Procedure On the Satellite or Capsule Server that you want to configure to manage DNS service for the domain, complete the following steps: Configuring Satellite or Capsule as a DNS server If you have created a backup of the answer file before configuring external DNS, restore the answer file and then enter the satellite-installer command: If you do not have a suitable backup of the answer file, create a backup of the answer file now. To configure Satellite or Capsule as DNS server without using an answer file, enter the following satellite-installer command on Satellite or Capsule: For more information, see Configuring DNS, DHCP, and TFTP on Capsule Server . After you run the satellite-installer command to make any changes to your Capsule configuration, you must update the configuration of each affected Capsule in the Satellite web UI. Updating the configuration in the Satellite web UI In the Satellite web UI, navigate to Infrastructure > Capsules . For each Capsule that you want to update, from the Actions list, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and click the domain name that you want to configure. In the Domain tab, set DNS Capsule to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to DHCP or Internal DB . In the Domains tab, select the domain that you want to manage using Satellite or Capsule. In the Capsules tab, set Reverse DNS Capsule to the Capsule where the subnet is connected. Click Submit to save the changes. | [
"scp root@ dns.example.com :/etc/rndc.key /etc/foreman-proxy/rndc.key",
"restorecon -v /etc/foreman-proxy/rndc.key chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key chmod -v 640 /etc/foreman-proxy/rndc.key",
"echo -e \"server DNS_IP_Address \\n update add aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key nslookup aaa.example.com DNS_IP_Address echo -e \"server DNS_IP_Address \\n update delete aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key",
"satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" DNS_IP_Address \" --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key",
"dnf install dhcp-server bind-utils",
"tsig-keygen -a hmac-md5 omapi_key",
"cat /etc/dhcp/dhcpd.conf default-lease-time 604800; max-lease-time 2592000; log-facility local7; subnet 192.168.38.0 netmask 255.255.255.0 { range 192.168.38.10 192.168.38.100 ; option routers 192.168.38.1 ; option subnet-mask 255.255.255.0 ; option domain-search \" virtual.lan \"; option domain-name \" virtual.lan \"; option domain-name-servers 8.8.8.8 ; } omapi-port 7911; key omapi_key { algorithm hmac-md5; secret \" My_Secret \"; }; omapi-key omapi_key;",
"firewall-cmd --add-service dhcp",
"firewall-cmd --runtime-to-permanent",
"id -u foreman 993 id -g foreman 990",
"groupadd -g 990 foreman useradd -u 993 -g 990 -s /sbin/nologin foreman",
"chmod o+rx /etc/dhcp/ chmod o+r /etc/dhcp/dhcpd.conf chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf",
"systemctl enable --now dhcpd",
"dnf install nfs-utils systemctl enable --now nfs-server",
"mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp",
"/var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0 /etc/dhcp /exports/etc/dhcp none bind,auto 0 0",
"mount -a",
"/exports 192.168.38.1 (rw,async,no_root_squash,fsid=0,no_subtree_check) /exports/etc/dhcp 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide) /exports/var/lib/dhcpd 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide)",
"exportfs -rva",
"firewall-cmd --add-port=7911/tcp",
"firewall-cmd --add-service mountd --add-service nfs --add-service rpc-bind --zone public",
"firewall-cmd --runtime-to-permanent",
"satellite-maintain packages install nfs-utils",
"mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd",
"chown -R foreman-proxy /mnt/nfs",
"showmount -e DHCP_Server_FQDN rpcinfo -p DHCP_Server_FQDN",
"DHCP_Server_FQDN :/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcp_etc_t:s0\" 0 0 DHCP_Server_FQDN :/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcpd_state_t:s0\" 0 0",
"mount -a",
"su foreman-proxy -s /bin/bash cat /mnt/nfs/etc/dhcp/dhcpd.conf cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases exit",
"satellite-installer --enable-foreman-proxy-plugin-dhcp-remote-isc --foreman-proxy-dhcp-provider=remote_isc --foreman-proxy-dhcp-server= My_DHCP_Server_FQDN --foreman-proxy-dhcp=true --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key --foreman-proxy-plugin-dhcp-remote-isc-key-secret= My_Secret --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911",
"mkdir -p /mnt/nfs/var/lib/tftpboot",
"TFTP_Server_IP_Address :/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context=\"system_u:object_r:tftpdir_rw_t:s0\" 0 0",
"mount -a",
"satellite-installer --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot --foreman-proxy-tftp=true",
"satellite-installer --foreman-proxy-tftp-servername= TFTP_Server_FQDN",
"kinit idm_user",
"ipa service-add capsule/satellite.example.com",
"satellite-maintain packages install ipa-client",
"ipa-client-install",
"kinit admin",
"rm /etc/foreman-proxy/dns.keytab",
"ipa-getkeytab -p capsule/ [email protected] -s idm1.example.com -k /etc/foreman-proxy/dns.keytab",
"chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab",
"kinit -kt /etc/foreman-proxy/dns.keytab capsule/ [email protected]",
"grant capsule\\047 [email protected] wildcard * ANY;",
"grant capsule\\047 [email protected] wildcard * ANY;",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns=true",
"######################################################################## include \"/etc/rndc.key\"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { \"rndc-key\"; }; }; ########################################################################",
"systemctl reload named",
"grant \"rndc-key\" zonesub ANY;",
"scp /etc/rndc.key root@ satellite.example.com :/etc/rndc.key",
"restorecon -v /etc/rndc.key chown -v root:named /etc/rndc.key chmod -v 640 /etc/rndc.key",
"usermod -a -G named foreman-proxy",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" IdM_Server_IP_Address \" --foreman-proxy-dns-ttl=86400 --foreman-proxy-dns=true --foreman-proxy-keyfile=/etc/rndc.key",
"key \"rndc-key\" { algorithm hmac-md5; secret \" secret-key ==\"; };",
"echo -e \"server 192.168.25.1\\n update add test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1 Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20",
"echo -e \"server 192.168.25.1\\n update delete test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1",
"satellite-installer",
"satellite-installer --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\"127.0.0.1\" --foreman-proxy-dns=true"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_disconnected_network_environment/configuring-external-services |
5.45. dash | 5.45. dash 5.45.1. RHBA-2012:1381 - dash bug fix update Updated dash packages that fix one bug are now available for Red Hat Enterprise Linux 6. The dash packages provide the POSIX-compliant Debian Almquist shell intended for small media like floppy disks. Bug Fix BZ# 706147 Prior to this update, the dash shell was not an allowed login shell. As a consequence, users could not log in using the dash shell. This update adds the dash to the /etc/shells list of allowed login shells when installing or upgrading the dash package and removes it from the list when uninstalling the package. Now, users can log in using the dash shell. All users of dash are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/dash
Chapter 4. Configuring user workload monitoring | Chapter 4. Configuring user workload monitoring 4.1. Preparing to configure the user workload monitoring stack This section explains which user-defined monitoring components can be configured, how to enable user workload monitoring, and how to prepare for configuring the user workload monitoring stack. Important Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in the Config map reference for the Cluster Monitoring Operator are supported for configuration. The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources. 4.1.1. Configurable monitoring components This table shows the monitoring components you can configure and the keys used to specify the components in the user-workload-monitoring-config config map. Table 4.1. Configurable monitoring components for user-defined projects Component user-workload-monitoring-config config map key Prometheus Operator prometheusOperator Prometheus prometheus Alertmanager alertmanager Thanos Ruler thanosRuler Warning Different configuration changes to the ConfigMap object result in different outcomes: The pods are not redeployed. Therefore, there is no service outage. The affected pods are redeployed: For single-node clusters, this results in temporary service outage. For multi-node clusters, because of high-availability, the affected pods are gradually rolled out and the monitoring stack remains available. Configuring and resizing a persistent volume always results in a service outage, regardless of high availability. Each procedure that requires a change in the config map includes its expected outcome. 4.1.2. Enabling monitoring for user-defined projects In OpenShift Container Platform, you can enable monitoring for user-defined projects in addition to the default platform monitoring. You can monitor your own projects in OpenShift Container Platform without the need for an additional monitoring solution. Using this feature centralizes monitoring for core platform components and user-defined projects. Note Versions of Prometheus Operator installed using Operator Lifecycle Manager (OLM) are not compatible with user-defined monitoring. Therefore, custom Prometheus instances installed as a Prometheus custom resource (CR) managed by the OLM Prometheus Operator are not supported in OpenShift Container Platform. 4.1.2.1. Enabling monitoring for user-defined projects Cluster administrators can enable monitoring for user-defined projects by setting the enableUserWorkload: true field in the cluster monitoring ConfigMap object. Important You must remove any custom Prometheus instances before enabling monitoring for user-defined projects. Note You must have access to the cluster as a user with the cluster-admin cluster role to enable monitoring for user-defined projects in OpenShift Container Platform. Cluster administrators can then optionally grant users permission to configure the components that are responsible for monitoring user-defined projects. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). You have created the cluster-monitoring-config ConfigMap object. You have optionally created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. 
You can add configuration options to this ConfigMap object for the components that monitor user-defined projects. Note Every time you save configuration changes to the user-workload-monitoring-config ConfigMap object, the pods in the openshift-user-workload-monitoring project are redeployed. It might sometimes take a while for these components to redeploy. Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add enableUserWorkload: true under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1 1 When set to true , the enableUserWorkload parameter enables monitoring for user-defined projects in a cluster. Save the file to apply the changes. Monitoring for user-defined projects is then enabled automatically. Note If you enable monitoring for user-defined projects, the user-workload-monitoring-config ConfigMap object is created by default. Verify that the prometheus-operator , prometheus-user-workload , and thanos-ruler-user-workload pods are running in the openshift-user-workload-monitoring project. It might take a short while for the pods to start: USD oc -n openshift-user-workload-monitoring get pod Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h Additional resources User workload monitoring first steps 4.1.2.2. Granting users permission to configure monitoring for user-defined projects As a cluster administrator, you can assign the user-workload-monitoring-config-edit role to a user. This grants permission to configure and manage monitoring for user-defined projects without giving them permission to configure and manage core OpenShift Container Platform monitoring components. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure Assign the user-workload-monitoring-config-edit role to a user in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring adm policy add-role-to-user \ user-workload-monitoring-config-edit <user> \ --role-namespace openshift-user-workload-monitoring Verify that the user is correctly assigned to the user-workload-monitoring-config-edit role by displaying the related role binding: USD oc describe rolebinding <role_binding_name> -n openshift-user-workload-monitoring Example command USD oc describe rolebinding user-workload-monitoring-config-edit -n openshift-user-workload-monitoring Example output Name: user-workload-monitoring-config-edit Labels: <none> Annotations: <none> Role: Kind: Role Name: user-workload-monitoring-config-edit Subjects: Kind Name Namespace ---- ---- --------- User user1 1 1 In this example, user1 is assigned to the user-workload-monitoring-config-edit role. 4.1.3. Enabling alert routing for user-defined projects In OpenShift Container Platform, an administrator can enable alert routing for user-defined projects. This process consists of the following steps: Enable alert routing for user-defined projects: Use the default platform Alertmanager instance. Use a separate Alertmanager instance only for user-defined projects. 
Grant users permission to configure alert routing for user-defined projects. After you complete these steps, developers and other users can configure custom alerts and alert routing for their user-defined projects. Additional resources Understanding alert routing for user-defined projects 4.1.3.1. Enabling the platform Alertmanager instance for user-defined alert routing You can allow users to create user-defined alert routing configurations that use the main platform instance of Alertmanager. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add enableUserAlertmanagerConfig: true in the alertmanagerMain section under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # ... alertmanagerMain: enableUserAlertmanagerConfig: true 1 # ... 1 Set the enableUserAlertmanagerConfig value to true to allow users to create user-defined alert routing configurations that use the main platform instance of Alertmanager. Save the file to apply the changes. The new configuration is applied automatically. 4.1.3.2. Enabling a separate Alertmanager instance for user-defined alert routing In some clusters, you might want to deploy a dedicated Alertmanager instance for user-defined projects, which can help reduce the load on the default platform Alertmanager instance and can better separate user-defined alerts from default platform alerts. In these cases, you can optionally enable a separate instance of Alertmanager to send alerts for user-defined projects only. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config ConfigMap object: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add enabled: true and enableAlertmanagerConfig: true in the alertmanager section under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true 1 enableAlertmanagerConfig: true 2 1 Set the enabled value to true to enable a dedicated instance of the Alertmanager for user-defined projects in a cluster. Set the value to false or omit the key entirely to disable the Alertmanager for user-defined projects. If you set this value to false or if the key is omitted, user-defined alerts are routed to the default platform Alertmanager instance. 2 Set the enableAlertmanagerConfig value to true to enable users to define their own alert routing configurations with AlertmanagerConfig objects. Save the file to apply the changes. The dedicated instance of Alertmanager for user-defined projects starts automatically. Verification Verify that the user-workload Alertmanager instance has started: # oc -n openshift-user-workload-monitoring get alertmanager Example output NAME VERSION REPLICAS AGE user-workload 0.24.0 2 100s 4.1.3.3. Granting users permission to configure alert routing for user-defined projects You can grant users permission to configure alert routing for user-defined projects. 
Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have enabled monitoring for user-defined projects. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure Assign the alert-routing-edit cluster role to a user in the user-defined project: USD oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user> 1 1 For <namespace> , substitute the namespace for the user-defined project, such as ns1 . For <user> , substitute the username for the account to which you want to assign the role. Additional resources Configuring alert notifications 4.1.4. Granting users permissions for monitoring for user-defined projects As a cluster administrator, you can monitor all core OpenShift Container Platform and user-defined projects. You can also grant developers and other users different permissions: Monitoring user-defined projects Configuring the components that monitor user-defined projects Configuring alert routing for user-defined projects Managing alerts and silences for user-defined projects You can grant the permissions by assigning one of the following monitoring roles or cluster roles: Table 4.2. Monitoring roles Role name Description Project user-workload-monitoring-config-edit Users with this role can edit the user-workload-monitoring-config ConfigMap object to configure Prometheus, Prometheus Operator, Alertmanager, and Thanos Ruler for user-defined workload monitoring. openshift-user-workload-monitoring monitoring-alertmanager-api-reader Users with this role have read access to the user-defined Alertmanager API for all projects, if the user-defined Alertmanager is enabled. openshift-user-workload-monitoring monitoring-alertmanager-api-writer Users with this role have read and write access to the user-defined Alertmanager API for all projects, if the user-defined Alertmanager is enabled. openshift-user-workload-monitoring Table 4.3. Monitoring cluster roles Cluster role name Description Project monitoring-rules-view Users with this cluster role have read access to PrometheusRule custom resources (CRs) for user-defined projects. They can also view the alerts and silences in the Developer perspective of the OpenShift Container Platform web console. Can be bound with RoleBinding to any user project. monitoring-rules-edit Users with this cluster role can create, modify, and delete PrometheusRule CRs for user-defined projects. They can also manage alerts and silences in the Developer perspective of the OpenShift Container Platform web console. Can be bound with RoleBinding to any user project. monitoring-edit Users with this cluster role have the same privileges as users with the monitoring-rules-edit cluster role. Additionally, users can create, read, modify, and delete ServiceMonitor and PodMonitor resources to scrape metrics from services and pods. Can be bound with RoleBinding to any user project. alert-routing-edit Users with this cluster role can create, update, and delete AlertmanagerConfig CRs for user-defined projects. Can be bound with RoleBinding to any user project. Additional resources Granting users permission to configure monitoring for user-defined projects Granting users permission to configure alert routing for user-defined projects 4.1.4.1. Granting user permissions by using the web console You can grant users permissions for the openshift-monitoring project or their own projects, by using the OpenShift Container Platform web console. 
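Whether you use the web console or the CLI, the result of granting a monitoring permission is an ordinary RoleBinding object. For reference, the following is a minimal sketch of such a binding; the user name user1 and the project ns1 are assumptions for illustration, and the binding grants the monitoring-rules-view cluster role from the table above:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: monitoring-rules-view-user1
  namespace: ns1 # the binding applies only in this project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: monitoring-rules-view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user1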
Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. Procedure In the Administrator perspective of the OpenShift Container Platform web console, go to User Management RoleBindings Create binding . In the Binding Type section, select the Namespace Role Binding type. In the Name field, enter a name for the role binding. In the Namespace field, select the project where you want to grant the access. Important The monitoring role or cluster role permissions that you grant to a user by using this procedure apply only to the project that you select in the Namespace field. Select a monitoring role or cluster role from the Role Name list. In the Subject section, select User . In the Subject Name field, enter the name of the user. Select Create to apply the role binding. 4.1.4.2. Granting user permissions by using the CLI You can grant users permissions for the openshift-monitoring project or their own projects, by using the OpenShift CLI ( oc ). Important Whichever role or cluster role you choose, you must bind it against a specific project as a cluster administrator. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure To assign a monitoring role to a user for a project, enter the following command: USD oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1 1 Substitute <role> with the monitoring role that you want to assign, <user> with the user to whom you want to assign the role, and <namespace> with the project where you want to grant the access. To assign a monitoring cluster role to a user for a project, enter the following command: USD oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1 1 Substitute <cluster-role> with the monitoring cluster role that you want to assign, <user> with the user to whom you want to assign the cluster role, and <namespace> with the project where you want to grant the access. 4.1.5. Excluding a user-defined project from monitoring Individual user-defined projects can be excluded from user workload monitoring. To do so, add the openshift.io/user-monitoring label to the project's namespace with a value of false . Procedure Add the label to the project namespace: USD oc label namespace my-project 'openshift.io/user-monitoring=false' To re-enable monitoring, remove the label from the namespace: USD oc label namespace my-project 'openshift.io/user-monitoring-' Note If there were any active monitoring targets for the project, it might take a few minutes for Prometheus to stop scraping them after adding the label. 4.1.6. Disabling monitoring for user-defined projects After enabling monitoring for user-defined projects, you can disable it again by setting enableUserWorkload: false in the cluster monitoring ConfigMap object. Note Alternatively, you can remove enableUserWorkload: true to disable monitoring for user-defined projects. Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Set enableUserWorkload to false under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false Save the file to apply the changes. Monitoring for user-defined projects is then disabled automatically.
Check that the prometheus-operator , prometheus-user-workload , and thanos-ruler-user-workload pods are terminated in the openshift-user-workload-monitoring project. This might take a short while: USD oc -n openshift-user-workload-monitoring get pod Example output No resources found in openshift-user-workload-monitoring project. Note The user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project is not automatically deleted when monitoring for user-defined projects is disabled. This is to preserve any custom configurations that you may have created in the ConfigMap object. 4.2. Configuring performance and scalability for user workload monitoring You can configure the monitoring stack to optimize the performance and scale of your clusters. The following documentation provides information about how to distribute the monitoring components and control the impact of the monitoring stack on CPU and memory resources. 4.2.1. Controlling the placement and distribution of monitoring components You can move the monitoring stack components to specific nodes: Use the nodeSelector constraint with labeled nodes to move any of the monitoring stack components to specific nodes. Assign tolerations to enable moving components to tainted nodes. By controlling the placement and distribution of monitoring components across a cluster, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies. Additional resources Using node selectors to move monitoring components 4.2.1.1. Moving monitoring components to different nodes You can move any of the components that monitor workloads for user-defined projects to specific worker nodes. Warning It is not permitted to move components to control plane or infrastructure nodes. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: USD oc label nodes <node_name> <node_label> 1 1 Replace <node_name> with the name of the node where you want to add the label. Replace <node_label> with the name of the label that you want to use. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Specify the node labels for the nodeSelector constraint for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | # ... <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 # ... 1 Substitute <component> with the appropriate monitoring stack component name. 2 Substitute <node_label_1> with the label you added to the node. 3 Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels. Note If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations.
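For example, the following sketch, which is not part of the procedure above, schedules the Prometheus pods for user-defined projects only on nodes that carry a hypothetical monitoring: prometheus label:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      nodeSelector:
        monitoring: prometheus # assumes nodes labeled with: oc label nodes <node_name> monitoring=prometheus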
Save the file to apply the changes. The components specified in the new configuration are automatically moved to the new nodes, and the pods affected by the new configuration are redeployed. Additional resources Enabling monitoring for user-defined projects Understanding how to update labels on nodes Placing pods on specific nodes using node selectors nodeSelector (Kubernetes documentation) 4.2.1.2. Assigning tolerations to monitoring components You can assign tolerations to the components that monitor user-defined projects, to enable moving them to tainted worker nodes. Scheduling is not permitted on control plane or infrastructure nodes. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Specify tolerations for the component: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification> Substitute <component> and <toleration_specification> accordingly. For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1 . This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the thanosRuler component to tolerate the example taint: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Enabling monitoring for user-defined projects Controlling pod placement using node taints Taints and Tolerations (Kubernetes documentation) 4.2.2. Managing CPU and memory resources for monitoring components You can ensure that the containers that run monitoring components have enough CPU and memory resources by specifying values for resource limits and requests for those components. You can configure these limits and requests for monitoring components that monitor user-defined projects in the openshift-user-workload-monitoring namespace. 4.2.2.1. Specifying limits and requests To configure CPU and memory resources, specify values for resource limits and requests in the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add values to define resource limits and requests for each component you want to configure. 
Important Ensure that the value set for a limit is always equal to or higher than the value set for a request. Otherwise, an error will occur, and the container will not run. Example of setting resource limits and requests apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheus: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosRuler: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources About specifying limits and requests for monitoring components Kubernetes requests and limits documentation (Kubernetes documentation) 4.2.3. Controlling the impact of unbound metrics attributes in user-defined projects Cluster administrators can use the following measures to control the impact of unbound metrics attributes in user-defined projects: Limit the number of samples that can be accepted per target scrape in user-defined projects Limit the number of scraped labels, the length of label names, and the length of label values Create alerts that fire when a scrape sample threshold is reached or when the target cannot be scraped Note Limiting scrape samples can help prevent the issues caused by adding many unbound attributes to labels. Developers can also prevent the underlying cause by limiting the number of unbound attributes that they define for metrics. Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. Additional resources Controlling the impact of unbound metrics attributes in user-defined projects Enabling monitoring for user-defined projects Determining why Prometheus is consuming a lot of disk space 4.2.3.1. Setting scrape sample and label limits for user-defined projects You can limit the number of samples that can be accepted per target scrape in user-defined projects. You can also limit the number of scraped labels, the length of label names, and the length of label values. Warning If you set sample or label limits, no further sample data is ingested for that target scrape after the limit is reached. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the enforcedSampleLimit configuration to data/config.yaml to limit the number of samples that can be accepted per target scrape in user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1 1 A value is required if this parameter is specified. This enforcedSampleLimit example limits the number of samples that can be accepted per target scrape in user-defined projects to 50,000.
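Note As an optional check, upstream Prometheus increments a counter for scrapes that exceed the configured sample limit, so you can watch for targets that hit the limit you just set. A sketch of a query to run in the metrics UI, assuming the metric name from upstream Prometheus:

rate(prometheus_target_scrapes_exceeded_sample_limit_total[5m]) > 0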
Add the enforcedLabelLimit , enforcedLabelNameLengthLimit , and enforcedLabelValueLengthLimit configurations to data/config.yaml to limit the number of scraped labels, the length of label names, and the length of label values in user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedLabelLimit: 500 1 enforcedLabelNameLengthLimit: 50 2 enforcedLabelValueLengthLimit: 600 3 1 Specifies the maximum number of labels per scrape. The default value is 0 , which specifies no limit. 2 Specifies the maximum length in characters of a label name. The default value is 0 , which specifies no limit. 3 Specifies the maximum length in characters of a label value. The default value is 0 , which specifies no limit. Save the file to apply the changes. The limits are applied automatically. 4.2.3.2. Creating scrape sample alerts You can create alerts that notify you when: The target cannot be scraped or is not available for the specified for duration A scrape sample threshold is reached or is exceeded for the specified for duration Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have limited the number of samples that can be accepted per target scrape in user-defined projects, by using enforcedSampleLimit . You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called monitoring-stack-alerts.yaml : apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf "%.4g" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 8 expr: scrape_samples_scraped/50000 > 0.8 9 for: 10m 10 labels: severity: warning 11 1 Defines the name of the alerting rule. 2 Specifies the user-defined project where the alerting rule will be deployed. 3 The TargetDown alert will fire if the target cannot be scraped or is not available for the for duration. 4 The message that will be output when the TargetDown alert fires. 5 The conditions for the TargetDown alert must be true for this duration before the alert is fired. 6 Defines the severity for the TargetDown alert. 7 The ApproachingEnforcedSamplesLimit alert will fire when the defined scrape sample threshold is reached or exceeded for the specified for duration. 8 The message that will be output when the ApproachingEnforcedSamplesLimit alert fires. 9 The threshold for the ApproachingEnforcedSamplesLimit alert. 
In this example the alert will fire when the number of samples per target scrape has exceeded 80% of the enforced sample limit of 50000 . The for duration must also have passed before the alert will fire. The <number> in the expression scrape_samples_scraped/<number> > <threshold> must match the enforcedSampleLimit value defined in the user-workload-monitoring-config ConfigMap object. 10 The conditions for the ApproachingEnforcedSamplesLimit alert must be true for this duration before the alert is fired. 11 Defines the severity for the ApproachingEnforcedSamplesLimit alert. Apply the configuration to the user-defined project: USD oc apply -f monitoring-stack-alerts.yaml 4.2.4. Configuring pod topology spread constraints You can configure pod topology spread constraints for all the pods for user-defined monitoring to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones. You can configure pod topology spread constraints for monitoring pods by using the user-workload-monitoring-config config map. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the following settings under the data/config.yaml field to configure pod topology spread constraints: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option> 1 Specify a name of the component for which you want to set up pod topology spread constraints. 2 Specify a numeric value for maxSkew , which defines the degree to which pods are allowed to be unevenly distributed. 3 Specify a key of node labels for topologyKey . Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler tries to put a balanced number of pods into each domain. 4 Specify a value for whenUnsatisfiable . Available options are DoNotSchedule and ScheduleAnyway . Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew. 5 Specify labelSelector to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Example configuration for Thanos Ruler apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: ScheduleAnyway labelSelector: matchLabels: app.kubernetes.io/name: thanos-ruler Save the file to apply the changes. 
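After the pods are redeployed, you can optionally confirm where the replicas were scheduled. A minimal sketch, assuming the Thanos Ruler example above; the -o wide flag adds a NODE column that you can compare against your topologyKey domains:

USD oc -n openshift-user-workload-monitoring get pods -l app.kubernetes.io/name=thanos-ruler -o wide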
The pods affected by the new configuration are automatically redeployed. Additional resources About pod topology spread constraints for monitoring Controlling pod placement by using pod topology spread constraints Pod Topology Spread Constraints (Kubernetes documentation) 4.3. Storing and recording data for user workload monitoring Store and record your metrics and alerting data, configure logs to specify which activities are recorded, control how long Prometheus retains stored data, and set the maximum amount of disk space for the data. These actions help you protect your data and use them for troubleshooting. 4.3.1. Configuring persistent storage Run cluster monitoring with persistent storage to gain the following benefits: Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated. Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted. For production environments, it is highly recommended to configure persistent storage. Important In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability. 4.3.1.1. Persistent storage prerequisites Dedicate sufficient persistent storage to ensure that the disk does not become full. Use Filesystem as the storage type value for the volumeMode parameter when you configure the persistent volume. Important Do not use a raw block volume, which is described with volumeMode: Block in the PersistentVolume resource. Prometheus cannot use raw block volumes. Prometheus does not support file systems that are not POSIX compliant. For example, some NFS file system implementations are not POSIX compliant. If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant. 4.3.1.2. Configuring a persistent volume claim To use a persistent volume (PV) for monitoring components, you must configure a persistent volume claim (PVC). Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add your PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3 1 Specify the monitoring component for which you want to configure the PVC. 2 Specify an existing storage class. If a storage class is not specified, the default storage class is used. 3 Specify the amount of required storage. 
The following example configures a PVC that claims persistent storage for Thanos Ruler: Example PVC configuration apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 10Gi Note Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed and the new storage configuration is applied. Warning When you update the config map with a PVC configuration, the affected StatefulSet object is recreated, resulting in a temporary service outage. Additional resources Understanding persistent storage PersistentVolumeClaims (Kubernetes documentation) 4.3.1.3. Resizing a persistent volume You can resize a persistent volume (PV) for the instances of Prometheus, Thanos Ruler, and Alertmanager. You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured. Important You can only expand the size of the PVC. Shrinking the storage size is not possible. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have configured at least one PVC for components that monitor user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Manually expand a PVC with the updated storage request; a command-line sketch follows this procedure. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes . Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a new storage size for the PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2 1 The component for which you want to change the storage size. 2 Specify the new size for the storage volume. It must be greater than the previous value. The following example sets the new PVC request to 20 gigabytes for Thanos Ruler: Example storage configuration for thanosRuler apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: resources: requests: storage: 20Gi Note Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Warning When you update the config map with a new storage size, the affected StatefulSet object is recreated, resulting in a temporary service outage.
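For the manual expansion in the first step of this procedure, the following is a minimal command-line sketch using oc patch ; the <pvc_name> placeholder is an assumption that you would resolve with oc get pvc -n openshift-user-workload-monitoring , and the 20Gi size matches the example above:

USD oc -n openshift-user-workload-monitoring patch pvc <pvc_name> -p '{"spec": {"resources": {"requests": {"storage": "20Gi"}}}}'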
Additional resources Prometheus database storage requirements Expanding persistent volume claims (PVCs) with a file system 4.3.2. Modifying retention time and size for Prometheus metrics data By default, Prometheus retains metrics data for 24 hours for user-defined project monitoring. You can modify the retention time for the Prometheus instance to change when the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. Note Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the retentionSize limit. In such cases, the KubePersistentVolumeFillingUp alert fires until the space on a PV is lower than the retentionSize limit. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the retention time and size configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification> 1 retentionSize: <size_specification> 2 1 The retention time: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s . 2 The retention size: a number directly followed by B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), or EB (exabytes). The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance: Example of setting retention time for Prometheus apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h retentionSize: 10GB Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. 4.3.2.1. Modifying the retention time for Thanos Ruler metrics data By default, for user-defined projects, Thanos Ruler automatically retains metrics data for 24 hours. You can modify the retention time to change how long this data is retained by specifying a time value in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ).
Procedure Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the retention time configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1 1 Specify the retention time in the following format: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s . The default is 24h . The following example sets the retention time to 10 days for Thanos Ruler data: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Retention time and size for Prometheus metrics Enabling monitoring for user-defined projects Prometheus database storage requirements Recommended configurable storage technology Understanding persistent storage Optimizing storage 4.3.3. Setting log levels for monitoring components You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, and Thanos Ruler. The following log levels can be applied to the relevant component in the user-workload-monitoring-config ConfigMap object: debug . Log debug, informational, warning, and error messages. info . Log informational, warning, and error messages. warn . Log warning and error messages only. error . Log error messages only. The default log level is info . Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: <log_level> for a component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2 1 The monitoring stack component for which you are setting a log level. Available component values are prometheus , alertmanager , prometheusOperator , and thanosRuler . 2 The log level to set for the component. The available values are error , warn , info , and debug . The default value is info . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Confirm that the log level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level for the prometheus-operator deployment: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Check that the pods for the component are running. 
The following example lists the status of pods: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully. 4.3.4. Enabling the query log file for Prometheus You can configure Prometheus to write all queries that have been run by the engine to a log file. Important Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the queryLogFile parameter for Prometheus under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1 1 Add the full path to the file in which queries will be logged. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Verify that the pods for the component are running. The following sample command lists the status of pods: USD oc -n openshift-user-workload-monitoring get pods Example output ... prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m ... Read the query log: USD oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path> Important Revert the setting in the config map after you have examined the logged query information. Additional resources Enabling monitoring for user-defined projects 4.4. Configuring metrics for user workload monitoring Configure the collection of metrics to monitor how cluster components and your own workloads are performing. You can send ingested metrics to remote systems for long-term storage and add cluster ID labels to the metrics to identify the data coming from different clusters. Additional resources Understanding metrics 4.4.1. Configuring remote write storage You can configure remote write storage to enable Prometheus to send ingested metrics to remote systems for long-term storage. Doing so has no impact on how or for how long Prometheus stores metrics. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. 
Important Red Hat only provides information for configuring remote write senders and does not offer guidance on configuring receiver endpoints. Customers are responsible for setting up their own endpoints that are remote-write compatible. Issues with endpoint receiver configurations are not included in Red Hat production support. You have set up authentication credentials in a Secret object for the remote write endpoint. You must create the secret in the openshift-user-workload-monitoring namespace. Warning To reduce security risks, use HTTPS and authentication to send metrics to an endpoint. Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a remoteWrite: section under data/config.yaml/prometheus , as shown in the following example: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" 1 <endpoint_authentication_credentials> 2 1 The URL of the remote write endpoint. 2 The authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an Authorization request header, Basic authentication, OAuth 2.0, and TLS client. See Supported remote write authentication settings for sample configurations of supported authentication methods. Add write relabel configuration values after the authentication credentials: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1 1 Add configuration for metrics that you want to send to the remote endpoint. Example of forwarding a single metric called my_metric apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep Example of forwarding metrics called my_metric_1 and my_metric_2 in my_namespace namespace apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep Save the file to apply the changes. The new configuration is applied automatically. 4.4.1.1. Supported remote write authentication settings You can use different methods to authenticate with a remote write endpoint. Currently supported authentication methods are AWS Signature Version 4, basic authentication, authorization, OAuth 2.0, and TLS client. The following table provides details about supported authentication methods for use with remote write. Authentication method Config map field Description AWS Signature Version 4 sigv4 This method uses AWS Signature Version 4 authentication to sign requests. You cannot use this method simultaneously with authorization, OAuth 2.0, or Basic authentication. 
Basic authentication basicAuth Basic authentication sets the Authorization header on every remote write request with the configured username and password. authorization authorization Authorization sets the Authorization header on every remote write request using the configured token. OAuth 2.0 oauth2 An OAuth 2.0 configuration uses the client credentials grant type. Prometheus fetches an access token from tokenUrl with the specified client ID and client secret to access the remote write endpoint. You cannot use this method simultaneously with authorization, AWS Signature Version 4, or Basic authentication. TLS client tlsConfig A TLS client configuration specifies the CA certificate, the client certificate, and the client key file information used to authenticate with the remote write endpoint server using TLS. The sample configuration assumes that you have already created a CA certificate file, a client certificate file, and a client key file. 4.4.1.2. Example remote write authentication settings The following samples show different authentication settings you can use to connect to a remote write endpoint. Each sample also shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. Each sample configures authentication for use with monitoring for user-defined projects in the openshift-user-workload-monitoring namespace. 4.4.1.2.1. Sample YAML for AWS Signature Version 4 authentication The following shows the settings for a sigv4 secret named sigv4-credentials in the openshift-user-workload-monitoring namespace. apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-user-workload-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque 1 The AWS API access key. 2 The AWS API secret key. The following shows sample AWS Signature Version 4 remote write authentication settings that use a Secret object named sigv4-credentials in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://authorization.example.com/api/write" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7 1 The AWS region. 2 4 The name of the Secret object containing the AWS API access credentials. 3 The key that contains the AWS API access key in the specified Secret object. 5 The key that contains the AWS API secret key in the specified Secret object. 6 The name of the AWS profile that is being used to authenticate. 7 The unique identifier for the Amazon Resource Name (ARN) assigned to your role. 4.4.1.2.2. Sample YAML for Basic authentication The following shows sample Basic authentication settings for a Secret object named rw-basic-auth in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-user-workload-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque 1 The username. 2 The password.
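Instead of applying the Secret manifest above, you can create the equivalent object in a single step. A sketch, with the literal values left as placeholders for your own credentials:

USD oc -n openshift-user-workload-monitoring create secret generic rw-basic-auth --from-literal=user=<basic_username> --from-literal=password=<basic_password>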
The following sample shows a basicAuth remote write configuration that uses a Secret object named rw-basic-auth in the openshift-user-workload-monitoring namespace. It assumes that you have already set up authentication credentials for the endpoint. apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://basicauth.example.com/api/write" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4 1 3 The name of the Secret object that contains the authentication credentials. 2 The key that contains the username in the specified Secret object. 4 The key that contains the password in the specified Secret object. 4.4.1.2.3. Sample YAML for authentication with a bearer token using a Secret object The following shows bearer token settings for a Secret object named rw-bearer-auth in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-user-workload-monitoring stringData: token: <authentication_token> 1 type: Opaque 1 The authentication token. The following shows sample bearer token config map settings that use a Secret object named rw-bearer-auth in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | enableUserWorkload: true prometheus: remoteWrite: - url: "https://authorization.example.com/api/write" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3 1 The authentication type of the request. The default value is Bearer . 2 The name of the Secret object that contains the authentication credentials. 3 The key that contains the authentication token in the specified Secret object. 4.4.1.2.4. Sample YAML for OAuth 2.0 authentication The following shows sample OAuth 2.0 settings for a Secret object named oauth2-credentials in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-user-workload-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque 1 The OAuth 2.0 ID. 2 The OAuth 2.0 secret. The following shows an oauth2 remote write authentication sample configuration that uses a Secret object named oauth2-credentials in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://test.example.com/api/write" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2> 1 3 The name of the corresponding Secret object. Note that clientId can alternatively refer to a ConfigMap object, although clientSecret must refer to a Secret object. 2 4 The key that contains the OAuth 2.0 credentials in the specified Secret object. 5 The URL used to fetch a token with the specified clientId and clientSecret . 6 The OAuth 2.0 scopes for the authorization request. These scopes limit what data the tokens can access. 7 The OAuth 2.0 authorization request parameters required for the authorization server. 4.4.1.2.5. Sample YAML for TLS client authentication The following shows sample TLS client settings for a tls Secret object named mtls-bundle in the openshift-user-workload-monitoring namespace.
apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-user-workload-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls 1 The CA certificate in the Prometheus container with which to validate the server certificate. 2 The client certificate for authentication with the server. 3 The client key. The following sample shows a tlsConfig remote write authentication configuration that uses a TLS Secret object named mtls-bundle . apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6 1 3 5 The name of the corresponding Secret object that contains the TLS authentication credentials. Note that ca and cert can alternatively refer to a ConfigMap object, though keySecret must refer to a Secret object. 2 The key in the specified Secret object that contains the CA certificate for the endpoint. 4 The key in the specified Secret object that contains the client certificate for the endpoint. 6 The key in the specified Secret object that contains the client key secret. 4.4.1.3. Example remote write queue configuration You can use the queueConfig object for remote write to tune the remote write queue parameters. The following example shows the queue parameters with their default values for monitoring for user-defined projects in the openshift-user-workload-monitoring namespace. Example configuration of remote write parameters with default values apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 sampleAgeLimit: 0s 9 1 The number of samples to buffer per shard before they are dropped from the queue. 2 The minimum number of shards. 3 The maximum number of shards. 4 The maximum number of samples per send. 5 The maximum time for a sample to wait in the buffer. 6 The initial time to wait before retrying a failed request. The time gets doubled for every retry up to the maxBackoff time. 7 The maximum time to wait before retrying a failed request. 8 Set this parameter to true to retry a request after receiving a 429 status code from the remote write storage. 9 The samples that are older than the sampleAgeLimit limit are dropped from the queue. If the value is undefined or set to 0s , the parameter is ignored. Additional resources Prometheus REST API reference for remote write Setting up remote write compatible endpoints (Prometheus documentation) Tuning remote write settings (Prometheus documentation) Understanding secrets 4.4.2. Creating cluster ID labels for metrics You can create cluster ID labels for metrics by adding the write_relabel settings for remote write storage in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. Note When Prometheus scrapes user workload targets that expose a namespace label, the system stores this label as exported_namespace .
This behavior ensures that the final namespace label value is equal to the namespace of the target pod. You cannot override this default configuration by setting the value of the honorLabels field to true for PodMonitor or ServiceMonitor objects. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). You have configured remote write storage. Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config In the writeRelabelConfigs: section under data/config.yaml/prometheus/remoteWrite , add cluster ID relabel configuration values: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2 1 Add a list of write relabel configurations for metrics that you want to send to the remote endpoint. 2 Substitute the label configuration for the metrics sent to the remote write endpoint. The following sample shows how to forward a metric with the cluster ID label cluster_id : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3 1 The system initially applies a temporary cluster ID source label named __tmp_openshift_cluster_id__ . This temporary label gets replaced by the cluster ID label name that you specify. 2 Specify the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, that value is overwritten with the name of this cluster ID label. For the label name, do not use __tmp_openshift_cluster_id__ . The final relabeling step removes labels that use this name. 3 The replace write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified. Save the file to apply the changes. The new configuration is applied automatically. Additional resources Adding cluster ID labels to metrics Obtaining your cluster ID 4.4.3. Setting up metrics collection for user-defined projects You can create a ServiceMonitor resource to scrape metrics from a service endpoint in a user-defined project. This assumes that your application uses a Prometheus client library to expose metrics to the /metrics canonical name. This section describes how to deploy a sample service in a user-defined project and then create a ServiceMonitor resource that defines how that service should be monitored. 4.4.3.1. Deploying a sample service To test monitoring of a service in a user-defined project, you can deploy a sample service. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. Procedure Create a YAML file for the service configuration. 
Additional resources

- Adding cluster ID labels to metrics
- Obtaining your cluster ID

4.4.3. Setting up metrics collection for user-defined projects

You can create a ServiceMonitor resource to scrape metrics from a service endpoint in a user-defined project. This assumes that your application uses a Prometheus client library to expose metrics to the /metrics canonical name.

This section describes how to deploy a sample service in a user-defined project and then create a ServiceMonitor resource that defines how that service should be monitored.

4.4.3.1. Deploying a sample service

To test monitoring of a service in a user-defined project, you can deploy a sample service.

Prerequisites

- You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace.

Procedure

1. Create a YAML file for the service configuration. In this example, it is called prometheus-example-app.yaml.
2. Add the following deployment and service configuration details to the file:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: prometheus-example-app
  name: prometheus-example-app
  namespace: ns1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-example-app
  template:
    metadata:
      labels:
        app: prometheus-example-app
    spec:
      containers:
      - image: ghcr.io/rhobs/prometheus-example-app:0.4.2
        imagePullPolicy: IfNotPresent
        name: prometheus-example-app
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: prometheus-example-app
  name: prometheus-example-app
  namespace: ns1
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    name: web
  selector:
    app: prometheus-example-app
  type: ClusterIP
```

This configuration deploys a service named prometheus-example-app in the user-defined ns1 project. This service exposes the custom version metric.

3. Apply the configuration to the cluster:

```terminal
$ oc apply -f prometheus-example-app.yaml
```

It takes some time to deploy the service.

4. Check that the pod is running:

```terminal
$ oc -n ns1 get pod
```

Example output

```terminal
NAME                                      READY   STATUS    RESTARTS   AGE
prometheus-example-app-7857545cb7-sbgwq   1/1     Running   0          81m
```

4.4.3.2. Specifying how a service is monitored

To use the metrics exposed by your service, you must configure OpenShift Container Platform monitoring to scrape metrics from the /metrics endpoint. You can do this using a ServiceMonitor custom resource definition (CRD) that specifies how a service should be monitored, or a PodMonitor CRD that specifies how a pod should be monitored. The former requires a Service object, while the latter does not, allowing Prometheus to directly scrape metrics from the metrics endpoint exposed by a pod.

This procedure shows you how to create a ServiceMonitor resource for a service in a user-defined project.

Prerequisites

- You have access to the cluster as a user with the cluster-admin cluster role or the monitoring-edit cluster role.
- You have enabled monitoring for user-defined projects.
- For this example, you have deployed the prometheus-example-app sample service in the ns1 project.

Note: The prometheus-example-app sample service does not support TLS authentication.

Procedure

1. Create a new YAML configuration file named example-app-service-monitor.yaml.
2. Add a ServiceMonitor resource to the YAML file. The following example creates a service monitor named prometheus-example-monitor to scrape metrics exposed by the prometheus-example-app service in the ns1 namespace:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1 1
spec:
  endpoints:
  - interval: 30s
    port: web 2
    scheme: http
  selector: 3
    matchLabels:
      app: prometheus-example-app
```

1 Specify a user-defined namespace where your service runs.
2 Specify endpoint ports to be scraped by Prometheus.
3 Configure a selector to match your service based on its metadata labels.

Note: A ServiceMonitor resource in a user-defined namespace can only discover services in the same namespace. That is, the namespaceSelector field of the ServiceMonitor resource is always ignored.

3. Apply the configuration to the cluster:

```terminal
$ oc apply -f example-app-service-monitor.yaml
```

It takes some time to deploy the ServiceMonitor resource.

4. Verify that the ServiceMonitor resource is running:

```terminal
$ oc -n <namespace> get servicemonitor
```

Example output

```terminal
NAME                         AGE
prometheus-example-monitor   81m
```
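If your workload does not expose a Service object, a PodMonitor resource can play the same role, as noted above. The following is a hedged sketch rather than a step from the original procedure: it assumes the pod template declares a named container port called web, which the sample deployment shown earlier does not define by default.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: prometheus-example-pod-monitor
  namespace: ns1
spec:
  podMetricsEndpoints:
  - interval: 30s
    port: web    # assumes a named containerPort "web" in the pod spec
    scheme: http
  selector:
    matchLabels:
      app: prometheus-example-app
```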
4.4.3.3. Example service endpoint authentication settings

You can configure authentication for service endpoints for user-defined project monitoring by using ServiceMonitor and PodMonitor custom resource definitions (CRDs).

The following samples show different authentication settings for a ServiceMonitor resource. Each sample shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings.

4.4.3.3.1. Sample YAML authentication with a bearer token

The following sample shows bearer token settings for a Secret object named example-bearer-auth in the ns1 namespace:

Example bearer token secret

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-bearer-auth
  namespace: ns1
stringData:
  token: <authentication_token> 1
```

1 Specify an authentication token.

The following sample shows bearer token authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-bearer-auth:

Example bearer token authentication settings

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - authorization:
      credentials:
        key: token 1
        name: example-bearer-auth 2
    port: web
  selector:
    matchLabels:
      app: prometheus-example-app
```

1 The key that contains the authentication token in the specified Secret object.
2 The name of the Secret object that contains the authentication credentials.

Important: Do not use bearerTokenFile to configure a bearer token. If you use the bearerTokenFile configuration, the ServiceMonitor resource is rejected.

4.4.3.3.2. Sample YAML for Basic authentication

The following sample shows Basic authentication settings for a Secret object named example-basic-auth in the ns1 namespace:

Example Basic authentication secret

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-basic-auth
  namespace: ns1
stringData:
  user: <basic_username> 1
  password: <basic_password> 2
```

1 Specify a username for authentication.
2 Specify a password for authentication.

The following sample shows Basic authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-basic-auth:

Example Basic authentication settings

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - basicAuth:
      username:
        key: user 1
        name: example-basic-auth 2
      password:
        key: password 3
        name: example-basic-auth 4
    port: web
  selector:
    matchLabels:
      app: prometheus-example-app
```

1 The key that contains the username in the specified Secret object.
2 4 The name of the Secret object that contains the Basic authentication.
3 The key that contains the password in the specified Secret object.

4.4.3.3.3. Sample YAML authentication with OAuth 2.0

The following sample shows OAuth 2.0 settings for a Secret object named example-oauth2 in the ns1 namespace:

Example OAuth 2.0 secret

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-oauth2
  namespace: ns1
stringData:
  id: <oauth2_id> 1
  secret: <oauth2_secret> 2
```

1 Specify an OAuth 2.0 ID.
2 Specify an OAuth 2.0 secret.
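Instead of writing the stringData blocks by hand, you can create the three sample secrets above from the command line. This is a minimal sketch, not a step from the original document; it uses the standard oc create secret generic command, and the placeholder values must be replaced with real credentials:

```terminal
$ oc -n ns1 create secret generic example-bearer-auth \
    --from-literal=token=<authentication_token>
$ oc -n ns1 create secret generic example-basic-auth \
    --from-literal=user=<basic_username> \
    --from-literal=password=<basic_password>
$ oc -n ns1 create secret generic example-oauth2 \
    --from-literal=id=<oauth2_id> \
    --from-literal=secret=<oauth2_secret>
```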
The following sample shows OAuth 2.0 authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-oauth2:

Example OAuth 2.0 authentication settings

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - oauth2:
      clientId:
        secret:
          key: id 1
          name: example-oauth2 2
      clientSecret:
        key: secret 3
        name: example-oauth2 4
      tokenUrl: https://example.com/oauth2/token 5
    port: web
  selector:
    matchLabels:
      app: prometheus-example-app
```

1 The key that contains the OAuth 2.0 ID in the specified Secret object.
2 4 The name of the Secret object that contains the OAuth 2.0 credentials.
3 The key that contains the OAuth 2.0 secret in the specified Secret object.
5 The URL used to fetch a token with the specified clientId and clientSecret.

Additional resources

- Enabling monitoring for user-defined projects
- Scrape Prometheus metrics using TLS in ServiceMonitor configuration (Red Hat Customer Portal article)
- PodMonitor API
- ServiceMonitor API

4.5. Configuring alerts and notifications for user workload monitoring

You can configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers. You can also attach custom labels to all time series and alerts to add useful metadata information.

4.5.1. Configuring external Alertmanager instances

The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. You can add external Alertmanager instances to route alerts for user-defined projects.

If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance.

Prerequisites

- You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).

Procedure

1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

```terminal
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
```

2. Add an additionalAlertmanagerConfigs section with configuration details under data/config.yaml/<component>:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    <component>: 1
      additionalAlertmanagerConfigs:
      - <alertmanager_specification> 2
```

1 Substitute <component> for one of two supported external Alertmanager components: prometheus or thanosRuler.
2 Substitute <alertmanager_specification> with authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token (bearerToken) and client TLS (tlsConfig).
The following sample config map configures an additional Alertmanager for Thanos Ruler by using a bearer token with client TLS authentication:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    thanosRuler:
      additionalAlertmanagerConfigs:
      - scheme: https
        pathPrefix: /
        timeout: "30s"
        apiVersion: v1
        bearerToken:
          name: alertmanager-bearer-token
          key: token
        tlsConfig:
          key:
            name: alertmanager-tls
            key: tls.key
          cert:
            name: alertmanager-tls
            key: tls.crt
          ca:
            name: alertmanager-tls
            key: tls.ca
        staticConfigs:
        - external-alertmanager1-remote.com
        - external-alertmanager1-remote2.com
```

3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

4.5.2. Configuring secrets for Alertmanager

The OpenShift Container Platform monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, you can configure Alertmanager to use a secret that contains authentication credentials for the receiver.

For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA). You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication. In either case, authentication details are contained in the Secret object rather than in the ConfigMap object.

4.5.2.1. Adding a secret to the Alertmanager configuration

You can add secrets to the Alertmanager configuration by editing the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project. After you add a secret to the config map, the secret is mounted as a volume at /etc/alertmanager/secrets/<secret_name> within the alertmanager container for the Alertmanager pods.

Prerequisites

- You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have created the secret to be configured in Alertmanager in the openshift-user-workload-monitoring project.
- You have installed the OpenShift CLI (oc).

Procedure

1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

```terminal
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
```

2. Add a secrets: section under data/config.yaml/alertmanager with the following configuration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    alertmanager:
      secrets: 1
      - <secret_name_1> 2
      - <secret_name_2>
```

1 This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object.
2 The name of the Secret object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line.
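As a hedged illustration of how such secrets might be created beforehand (the file name, key names, and token value here are placeholders, not values mandated by this document):

```terminal
$ oc -n openshift-user-workload-monitoring create secret generic test-secret-basic-auth \
    --from-file=httpauth=./password_file
$ oc -n openshift-user-workload-monitoring create secret generic test-secret-api-token \
    --from-literal=api-token=<receiver_api_token>
```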
The following sample config map settings configure Alertmanager to use two Secret objects named test-secret-basic-auth and test-secret-api-token:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    alertmanager:
      secrets:
      - test-secret-basic-auth
      - test-secret-api-token
```

3. Save the file to apply the changes. The new configuration is applied automatically.

4.5.3. Attaching additional labels to your time series and alerts

You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus.

Prerequisites

- You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project.
- A cluster administrator has enabled monitoring for user-defined projects.
- You have installed the OpenShift CLI (oc).

Procedure

1. Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project:

```terminal
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
```

2. Define labels you want to add for every metric under data/config.yaml:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      externalLabels:
        <key>: <value> 1
```

1 Substitute <key>: <value> with key-value pairs where <key> is a unique name for the new label and <value> is its value.

Warning:

- Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten.
- Do not use cluster or managed_cluster as key names. Using them can cause issues where you are unable to see data in the developer dashboards.

Note: In the openshift-user-workload-monitoring project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting externalLabels for prometheus in the user-workload-monitoring-config ConfigMap object will only configure external labels for metrics and not for any rules.

For example, to add metadata about the region and environment to all time series and alerts, use the following example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      externalLabels:
        region: eu
        environment: prod
```

3. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.

Additional resources

- Enabling monitoring for user-defined projects

4.5.4. Configuring alert notifications

In OpenShift Container Platform, an administrator can enable alert routing for user-defined projects with one of the following methods:

- Use the default platform Alertmanager instance.
- Use a separate Alertmanager instance only for user-defined projects.

Developers and other users with the alert-routing-edit cluster role can configure custom alert notifications for their user-defined projects by configuring alert receivers.

Note: Review the following limitations of alert routing for user-defined projects:

- User-defined alert routing is scoped to the namespace in which the resource is defined. For example, a routing configuration in namespace ns1 only applies to PrometheusRules resources in the same namespace.
- When a namespace is excluded from user-defined monitoring, AlertmanagerConfig resources in the namespace cease to be part of the Alertmanager configuration.
Additional resources

- Understanding alert routing for user-defined projects
- Sending notifications to external systems
- PagerDuty (PagerDuty official site)
- Prometheus Integration Guide (PagerDuty official site)
- Support version matrix for monitoring components
- Enabling alert routing for user-defined projects

4.5.4.1. Configuring alert routing for user-defined projects

If you are a non-administrator user who has been given the alert-routing-edit cluster role, you can create or edit alert routing for user-defined projects.

Prerequisites

- A cluster administrator has enabled monitoring for user-defined projects.
- A cluster administrator has enabled alert routing for user-defined projects.
- You are logged in as a user that has the alert-routing-edit cluster role for the project for which you want to create alert routing.
- You have installed the OpenShift CLI (oc).

Procedure

1. Create a YAML file for alert routing. The example in this procedure uses a file called example-app-alert-routing.yaml.
2. Add an AlertmanagerConfig YAML definition to the file. For example:

```yaml
apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: example-routing
  namespace: ns1
spec:
  route:
    receiver: default
    groupBy: [job]
  receivers:
  - name: default
    webhookConfigs:
    - url: https://example.org/post
```

3. Save the file.
4. Apply the resource to the cluster:

```terminal
$ oc apply -f example-app-alert-routing.yaml
```

The configuration is automatically applied to the Alertmanager pods.

4.5.4.2. Configuring alert routing for user-defined projects with the Alertmanager secret

If you have enabled a separate instance of Alertmanager that is dedicated to user-defined alert routing, you can customize where and how the instance sends notifications by editing the alertmanager-user-workload secret in the openshift-user-workload-monitoring namespace.

Note: All features of a supported version of upstream Alertmanager are also supported in an OpenShift Container Platform Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation).

Prerequisites

- You have access to the cluster as a user with the cluster-admin cluster role.
- You have enabled a separate instance of Alertmanager for user-defined alert routing.
- You have installed the OpenShift CLI (oc).

Procedure

1. Print the currently active Alertmanager configuration into the file alertmanager.yaml:

```terminal
$ oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml
```

2. Edit the configuration in alertmanager.yaml:

```yaml
route:
  receiver: Default
  group_by:
  - name: Default
  routes:
  - matchers:
    - "service = prometheus-example-monitor" 1
    receiver: <receiver> 2
receivers:
- name: Default
- name: <receiver>
  <receiver_configuration> 3
```

1 Specify labels to match your alerts. This example targets all alerts that have the service="prometheus-example-monitor" label.
2 Specify the name of the receiver to use for the alerts group.
3 Specify the receiver configuration.

3. Apply the new configuration in the file:

```terminal
$ oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=-
```
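As an illustration, a filled-in version of the template above that sends the matched alerts to a webhook might look like the following. This is a hedged sketch: the receiver name and URL are placeholders, and webhook_configs is standard upstream Alertmanager syntax rather than a value taken from this document.

```yaml
route:
  receiver: Default
  routes:
  - matchers:
    - "service = prometheus-example-monitor"
    receiver: example-webhook
receivers:
- name: Default
- name: example-webhook
  webhook_configs:
  - url: https://example.org/post   # placeholder endpoint
```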
4.5.4.3. Configuring different alert receivers for default platform alerts and user-defined alerts

You can configure different alert receivers for default platform alerts and user-defined alerts to ensure the following results:

- All default platform alerts are sent to a receiver owned by the team in charge of these alerts.
- All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts.

You can achieve this by using the openshift_io_alert_source="platform" label that is added by the Cluster Monitoring Operator to all platform alerts:

- Use the openshift_io_alert_source="platform" matcher to match default platform alerts.
- Use the openshift_io_alert_source!="platform" or openshift_io_alert_source="" matcher to match user-defined alerts.

Note: This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts.
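A hedged sketch of an Alertmanager route that applies these matchers follows; the receiver names are placeholders, and only the matchers come from the list above:

```yaml
route:
  receiver: default
  routes:
  - matchers:
    - openshift_io_alert_source="platform"
    receiver: platform-team
  - matchers:
    - openshift_io_alert_source!="platform"
    receiver: application-teams
receivers:
- name: default
- name: platform-team
  <platform_receiver_configuration>
- name: application-teams
  <application_receiver_configuration>
```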
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1",
"oc -n openshift-user-workload-monitoring get pod",
"NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h",
"oc -n openshift-user-workload-monitoring adm policy add-role-to-user user-workload-monitoring-config-edit <user> --role-namespace openshift-user-workload-monitoring",
"oc describe rolebinding <role_binding_name> -n openshift-user-workload-monitoring",
"oc describe rolebinding user-workload-monitoring-config-edit -n openshift-user-workload-monitoring",
"Name: user-workload-monitoring-config-edit Labels: <none> Annotations: <none> Role: Kind: Role Name: user-workload-monitoring-config-edit Subjects: Kind Name Namespace ---- ---- --------- User user1 1",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # alertmanagerMain: enableUserAlertmanagerConfig: true 1 #",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true 1 enableAlertmanagerConfig: true 2",
"oc -n openshift-user-workload-monitoring get alertmanager",
"NAME VERSION REPLICAS AGE user-workload 0.24.0 2 100s",
"oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user> 1",
"oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1",
"oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1",
"oc label namespace my-project 'openshift.io/user-monitoring=false'",
"oc label namespace my-project 'openshift.io/user-monitoring-'",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false",
"oc -n openshift-user-workload-monitoring get pod",
"No resources found in openshift-user-workload-monitoring project.",
"oc label nodes <node_name> <node_label> 1",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | # <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 #",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheus: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosRuler: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedLabelLimit: 500 1 enforcedLabelNameLengthLimit: 50 2 enforcedLabelValueLengthLimit: 600 3",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf \"%.4g\" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 8 expr: scrape_samples_scraped/50000 > 0.8 9 for: 10m 10 labels: severity: warning 11",
"oc apply -f monitoring-stack-alerts.yaml",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: ScheduleAnyway labelSelector: matchLabels: app.kubernetes.io/name: thanos-ruler",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 10Gi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: resources: requests: storage: 20Gi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification> 1 retentionSize: <size_specification> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h retentionSize: 10GB",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1",
"oc -n openshift-user-workload-monitoring get pods",
"prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" 1 <endpoint_authentication_credentials> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep",
"apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-user-workload-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://authorization.example.com/api/write\" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7",
"apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-user-workload-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://basicauth.example.com/api/write\" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4",
"apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-user-workload-monitoring stringData: token: <authentication_token> 1 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | enableUserWorkload: true prometheus: remoteWrite: - url: \"https://authorization.example.com/api/write\" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3",
"apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-user-workload-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://test.example.com/api/write\" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2>",
"apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-user-workload-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 sampleAgeLimit: 0s 9",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3",
"apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP",
"oc apply -f prometheus-example-app.yaml",
"oc -n ns1 get pod",
"NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app",
"oc apply -f example-app-service-monitor.yaml",
"oc -n <namespace> get servicemonitor",
"NAME AGE prometheus-example-monitor 81m",
"apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app",
"apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app",
"apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 additionalAlertmanagerConfigs: - <alertmanager_specification> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: 1 - <secret_name_1> 2 - <secret_name_2>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: - test-secret-basic-auth - test-secret-api-token",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod",
"apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post",
"oc apply -f example-app-alert-routing.yaml",
"oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml",
"route: receiver: Default group_by: - name: Default routes: - matchers: - \"service = prometheus-example-monitor\" 1 receiver: <receiver> 2 receivers: - name: Default - name: <receiver> <receiver_configuration> 3",
"oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=-"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/monitoring/configuring-user-workload-monitoring |
Appendix A. Business Central system properties

The Business Central system properties listed in this section are passed to standalone*.xml files.

Git directory

Use the following properties to set the location and name for the Business Central Git directory:

- org.uberfire.nio.git.dir: Location of the Business Central Git directory.
- org.uberfire.nio.git.dirname: Name of the Business Central Git directory. Default value: .niogit.
- org.uberfire.nio.git.ketch: Enables or disables Git ketch.
- org.uberfire.nio.git.hooks: Location of the Git hooks directory.

Git over HTTP

Use the following properties to configure access to the Git repository over HTTP:

- org.uberfire.nio.git.proxy.ssh.over.http: Specifies whether SSH should use an HTTP proxy. Default value: false.
- http.proxyHost: Defines the host name of the HTTP proxy. Default value: null.
- http.proxyPort: Defines the host port (integer value) of the HTTP proxy. Default value: null.
- http.proxyUser: Defines the user name of the HTTP proxy.
- http.proxyPassword: Defines the user password of the HTTP proxy.
- org.uberfire.nio.git.http.enabled: Enables or disables the HTTP daemon. Default value: true.
- org.uberfire.nio.git.http.host: If the HTTP daemon is enabled, it uses this property as the host identifier. This is an informative property that is used to display how to access the Git repository over HTTP. The HTTP still relies on the servlet container. Default value: localhost.
- org.uberfire.nio.git.http.hostname: If the HTTP daemon is enabled, it uses this property as the host name identifier. This is an informative property that is used to display how to access the Git repository over HTTP. The HTTP still relies on the servlet container. Default value: localhost.
- org.uberfire.nio.git.http.port: If the HTTP daemon is enabled, it uses this property as the port number. This is an informative property that is used to display how to access the Git repository over HTTP. The HTTP still relies on the servlet container. Default value: 8080.

Git over HTTPS

Use the following properties to configure access to the Git repository over HTTPS:

- org.uberfire.nio.git.proxy.ssh.over.https: Specifies whether SSH uses an HTTPS proxy. Default value: false.
- https.proxyHost: Defines the host name of the HTTPS proxy. Default value: null.
- https.proxyPort: Defines the host port (integer value) of the HTTPS proxy. Default value: null.
- https.proxyUser: Defines the user name of the HTTPS proxy.
- https.proxyPassword: Defines the user password of the HTTPS proxy.
- user.dir: Location of the user directory.
- org.uberfire.nio.git.https.enabled: Enables or disables the HTTPS daemon. Default value: false.
- org.uberfire.nio.git.https.host: If the HTTPS daemon is enabled, it uses this property as the host identifier. This is an informative property that is used to display how to access the Git repository over HTTPS. The HTTPS still relies on the servlet container. Default value: localhost.
- org.uberfire.nio.git.https.hostname: If the HTTPS daemon is enabled, it uses this property as the host name identifier. This is an informative property that is used to display how to access the Git repository over HTTPS. The HTTPS still relies on the servlet container. Default value: localhost.
- org.uberfire.nio.git.https.port: If the HTTPS daemon is enabled, it uses this property as the port number. This is an informative property that is used to display how to access the Git repository over HTTPS. The HTTPS still relies on the servlet container. Default value: 8080.

JGit

- org.uberfire.nio.jgit.cache.instances: Defines the JGit cache size.
- org.uberfire.nio.jgit.cache.overflow.cleanup.size: Defines the JGit cache overflow cleanup size.
- org.uberfire.nio.jgit.remove.eldest.iterations: Enables or disables whether to remove eldest JGit iterations.
- org.uberfire.nio.jgit.cache.evict.threshold.duration: Defines the JGit evict threshold duration.
- org.uberfire.nio.jgit.cache.evict.threshold.time.unit: Defines the JGit evict threshold time unit.

Git daemon

Use the following properties to enable and configure the Git daemon:

- org.uberfire.nio.git.daemon.enabled: Enables or disables the Git daemon. Default value: true.
- org.uberfire.nio.git.daemon.host: If the Git daemon is enabled, it uses this property as the local host identifier. Default value: localhost.
- org.uberfire.nio.git.daemon.hostname: If the Git daemon is enabled, it uses this property as the local host name identifier. Default value: localhost.
- org.uberfire.nio.git.daemon.port: If the Git daemon is enabled, it uses this property as the port number. Default value: 9418.
- org.uberfire.nio.git.http.sslVerify: Enables or disables SSL certificate checking for Git repositories. Default value: true.

Note: If the default or assigned port is already in use, a new port is automatically selected. Ensure that the ports are available and check the log for more information.

Git SSH

Use the following properties to enable and configure the Git SSH daemon:

- org.uberfire.nio.git.ssh.enabled: Enables or disables the SSH daemon. Default value: true.
- org.uberfire.nio.git.ssh.host: If the SSH daemon is enabled, it uses this property as the local host identifier. Default value: localhost.
- org.uberfire.nio.git.ssh.hostname: If the SSH daemon is enabled, it uses this property as the local host name identifier. Default value: localhost.
- org.uberfire.nio.git.ssh.port: If the SSH daemon is enabled, it uses this property as the port number. Default value: 8001.

Note: If the default or assigned port is already in use, a new port is automatically selected. Ensure that the ports are available and check the log for more information.

- org.uberfire.nio.git.ssh.cert.dir: Location of the .security directory where local certificates are stored. Default value: Working directory.
- org.uberfire.nio.git.ssh.idle.timeout: Sets the SSH idle timeout.
- org.uberfire.nio.git.ssh.passphrase: Passphrase used to access the public key store of your operating system when cloning git repositories with SCP style URLs. Example: git@github.com:user/repository.git.
- org.uberfire.nio.git.ssh.algorithm: Algorithm used by SSH. Default value: RSA.
- org.uberfire.nio.git.gc.limit: Sets the GC limit.
- org.uberfire.nio.git.ssh.ciphers: A comma-separated string of ciphers. The available ciphers are aes128-ctr, aes192-ctr, aes256-ctr, arcfour128, arcfour256, aes192-cbc, aes256-cbc. If the property is not used, all available ciphers are loaded.
- org.uberfire.nio.git.ssh.macs: A comma-separated string of message authentication codes (MACs). The available MACs are hmac-md5, hmac-md5-96, hmac-sha1, hmac-sha1-96, hmac-sha2-256, hmac-sha2-512. If the property is not used, all available MACs are loaded.

Note: If you plan to use RSA or any algorithm other than DSA, make sure you set up your application server to use the Bouncy Castle JCE library.
KIE Server nodes and Process Automation Manager controller

Use the following properties to configure the connections with the KIE Server nodes from the Process Automation Manager controller:

- org.kie.server.controller: The URL is used to connect to the Process Automation Manager controller. For example, ws://localhost:8080/business-central/websocket/controller.
- org.kie.server.user: User name used to connect to the KIE Server nodes from the Process Automation Manager controller. This property is only required when using this Business Central installation as a Process Automation Manager controller.
- org.kie.server.pwd: Password used to connect to the KIE Server nodes from the Process Automation Manager controller. This property is only required when using this Business Central installation as a Process Automation Manager controller.

Maven and miscellaneous

Use the following properties to configure Maven and other miscellaneous functions:

- kie.maven.offline.force: Forces Maven to behave as if offline. If true, disables online dependency resolution. Default value: false.

Note: Use this property for Business Central only. If you share a runtime environment with any other component, isolate the configuration and apply it only to Business Central.

- org.uberfire.gzip.enable: Enables or disables Gzip compression on the GzipFilter compression filter. Default value: true.
- org.kie.workbench.profile: Selects the Business Central profile. Possible values are FULL or PLANNER_AND_RULES. A prefix FULL_ sets the profile and hides the profile preferences from the administrator preferences. Default value: FULL.
- org.appformer.m2repo.url: Business Central uses the default location of the Maven repository when looking for dependencies. It directs to the Maven repository inside Business Central, for example, http://localhost:8080/business-central/maven2. Set this property before starting Business Central. Default value: File path to the inner m2 repository.
- appformer.ssh.keystore: Defines the custom SSH keystore to be used with Business Central by specifying a class name. If the property is not available, the default SSH keystore is used.
- appformer.ssh.keys.storage.folder: When using the default SSH keystore, this property defines the storage folder for the user's SSH public keys. If the property is not available, the keys are stored in the Business Central .security folder.
- appformer.experimental.features: Enables the experimental features framework. Default value: false.
- org.kie.demo: Enables an external clone of a demo application from GitHub.
- org.uberfire.metadata.index.dir: Place where the Lucene .index directory is stored. Default value: Working directory.
- org.uberfire.ldap.regex.role_mapper: Regex pattern used to map LDAP principal names to the application role name. Note that the variable role must be a part of the pattern as the application role name substitutes the variable role when matching a principle value and role name.
- org.uberfire.sys.repo.monitor.disabled: Disables the configuration monitor. Do not disable unless you are sure. Default value: false.
- org.uberfire.secure.key: Password used by password encryption. Default value: org.uberfire.admin.
- org.uberfire.secure.alg: Crypto algorithm used by password encryption. Default value: PBEWithMD5AndDES.
- org.uberfire.domain: Security-domain name used by uberfire. Default value: ApplicationRealm.
- org.guvnor.m2repo.dir: Place where the Maven repository folder is stored. Default value: <working-directory>/repositories/kie.
- org.guvnor.project.gav.check.disabled: Disables group ID, artifact ID, and version (GAV) checks. Default value: false.
- org.kie.build.disable-project-explorer: Disables automatic build of a selected project in Project Explorer. Default value: false.
- org.kie.builder.cache.size: Defines the cache size of the project builder. Default value: 20.
- org.kie.library.assets_per_page: You can customize the number of assets per page in the project screen. Default value: 15.
- org.kie.verification.disable-dtable-realtime-verification: Disables the real-time validation and verification of decision tables. Default value: false.

Process Automation Manager controller

Use the following properties to configure how to connect to the Process Automation Manager controller:

- org.kie.workbench.controller: The URL used to connect to the Process Automation Manager controller, for example, ws://localhost:8080/kie-server-controller/websocket/controller.
- org.kie.workbench.controller.user: The Process Automation Manager controller user. Default value: kieserver.
- org.kie.workbench.controller.pwd: The Process Automation Manager controller password. Default value: kieserver1!.
- org.kie.workbench.controller.token: The token string used to connect to the Process Automation Manager controller.

Java Cryptography Extension KeyStore (JCEKS)

Use the following properties to configure JCEKS:

- kie.keystore.keyStoreURL: The URL used to load a Java Cryptography Extension KeyStore (JCEKS). For example, file:///home/kie/keystores/keystore.jceks.
- kie.keystore.keyStorePwd: The password used for the JCEKS.
- kie.keystore.key.ctrl.alias: The alias of the key for the default REST Process Automation Manager controller.
- kie.keystore.key.ctrl.pwd: The password of the alias for the default REST Process Automation Manager controller.

Rendering

Use the following properties to switch between Business Central and KIE Server rendered forms:

- org.jbpm.wb.forms.renderer.ext: Switches the form rendering between Business Central and KIE Server. By default, the form rendering is performed by Business Central. Default value: false.
- org.jbpm.wb.forms.renderer.name: Enables you to switch between Business Central and KIE Server rendered forms. Default value: workbench.
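As an illustration of how these properties are typically supplied, you can pass them as JVM -D arguments when starting Red Hat JBoss EAP. This is a hedged sketch, not a command from this appendix; EAP_HOME stands for your installation directory and the property values are placeholders:

```terminal
$ EAP_HOME/bin/standalone.sh -c standalone-full.xml \
    -Dorg.uberfire.nio.git.dir=/opt/custom/niogit \
    -Dorg.uberfire.nio.git.daemon.port=9418 \
    -Dorg.appformer.m2repo.url=http://localhost:8080/business-central/maven2
```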
B.23. freetype

B.23.1. RHSA-2010:0864 - Important: freetype security update

Updated freetype packages that fix multiple security issues are now available for Red Hat Enterprise Linux 6.

The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give a detailed severity rating, are available for each vulnerability from the CVE link(s) associated with each description below.

FreeType is a free, high-quality, portable font engine that can open and manage font files. It also loads, hints, and renders individual glyphs efficiently. These packages provide the FreeType 2 font engine.

CVE-2010-2805, CVE-2010-3311

It was found that the FreeType font rendering engine improperly validated certain position values when processing input streams. If a user loaded a specially-crafted font file with an application linked against FreeType, it could cause the application to crash or, possibly, execute arbitrary code with the privileges of the user running the application.

CVE-2010-2808

A stack-based buffer overflow flaw was found in the way the FreeType font rendering engine processed some PostScript Type 1 fonts. If a user loaded a specially-crafted font file with an application linked against FreeType, it could cause the application to crash or, possibly, execute arbitrary code with the privileges of the user running the application.

CVE-2010-2806

An array index error was found in the way the FreeType font rendering engine processed certain PostScript Type 42 font files. If a user loaded a specially-crafted font file with an application linked against FreeType, it could cause the application to crash or, possibly, execute arbitrary code with the privileges of the user running the application.

Note: All of the issues in this erratum only affect the FreeType 2 font engine.

Users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The X server must be restarted (log out, then log back in) for this update to take effect.
Part IV. Appendices

This part describes tools and techniques that help you identify, analyze, and address potential problems. It also covers best practices for reporting bugs, ensuring that issues are clearly communicated for prompt resolution.
5.158. libtar

5.158.1. RHBA-2012:0462 - libtar bug fix update

Updated libtar packages that fix one bug are now available for Red Hat Enterprise Linux 6.

The libtar package contains a C library for manipulating tar archives. The library supports both the strict POSIX tar format and many of the commonly used GNU extensions.

Bug Fix

BZ#729009

Previously, the build system configuration files included in the libtar package were incompatible with the way the rpmbuild tool extracts debugging information from the binaries installed to the rpm build root during the build of a package. As a consequence, the libtar-debuginfo package did not contain debugging information. A patch has been applied to address this issue, and the libtar-debuginfo package now contains the appropriate content.

All users of libtar are advised to upgrade to these updated packages, which fix this bug.
Chapter 2. Planning for operational measurements | Chapter 2. Planning for operational measurements You can use Ceilometer or collectd to collect telemetry data for autoscaling or Service Telemetry Framework (STF). 2.1. Collectd measurements The following are the default collectd measurements: cpu disk free disk usage hugepages interface load memory unixsock uptime 2.2. Planning for data storage Gnocchi stores a collection of data points, where each data point is an aggregate. The storage format is compressed using different techniques. As a result, to calculate the size of a time-series database, you must estimate the size based on the worst-case scenario. Warning The use of Red Hat OpenStack Platform (RHOSP) Object Storage (swift) for time series database (Gnocchi) storage is only supported for small and non-production environments. Procedure Calculate the number of data points: number of points = timespan / granularity For example, if you want to retain a year of data with one-minute resolution, use the formula: number of data points = (365 days X 24 hours X 60 minutes) / 1 minute number of data points = 525600 Calculate the size of the time-series database: size in bytes = number of data points X 8 bytes If you apply this formula to the example, the result is 4.1 MB: size in bytes = 525600 points X 8 bytes = 4204800 bytes = 4.1 MB This value is an estimated storage requirement for a single aggregated time-series database. If your archive policy uses multiple aggregation methods (min, max, mean, sum, std, count), multiply this value by the number of aggregation methods you use. Additional resources Section 1.3.1, "Archive policies: Storing both short and long-term data in a time-series database" Section 2.3, "Planning and managing archive policies" 2.3. Planning and managing archive policies You can use an archive policy to configure how you aggregate the metrics and for how long you store the metrics in the time-series database. An archive policy is defined as the number of points over a timespan. If your archive policy defines 10 points with a granularity of 1 second, the time-series archive keeps up to 10 seconds of data, each point representing an aggregation over 1 second. This means that the time series retains, at a maximum, 10 seconds of data between the most recent point and the oldest point. The archive policy also defines the aggregation method to use. The default is set to the parameter default_aggregation_methods , where the default values are set to mean , min , max , sum , std , count . So, depending on the use case, the archive policy and the granularity can vary. To plan an archive policy, ensure that you are familiar with the following concepts: Metrics. For more information, see Section 2.3.1, "Metrics" . Measures. For more information, see Section 2.3.2, "Creating custom measures" . 2.3.1. Metrics Gnocchi provides an object type called metric . A metric is anything that you can measure, for example, the CPU usage of a server, the temperature of a room, or the number of bytes sent by a network interface. A metric has the following properties: A UUID to identify it A name The archive policy used to store and aggregate the measures Additional resources For terminology definitions, see Gnocchi Metric-as-a-Service terminology . 2.3.2. Creating custom measures A measure is an incoming tuple that the API sends to Gnocchi. It consists of a timestamp and a value. You can create your own custom measures. Procedure Create a custom measure: 2.3.3.
Verifying the metric status You can use the openstack metric command to verify a successful deployment. Procedure Verify the deployment: If there are no error messages, your deployment is successful. 2.3.4. Creating an archive policy You can create an archive policy to define how you aggregate the metrics and for how long you store the metrics in the time-series database. Procedure Create an archive policy. Replace <archive-policy-name> with the name of the policy and replace <aggregation-method> with the method of aggregation. Note <definition> is the policy definition. Separate multiple attributes with a comma (,). Separate the name and value of the archive policy definition with a colon (:). 2.3.5. Viewing an archive policy Use the following steps to examine your archive policies. Procedure List the archive policies. View the details of an archive policy: 2.3.6. Deleting an archive policy Use the following step if you want to delete an archive policy. Procedure Delete the archive policy. Replace <archive-policy-name> with the name of the policy that you want to delete. Verification Check that the archive policy that you deleted is absent from the list of archive policies. 2.3.7. Creating an archive policy rule You can use an archive policy rule to configure the mapping between a metric and an archive policy. Procedure Create an archive policy rule. Replace <rule-name> with the name of the rule and replace <archive-policy-name> with the name of the archive policy: | [
"openstack metric measures add -m <MEASURE1> -m <MEASURE2> .. -r <RESOURCE_NAME> <METRIC_NAME>",
"(overcloud) [stack@undercloud-0 ~]USD openstack metric status +-----------------------------------------------------+-------+ | Field | Value | +-----------------------------------------------------+-------+ | storage/number of metric having measures to process | 0 | | storage/total number of measures to process | 0 | +-----------------------------------------------------+-------+",
"openstack metric archive policy create <archive-policy-name> --definition <definition> --aggregation-method <aggregation-method>",
"openstack metric archive policy list",
"openstack metric archive-policy show <archive-policy-name>",
"openstack metric archive policy delete <archive-policy-name>",
"openstack metric archive policy list",
"openstack metric archive-policy-rule create <rule-name> / --archive-policy-name <archive-policy-name>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_overcloud_observability/planning-for-operational-measurements_assembly |
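As a worked illustration of the sizing formula and the create syntax above, a policy that keeps one year of one-minute aggregates (525600 points) with a single aggregation method could be defined as follows. This is a sketch only; the policy name year-1m is a hypothetical example, not a value from the guide:

openstack metric archive-policy create year-1m --definition granularity:1m,points:525600 --aggregation-method mean

With only the mean method, the worst-case estimate for one metric stays at roughly 4.1 MB; each additional aggregation method multiplies that estimate.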
Chapter 238. Nagios Component | Chapter 238. Nagios Component Available as of Camel version 2.3 The Nagios component allows you to send passive checks to Nagios . Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-nagios</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 238.1. URI format nagios://host[:port][?Options] Camel provides two abilities with the Nagios component. You can send passive check messages by sending a message to its endpoint. Camel also provides an EventNotifier which allows you to send notifications to Nagios. 238.2. Options The Nagios component supports 2 options, which are listed below. Name Description Default Type configuration (advanced) To use a shared NagiosConfiguration NagiosConfiguration resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Nagios endpoint is configured using URI syntax: with the following path and query parameters: 238.2.1. Path Parameters (2 parameters): Name Description Default Type host Required This is the address of the Nagios host where checks should be sent. String port Required The port number of the host. int 238.2.2. Query Parameters (7 parameters): Name Description Default Type connectionTimeout (producer) Connection timeout in millis. 5000 int sendSync (producer) Whether or not to use synchronous mode when sending a passive check. Setting it to false will allow Camel to continue routing the message, and the passive check message will be sent asynchronously. true boolean timeout (producer) Sending timeout in millis. 5000 int synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean encryption (security) To specify an encryption method. Encryption encryptionMethod (security) Deprecated To specify an encryption method. NagiosEncryptionMethod password (security) Password to be authenticated when sending checks to Nagios. String 238.3. Spring Boot Auto-Configuration The component supports 10 options, which are listed below. Name Description Default Type camel.component.nagios.configuration.connection-timeout Connection timeout in millis. 5000 Integer camel.component.nagios.configuration.encryption To specify an encryption method. Encryption camel.component.nagios.configuration.host This is the address of the Nagios host where checks should be sent. String camel.component.nagios.configuration.nagios-settings NagiosSettings camel.component.nagios.configuration.password Password to be authenticated when sending checks to Nagios. String camel.component.nagios.configuration.port The port number of the host. Integer camel.component.nagios.configuration.timeout Sending timeout in millis. 5000 Integer camel.component.nagios.enabled Enable the nagios component true Boolean camel.component.nagios.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.nagios.configuration.encryption-method To specify an encryption method. NagiosEncryptionMethod 238.4. Sending message examples You can send a message to Nagios where the message payload contains the message.
By default it will be sent at the OK level and use the CamelContext name as the service name. You can override these values using message headers, as shown in the example below. For example, we send the Hello Nagios message to Nagios as follows: template.sendBody("direct:start", "Hello Nagios"); from("direct:start").to("nagios:127.0.0.1:5667?password=secret").to("mock:result"); To send a CRITICAL message, you can set headers such as: Map headers = new HashMap(); headers.put(NagiosConstants.LEVEL, "CRITICAL"); headers.put(NagiosConstants.HOST_NAME, "myHost"); headers.put(NagiosConstants.SERVICE_NAME, "myService"); template.sendBodyAndHeaders("direct:start", "Hello Nagios", headers); 238.5. Using NagiosEventNotifier The Nagios component also provides an EventNotifier which you can use to send events to Nagios. For example, we can enable this from Java as follows: NagiosEventNotifier notifier = new NagiosEventNotifier(); notifier.getConfiguration().setHost("localhost"); notifier.getConfiguration().setPort(5667); notifier.getConfiguration().setPassword("password"); CamelContext context = ...; context.getManagementStrategy().addEventNotifier(notifier); return context; In Spring XML it's just a matter of defining a Spring bean with the type EventNotifier and Camel will pick it up as documented here: Advanced configuration of CamelContext using Spring . 238.6. See Also Configuring Camel Component Endpoint Getting Started | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-nagios</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"nagios://host[:port][?Options]",
"nagios:host:port",
"template.sendBody(\"direct:start\", \"Hello Nagios\"); from(\"direct:start\").to(\"nagios:127.0.0.1:5667?password=secret\").to(\"mock:result\");",
"Map headers = new HashMap(); headers.put(NagiosConstants.LEVEL, \"CRITICAL\"); headers.put(NagiosConstants.HOST_NAME, \"myHost\"); headers.put(NagiosConstants.SERVICE_NAME, \"myService\"); template.sendBodyAndHeaders(\"direct:start\", \"Hello Nagios\", headers);",
"NagiosEventNotifier notifier = new NagiosEventNotifier(); notifier.getConfiguration().setHost(\"localhost\"); notifier.getConfiguration().setPort(5667); notifier.getConfiguration().setPassword(\"password\"); CamelContext context = context.getManagementStrategy().addEventNotifier(notifier); return context;"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/nagios-component |
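To connect the options above to a concrete route, a minimal sketch of a passive check sent asynchronously with a longer connection timeout might look as follows; the sendSync and connectionTimeout parameters come from the query-parameter table, while the endpoints themselves are illustrative:

from("direct:checks")
    .to("nagios:127.0.0.1:5667?password=secret&sendSync=false&connectionTimeout=10000");

Because sendSync=false, Camel continues routing the exchange without waiting for the passive check send to complete.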
Chapter 30. High Availability | Chapter 30. High Availability High availability is the ability for the system to continue functioning after failure of one or more of the servers. A part of high availability is failover which is the ability for client connections to migrate from one server to another in event of server failure so client applications can continue to operate. Note Only persistent message data will survive failover. Any non persistent message data will not be available after failover. 30.1. Live / Backup Pairs JBoss EAP 7 messaging allows servers to be linked together as live - backup pairs where each live server has a backup. Live servers receive messages from clients, while a backup server is not operational until failover occurs. A backup server can be owned by only one live server, and it will remain in passive mode, waiting to take over the live server's work. Note There is a one-to-one relation between a live server and a backup server. A live server can have only one backup server, and a backup server can be owned by only one live server. When a live server crashes or is brought down in the correct mode, the backup server currently in passive mode will become the new live server. If the new live server is configured to allow automatic failback, it will detect the old live server coming back up and automatically stop, allowing the old live server to start receiving messages again. Note If you deploy just one pair of live / backup servers, you cannot effectively use a load balancer in front of the pair because the backup instance is not actively processing messages. Moreover, services such as JNDI and the Undertow web server are not active on the backup server either. For these reasons, deploying JEE applications to an instance of JBoss EAP being used as a backup messaging server is not supported. 30.1.1. Journal Synchronization When HA is configured with a replicated journal, it takes time for the backup to synchronize with live server. To check whether synchronization is complete, submit the following command in the CLI: If the result is true , synchronization is complete. To check whether it is safe to shut down the live server, submit the following command in the CLI: If the result is true , it is safe to shut down the live server. 30.2. HA Policies JBoss EAP messaging supports two different strategies for backing up a server: replication and shared store. Use the ha-policy attribute of the server configuration element to assign the policy of your choice to the given server. There are four valid values for ha-policy : replication-master replication-slave shared-store-master shared-store-slave As you can see, the value specifies whether the server uses a data replication or a shared store ha policy, and whether it takes the role of master or slave. Use the management CLI to add an ha-policy to the server of your choice. Note The examples below assume you are running JBoss EAP using the standalone-full-ha configuration profile. For example, use the following command to add the replication-master policy to the default server. The replication-master policy is configured with the default values. Values to override the default configuration can be included when you add the policy. The management CLI command to read the current configuration uses the following basic syntax. For example, use the following command to read the current configuration for the replication-master policy that was added above to the default server. 
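A representative sketch of both operations, assuming the default server and the standard messaging-activemq subsystem resource names:

/subsystem=messaging-activemq/server=default/ha-policy=replication-master:add
/subsystem=messaging-activemq/server=default/ha-policy=replication-master:read-resource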
The output is also included to highlight the default configuration. See Data Replication and Shared Store for details on the configuration options available for each policy. 30.3. Data Replication When using replication, the live and the backup server pairs do not share the same data directories; all data synchronization is done over the network. Therefore all (persistent) data received by the live server will be duplicated to the backup. If the live server is cleanly shut down, the backup server will activate and clients will fail over to the backup. This behavior is pre-determined and is therefore not configurable when using data replication. The backup server will first need to synchronize all existing data from the live server before replacing it. Unlike shared storage, therefore, a replicating backup will not be fully operational immediately after startup. The time it will take for the synchronization to happen depends on the amount of data to be synchronized and the network speed. Also note that clients are blocked for the duration of initial-replication-sync-timeout when the backup is started. After this timeout elapses, clients will be unblocked, even if synchronization is not completed. After a successful failover, the backup's journal will start holding newer data than the data on the live server. You can configure the original live server to perform a failback and become the live server once restarted. A failback will synchronize data between the backup and the live server before the live server comes back online. In cases where both servers are shut down, the administrator will have to determine which server's journal has the latest data. If the backup journal has the latest data, copy that journal to the live server. Otherwise, whenever it activates again, the backup will replicate the stale journal data from the live server and will delete its own journal data. If the live server's data is the latest, no action is needed and the servers can be started normally. Important Due to higher latencies and a potentially unreliable network between data centers, the configuration and use of replicated journals for high availability between data centers is neither recommended nor supported. The replicating live and backup pair must be part of a cluster. The cluster-connection configuration element defines how a backup server finds its live match. Replication requires at least three live/backup pairs to reduce the risk of network isolation, although you cannot eliminate this risk. If you use at least three live/backup pairs, the cluster can use quorum voting to avoid using two live brokers. When you configure cluster-connection , remember the following details: Both the live and backup server must be part of the same cluster. Notice that even a simple live/backup replicating pair requires a cluster configuration. The cluster user and password must match on each server in the pair. Specify a pair of live/backup servers by configuring the group-name attribute in both the <master> and the <slave> elements. A backup server only connects to a live server that shares the same group-name . As an example of using a group-name , suppose you have three live servers and three backup servers. Because each live server must pair with its own backup, assign the following group names: live1 and backup1 use the group-name of pair1 . live2 and backup2 use the group-name of pair2 . live3 and backup3 use the group-name of pair3 .
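A sketch of how one such pairing could be expressed with the management CLI, assuming the my-cluster cluster used in the following example and the resource names shown earlier:

/subsystem=messaging-activemq/server=default/ha-policy=replication-master:add(cluster-name=my-cluster, group-name=pair1)
/subsystem=messaging-activemq/server=default/ha-policy=replication-slave:add(cluster-name=my-cluster, group-name=pair1)

The first command is for the live server of the pair and the second for its backup, each running in its own JBoss EAP instance.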
In this example, server backup1 searches for the live server with the same group-name , pair1 , which in this case is the server live1 . Much like in the shared store case, when the live server stops or crashes, its replicating, paired backup will become active and take over its duties. Specifically, the paired backup will become active when it loses connection to its live server. This can be problematic, because the connection can also be lost because of a temporary network problem. In order to address this issue, the paired backup will try to determine whether it can still connect to the other servers in the cluster. If it can connect to more than half the servers, it will become active. If it loses communication to its live server plus more than half the other servers in the cluster, the paired backup will wait and try reconnecting with the live server. This reduces the risk of a "split brain" situation where both the backup and live servers are processing messages without the other knowing it. Important This is an important distinction from a shared store backup, where the backup will activate and start to serve client requests if it does not find a live server and the file lock on the journal was released. Note also that in replication the backup server does not know whether any data it might have is up to date, so it really cannot decide to activate automatically. To activate a replicating backup server using the data it has, the administrator must change its configuration to make it a live server by changing slave to master. Additional resources Configuring Cluster Connections 30.3.1. Configuring Data Replication The steps below use the management CLI to provide a basic configuration for both a live and a backup server residing in the cluster named my-cluster and in the backup group named group1 . Note The examples below assume you are running JBoss EAP using the standalone-full-ha configuration profile. Management CLI Commands to Configure a Live Server for Data Replication Add the ha-policy to the Live Server The check-for-live-server attribute tells the live server to check to make sure that no other server has its given id within the cluster. The default value for this attribute was false in JBoss EAP 7.0. In JBoss EAP 7.1 and later, the default value is true . Add the ha-policy to the Backup Server Confirm a shared cluster-connection exists. Proper communication between the live and backup servers requires a cluster-connection . Use the following management CLI command to confirm that the same cluster-connection is configured on both the live and backup servers. The example uses the default cluster-connection found in the standalone-full-ha configuration profile, which should be sufficient for most use cases. See Configuring Cluster Connections for details on how to configure a cluster connection. If the cluster-connection exists, the output will provide the current configuration. Otherwise an error message will be displayed. See All Replication Configuration for details on all configuration attributes. 30.3.2. All Replication Configuration You can use the management CLI to add configuration to a policy after it has been added. The commands to do so follow the basic syntax below.
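A representative sketch of that syntax, assuming the replication policies added above on the default server:

/subsystem=messaging-activemq/server=default/ha-policy=replication-slave:write-attribute(name=<attribute>, value=<value>)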
For example, to set the value of the restart-backup attribute to true , use the following command. The following tables provide the HA configuration attributes for the replication-master and replication-slave configuration elements. Table 30.1. Attributes for replication-master Attribute Description check-for-live-server Set to true to tell this server to check the cluster for another server using the same server ID when starting up. The default value for JBoss EAP 7.0 is false . The default value for JBoss EAP 7.1 and later is true . cluster-name Name of the cluster used for replication. group-name If set, backup servers will only pair with live servers with the matching group-name . initial-replication-sync-timeout How long to wait in milliseconds until the initial replication is synchronized. Default is 30000 . synchronized-with-backup Indicates whether the journals on the live server and the replication server have been synchronized. Table 30.2. Attributes for replication-slave Attribute Description allow-failback Whether this server will automatically stop when another places a request to take over its place. A typical use case is when the live server requests to resume active processing after a restart or failure recovery. A backup server with allow-failback set to true would yield to the live server once it rejoined the cluster and requested to resume processing. Default is true . cluster-name Name of the cluster used for replication. group-name If set, backup servers will pair only with live servers with the matching group-name . initial-replication-sync-timeout How long to wait in milliseconds until the initial replication is synchronized. Default is 30000 . max-saved-replicated-journal-size Specifies how many times a replicated backup server can restart after moving its files on start. After reaching the maximum, the server will stop permanently after it fails back. Default is 2 . restart-backup Set to true to tell this backup server to restart once it has been stopped because of failback. Default is true . synchronized-with-live Indicates whether the journals on the replication server have been synchronized with the live server, meaning it is safe to shut down the live server. 30.3.3. Preventing Cluster Connection Timeouts Each live and backup pair uses a cluster-connection to communicate. The call-timeout attribute of a cluster-connection sets the amount of time a server will wait for a response after making a call to another server on the cluster. The default value for call-timeout is 30 seconds, which is sufficient for most use cases. However, there are situations where the backup server might be unable to process replication packets coming from the live server. This may happen, for example, when the initial pre-creation of journal files takes too much time, due to slow disk operations or to a large value for journal-min-files . If timeouts like this occur, you will see a line in your logs similar to the one below. Warning If a line like the one above appears in your logs, it means that the replication process has stopped. You must restart the backup server to reinitiate replication. To prevent cluster connection timeouts, consider the following options: Increase the call-timeout of the cluster-connection . See Configuring Cluster Connections for more information. Decrease the value of journal-min-files . See Configuring Persistence for more information. 30.3.4.
Removing Old Journal Directories A backup server will move its journals to a new location when it starts to synchronize with a live server. By default the journal directories are located in the data/activemq directory under EAP_HOME/standalone . For domains, each server will have its own serverX/data/activemq directory located under EAP_HOME/domain/servers . The directories are named bindings , journal , largemessages and paging . See Configuring Persistence and Configuring Paging for more information about these directories. Once moved, the new directories are renamed oldreplica.X , where X is a digit suffix. If another synchronization starts due to a new failover, then the suffix for the "moved" directories will be increased by 1. For example, on the first synchronization the journal directories will be moved to oldreplica.1 , on the second, oldreplica.2 , and so on. The original directories will store the data synchronized from the live server. By default a backup server is configured to manage two occurrences of failing over and failing back. After that, a cleanup process is triggered that removes the oldreplica.X directories. You can change the number of failover occurrences that trigger the cleanup process using the max-saved-replicated-journal-size attribute on the backup server. Note Live servers will have max-saved-replicated-journal-size set to 2 . This value cannot be changed. 30.3.5. Updating Dedicated Live and Backup Servers If the live and backup servers are deployed in a dedicated topology, where each server is running in its own instance of JBoss EAP, follow the steps below to ensure a smooth update and restart of the cluster. Cleanly shut down the backup servers. Cleanly shut down the live servers. Update the configuration of the live and backup servers. Start the live servers. Start the backup servers. 30.3.6. Detecting network isolation of the broker To detect network isolation of the broker, you can ping a configurable list of hosts. Use one of the following parameters to configure how the status of the broker on the network is detected: network-check-NIC : Denotes the Network Interface Controller (NIC) to be used in the InetAddress.isReachable method to check network availability. network-check-period : Denotes a frequency in milliseconds that defines how often the network status is checked. network-check-timeout : Denotes a waiting time period before a network connection is expired. network-check-list : Denotes the list of IP addresses that are pinged to detect the network status. network-check-URL-list : Denotes the list of http URIs that are used to validate the network. network-check-ping-command : Denotes the ping command and its parameters that are used to detect the network status on an IPv4 network. network-check-ping6-command : Denotes the ping command and its parameters that are used to detect the network status on an IPv6 network. Procedure Use the following command to ping a configurable list of hosts to detect network isolation of the broker: Example To check the network status by pinging the IP address 10.0.0.1 , issue the following command: 30.3.7. Limitations of Data Replication: Split Brain Processing A "split brain" situation occurs when both a live server and its backup are active at the same time. Both servers can serve clients and process messages without the other knowing it. In this situation there is no longer any message replication between the live and backup servers. A split situation can happen if there is a network failure between the two servers.
For example, if the connection between a live server and a network router is broken, the backup server will lose the connection to the live server. However, because the backup can still connect to more than half the servers in the cluster, it becomes active. Recall that a backup will also activate if there is just one live-backup pair and the backup server loses connectivity to the live server. When both servers are active within the cluster, two undesired situations can happen: Remote clients fail over to the backup server, but local clients such as MDBs will use the live server. Both nodes will have completely different journals, resulting in split brain processing. The broken connection to the live server is fixed after remote clients have already failed over to the backup server. Any new clients will be connected to the live server while old clients continue to use the backup, which also results in a split brain scenario. Customers should implement a reliable network between each pair of live and backup servers to reduce the risk of split brain processing when using data replication. For example, use duplicated Network Interface Cards and other network redundancies. 30.4. Shared Store This style of high availability differs from data replication in that it requires a shared file system which is accessible by both the live and backup node. This means that the server pairs use the same location for their paging , message journal , bindings journal , and large messages in their configuration. Note Using a shared store is not supported on Windows. It is supported on Red Hat Enterprise Linux when using Red Hat versions of GFS2 or NFSv4. In addition, GFS2 is supported only with an ASYNCIO journal type, while NFSv4 is supported with both ASYNCIO and NIO journal types. Also, each participating server in the pair, live and backup, will need to have a cluster-connection defined, even if not part of a cluster, because the cluster-connection defines how the backup server announces its presence to its live server and any other nodes. See Configuring Cluster Connections for details on how this is done. When failover occurs and a backup server takes over, it will need to load the persistent storage from the shared file system before clients can connect to it. The shared file system will typically be some kind of high performance Storage Area Network, or SAN. Red Hat does not recommend using Network Attached Storage, known as a NAS, for your storage solution. The advantage of shared store high availability is that no replication occurs between the live and backup nodes, which means it does not suffer any performance penalties due to the overhead of replication during normal operation. The disadvantage of shared store replication is that when the backup server activates, it needs to load the journal from the shared store, which can take some time depending on the amount of data in the store. Also, it requires a shared storage solution supported by JBoss EAP. If you require the highest performance during normal operation, Red Hat recommends having access to a highly performant SAN and accepting the slightly slower failover costs. Exact costs will depend on the amount of data. 30.4.1. Configuring a Shared Store Note The examples below assume you are running JBoss EAP using the standalone-full-ha configuration profile. Add the ha-policy to the Live Server.
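A representative sketch, assuming the default server; the backup server in the next step uses the matching shared-store-slave policy:

/subsystem=messaging-activemq/server=default/ha-policy=shared-store-master:add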
Add the ha-policy to the Backup Server. Confirm a shared cluster-connection exists. Proper communication between the live and backup servers requires a cluster-connection . Use the following management CLI command to confirm that the same cluster-connection is configured on both the live and backup servers. The example uses the default cluster-connection found in the standalone-full-ha configuration profile, which should be sufficient for most use cases. See Configuring Cluster Connections for details on how to configure a cluster connection. If the cluster-connection exists, the output will provide the current configuration. Otherwise an error message will be displayed. See All Shared Store Configuration for details on all configuration attributes for shared store policies. 30.4.2. All Shared Store Configuration Use the management CLI to add configuration to a policy after it has been added. The commands to do so follow the basic syntax below. For example, to set the value of the restart-backup attribute to true , use the following command. Table 30.3. Attributes of the shared-store-master Configuration Element Attribute Description failover-on-server-shutdown Set to true to tell this server to fail over when it is normally shut down. Default is false . Table 30.4. Attributes of the shared-store-slave Configuration Element Attribute Description allow-failback Set to true to tell this server to automatically stop when another places a request to take over its place. The use case is when a regular server stops and its backup takes over its duties; later, the main server restarts and requests the server (the former backup) to stop operating. Default is true . failover-on-server-shutdown Set to true to tell this server to fail over when it is normally shut down. Default is false . restart-backup Set to true to tell this server to restart once it has been stopped because of failback or scaling down. Default is true . 30.5. Failing Back to a Live Server After a live server has failed and a backup has taken over its duties, you may want to restart the live server and have clients fail back to it. In case of a shared store, simply restart the original live server and kill the new live server by killing the process itself. Alternatively, you can set allow-failback to true on the slave, which will force it to automatically stop once the master is back online. The management CLI command to set allow-failback looks like the following: In replication HA mode, you need to make sure the check-for-live-server attribute is set to true in the master configuration. Starting with JBoss EAP 7.1, this is the default value. If set to true , a live server will search the cluster during startup for another server using its nodeID. If it finds one, it will contact this server and try to "fail-back". Since this is a remote replication scenario, the original live server will have to synchronize its data with the backup running with its ID. Once they are in sync, it will request the backup server to shut down so it can take over active processing. This behavior allows the original live server to determine whether there was a fail-over, and if so whether the server that took its duties is still running or not. Warning Be aware that if you restart a live server after the failover to backup has occurred, then the check-for-live-server attribute must be set to true . If not, then the live server will start at once without checking that its backup server is running.
This results in a situation in which the live and backup are running at the same time, causing the delivery of duplicate messages to all newly connected clients. For shared stores, it is also possible to cause failover to occur on normal server shutdown. To enable this, set failover-on-server-shutdown to true in the HA configuration on either the master or slave, like so: You can also force the running backup server to shut down when the original live server comes back up, allowing the original live server to take over automatically, by setting allow-failback to true . 30.6. Colocated Backup Servers JBoss EAP also makes it possible to colocate backup messaging servers in the same JVM as another live server. Take, for example, a simple two node cluster of standalone servers where each live server colocates the backup for the other. You can use either a shared store or a replicated HA policy when colocating servers in this way. There are two important things to remember when configuring messaging servers for colocation. First, each server element in the configuration will need its own remote-connector and remote-acceptor or http-connector and http-acceptor . For example, a live server with a remote-acceptor can be configured to listen on port 5445 , while a remote-acceptor from a colocated backup uses port 5446 . The ports are defined in socket-binding elements that must be added to the default socket-binding-group . In the case of http-acceptors , the live and colocated backup can share the same http-listener . Cluster-related configuration elements in each server configuration will use the remote-connector or http-connector used by the server. The relevant configuration is included in each of the examples that follow. Second, remember to properly configure paths for journal related directories. For example, in a shared store colocated topology, both the live server and its backup, colocated on another live server, must be configured to share directory locations for the binding and message journals, for large messages , and for paging . 30.6.1. Configuring Manual Creation of a Colocated HA Topology The example management CLI commands used in the steps below illustrate how to configure a simple two node colocated cluster. A live server and a backup server will live on each node. The colocated backup on node one is paired with the live server colocated on node two , and the backup server on node two is paired with the live server on node one . Examples are included for both a shared store and a data replication HA policy. Note The examples below assume you are running JBoss EAP using the full-ha configuration profile. Modify the default server on each instance to use an HA policy. The default server on each node will become the live server. The instructions you follow depend on whether you have configured a shared store policy or a data replication policy. Instructions for a shared store policy: Use the following management CLI command to add the preferred HA policy. Instructions for a data replication policy: The default server on each node should be configured with a unique group-name . In the following example, the first command is executed on node one , and the second on node two . Colocate a new backup server with each live server. Add a new server to each instance of JBoss EAP to colocate with the default live server. The new server will back up the default server on the other node.
Use the following management CLI command to create a new server named backup . Next, configure the new server to use the preferred HA policy. The instructions you follow depend on whether you have configured a shared store policy or a data replication policy. Instructions for a shared store policy: Use the following management CLI command to add the HA policy: Instructions for a data replication policy: Configure the backup servers to use the group-name of the live server on the other node. In the following example, the first command is executed on node one , and the second command is executed on node two . Configure the directory locations for all servers. Once the servers are configured for HA, you must configure the locations for the binding journal, message journal, and large messages directory. If you plan to use paging, you must also configure the paging directory. The instructions you follow depend on whether you have configured a shared store policy or a data replication policy. Instructions for a shared store policy: The path values for the live server on node one should point to the same location on a supported file system as the backup server on node two . The same is true for the live server on node two and its backup on node one . Use the following management CLI commands to configure the directory locations for node one : Use the following management CLI commands to configure the directory locations for node two : Instructions for a data replication policy: Each server uses its own directories and does not share them with any other server. In the example commands below, each value for a path location is assumed to be a unique location on a file system. There is no need to change the directory locations for the live servers since they will use the default locations. However, the backup servers still must be configured with unique locations. Use the following management CLI commands to configure the directory locations for node one : Use the following management CLI commands to configure the directory locations for node two : Add a new acceptor and connector to the backup servers. Each backup server must be configured with an http-connector and an http-acceptor that uses the default http-listener . This allows a server to receive and send communications over the HTTP port. The following example adds an http-acceptor and an http-connector to the backup server. Configure the cluster-connection for the backup servers. Each messaging server needs a cluster-connection , a broadcast-group , and a discovery-group for proper communication. Use the following management CLI commands to configure these elements. The colocated server configuration is now completed. 30.7. Failover Modes JBoss EAP messaging defines two types of client failover: Automatic client failover Application-level client failover JBoss EAP messaging also provides 100% transparent automatic reattachment of connections to the same server (e.g. in case of transient network problems). This is similar to failover, except that it reconnects to the same server and is discussed in Client Reconnection and Session Reattachment . If the client has consumers on any non persistent or temporary queues, those queues will be automatically recreated during failover on the backup node, since the backup node will not have any knowledge of non persistent queues. 30.7.1.
Automatic Client Failover JBoss EAP messaging clients can be configured to receive knowledge of all live and backup servers, so that in the event of a connection failure at the client - live server connection, the client will detect the failure and reconnect to the backup server. The backup server will then automatically recreate any sessions and consumers that existed on each connection before failover, thus saving the user from having to hand-code manual reconnection logic. A JBoss EAP messaging client detects connection failure when it has not received packets from the server within the time given by client-failure-check-period as explained in Detecting Dead Connections . If the client does not receive data in the allotted time, it will assume the connection has failed and attempt failover. If the socket is closed by the operating system (because the server process was killed rather than the server machine itself crashing, for example), the client will fail over straight away. JBoss EAP messaging clients can be configured to discover the list of live-backup server pairs in a number of different ways. They can be configured with explicit endpoints, for example, but the most common way is for the client to receive information about the cluster topology when it first connects to the cluster. See Server Discovery for more information. The default HA configuration includes a cluster-connection that uses the recommended http-connector for cluster communication. This is the same http-connector that remote clients use when making connections to the server using the default RemoteConnectionFactory . While it is not recommended, you can use a different connector. If you use your own connector, make sure it is included as part of the configuration for both the connection-factory to be used by the remote client and the cluster-connection used by the cluster nodes. See Configuring the Messaging Transports and Cluster Connections for more information on connectors and cluster connections. Warning The connector defined in the connection-factory to be used by a Jakarta Messaging client must be the same one defined in the cluster-connection used by the cluster. Otherwise, the client will not be able to update its topology of the underlying live/backup pairs and therefore will not know the location of the backup server. Use CLI commands to review the configuration for both the connection-factory and the cluster-connection . For example, to read the current configuration for the connection-factory named RemoteConnectionFactory , use the following command. Likewise, the command below reads the configuration for the cluster-connection named my-cluster . To enable automatic client failover, the client must be configured to allow non-zero reconnection attempts. See Client Reconnection and Session Reattachment for more information. By default, failover will occur only after at least one connection has been made to the live server. In other words, failover will not occur if the client fails to make an initial connection to the live server. If it does fail its initial attempt, a client would simply retry connecting to the live server according to the reconnect-attempts property and fail after the configured number of attempts. An exception to this rule is the case where there is only one pair of live - backup servers, and no other live server, and a remote MDB is connected to the live server when it is cleanly shut down.
If the MDB has configured @ActivationConfigProperty(propertyName = "rebalanceConnections", propertyValue = "true") , it tries to rebalance its connection to another live server and will not fail over to the backup. Failing Over on the Initial Connection Since the client does not learn about the full topology until after the first connection is made, there is a window of time where it does not know about the backup. If a failure happens at this point, the client can only try reconnecting to the original live server. To configure how many attempts the client will make, you can set the property initialConnectAttempts on the ClientSessionFactoryImpl or ActiveMQConnectionFactory . Alternatively, in the server configuration, you can set the initial-connect-attempts attribute of the connection factory used by the client. The default for this is 0 , that is, try only once. Once the number of attempts has been made, an exception will be thrown. About Server Replication JBoss EAP messaging does not replicate full server state between live and backup servers. When the new session is automatically recreated on the backup, it won't have any knowledge of the messages already sent or acknowledged during that session. Any in-flight sends or acknowledgements at the time of failover may also be lost. By replicating full server state, JBoss EAP messaging could theoretically provide a 100% transparent seamless failover, avoiding any lost messages or acknowledgements. However, doing so comes at a great cost: replicating the full server state, including the queues and sessions. This would require replication of the entire server state machine. That is, every operation on the live server would have to be replicated on the replica servers in the exact same global order to ensure a consistent replica state. This is extremely hard to do in a performant and scalable way, especially considering that multiple threads are changing the live server state concurrently. It is possible to provide full state machine replication using techniques such as virtual synchrony, but this does not scale well and effectively serializes all operations to a single thread, dramatically reducing concurrency. Other techniques for multi-threaded active replication exist, such as replicating lock states or replicating thread scheduling, but this is very hard to achieve at a Java level. Consequently, it was not worth reducing performance and concurrency for the sake of 100% transparent failover. Even without 100% transparent failover, it is simple to guarantee once and only once delivery, even in the case of failure, by using a combination of duplicate detection and retrying of transactions. However this is not 100% transparent to the client code. 30.7.1.1. Handling Blocking Calls During Failover If the client code is in a blocking call to the server, i.e. it is waiting for a response to continue its execution, during a failover, the new session will not have any knowledge of the call that was in progress. The blocked call might otherwise hang forever, waiting for a response that will never come. To prevent this, JBoss EAP messaging will unblock any blocking calls that were in progress at the time of failover by making them throw a javax.jms.JMSException , if using Jakarta Messaging, or an ActiveMQException with error code ActiveMQException.UNBLOCKED if using the core API. It is up to the client code to catch this exception and retry any operations if desired.
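A minimal sketch of that catch-and-retry pattern for a core API send, assuming a producer and message already exist and that the relevant org.apache.activemq.artemis.api.core classes are imported (the single retry shown is illustrative, not prescriptive):

try {
    producer.send(message);
} catch (ActiveMQException e) {
    if (e.getType() == ActiveMQExceptionType.UNBLOCKED) {
        // The call was unblocked by failover; it is unknown whether the old live
        // server processed the send, so retry it on the new (backup) session.
        producer.send(message);
    } else {
        throw e;
    }
}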
If the method being unblocked is a call to commit(), or prepare(), then the transaction will be automatically rolled back and JBoss EAP messaging will throw a javax.jms.TransactionRolledBackException , if using Jakarta Messaging, or an ActiveMQException with error code ActiveMQException.TRANSACTION_ROLLED_BACK if using the core API. 30.7.1.2. Handling Failover With Transactions If the session is transactional and messages have already been sent or acknowledged in the current transaction, then the server cannot be sure whether messages or acknowledgements were lost during the failover. Consequently the transaction will be marked as rollback-only, and any subsequent attempt to commit it will throw a javax.jms.TransactionRolledBackException , if using Jakarta Messaging, or an ActiveMQException with error code ActiveMQException.TRANSACTION_ROLLED_BACK if using the core API. Warning The caveat to this rule is when XA is used either via Jakarta Messaging or through the core API. If a two phase commit is used and prepare() has already been called, then rolling back could cause a HeuristicMixedException . Because of this, the commit will throw an XAException.XA_RETRY exception. This informs the Transaction Manager that it should retry the commit at some later point in time. A side effect of this is that any non persistent messages will be lost. To avoid this from happening, be sure to use persistent messages when using XA. With acknowledgements this is not an issue, since they are flushed to the server before prepare() gets called. It is up to the user to catch the exception and perform any client side local rollback code as necessary. There is no need to manually roll back the session, since it is already rolled back. The user can then just retry the transactional operations again on the same session. If failover occurs when a commit call is being executed, the server, as previously described, will unblock the call to prevent a hang, since no response will come back. In this case it is not easy for the client to determine whether the transaction commit was actually processed on the live server before failure occurred. Note If XA is being used either via Jakarta Messaging or through the core API, then an XAException.XA_RETRY is thrown. This is to inform Transaction Managers that a retry should occur at some point. At some later point in time, the Transaction Manager will retry the commit. If the original commit has not occurred, it will still exist and be committed. If it does not exist, then it is assumed to have been committed, although the transaction manager may log a warning. To remedy this, the client can enable duplicate detection in the transaction, and retry the transaction operations again after the call is unblocked. See Duplicate Message Detection for information on how detection is configured on the server. If the transaction had indeed been committed on the live server successfully before failover, duplicate detection will ensure that any durable messages resent in the transaction will be ignored on the server to prevent them getting sent more than once when the transaction is retried. 30.7.1.3. Getting Notified of Connection Failure Jakarta Messaging provides a standard mechanism for asynchronously sending notifications of a connection failure: java.jms.ExceptionListener . Please consult the Jakarta Messaging javadoc for more information on this class. The core API also provides a similar feature in the form of the class org.apache.activemq.artemis.core.client.SessionFailureListener .
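A minimal sketch of registering such a listener on a Jakarta Messaging connection, assuming a connection has already been created:

connection.setExceptionListener(exception ->
    System.err.println("Connection failure detected: " + exception.getErrorCode()));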
Any ExceptionListener or SessionFailureListener instance will always be called by JBoss EAP in case of a connection failure, whether the connection was successfully failed over, reconnected, or reattached. However, you can find out if the reconnect or reattach has happened by inspecting the value for the failedOver flag passed into connectionFailed() on SessionFailureListener or the error code on the javax.jms.JMSException which will be one of the following: JMSException error codes Error code Description FAILOVER Failover has occurred and we have successfully reattached or reconnected. DISCONNECT No failover has occurred and we are disconnected. 30.7.2. Application-Level Failover In some cases you may not want automatic client failover, and prefer to handle any connection failure yourself, and code your own manual reconnection logic in your own failure handler. We define this as application-level failover, since the failover is handled at the user application level. To implement application-level failover, if you're using Jakarta Messaging, set an ExceptionListener class on the Jakarta Messaging connection. The ExceptionListener will be called by JBoss EAP messaging in the event that connection failure is detected. In your ExceptionListener , you would close your old Jakarta Messaging connections, potentially look up new connection factory instances from JNDI, and create new connections. If you are using the core API, then the procedure is very similar: you would set a FailureListener on the core ClientSession instances. 30.8. Detecting Dead Connections This section discusses connection time to live (TTL) and explains how JBoss EAP messaging handles crashed clients and clients that have exited without cleanly closing their resources. Cleaning up Dead Connection Resources on the Server Before a JBoss EAP client application exits, it should close its resources in a controlled manner, using a finally block. Below is an example of a core client appropriately closing its session and session factory in a finally block: ServerLocator locator = null; ClientSessionFactory sf = null; ClientSession session = null; try { locator = ActiveMQClient.createServerLocatorWithoutHA(..); sf = locator.createSessionFactory(); session = sf.createSession(...); ... do some stuff with the session... } finally { if (session != null) { session.close(); } if (sf != null) { sf.close(); } if (locator != null) { locator.close(); } } And here is an example of a well behaved Jakarta Messaging client application: Connection jmsConnection = null; try { ConnectionFactory jmsConnectionFactory = ActiveMQJMSClient.createConnectionFactoryWithoutHA(...); jmsConnection = jmsConnectionFactory.createConnection(); ... do some stuff with the connection... } finally { if (jmsConnection != null) { jmsConnection.close(); } } Unfortunately, sometimes clients crash and do not have a chance to clean up their resources. If this occurs, it can leave server side resources hanging on the server. If these resources are not removed, they would cause a resource leak on the server, and over time this likely would result in the server running out of memory or other resources. When looking to clean up dead client resources, it is important to be aware of the fact that sometimes the network between the client and the server can fail and then come back, allowing the client to reconnect.
Because JBoss EAP supports client reconnection, it is important that it does not clean up "dead" server-side resources too soon, or clients will be prevented from reconnecting and regaining their old sessions on the server. JBoss EAP makes all of this configurable. For each ClientSessionFactory configured, a Time-To-Live, or TTL, property can be used to set how long, in milliseconds, the server will keep a connection alive in the absence of any data from the client. The client will automatically send "ping" packets periodically to prevent the server from closing its connection. If the server does not receive any packets on a connection for the length of the TTL time, it will automatically close all the sessions on the server that relate to that connection. If you are using Jakarta Messaging, the connection TTL is defined by the ConnectionTTL attribute on an ActiveMQConnectionFactory instance, or if you are deploying Jakarta Messaging connection factory instances directly into JNDI on the server side, you can specify it in the XML configuration, using the parameter connectionTtl . The default value for ConnectionTTL on a network-based connection, such as an http-connector , is 60000 , i.e. 1 minute. The default value for connection TTL on an internal connection, e.g. an in-vm connection, is -1 . A value of -1 for ConnectionTTL means the server will never time out the connection on the server side. If you do not want clients to specify their own connection TTL, you can set a global value on the server side. This can be done by specifying the connection-ttl-override attribute in the server configuration. The default value for connection-ttl-override is -1 which means "do not override", i.e. let clients use their own values. Closing Core Sessions or Jakarta Messaging Connections It is important that all core client sessions and Jakarta Messaging connections are always closed explicitly in a finally block when you are finished using them. If you fail to do so, JBoss EAP will detect this at garbage collection time. It will then close the connection and log a warning similar to the following: Note that if you are using Jakarta Messaging the warning will involve a Jakarta Messaging connection, not a client session. Also, the log will tell you the exact line of code where the unclosed Jakarta Messaging connection or core client session was instantiated. This will enable you to pinpoint the error in your code and correct it appropriately. Detecting Failure from the Client Side As long as the client is receiving data from the server it will consider the connection to be alive. If the client does not receive any packets for client-failure-check-period milliseconds, it will consider the connection failed and will either initiate failover, or call any FailureListener instances, or ExceptionListener instances if you are using Jakarta Messaging, depending on how the client has been configured. If you are using Jakarta Messaging the behavior is defined by the ClientFailureCheckPeriod attribute on an ActiveMQConnectionFactory instance. The default value for client failure check period on a network connection, for example an HTTP connection, is 30000 , or 30 seconds. The default value for client failure check period on an in-vm connection, is -1 . A value of -1 means the client will never fail the connection on the client side if no data is received from the server.
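With the core API, both values can also be set programmatically on the ServerLocator . The following is a minimal sketch using the default values discussed above; the Netty connector is an assumption about your transport configuration:

import org.apache.activemq.artemis.api.core.TransportConfiguration;
import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ServerLocator;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory;

public class DeadConnectionSettings {
    public ServerLocator createLocator() {
        ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName()));
        // Server closes the connection after 60 seconds without client data...
        locator.setConnectionTTL(60000);
        // ...while the client checks for server data every 30 seconds, keeping
        // the check period well below the TTL so transient drops can recover.
        locator.setClientFailureCheckPeriod(30000);
        return locator;
    }
}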
Whatever the type of connection, the check period is typically much lower than the value for connection TTL on the server so that clients can reconnect in case of transitory failure. Configuring Asynchronous Connection Execution Most packets received on the server side are executed on the remoting thread. These packets represent short-running operations and are always executed on the remoting thread for performance reasons. However, by default some kinds of packets are executed using a thread from a thread pool so that the remoting thread is not tied up for too long. Note that processing operations asynchronously on another thread adds a little more latency. These packets are: org.apache.activemq.artemis.core.protocol.core.impl.wireformat.RollbackMessage org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionCloseMessage org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionCommitMessage org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionXACommitMessage org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionXAPrepareMessage org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionXARollbackMessage To disable asynchronous connection execution, set the parameter async-connection-execution-enabled to false . The default value is true . 30.9. Client Reconnection and Session Reattachment JBoss EAP messaging clients can be configured to automatically reconnect or reattach to the server in the event that a failure is detected in the connection between the client and the server. Transparent Session Reattachment If the failure was due to some transient cause such as a temporary network outage, and the target server was not restarted, the sessions will still exist on the server, assuming the client has not been disconnected for more than the value of connection-ttl . See Detecting Dead Connections . In this scenario, JBoss EAP will automatically reattach the client sessions to the server sessions when the reconnection is made. This is done 100% transparently and the client can continue exactly as if nothing had happened. As JBoss EAP messaging clients send commands to their servers they store each sent command in an in-memory buffer. When a connection fails and the client subsequently attempts to reattach to the same server, as part of the reattachment protocol, the server gives the client the ID of the last command it successfully received. If the client has sent more commands than were received before failover, it can replay any sent commands from its buffer so that the client and server can reconcile their states. The size in bytes of this buffer is set by the confirmationWindowSize property. When the server has received confirmationWindowSize bytes of commands and processed them, it will send back a command confirmation to the client, and the client can then free up space in the buffer. If you are using the Jakarta Messaging service on the server to load your Jakarta Messaging connection factory instances into JNDI, then this property can be configured in the server configuration by setting the confirmation-window-size attribute of the chosen connection-factory . If you are using Jakarta Messaging but not using JNDI then you can set these values directly on the ActiveMQConnectionFactory instance using the appropriate setter method, setConfirmationWindowSize . If you are using the core API, the ServerLocator instance has a setConfirmationWindowSize method exposed as well.
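For example, a core client could enable transparent reattachment by giving the buffer a non-default size. This is a minimal sketch; the 1 MiB window and the Netty connector are assumptions:

import org.apache.activemq.artemis.api.core.TransportConfiguration;
import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ServerLocator;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory;

public class ReattachmentSettings {
    public ServerLocator createLocator() {
        ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName()));
        // Buffer up to 1 MiB of sent commands so the client can replay them
        // and reattach transparently after a transient connection failure.
        locator.setConfirmationWindowSize(1024 * 1024);
        return locator;
    }
}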
Setting confirmationWindowSize to -1 , which is also the default, disables any buffering and prevents any reattachment from occurring, forcing a reconnect instead. Session Reconnection Alternatively, the server might have actually been restarted after crashing, or it might have been stopped. In such a case any sessions will no longer exist on the server and it will not be possible to 100% transparently reattach to them. In this case, JBoss EAP will automatically reconnect the connection and recreate any sessions and consumers on the server corresponding to the sessions and consumers on the client. This process is exactly the same as what happens when failing over to a backup server. Client reconnection is also used internally by components such as core bridges to allow them to reconnect to their target servers. See the section on Automatic Client Failover to get a full understanding of how transacted and non-transacted sessions are reconnected during a reconnect and what you need to do to maintain once and only once delivery guarantees. Configuring Reconnection Attributes Client reconnection is configured by setting the following properties: retryInterval. This optional parameter sets the period in milliseconds between subsequent reconnection attempts, if the connection to the target server has failed. The default value is 2000 milliseconds. retryIntervalMultiplier. This optional parameter sets a multiplier to apply to the time since the last retry to compute the time to the next retry. This allows you to implement an exponential backoff between retry attempts. For example, if you set retryInterval to 1000 ms and set retryIntervalMultiplier to 2.0 , then, if the first reconnect attempt fails, the client will wait 1000 ms, then 2000 ms, then 4000 ms between subsequent reconnection attempts. The default value is 1.0 , meaning each reconnect attempt is spaced at equal intervals. maxRetryInterval. This optional parameter sets the maximum retry interval that will be used. Without an upper limit, setting retryIntervalMultiplier could cause subsequent retry intervals to grow exponentially to impractically large values. By setting this parameter you can place an upper limit on the interval. The default value is 2000 milliseconds. reconnectAttempts. This optional parameter sets the total number of reconnect attempts to make before giving up and shutting down. A value of -1 signifies an unlimited number of attempts. The default value is 0 . If you are using Jakarta Messaging and JNDI on the client to look up your Jakarta Messaging connection factory instances then you can specify these parameters in the JNDI context environment. For example, your jndi.properties file might look like the following. If you are using Jakarta Messaging, but instantiating your Jakarta Messaging connection factory directly, you can specify the parameters using the appropriate setter methods on the ActiveMQConnectionFactory immediately after creating it. If you are using the core API and instantiating the ServerLocator instance directly you can also specify the parameters using the appropriate setter methods on the ServerLocator immediately after creating it. If your client does manage to reconnect but the session is no longer available on the server, for instance if the server has been restarted or it has timed out, then the client will not be able to reattach, and any ExceptionListener or FailureListener instances registered on the connection or session will be called.
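The same four attributes can be set programmatically on a core ServerLocator . The following sketch mirrors the values shown in the jndi.properties example; the Netty connector is an assumption about your transport configuration:

import org.apache.activemq.artemis.api.core.TransportConfiguration;
import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ServerLocator;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory;

public class ReconnectionSettings {
    public ServerLocator createLocator() {
        ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName()));
        locator.setRetryInterval(1000);           // first retry after 1 second
        locator.setRetryIntervalMultiplier(1.5);  // exponential back-off between attempts
        locator.setMaxRetryInterval(60000);       // never wait more than 1 minute
        locator.setReconnectAttempts(1000);       // give up after 1000 attempts
        return locator;
    }
}

Setting the attributes immediately after creating the locator ensures that they apply to every session factory the locator subsequently produces.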
ExceptionListeners and SessionFailureListeners Note that when a client reconnects or reattaches, any registered Jakarta Messaging ExceptionListener or core API SessionFailureListener will be called. | [
"/subsystem=messaging-activemq/server=default/ha-policy=replication-master:read-attribute(name=synchronized-with-backup)",
"/subsystem=messaging-activemq/server=default/ha-policy=replication-slave:read-attribute(name=synchronized-with-live)",
"/subsystem=messaging-activemq/server= SERVER /ha-policy= POLICY :add",
"/subsystem=messaging-activemq/server=default/ha-policy=replication-master:add",
"/subsystem=messaging-activemq/server= SERVER /ha-policy= POLICY :read-resource",
"/subsystem=messaging-activemq/server=default/ha-policy=replication-master:read-resource { \"outcome\" => \"success\", \"result\" => { \"check-for-live-server\" => true, \"cluster-name\" => undefined, \"group-name\" => undefined, \"initial-replication-sync-timeout\" => 30000L } }",
"/subsystem=messaging-activemq/server=default/ha-policy=replication-master:add(check-for-live-server=true,cluster-name=my-cluster,group-name=group1)",
"/subsystem=messaging-activemq/server=default/ha-policy=replication-slave:add(cluster-name=my-cluster,group-name=group1)",
"/subsystem=messaging-activemq/server=default/cluster-connection=my-cluster:read-resource",
"/subsystem=messaging-activemq/server=default/ha-policy= POLICY :write-attribute(name= ATTRIBUTE ,value= VALUE )",
"/subsystem=messaging-activemq/server=default/ha-policy=replication-slave:write-attribute(name=restart-backup,value=true)",
"AMQ222207: The backup server is not responding promptly introducing latency beyond the limit. Replication server being disconnected now.",
"/subsystem=messaging-activemq/server=default:write-attribute(name=<parameter-name>, value=\"<ip-address>\")",
"/subsystem=messaging-activemq/server=default:write-attribute(name=network-check-list, value=\"10.0.0.1\")",
"/subsystem=messaging-activemq/server=default/ha-policy=shared-store-master:add",
"/subsystem=messaging-activemq/server=default/ha-policy=shared-store-slave:add",
"/subsystem=messaging-activemq/server=default/cluster-connection=my-cluster:read-resource",
"/subsystem=messaging-activemq/server=default/ha-policy= POLICY :write-attribute(name= ATTRIBUTE ,value= VALUE )",
"/subsystem=messaging-activemq/server=default/ha-policy=shared-store-slave:write-attribute(name=restart-backup,value=true)",
"/subsystem=messaging-activemq/server=default/ha-policy=shared-store-slave:write-attribute(name=allow-fail-back,value=true)",
"/subsystem=messaging-activemq/server=default/ha-policy=replication-master:write-attribute(name=check-for-live-server,value=true)",
"/subsystem=messaging-activemq/server=default/ha-policy=shared-store-slave:write-attribute(name=failover-on-server-shutdown,value=true)",
"/subsystem=messaging-activemq/server=default/ha-policy=shared-store-slave:write-attribute(name=allow-failback,value=true)",
"/subsystem=messaging-activemq/server=default/ha-policy=shared-store-master:add",
"/subsystem=messaging-activemq/server=default/ha-policy=replication-master:add(cluster-name=my-cluster,group-name=group1,check-for-live-server=true) /subsystem=messaging-activemq/server=default/ha-policy=replication-master:add(cluster-name=my-cluster,group-name=group2,check-for-live-server=true)",
"/subsystem=messaging-activemq/server=backup:add",
"/subsystem=messaging-activemq/server=backup/ha-policy=shared-store-slave:add",
"/subsystem=messaging-activemq/server=backup/ha-policy=replication-slave:add(cluster-name=my-cluster,group-name=group2) /subsystem=messaging-activemq/server=backup/ha-policy=replication-slave:add(cluster-name=my-cluster,group-name=group1)",
"/subsystem=messaging-activemq/server=default/path=bindings-directory:write-attribute(name=path,value= /PATH/TO /shared/bindings-A) /subsystem=messaging-activemq/server=default/path=journal-directory:write-attribute(name=path,value= /PATH/TO /shared/journal-A) /subsystem=messaging-activemq/server=default/path=large-messages-directory:write-attribute(name=path,value= /PATH/TO /shared/largemessages-A) /subsystem=messaging-activemq/server=default/path=paging-directory:write-attribute(name=path,value= /PATH/TO /shared/paging-A) /subsystem=messaging-activemq/server=backup/path=bindings-directory:write-attribute(name=path,value= /PATH/TO /shared/bindings-B) /subsystem=messaging-activemq/server=backup/path=journal-directory:write-attribute(name=path,value= /PATH/TO /shared/journal-B) /subsystem=messaging-activemq/server=backup/path=large-messages-directory:write-attribute(name=path,value= /PATH/TO /shared/largemessages-B) /subsystem=messaging-activemq/server=backup/path=paging-directory:write-attribute(name=path,value= /PATH/TO /shared/paging-B)",
"/subsystem=messaging-activemq/server=default/path=bindings-directory:write-attribute(name=path,value= /PATH/TO /shared/bindings-B) /subsystem=messaging-activemq/server=default/path=journal-directory:write-attribute(name=path,value= /PATH/TO /shared/journal-B) /subsystem=messaging-activemq/server=default/path=large-messages-directory:write-attribute(name=path,value= /PATH/TO /shared/largemessages-B) /subsystem=messaging-activemq/server=default/path=paging-directory:write-attribute(name=path,value= /PATH/TO /shared/paging-B) /subsystem=messaging-activemq/server=backup/path=bindings-directory:write-attribute(name=path,value= /PATH/TO /shared/bindings-A) /subsystem=messaging-activemq/server=backup/path=journal-directory:write-attribute(name=path,value= /PATH/TO /shared/journal-A) /subsystem=messaging-activemq/server=backup/path=large-messages-directory:write-attribute(name=path,value= /PATH/TO /shared/largemessages-A) /subsystem=messaging-activemq/server=backup/path=paging-directory:write-attribute(name=path,value= /PATH/TO /shared/paging-A)",
"/subsystem=messaging-activemq/server=backup/path=bindings-directory:write-attribute(name=path,value=activemq/bindings-B) /subsystem=messaging-activemq/server=backup/path=journal-directory:write-attribute(name=path,value=activemq/journal-B) /subsystem=messaging-activemq/server=backup/path=large-messages-directory:write-attribute(name=path,value=activemq/largemessages-B) /subsystem=messaging-activemq/server=backup/path=paging-directory:write-attribute(name=path,value=activemq/paging-B)",
"/subsystem=messaging-activemq/server=backup/path=bindings-directory:write-attribute(name=path,value=activemq/bindings-B) /subsystem=messaging-activemq/server=backup/path=journal-directory:write-attribute(name=path,value=activemq/journal-B) /subsystem=messaging-activemq/server=backup/path=large-messages-directory:write-attribute(name=path,value=activemq/largemessages-B) /subsystem=messaging-activemq/server=backup/path=paging-directory:write-attribute(name=path,value=activemq/paging-B)",
"/subsystem=messaging-activemq/server=backup/http-acceptor=http-acceptor:add(http-listener=default) /subsystem=messaging-activemq/server=backup/http-connector=http-connector:add(endpoint=http-acceptor,socket-binding=http)",
"/subsystem=messaging-activemq/server=backup/broadcast-group=bg-group1:add(connectors=[http-connector],jgroups-cluster=activemq-cluster) /subsystem=messaging-activemq/server=backup/discovery-group=dg-group1:add(jgroups-cluster=activemq-cluster) /subsystem=messaging-activemq/server=backup/cluster-connection=my-cluster:add(connector-name=http-connector,cluster-connection-address=jms,discovery-group=dg-group1)",
"/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:read-resource",
"/subsystem=messaging-activemq/server=default/cluster-connection=my-cluster:read-resource",
"/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=reconnect-attempts,value=<NEW_VALUE>)",
"/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=initial-connect-attempts,value=<NEW_VALUE>)",
"ServerLocator locator = null; ClientSessionFactory sf = null; ClientSession session = null; try { locator = ActiveMQClient.createServerLocatorWithoutHA(..); sf = locator.createClientSessionFactory();; session = sf.createSession(...); ... do some stuff with the session } finally { if (session != null) { session.close(); } if (sf != null) { sf.close(); } if(locator != null) { locator.close(); } }",
"Connection jmsConnection = null; try { ConnectionFactory jmsConnectionFactory = ActiveMQJMSClient.createConnectionFactoryWithoutHA(...); jmsConnection = jmsConnectionFactory.createConnection(); ... do some stuff with the connection } finally { if (connection != null) { connection.close(); } }",
"[Finalizer] 20:14:43,244 WARNING [org.apache.activemq.artemis.core.client.impl.DelegatingSession] I'm closing a ClientSession you left open. Please make sure you close all ClientSessions explicitly before let ting them go out of scope! [Finalizer] 20:14:43,244 WARNING [org.apache.activemq.artemis.core.client.impl.DelegatingSession] The session you didn't close was created here: java.lang.Exception at org.apache.activemq.artemis.core.client.impl.DelegatingSession.<init>(DelegatingSession.java:83) at org.acme.yourproject.YourClass (YourClass.java:666)",
"org.apache.activemq.artemis.core.protocol.core.impl.wireformat.RollbackMessage org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionCloseMessage org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionCommitMessage org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionXACommitMessage org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionXAPrepareMessage org.apache.activemq.artemis.core.protocol.core.impl.wireformat.SessionXARollbackMessage",
"java.naming.factory.initial = ActiveMQInitialContextFactory connection.ConnectionFactory=tcp://localhost:8080?retryInterval=1000&retryIntervalMultiplier=1.5&maxRetryInterval=60000&reconnectAttempts=1000"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/messaging-ha |
3.9. Suspending Activity on a GFS2 File System | 3.9. Suspending Activity on a GFS2 File System You can suspend write activity to a file system by using the dmsetup suspend command. Suspending write activity allows hardware-based device snapshots to be used to capture the file system in a consistent state. The dmsetup resume command ends the suspension. Usage Start Suspension End Suspension MountPoint Specifies the file system. Examples This example suspends writes to file system /mygfs2 . This example ends suspension of writes to file system /mygfs2 . | [
"dmsetup suspend MountPoint",
"dmsetup resume MountPoint",
"dmsetup suspend /mygfs2",
"dmsetup resume /mygfs2"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/s1-manage-suspendfs |
Chapter 10. Embedding a Server for Offline Configuration | Chapter 10. Embedding a Server for Offline Configuration You can embed a JBoss EAP standalone server or host controller process inside the management CLI process. This allows you to configure the server without it being visible on the network. A common use of this feature is for initial configuration of the server, such as managing security-related settings or avoiding port conflicts, prior to the server being online. This direct, local administration of a JBoss EAP installation through the management CLI does not require a socket-based connection. You can use the management CLI with the embedded server in a way that is consistent with interacting with a remote JBoss EAP server. All of the standard management CLI commands that you can use to administer a remote server are available. Start an Embedded Standalone Server You can launch a standalone server locally using the management CLI to modify standalone configuration without launching an additional process or opening network sockets. The following procedure launches the management CLI, starts an embedded standalone server, modifies configuration, and then stops the embedded server. Launch the management CLI. Launch the embedded standalone server. Passing in the --std-out=echo parameter prints the standard output to the terminal. Perform the desired operations. Stop the embedded server. This stops the embedded server and returns you to your management CLI session. If you want to exit the management CLI session as well, you can use the quit command. Specifying the Server Configuration By default, the embedded server will use the standalone.xml configuration file. You can use the --server-config parameter to specify a different configuration file to use. Starting in Admin-only Mode By default, the embedded server is started in admin-only mode, which will start services related to server administration, but will not start other services or accept end-user requests. This is useful for the initial configuration of the server. You can start the embedded server in the normal running mode by setting the --admin-only parameter to false. You can also change the running mode using the reload command. Controlling Standard Output You can control how to handle standard output from the embedded server. By default, standard output is discarded, but you could find the output in the server log. You can pass in --std-out=echo to have server output appear with the management CLI output. Boot Timeout By default, the embed-server command blocks indefinitely waiting for the embedded server to fully start. You can specify the time to wait in seconds using the --timeout parameter. A value less than 1 will return as soon as the embedded server reaches a point where it can be managed by the CLI. Starting with a Blank Configuration When starting an embedded server, you can specify to start with an empty configuration. This is useful if you want to build the entire server configuration using management CLI commands. This command will fail if the file already exists, which helps to avoid the accidental deletion of a configuration file. You can specify to remove any existing configuration by passing in the --remove-existing parameter. Start an Embedded Host Controller You can launch a host controller locally using the management CLI to modify domain and host controller configuration without launching additional processes or opening network sockets. An embedded host controller does not start any of its servers. 
Additionally, you cannot use the --admin-only parameter when starting an embedded host controller. It will always be launched as if it is in admin-only mode. The following procedure launches the management CLI, starts an embedded host controller, modifies configuration, and then stops the embedded host controller. Launch the management CLI. Launch the embedded host controller. Passing in the --std-out=echo parameter prints the standard output to the terminal. Perform the desired operations. Stop the embedded host controller. Specifying the Host Controller Configuration By default, the embedded host controller will use domain.xml for domain configuration and host.xml for host configuration. You can use the --domain-config and --host-config parameters to specify different configuration files to use. Note Depending on which alternative configuration file you use, you may need to set certain properties when launching the management CLI. For example, Controlling Standard Output You can control how to handle standard output from the embedded host controller. By default, standard output is discarded, but you could find the output in the host controller's log. You can pass in --std-out=echo to have host controller output appear with the management CLI output. Boot Timeout By default, the embed-host-controller command blocks indefinitely waiting for the embedded host controller to fully start. You can specify the time to wait in seconds using the --timeout parameter. A value less than 1 will return as soon as the embedded host controller reaches a point where it can be managed by the CLI. Non-Modular Class Loading with the Management CLI Using the EAP_HOME /bin/jboss-cli.sh script to launch the management CLI uses a modular class loading environment. If you use the EAP_HOME /bin/client/jboss-cli-client.jar to run the management CLI in a non-modular class loading environment, you will need to specify the root JBoss EAP installation directory. Launch the management CLI. Start the embedded server, specifying the root installation directory. Note To embed a host controller, use the embed-host-controller command. The embedding logic will set up an appropriate modular class loading environment for the server. The module path for the modular class loader will have a single element: EAP_HOME /modules . No matter which way you launch the management CLI, the embedded server will run in a modular class loading environment. | [
"EAP_HOME /bin/jboss-cli.sh",
"embed-server --std-out=echo",
"/socket-binding-group=standard-sockets/socket-binding=management-http:write-attribute(name=port,value=9991)",
"stop-embedded-server",
"embed-server --server-config=standalone-full-ha.xml",
"embed-server --admin-only=false",
"reload --start-mode=normal",
"embed-server --std-out=echo",
"embed-server --timeout=30",
"embed-server --server-config=my-config.xml --empty-config",
"embed-server --server-config=my-config.xml --empty-config --remove-existing",
"EAP_HOME /bin/jboss-cli.sh",
"embed-host-controller --std-out=echo",
"/host= HOST_NAME :write-attribute(name=name,value= NEW_HOST_NAME )",
"stop-embedded-host-controller",
"embed-host-controller --domain-config=other-domain.xml --host-config=host-slave.xml",
"EAP_HOME /bin/jboss-cli.sh -Djboss.domain.master.address=127.0.0.1",
"embed-host-controller --std-out=echo",
"embed-host-controller --timeout=30",
"java -jar EAP_HOME /bin/client/jboss-cli-client.jar",
"embed-server --jboss-home= /path/to/EAP_HOME"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/management_cli_guide/running_embedded_server |
Chapter 2. Differences from upstream OpenJDK 21 | Chapter 2. Differences from upstream OpenJDK 21 Red Hat build of OpenJDK in Red Hat Enterprise Linux contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow Red Hat Enterprise Linux updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 21 changes: FIPS support. Red Hat build of OpenJDK 21 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 21 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 21 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all of the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificates from RHEL. Additional resources See Improve system FIPS detection (RHEL Planning Jira) See Using system-wide cryptographic policies (RHEL documentation) | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.2/rn-openjdk-diff-from-upstream
Chapter 1. About networking | Chapter 1. About networking Red Hat OpenShift Networking is an ecosystem of features, plugins and advanced networking capabilities that extend Kubernetes networking with the advanced networking-related features that your cluster needs to manage its network traffic for one or multiple hybrid clusters. This ecosystem of networking capabilities integrates ingress, egress, load balancing, high-performance throughput, security, inter- and intra-cluster traffic management and provides role-based observability tooling to reduce its natural complexities. Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the OpenShift SDN network plugin is not an option for new installations. In a future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. For more information, see OpenShift SDN CNI removal . The following list highlights some of the most commonly used Red Hat OpenShift Networking features available on your cluster: Primary cluster network provided by either of the following Container Network Interface (CNI) plugins: OVN-Kubernetes network plugin , the default plugin OpenShift SDN network plugin Certified 3rd-party alternative primary network plugins Cluster Network Operator for network plugin management Ingress Operator for TLS encrypted web traffic DNS Operator for name assignment MetalLB Operator for traffic load balancing on bare metal clusters IP failover support for high-availability Additional hardware network support through multiple CNI plugins, including for macvlan, ipvlan, and SR-IOV hardware networks IPv4, IPv6, and dual stack addressing Hybrid Linux-Windows host clusters for Windows-based workloads Red Hat OpenShift Service Mesh for discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring of services Single-node OpenShift Network Observability Operator for network debugging and insights Submariner for inter-cluster networking Red Hat Service Interconnect for layer 7 inter-cluster networking | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/networking/about-networking
2.8.9.5.3. IP Set Types | 2.8.9.5.3. IP Set Types bitmap:ip Stores an IPv4 host address, a network range, or IPv4 network addresses with the prefix-length in CIDR notation if the netmask option is used when the set is created. It can optionally store a timeout value, a counter value, and a comment. It can store up to 65536 entries. The command to create the bitmap:ip set has the following format: ipset create set-name range start_ipaddr-end_ipaddr | ipaddr/prefix-length [ netmask prefix-length ] [ timeout value ] [ counters ] [ comment ] Example 2.6. Create an IP Set for a Range of Addresses Using a Prefix Length To create an IP set for a range of addresses using a prefix length, make use of the bitmap:ip set type as follows: Once the set is created, entries can be added as follows: Review the members of the list: To add a range of addresses: Review the members of the list: Example 2.7. Create an IP Set for a Range of Addresses Using a Netmask To create an IP set for a range of addresses using a netmask, make use of the bitmap:ip set type as follows: Once the set is created, entries can be added as follows: If you attempt to add an address, the range containing that address will be added: bitmap:ip,mac Stores an IPv4 address and a MAC address as a pair. It can store up to 65536 entries. ipset create my-range bitmap:ip,mac range start_ipaddr-end_ipaddr | ipaddr/prefix-length [ timeout value ] [ counters ] [ comment ] Example 2.8. Create an IP Set for a Range of IPv4 MAC Address Pairs To create an IP set for a range of IPv4 MAC address pairs, make use of the bitmap:ip,mac set type as follows: It is not necessary to specify a MAC address when creating the set. Once the set is created, entries can be added as follows: bitmap:port Stores a range of ports. It can store up to 65536 entries. ipset create my-port-range bitmap:port range start_port-end_port [ timeout value ] [ counters ] [ comment ] The set match and SET target netfilter kernel modules interpret the stored numbers as TCP or UDP port numbers. The protocol can optionally be specified together with the port. The proto only needs to be specified if a service name is used, and that name does not exist as a TCP service. Example 2.9. Create an IP Set for a Range of Ports To create an IP set for a range of ports, make use of the bitmap:port set type as follows: Once the set is created, entries can be added as follows: hash:ip Stores a host or network address in the form of a hash. By default, an address specified without a network prefix length is a host address. The all-zero IP address cannot be stored. ipset create my-addresses hash:ip [ family[ inet | inet6 ] ] [ hashsize value ] [ maxelem value ] [ netmask prefix-length ] [ timeout value ] The inet family is the default; if family is omitted, addresses are interpreted as IPv4 addresses. The hashsize value is the initial hash size to use and defaults to 1024 . The maxelem value is the maximum number of elements which can be stored in the set; it defaults to 65536 . The netfilter tool searches for the most specific network prefix; it tries to find the smallest block of addresses that match. Example 2.10. Create an IP Set for IP Addresses To create an IP set for IP addresses, make use of the hash:ip set type as follows: Once the set is created, entries can be added as follows: If additional options such as netmask and timeout are required, they must be specified when the set is created.
For example: The maxelem option restricts the total number of elements in the set, thus conserving memory space. The timeout option means that elements will only exist in the set for the number of seconds specified. For example: The following output shows the time counting down: The element will be removed from the set when the timeout period ends. See the ipset(8) manual page for more examples. | [
"~]# ipset create my-range bitmap:ip range 192.168.33.0/28",
"~]# ipset add my-range 192.168.33.1",
"~]# ipset list my-range Name: my-range Type: bitmap:ip Header: range 192.168.33.0-192.168.33.15 Size in memory: 84 References: 0 Members: 192.168.33.1",
"~]# ipset add my-range 192.168.33.2-192.168.33.4",
"~]# ipset list my-range Name: my-range Type: bitmap:ip Header: range 192.168.33.0-192.168.33.15 Size in memory: 84 References: 0 Members: 192.168.33.1 192.168.33.2 192.168.33.3 192.168.33.4",
"~]# ipset create my-big-range bitmap:ip range 192.168.124.0-192.168.126.0 netmask 24",
"~]# ipset add my-big-range 192.168.124.0",
"~]# ipset add my-big-range 192.168.125.150 ~]# ipset list my-big-range Name: my-big-range Type: bitmap:ip Header: range 192.168.124.0-192.168.126.255 netmask 24 Size in memory: 84 References: 0 Members: 192.168.124.0 192.168.125.0",
"~]# ipset create my-range bitmap:ip,mac range 192.168.1.0/24",
"~]# ipset add my-range 192.168.1.1,12:34:56:78:9A:BC",
"~]# ipset create my-permitted-port-range bitmap:port range 1024-49151",
"~]# ipset add my-permitted-port-range 5060-5061",
"~]# ipset create my-addresses hash:ip",
"~]# ipset add my-addresses 10.10.10.0",
"~]# ipset create my-busy-addresses hash:ip maxelem 24 netmask 28 timeout 100",
"~]# ipset add my-busy-addresses timeout 100",
"ipset add my-busy-addresses 192.168.60.0 timeout 100 ipset list my-busy-addresses Name: my-busy-addresses Type: hash:ip Header: family inet hashsize 1024 maxelem 24 netmask 28 timeout 100 Size in memory: 8300 References: 0 Members: 192.168.60.0 timeout 90 ipset list my-busy-addresses Name: my-busy-addresses Type: hash:ip Header: family inet hashsize 1024 maxelem 24 netmask 28 timeout 100 Size in memory: 8300 References: 0 Members: 192.168.60.0 timeout 83"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-iptables-ip_set_types |
14.5. Amending an Image | 14.5. Amending an Image Amend the image format-specific options for the image file. Optionally, specify the file's format type ( fmt ). Note This operation is only supported for the qcow2 file format. | [
"qemu-img amend [-p] [-f fmt ] [-t cache ] -o options filename"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-using-qemu_img-amending_an_image |
7.5. Opening and Updating Support Cases Using Interactive Mode | 7.5. Opening and Updating Support Cases Using Interactive Mode Procedure 7.2. Opening a New Support Case Using Interactive Mode To open a new support case using interactive mode, proceed as follows: Start the tool by entering the following command: Enter the opencase command: Follow the on-screen prompts to select a product and then a version. Enter a summary of the case. Enter a description of the case and press Ctrl + D on an empty line when complete. Select a severity of the case. Optionally choose to see if there is a solution to this problem before opening a support case. Confirm you would still like to open the support case. Optionally choose to attach an SOS report. Optionally choose to attach a file. Procedure 7.3. Viewing and Updating an Existing Support Case Using Interactive Mode To view and update an existing support case using interactive mode, proceed as follows: Start the tool by entering the following command: Enter the getcase command: Where case-number is the number of the case you want to view and update. Follow the on-screen prompts to view the case, modify or add comments, and get or add attachments. Procedure 7.4. Modifying an Existing Support Case Using Interactive Mode To modify the attributes of an existing support case using interactive mode, proceed as follows: Start the tool by entering the following command: Enter the modifycase command: Where case-number is the number of the case you want to view and update. The modify selection list appears: Follow the on-screen prompts to modify one or more of the options. For example, to modify the status, enter 3 : | [
"~]# redhat-support-tool",
"Command (? for help): opencase",
"Support case 0123456789 has successfully been opened",
"~]# redhat-support-tool",
"Command (? for help): getcase case-number",
"~]# redhat-support-tool",
"Command (? for help): modifycase case-number",
"Type the number of the attribute to modify or 'e' to return to the previous menu. 1 Modify Type 2 Modify Severity 3 Modify Status 4 Modify Alternative-ID 5 Modify Product 6 Modify Version End of options.",
"Selection: 3 1 Waiting on Customer 2 Waiting on Red Hat 3 Closed Please select a status (or 'q' to exit):"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-opening_and_updating_support_cases_using_interactive_mode |
Chapter 17. Berkeley Internet Name Domain | Chapter 17. Berkeley Internet Name Domain BIND performs name resolution services using the named daemon. BIND lets users locate computer resources and services by name instead of numerical addresses. In Red Hat Enterprise Linux, the bind package provides a DNS server. Enter the following command to see if the bind package is installed: If it is not installed, use the yum utility as the root user to install it: 17.1. BIND and SELinux The default permissions on the /var/named/slaves/ , /var/named/dynamic/ and /var/named/data/ directories allow zone files to be updated using zone transfers and dynamic DNS updates. Files in /var/named/ are labeled with the named_zone_t type, which is used for master zone files. For a slave server, configure the /etc/named.conf file to place slave zones in /var/named/slaves/ . The following is an example of a domain entry in /etc/named.conf for a slave DNS server that stores the zone file for testdomain.com in /var/named/slaves/ : If a zone file is labeled named_zone_t , the named_write_master_zones Boolean must be enabled to allow zone transfers and dynamic DNS to update the zone file. Also, the mode of the parent directory has to be changed to allow the named user or group read, write and execute access. If zone files in /var/named/ are labeled with the named_cache_t type, a file system relabel or running restorecon -R /var/ will change their type to named_zone_t . | [
"~]USD rpm -q bind package bind is not installed",
"~]# yum install bind",
"zone \"testdomain.com\" { type slave; masters { IP-address; }; file \"/var/named/slaves/db.testdomain.com\"; };"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-berkeley_internet_name_domain |
Chapter 2. About Kafka | Chapter 2. About Kafka Apache Kafka is an open-source distributed publish-subscribe messaging system for fault-tolerant real-time data feeds. Additional resources For more information about Apache Kafka, see the Apache Kafka website . 2.1. Kafka concepts Knowledge of the key concepts of Kafka is important in understanding how AMQ Streams works. A Kafka cluster comprises multiple brokers. Topics are used to receive and store data in a Kafka cluster. Topics are split by partitions, where the data is written. Partitions are replicated across brokers for fault tolerance. Kafka brokers and topics Broker A broker, sometimes referred to as a server or node, orchestrates the storage and passing of messages. Topic A topic provides a destination for the storage of data. Each topic is split into one or more partitions. Cluster A group of broker instances. Partition The number of topic partitions is defined by a topic partition count . Partition leader A partition leader handles all producer requests for a topic. Partition follower A partition follower replicates the partition data of a partition leader, optionally handling consumer requests. Topics use a replication factor to configure the number of replicas of each partition within the cluster. A topic comprises at least one partition. An in-sync replica has the same number of messages as the leader. Configuration defines how many replicas must be in-sync to be able to produce messages, ensuring that a message is committed only after it has been successfully copied to the replica partition. In this way, if the leader fails, the message is not lost. In the Kafka brokers and topics diagram, we can see each numbered partition has a leader and two followers in replicated topics. 2.2. Producers and consumers Producers and consumers send and receive messages (publish and subscribe) through brokers. Messages comprise an optional key and a value that contains the message data, plus headers and related metadata. The key is used to identify the subject of the message, or a property of the message. Messages are delivered in batches, and batches and records contain headers and metadata that provide details that are useful for filtering and routing by clients, such as the timestamp and offset position for the record. Producers and consumers Producer A producer sends messages to a broker topic to be written to the end offset of a partition. Messages are written to partitions by a producer on a round-robin basis, or to a specific partition based on the message key. Consumer A consumer subscribes to a topic and reads messages according to topic, partition and offset. Consumer group Consumer groups are used to share a typically large data stream generated by multiple producers from a given topic. Consumers are grouped using a group.id , allowing messages to be spread across the members. Consumers within a group do not read data from the same partition, but can receive data from one or more partitions. Offsets Offsets describe the position of messages within a partition. Each message in a given partition has a unique offset, which helps identify the position of a consumer within the partition to track the number of records that have been consumed. Committed offsets are written to an offset commit log. A __consumer_offsets topic stores information on committed offsets, the position of the last and next offset, according to consumer group.
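The producer flow described above can be sketched with the standard Kafka Java client. The broker address and topic name below are assumptions; substitute your own:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address and topic name are assumptions; substitute your own.
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The record key ("sensor-1") determines the target partition;
            // records with the same key always land in the same partition.
            producer.send(new ProducerRecord<>("my-topic", "sensor-1", "temperature=21.3"));
        }
    }
}

A consumer in the same application would subscribe to the topic with a group.id and poll for records, committing offsets as it processes them.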
Producing and consuming data | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/amq_streams_on_openshift_overview/kafka-concepts_str |
Chapter 6. Uninstalling OpenShift Data Foundation | Chapter 6. Uninstalling OpenShift Data Foundation 6.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_ibm_power/uninstalling_openshift_data_foundation |
Chapter 2. Working with ML2/OVN | Chapter 2. Working with ML2/OVN Red Hat OpenStack Platform (RHOSP) networks are managed by the Networking service (neutron). The core of the Networking service is the Modular Layer 2 (ML2) plug-in, and the default mechanism driver for the RHOSP ML2 plug-in is the Open Virtual Networking (OVN) mechanism driver. Earlier RHOSP versions used the Open vSwitch (OVS) mechanism driver by default, but Red Hat recommends the ML2/OVN mechanism driver for most deployments. If you upgrade from an RHOSP 13 ML2/OVS deployment to RHOSP 16, Red Hat recommends migrating from ML2/OVS to ML2/OVN after the upgrade. In some cases, ML2/OVN might not meet your requirements. In these cases you can deploy RHOSP with ML2/OVS. 2.1. List of components in the RHOSP OVN architecture The RHOSP OVN architecture replaces the OVS Modular Layer 2 (ML2) mechanism driver with the OVN ML2 mechanism driver to support the Networking API. OVN provides networking services for the Red Hat OpenStack Platform. As illustrated in Figure 2.1, the OVN architecture consists of the following components and services: ML2 plug-in with OVN mechanism driver The ML2 plug-in translates the OpenStack-specific networking configuration into the platform-neutral OVN logical networking configuration. It typically runs on the Controller node. OVN northbound (NB) database ( ovn-nb ) This database stores the logical OVN networking configuration from the OVN ML2 plug-in. It typically runs on the Controller node and listens on TCP port 6641 . OVN northbound service ( ovn-northd ) This service converts the logical networking configuration from the OVN NB database to the logical data path flows and populates these on the OVN Southbound database. It typically runs on the Controller node. OVN southbound (SB) database ( ovn-sb ) This database stores the converted logical data path flows. It typically runs on the Controller node and listens on TCP port 6642 . OVN controller ( ovn-controller ) This controller connects to the OVN SB database and acts as the Open vSwitch controller to control and monitor network traffic. It runs on all Compute and gateway nodes where OS::Tripleo::Services::OVNController is defined. OVN metadata agent ( ovn-metadata-agent ) This agent creates the HAProxy instances for managing the OVS interfaces, network namespaces, and HAProxy processes used to proxy metadata API requests. The agent runs on all Compute and gateway nodes where OS::TripleO::Services::OVNMetadataAgent is defined. OVS database server (OVSDB) Hosts the OVN Northbound and Southbound databases. Also interacts with ovs-vswitchd to host the OVS database conf.db . Note The schema file for the NB database is located in /usr/share/ovn/ovn-nb.ovsschema , and the SB database schema file is in /usr/share/ovn/ovn-sb.ovsschema . Figure 2.1. OVN architecture in a RHOSP environment 2.2. ML2/OVN databases In Red Hat OpenStack Platform ML2/OVN deployments, network configuration information passes between processes through shared distributed databases. You can inspect these databases to verify the status of the network and identify issues. OVN northbound database The northbound database ( OVN_Northbound ) serves as the interface between OVN and a cloud management system such as Red Hat OpenStack Platform (RHOSP). RHOSP produces the contents of the northbound database. The northbound database contains the current desired state of the network, presented as a collection of logical ports, logical switches, logical routers, and more.
Every RHOSP Networking service (neutron) object is represented in a table in the northbound database. OVN southbound database The southbound database ( OVN_Southbound ) holds the logical and physical configuration state for the OVN system to support virtual network abstraction. The ovn-controller uses the information in this database to configure OVS to satisfy Networking service (neutron) requirements. 2.3. The ovn-controller service on Compute nodes The ovn-controller service runs on each Compute node and connects to the OVN southbound (SB) database server to retrieve the logical flows. The ovn-controller translates these logical flows into physical OpenFlow flows and adds the flows to the OVS bridge ( br-int ). To communicate with ovs-vswitchd and install the OpenFlow flows, the ovn-controller connects to the local ovsdb-server (which hosts conf.db ) using the UNIX socket path that was passed when ovn-controller was started (for example unix:/var/run/openvswitch/db.sock ). The ovn-controller service expects certain key-value pairs in the external_ids column of the Open_vSwitch table; puppet-ovn uses puppet-vswitch to populate these fields. The following example shows the key-value pairs that puppet-vswitch configures in the external_ids column: 2.4. OVN metadata agent on Compute nodes The OVN metadata agent is configured in the tripleo-heat-templates/deployment/ovn/ovn-metadata-container-puppet.yaml file and included in the default Compute role through OS::TripleO::Services::OVNMetadataAgent . As such, the OVN metadata agent with default parameters is deployed as part of the OVN deployment. OpenStack guest instances access the Networking metadata service available at the link-local IP address: 169.254.169.254. The neutron-ovn-metadata-agent has access to the host networks where the Compute metadata API exists. Each HAProxy is in a network namespace that is not able to reach the appropriate host network. HAProxy adds the necessary headers to the metadata API request and then forwards the request to the neutron-ovn-metadata-agent over a UNIX domain socket. The OVN Networking service creates a unique network namespace for each virtual network that enables the metadata service. Each network accessed by the instances on the Compute node has a corresponding metadata namespace (ovnmeta-<network_uuid>). 2.5. The OVN composable service Red Hat OpenStack Platform usually consists of nodes in pre-defined roles, such as nodes in Controller roles, Compute roles, and different storage role types. Each of these default roles contains a set of services that are defined in the core heat template collection. In a default Red Hat OpenStack Platform (RHOSP) deployment, the ML2/OVN composable service runs on Controller nodes. You can optionally create a custom Networker role and run the OVN composable service on dedicated Networker nodes. The OVN composable service ovn-dbs is deployed in a container called ovn-dbs-bundle. In a default installation ovn-dbs is included in the Controller role and runs on Controller nodes. Because the service is composable, you can assign it to another role, such as a Networker role. If you assign the OVN composable service to another role, ensure that the service is co-located on the same node as the pacemaker service, which controls the OVN database containers. Related information Deploying a Custom Role with ML2/OVN SR-IOV with ML2/OVN and native OVN DHCP 2.6. Layer 3 high availability with OVN OVN supports Layer 3 high availability (L3 HA) without any special configuration.
Note When you create a router, do not use the --ha option because OVN routers are highly available by default. openstack router create commands that include the --ha option fail. OVN automatically schedules the router port to all available gateway nodes that can act as an L3 gateway on the specified external network. OVN L3 HA uses the gateway_chassis column in the OVN Logical_Router_Port table. Most functionality is managed by OpenFlow rules with bundled active_passive outputs. The ovn-controller handles the Address Resolution Protocol (ARP) responder and router enablement and disablement. Gratuitous ARPs for FIPs and router external addresses are also periodically sent by the ovn-controller . Note L3 HA uses OVN to balance the routers back to the original gateway nodes to avoid any nodes becoming a bottleneck. BFD monitoring OVN uses the Bidirectional Forwarding Detection (BFD) protocol to monitor the availability of the gateway nodes. This protocol is encapsulated on top of the Geneve tunnels established from node to node. Each gateway node monitors all the other gateway nodes in a star topology in the deployment. Gateway nodes also monitor the compute nodes to let the gateways enable and disable routing of packets and ARP responses and announcements. Each compute node uses BFD to monitor each gateway node and automatically steers external traffic, such as source and destination Network Address Translation (SNAT and DNAT), through the active gateway node for a given router. Compute nodes do not need to monitor other compute nodes. Note External network failures are not detected as would happen with an ML2-OVS configuration. L3 HA for OVN supports the following failure modes: The gateway node becomes disconnected from the network (tunneling interface). ovs-vswitchd stops ( ovs-vswitchd is responsible for BFD signaling) ovn-controller stops ( ovn-controller removes itself as a registered node). Note This BFD monitoring mechanism only works for link failures, not for routing failures. 2.7. Feature support in OVN and OVS mechanism drivers Review the availability of Red Hat OpenStack Platform (RHOSP) features as part of your OVS to OVN mechanism driver migration plan. Feature OVN RHOSP 16.2 OVN RHOSP 17.1 OVS RHOSP 16.2 OVS RHOSP 17.1 Additional information Provisioning Baremetal Machines with OVN DHCP No No Yes Yes The built-in DHCP server on OVN presently cannot provision baremetal nodes. It cannot serve DHCP for the provisioning networks. Chainbooting iPXE requires tagging ( --dhcp-match in dnsmasq), which is not supported in the OVN DHCP server. See https://bugzilla.redhat.com/show_bug.cgi?id=1622154 . North/south routing on VF(direct) ports on VLAN project (tenant networks) No No Yes Yes Core OVN limitation. See https://bugs.launchpad.net/neutron/+bug/1875852 . Reverse DNS for internal DNS records No Yes Yes Yes See https://bugzilla.redhat.com/show_bug.cgi?id=2211426 . Internal DNS resolution for isolated networks No No Yes Yes OVN does not support internal DNS resolution for isolated networks because it does not allocate ports for DNS service. This does not affect OVS deployments because OVS uses dnsmasq. See https://issues.redhat.com/browse/OSP-25661 . Security group logging Tech Preview Yes No No RHOSP does not support security group logging with the OVS mechanism driver. Stateless security groups No Yes No No See Configuring security groups .
Load-balancing service distributed virtual routing (DVR) Yes Yes No No The OVS mechanism driver routes Load-balancing service traffic through Controller or Network nodes even with DVR enabled. The OVN mechanism driver routes Load-balancing service traffic directly through the Compute nodes. IPv6 DVR Yes Yes No No With the OVS mechanism driver, RHOSP does not distribute IPv6 traffic to the Compute nodes, even when DVR is enabled. All ingress/egress traffic goes through the centralized Controller or Network nodes. If you need IPv6 DVR, use the OVN mechanism driver. DVR and layer 3 high availability (L3 HA) Yes Yes No No RHOSP deployments with the OVS mechanism driver do not support DVR in conjunction with L3 HA. If you use DVR with RHOSP director, L3 HA is disabled. This means that the Networking service still schedules routers on the Network nodes and load-shares them between the L3 agents. However, if one agent fails, all routers hosted by this agent also fail. This affects only SNAT traffic. Red Hat recommends using the allow_automatic_l3agent_failover feature in such cases, so that if one Network node fails, the routers are rescheduled to a different node. 2.8. Limit for non-secure ports with ML2/OVN Ports might become unreachable if you disable the port security plug-in extension in Red Hat OpenStack Platform (RHOSP) deployments with the default ML2/OVN mechanism driver and a large number of ports. In some large ML2/OVN RHOSP deployments, a flow chain limit inside ML2/OVN can drop ARP requests that are targeted to ports where the security plug-in is disabled. There is no documented maximum limit for the actual number of logical switch ports that ML2/OVN can support, but the limit approximates 4,000 ports. Attributes that contribute to the approximated limit are the number of resubmits in the OpenFlow pipeline that ML2/OVN generates, and changes to the overall logical topology. 2.9. ML2/OVS to ML2/OVN in-place migration: validated and prohibited scenarios Red Hat continues to test and refine in-place migration scenarios. Work with your Red Hat Technical Account Manager or Global Professional Services to determine whether your OVS deployment meets the criteria for a valid in-place migration scenario. 2.9.1. Validated ML2/OVS to ML2/OVN migration scenarios DVR to DVR Start: RHOSP 16.1.1 or later with OVS with DVR. End: Same RHOSP version and release with OVN with DVR. SR-IOV was not present in the starting environment and was not added during or after the migration. Centralized routing + SR-IOV with virtual function (VF) ports only Start: RHOSP 16.1.1 or later with OVS (no DVR) and SR-IOV. End: Same RHOSP version and release with OVN (no DVR) and SR-IOV. Workloads used only SR-IOV virtual function (VF) ports. SR-IOV physical function (PF) ports caused migration failure. 2.9.2. ML2/OVS to ML2/OVN in-place migration scenarios that have not been verified You cannot perform an in-place ML2/OVS to ML2/OVN migration in the following scenarios until Red Hat announces that the underlying issues are resolved. OVS deployment uses network functions virtualization (NFV) Red Hat supports new deployments with ML2/OVN and NFV, but has not successfully tested migration of an ML2/OVS and NFV deployment to ML2/OVN. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1925290 . SR-IOV with physical function (PF) ports Migration tests failed when any workload uses an SR-IOV PF port. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1879546 .
OVS uses trunk ports If your ML2/OVS deployment uses trunk ports, do not perform an ML2/OVS to ML2/OVN migration. The migration does not properly set up the trunked ports in the OVN environment. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1857652 . DVR with VLAN project (tenant) networks Do not migrate to ML2/OVN with DVR and VLAN project networks. You can migrate to ML2/OVN with centralized routing. To track progress on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1766930 . 2.9.3. ML2/OVS to ML2/OVN in-place migration and security group rules Ensure that any custom security group rules in your originating ML2/OVS deployment are compatible with the target ML2/OVN deployment. For example, the default security group includes rules that allow egress to the DHCP server. If you deleted those rules in your ML2/OVS deployment, ML2/OVS automatically adds implicit rules that allow egress to the DHCP server. Those implicit rules are not supported by ML2/OVN, so in your target ML2/OVN environment, DHCP and metadata traffic would not reach the DHCP server and the instance would not boot. In this case, to restore DHCP access, you could add the following rules: 2.10. Using ML2/OVS instead of the default ML2/OVN in a new RHOSP 16.2 deployment In Red Hat OpenStack Platform (RHOSP) 16.0 and later deployments, the Modular Layer 2 plug-in with Open Virtual Network (ML2/OVN) is the default mechanism driver for the RHOSP Networking service. You can change this setting if your application requires the ML2/OVS mechanism driver. Procedure Log in to your undercloud as the stack user. In the template file, /home/stack/templates/containers-prepare-parameter.yaml , use ovs instead of ovn as the value of the neutron_driver parameter: In the environment file, /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml , ensure that the NeutronNetworkType parameter includes vxlan or gre instead of geneve . Example Run the openstack overcloud deploy command and include the core heat templates, environment files, and the files that you modified. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Additional resources Environment files in the Advanced Overcloud Customization guide Including environment files in overcloud creation in the Advanced Overcloud Customization guide 2.11. Keeping ML2/OVS after an upgrade instead of the default ML2/OVN In Red Hat OpenStack Platform (RHOSP) 16.0 and later deployments, the Modular Layer 2 plug-in with Open Virtual Network (ML2/OVN) is the default mechanism driver for the RHOSP Networking service. If you upgrade from an earlier version of RHOSP that used ML2/OVS, you can migrate from ML2/OVS to ML2/OVN after the upgrade. If instead you choose to keep using ML2/OVS after the upgrade, follow Red Hat's upgrade procedure as documented, and do not perform the ML2/OVS-to-ML2/OVN migration. Additional resources Framework for Upgrades (13 to 16.2) guide Migrating the Networking Service to the ML2 OVN Mechanism Driver 2.12. Deploying a custom role with ML2/OVN In a default Red Hat OpenStack Platform (RHOSP) deployment, the ML2/OVN composable service runs on Controller nodes. You can optionally use supported custom roles like those described in the following examples. Networker Run the OVN composable services on dedicated networker nodes. Networker with SR-IOV Run the OVN composable services on dedicated networker nodes with SR-IOV.
Controller with SR-IOV Run the OVN composable services on SR-IOV capable Controller nodes. You can also generate your own custom roles. Limitations The following limitations apply to the use of SR-IOV with ML2/OVN and native OVN DHCP in this release. All external ports are scheduled on a single gateway node because there is only one HA Chassis Group for all of the ports. North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV because the external ports are not colocated with the logical router's gateway ports. See https://bugs.launchpad.net/neutron/+bug/1875852 . Prerequisites You know how to deploy custom roles. For more information, see Composable services and custom roles in the Advanced Overcloud Customization guide. Procedure Log in to the undercloud host as the stack user and source the stackrc file. Choose the custom roles file that is appropriate for your deployment. Use it directly in the deploy command if it suits your needs as-is. Or you can generate your own custom roles file that combines other custom roles files. Deployment Role Role File Networker role Networker Networker.yaml Networker role with SR-IOV NetworkerSriov NetworkerSriov.yaml Co-located control and networker with SR-IOV ControllerSriov ControllerSriov.yaml [Optional] Generate a new custom roles data file that combines one of these custom roles files with other custom roles files. Follow the instructions in Creating a roles_data file in the Advanced Overcloud Customization guide. Include the appropriate source role files depending on your deployment. [Optional] To identify specific nodes for the role, you can create a specific hardware flavor and assign the flavor to specific nodes. Then use an environment file to define the flavor for the role and to specify a node count. For more information, see the example in Creating a new role in the Advanced Overcloud Customization guide. Create an environment file as appropriate for your deployment. Deployment Sample Environment File Networker role neutron-ovn-dvr-ha.yaml Networker role with SR-IOV ovn-sriov.yaml Include the following settings as appropriate for your deployment. Deployment Settings Networker role Networker role with SR-IOV Co-located control and networker with SR-IOV Deploy the overcloud. Include the environment file in your deployment command with the -e option. Include the custom roles data file in your deployment command with the -r option. For example: -r Networker.yaml or -r mycustomrolesfile.yaml . Verification steps - OVN deployments Log in to a Controller or Networker node as the overcloud SSH user, which is heat-admin by default. Example Ensure that ovn_metadata_agent is running on Controller and Networker nodes. Sample output Ensure that Controller nodes with OVN services or dedicated Networker nodes have been configured as gateways for OVS. Sample output Verification steps - SR-IOV deployments Log in to a Compute node as the overcloud SSH user, which is heat-admin by default. Example Ensure that neutron_sriov_agent is running on the Compute nodes. Sample output Ensure that network-available SR-IOV NICs have been successfully detected. Sample output Additional resources Composable services and custom roles in the Advanced Overcloud Customization guide. 2.13. SR-IOV with ML2/OVN and native OVN DHCP You can deploy a custom role to use SR-IOV in an ML2/OVN deployment with native OVN DHCP. See Section 2.12, "Deploying a custom role with ML2/OVN" .
Limitations The following limitations apply to the use of SR-IOV with ML2/OVN and native OVN DHCP in this release. All external ports are scheduled on a single gateway node because there is only one HA Chassis Group for all of the ports. North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV because the external ports are not colocated with the logical router's gateway ports. See https://bugs.launchpad.net/neutron/+bug/1875852 . Additional resources Composable services and custom roles in the Advanced Overcloud Customization guide. | [
"hostname=<HOST NAME> ovn-encap-ip=<IP OF THE NODE> ovn-encap-type=geneve ovn-remote=tcp:OVN_DBS_VIP:6642",
"Allow VM to contact dhcp server (ipv4) openstack security group rule create --egress --ethertype IPv4 --protocol udp --dst-port 67 USD{SEC_GROUP_ID} # Allow VM to contact metadata server (ipv4) openstack security group rule create --egress --ethertype IPv4 --protocol tcp --remote-ip 169.254.169.254 USD{SEC_GROUP_ID} # Allow VM to contact dhcp server (ipv6, non-slaac). Be aware that the remote-ip may vary depending on your use case! openstack security group rule create --egress --ethertype IPv6 --protocol udp --dst-port 547 --remote-ip ff02::1:2 USD{SEC_GROUP_ID} # Allow VM to contact metadata server (ipv6) openstack security group rule create --egress --ethertype IPv6 --protocol tcp --remote-ip fe80::a9fe:a9fe USD{SEC_GROUP_ID}",
"parameter_defaults: ContainerImagePrepare: - set: neutron_driver: ovs",
"parameter_defaults: NeutronNetworkType: 'vxlan'",
"openstack overcloud deploy --templates -e <your_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ neutron-ovs.yaml -e /home/stack/templates/containers-prepare-parameter.yaml \\",
"source stackrc",
"ControllerParameters: OVNCMSOptions: \"\" ControllerSriovParameters: OVNCMSOptions: \"\" NetworkerParameters: OVNCMSOptions: \"enable-chassis-as-gw\" NetworkerSriovParameters: OVNCMSOptions: \"\"",
"OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None ControllerParameters: OVNCMSOptions: \"\" ControllerSriovParameters: OVNCMSOptions: \"\" NetworkerParameters: OVNCMSOptions: \"\" NetworkerSriovParameters: OVNCMSOptions: \"enable-chassis-as-gw\"",
"OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None ControllerParameters: OVNCMSOptions: \"\" ControllerSriovParameters: OVNCMSOptions: \"enable-chassis-as-gw\" NetworkerParameters: OVNCMSOptions: \"\" NetworkerSriovParameters: OVNCMSOptions: \"\"",
"ssh heat-admin@controller-0",
"sudo podman ps | grep ovn_metadata",
"a65125d9588d undercloud-0.ctlplane.localdomain:8787/rh-osbs/rhosp16-openstack-neutron-metadata-agent-ovn:16.2_20200813.1 kolla_start 23 hours ago Up 21 hours ago ovn_metadata_agent",
"sudo ovs-vsctl get Open_Vswitch . external_ids:ovn-cms-options",
"enable-chassis-as-gw",
"ssh heat-admin@compute-0",
"sudo podman ps | grep neutron_sriov_agent",
"f54cbbf4523a undercloud-0.ctlplane.localdomain:8787/rh-osbs/rhosp16-openstack-neutron-sriov-agent:16.2_20200813.1 kolla_start 23 hours ago Up 21 hours ago neutron_sriov_agent",
"sudo podman exec -uroot galera-bundle-podman-0 mysql nova -e 'select hypervisor_hostname,pci_stats from compute_nodes;'",
"computesriov-1.localdomain {... {\"dev_type\": \"type-PF\", \"physical_network\": \"datacentre\", \"trusted\": \"true\"}, \"count\": 1}, ... {\"dev_type\": \"type-VF\", \"physical_network\": \"datacentre\", \"trusted\": \"true\", \"parent_ifname\": \"enp7s0f3\"}, \"count\": 5}, ...} computesriov-0.localdomain {... {\"dev_type\": \"type-PF\", \"physical_network\": \"datacentre\", \"trusted\": \"true\"}, \"count\": 1}, ... {\"dev_type\": \"type-VF\", \"physical_network\": \"datacentre\", \"trusted\": \"true\", \"parent_ifname\": \"enp7s0f3\"}, \"count\": 5}, ...}"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/networking_guide/assembly_work-with-ovn_rhosp-network |
11.14. Alter Statement | 11.14. Alter Statement ALTER statements currently primarily support adding OPTIONS properties to Tables, Views, and Procedures. Using an ALTER statement, you can add, modify, or remove a property. See "alter column options", "alter options", and "alter options list" in Section A.7, "Productions" . Example 11.6. Example ALTER ALTER statements are especially useful when a user would like to modify or enhance metadata that has been imported from a NATIVE data source. For example, if you have a database called "northwind", and you imported its metadata and would like to add CARDINALITY to its "customer" table, you can use an ALTER statement, together with the "chainable" metadata repositories feature, to add this property to the desired table. The following example -vdb.xml file illustrates the usage. Example 11.7. Example VDB | [
"ALTER FOREIGN TABLE \"customer\" OPTIONS (ADD CARDINALITY 10000); ALTER FOREIGN TABLE \"customer\" ALTER COLUMN \"name\" OPTIONS(SET UPDATABLE FALSE)",
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <vdb name=\"northwind\" version=\"1\"> <model name=\"nw\"> <property name=\"importer.importKeys\" value=\"true\"/> <property name=\"importer.importProcedures\" value=\"true\"/> <source name=\"northwind-connector\" translator-name=\"mysql\" connection-jndi-name=\"java:/nw-ds\"/> <metadata type = \"NATIVE,DDL\"><![CDATA[ ALTER FOREIGN TABLE \"customer\" OPTIONS (ADD CARDINALITY 10000); ALTER FOREIGN TABLE \"customer\" ALTER COLUMN \"name\" OPTIONS(SET UPDATABLE FALSE); ]]> </metadata> </model> </vdb>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/alter_statement |
Chapter 4. GitOps | Chapter 4. GitOps 4.1. Red Hat OpenShift GitOps release notes Red Hat OpenShift GitOps is a declarative way to implement continuous deployment for cloud native applications. Red Hat OpenShift GitOps ensures consistency in applications when you deploy them to different clusters in different environments, such as: development, staging, and production. Red Hat OpenShift GitOps helps you automate the following tasks: Ensure that the clusters have similar states for configuration, monitoring, and storage Recover or recreate clusters from a known state Apply or revert configuration changes to multiple OpenShift Container Platform clusters Associate templated configuration with different environments Promote applications across clusters, from staging to production For an overview of Red Hat OpenShift GitOps, see Understanding OpenShift GitOps . 4.1.1. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see Red Hat CTO Chris Wright's message . 4.1.2. Release notes for Red Hat OpenShift GitOps 1.2.1 Red Hat OpenShift GitOps 1.2.1 is now available on OpenShift Container Platform 4.7 and 4.8. 4.1.2.1. Support matrix Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Technology Preview Features Support Scope In the table below, features are marked with the following statuses: TP : Technology Preview GA : General Availability Note the following scope of support on the Red Hat Customer Portal for these features: Table 4.1. Support matrix Feature Red Hat OpenShift GitOps 1.2.1 Argo CD GA Argo CD ApplicationSet TP Red Hat OpenShift GitOps Application Manager (kam) TP 4.1.2.2. Fixed issues The following issues were resolved in the current release: Previously, huge memory spikes were observed on the application controller on startup. The flag --kubectl-parallelism-limit for the application controller is now set to 10 by default, however this value can be overridden by specifying a number for .spec.controller.kubeParallelismLimit in the Argo CD CR specification. GITOPS-1255 The latest Triggers APIs caused Kubernetes build failure due to duplicate entries in the kustomization.yaml when using the kam bootstrap command. The Pipelines and Tekton triggers components have now been updated to v0.24.2 and v0.14.2, respectively, to address this issue. GITOPS-1273 Persisting RBAC roles and bindings are now automatically removed from the target namespace when the Argo CD instance from the source namespace is deleted. GITOPS-1228 Previously, when deploying an Argo CD instance into a namespace, the Argo CD instance would change the "managed-by" label to be its own namespace. This fix would make namespaces unlabelled while also making sure the required RBAC roles and bindings are created and deleted for the namespace. GITOPS-1247 Previously, the default resource request limits on Argo CD workloads, specifically for the repo-server and application controller, were found to be very restrictive. The existing resource quota has now been removed and the default memory limit has been increased to 1024M in the repo server. 
Please note that this change will only affect new installations; existing Argo CD instance workloads will not be affected. GITOPS-1274 4.1.3. Release notes for Red Hat OpenShift GitOps 1.2 Red Hat OpenShift GitOps 1.2 is now available on OpenShift Container Platform 4.7 and 4.8. 4.1.3.1. Support matrix Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Technology Preview Features Support Scope In the table below, features are marked with the following statuses: TP : Technology Preview GA : General Availability Note the following scope of support on the Red Hat Customer Portal for these features: Table 4.2. Support matrix Feature Red Hat OpenShift GitOps 1.2 Argo CD GA Argo CD ApplicationSet TP Red Hat OpenShift GitOps Application Manager (kam) TP 4.1.3.2. New features In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift GitOps 1.2: If you do not have read or write access to the openshift-gitops namespace, you can now use the DISABLE_DEFAULT_ARGOCD_INSTANCE environment variable in the GitOps Operator and set the value to TRUE to prevent the default Argo CD instance from starting in the openshift-gitops namespace. Resource requests and limits are now configured in Argo CD workloads. Resource quota is enabled in the openshift-gitops namespace. As a result, out-of-band workloads deployed manually in the openshift-gitops namespace must be configured with resource requests and limits and the resource quota may need to be increased. Argo CD authentication is now integrated with Red Hat SSO and it is automatically configured with OpenShift 4 Identity Provider on the cluster. This feature is disabled by default. To enable Red Hat SSO, add SSO configuration in ArgoCD CR as shown below. Currently, keycloak is the only supported provider. apiVersion: argoproj.io/v1alpha1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: sso: provider: keycloak server: route: enabled: true You can now define hostnames using route labels to support router sharding. Support for setting labels on the server (argocd server), grafana , and prometheus routes is now available. To set labels on a route, add labels under the route configuration for a server in the ArgoCD CR. Example ArgoCD CR YAML to set labels on argocd server apiVersion: argoproj.io/v1alpha1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: server: route: enabled: true labels: key1: value1 key2: value2 The GitOps Operator now automatically grants permissions to Argo CD instances to manage resources in target namespaces by applying labels. Users can label the target namespace with the label argocd.argoproj.io/managed-by: <source-namespace> , where the source-namespace is the namespace where the argocd instance is deployed. 4.1.3.3. Fixed issues The following issues were resolved in the current release: Previously, if a user created additional instances of Argo CD managed by the default cluster instance in the openshift-gitops namespace, the application responsible for the new Argo CD instance would get stuck in an OutOfSync status. This issue has now been resolved by adding an owner reference to the cluster secret. GITOPS-1025 4.1.3.4. Known issues These are the known issues in Red Hat OpenShift GitOps 1.2: When an Argo CD instance is deleted from the source namespace, the argocd.argoproj.io/managed-by labels in the target namespaces are not removed. 
GITOPS-1228 Resource quota has been enabled in the openshift-gitops namespace in Red Hat OpenShift GitOps 1.2. This can affect out-of-band workloads deployed manually and workloads deployed by the default Argo CD instance in the openshift-gitops namespace. When you upgrade from Red Hat OpenShift GitOps v1.1.2 to v1.2, such workloads must be configured with resource requests and limits. If there are any additional workloads, the resource quota in the openshift-gitops namespace must be increased. Current Resource Quota for the openshift-gitops namespace. Resource Requests Limits CPU 6688m 13750m Memory 4544Mi 9070Mi You can use the following command to update the CPU limits. USD oc patch resourcequota openshift-gitops-compute-resources -n openshift-gitops --type='json' -p='[{"op": "replace", "path": "/spec/hard/limits.cpu", "value":"9000m"}]' You can use the following command to update the CPU requests. USD oc patch resourcequota openshift-gitops-compute-resources -n openshift-gitops --type='json' -p='[{"op": "replace", "path": "/spec/hard/cpu", "value":"7000m"}]' You can replace the path in the above commands from cpu to memory to update the memory. 4.1.4. Release notes for Red Hat OpenShift GitOps 1.1 Red Hat OpenShift GitOps 1.1 is now available on OpenShift Container Platform 4.7. 4.1.4.1. Support matrix Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Technology Preview Features Support Scope In the table below, features are marked with the following statuses: TP : Technology Preview GA : General Availability Note the following scope of support on the Red Hat Customer Portal for these features: Table 4.3. Support matrix Feature Red Hat OpenShift GitOps 1.1 Argo CD GA Argo CD ApplicationSet TP Red Hat OpenShift GitOps Application Manager (kam) TP 4.1.4.2. New features In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift GitOps 1.1: The ApplicationSet feature is now added (Technology Preview). The ApplicationSet feature enables both automation and greater flexibility when managing Argo CD applications across a large number of clusters and within monorepos. It also makes self-service usage possible on multitenant Kubernetes clusters. Argo CD is now integrated with the cluster logging stack and with the OpenShift Container Platform Monitoring and Alerting features. Argo CD auth is now integrated with OpenShift Container Platform. The Argo CD applications controller now supports horizontal scaling. Argo CD Redis servers now support high availability (HA). 4.1.4.3. Fixed issues The following issues were resolved in the current release: Previously, Red Hat OpenShift GitOps did not work as expected in a proxy server setup with active global proxy settings. This issue is fixed and now Argo CD is configured by the Red Hat OpenShift GitOps Operator using fully qualified domain names (FQDN) for the pods to enable communication between components. GITOPS-703 The Red Hat OpenShift GitOps backend relies on the ?ref= query parameter in the Red Hat OpenShift GitOps URL to make API calls. Previously, this parameter was not read from the URL, causing the backend to always consider the default reference. This issue is fixed and the Red Hat OpenShift GitOps backend now extracts the reference query parameter from the Red Hat OpenShift GitOps URL and only uses the default reference when there is no input reference provided.
GITOPS-817 Previously, the Red Hat OpenShift GitOps backend failed to find the valid GitLab repository. This was because the Red Hat OpenShift GitOps backend checked for main as the branch reference, instead of master in the GitLab repository. This issue is fixed now. GITOPS-768 The Environments page in the Developer perspective of the OpenShift Container Platform web console now shows the list of applications and the number of environments. This page also displays an Argo CD link that directs you to the Argo CD Applications page that lists all the applications. The Argo CD Applications page has LABELS (for example, app.kubernetes.io/name=appName ) that help you filter only the applications of your choice. GITOPS-544 4.1.4.4. Known issues These are the known issues in Red Hat OpenShift GitOps 1.1: Red Hat OpenShift GitOps does not support Helm v2 and ksonnet. The Red Hat SSO (RH SSO) Operator is not supported in disconnected clusters. As a result, the Red Hat OpenShift GitOps Operator and RH SSO integration is not supported in disconnected clusters. When you delete an Argo CD application from the OpenShift Container Platform web console, the Argo CD application gets deleted in the user interface, but the deployments are still present in the cluster. As a workaround, delete the Argo CD application from the Argo CD console. GITOPS-830 4.1.4.5. Breaking Change 4.1.4.5.1. Upgrading from Red Hat OpenShift GitOps v1.0.1 When you upgrade from Red Hat OpenShift GitOps v1.0.1 to v1.1 , the Red Hat OpenShift GitOps Operator renames the default Argo CD instance created in the openshift-gitops namespace from argocd-cluster to openshift-gitops . This is a breaking change and needs the following steps to be performed manually, before the upgrade: Go to the OpenShift Container Platform web console and copy the content of the argocd-cm.yml config map file in the openshift-gitops namespace to a local file. The content may look like the following example: Example argocd config map YAML kind: ConfigMap apiVersion: v1 metadata: selfLink: /api/v1/namespaces/openshift-gitops/configmaps/argocd-cm resourceVersion: '112532' name: argocd-cm uid: f5226fbc-883d-47db-8b53-b5e363f007af creationTimestamp: '2021-04-16T19:24:08Z' managedFields: ... namespace: openshift-gitops labels: app.kubernetes.io/managed-by: argocd-cluster app.kubernetes.io/name: argocd-cm app.kubernetes.io/part-of: argocd data: "" 1 admin.enabled: 'true' statusbadge.enabled: 'false' resource.exclusions: | - apiGroups: - tekton.dev clusters: - '*' kinds: - TaskRun - PipelineRun ga.trackingid: '' repositories: | - type: git url: https://github.com/user-name/argocd-example-apps ga.anonymizeusers: 'false' help.chatUrl: '' url: >- https://argocd-cluster-server-openshift-gitops.apps.dev-svc-4.7-041614.devcluster.openshift.com "" 2 help.chatText: '' kustomize.buildOptions: '' resource.inclusions: '' repository.credentials: '' users.anonymous.enabled: 'false' configManagementPlugins: '' application.instanceLabelKey: '' 1 Restore only the data section of the content in the argocd-cm.yml config map file manually. 2 Replace the URL value in the config map entry with the new instance name openshift-gitops . Delete the default argocd-cluster instance. Edit the new argocd-cm.yml config map file to restore the entire data section manually. Replace the URL value in the config map entry with the new instance name openshift-gitops . 
For example, in the preceding example, replace the URL value with the following URL value: url: >- https://openshift-gitops-server-openshift-gitops.apps.dev-svc-4.7-041614.devcluster.openshift.com Log in to the Argo CD cluster and verify that the configurations are present. 4.2. Understanding OpenShift GitOps 4.2.1. About GitOps GitOps is a declarative way to implement continuous deployment for cloud native applications. You can use GitOps to create repeatable processes for managing OpenShift Container Platform clusters and applications across multi-cluster Kubernetes environments. GitOps handles and automates complex deployments at a fast pace, saving time during deployment and release cycles. The GitOps workflow pushes an application through development, testing, staging, and production. GitOps either deploys a new application or updates an existing one, so you only need to update the repository; GitOps automates everything else. GitOps is a set of practices that use Git pull requests to manage infrastructure and application configurations. In GitOps, the Git repository is the only source of truth for system and application configuration. This Git repository contains a declarative description of the infrastructure you need in your specified environment and contains an automated process to make your environment match the described state. Also, it contains the entire state of the system so that the trail of changes to the system state is visible and auditable. By using GitOps, you resolve the issues of infrastructure and application configuration sprawl. GitOps defines infrastructure and application definitions as code. Then, it uses this code to manage multiple workspaces and clusters to simplify the creation of infrastructure and application configurations. By following the principles of the code, you can store the configuration of clusters and applications in Git repositories, and then follow the Git workflow to apply these repositories to your chosen clusters. You can apply the core principles of developing and maintaining software in a Git repository to the creation and management of your cluster and application configuration files. 4.2.2. About Red Hat OpenShift GitOps Red Hat OpenShift GitOps ensures consistency in applications when you deploy them to different clusters in different environments, such as development, staging, and production. Red Hat OpenShift GitOps organizes the deployment process around the configuration repositories and makes them the central element. It always has at least two repositories: Application repository with the source code Environment configuration repository that defines the desired state of the application These repositories contain a declarative description of the infrastructure you need in your specified environment. They also contain an automated process to make your environment match the described state. Red Hat OpenShift GitOps uses Argo CD to maintain cluster resources. Argo CD is an open-source declarative tool for the continuous integration and continuous deployment (CI/CD) of applications. Red Hat OpenShift GitOps implements Argo CD as a controller so that it continuously monitors application definitions and configurations defined in a Git repository. Then, Argo CD compares the specified state of these configurations with their live state on the cluster. Argo CD reports any configurations that deviate from their specified state. These reports allow administrators to automatically or manually resync configurations to the defined state.
Therefore, Argo CD enables you to deliver global custom resources, like the resources that are used to configure OpenShift Container Platform clusters. 4.2.2.1. Key features Red Hat OpenShift GitOps helps you automate the following tasks: Ensure that the clusters have similar states for configuration, monitoring, and storage Recover or recreate clusters from a known state Apply or revert configuration changes to multiple OpenShift Container Platform clusters Associate templated configuration with different environments Promote applications across clusters, from staging to production 4.3. Getting started with OpenShift GitOps Red Hat OpenShift GitOps uses Argo CD to manage specific cluster-scoped resources, including platform Operators, optional Operator Lifecycle Manager (OLM) Operators, and user management. This guide explains how to install the Red Hat OpenShift GitOps Operator to an OpenShift Container Platform cluster and logging in to the Argo CD instance. 4.3.1. Installing GitOps Operator in web console Prerequisites Access to the OpenShift Container Platform web console. An account with the cluster-admin role. You are logged in to the OpenShift cluster as an administrator. Warning If you have already installed the Community version of the Argo CD Operator, remove the Argo CD Community Operator before you install the Red Hat OpenShift GitOps Operator. Procedure Open the Administrator perspective of the web console and navigate to Operators OperatorHub in the menu on the left. Search for OpenShift GitOps , click the Red Hat OpenShift GitOps tile, and then click Install . Red Hat OpenShift GitOps will be installed in all namespaces of the cluster. After the Red Hat OpenShift GitOps Operator is installed, it automatically sets up a ready-to-use Argo CD instance that is available in the openshift-gitops namespace, and an Argo CD icon is displayed in the console toolbar. You can create subsequent Argo CD instances for your applications under your projects. 4.4. Configuring Argo CD to recursively sync a Git repository with your application 4.4.1. Configuring an OpenShift cluster by deploying an application with cluster configurations With Red Hat OpenShift GitOps, you can configure Argo CD to recursively sync the content of a Git directory with an application that contains custom configurations for your cluster. Prerequisites Red Hat OpenShift GitOps is installed in your cluster. 4.4.1.1. Logging in to the Argo CD instance by using your OpenShift credentials Red Hat OpenShift GitOps Operator automatically creates a ready-to-use Argo CD instance that is available in the openshift-gitops namespace. Prerequisites You have installed the Red Hat OpenShift GitOps Operator in your cluster. Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators to verify that the Red Hat OpenShift GitOps Operator is installed. Navigate to the menu OpenShift GitOps Cluster Argo CD . The login page of the Argo CD UI is displayed in a new window. Obtain the password for the Argo CD instance: Navigate to the Developer perspective of the web console. A list of available projects is displayed. Navigate to the openshift-gitops project. Use the left navigation panel to navigate to the Secrets page. Select the openshift-gitops-cluster instance to display the password. Copy the password. Use this password and admin as the username to log in to the Argo CD UI in the new window. 4.4.1.2. 
Creating an application by using the Argo CD dashboard Argo CD provides a dashboard which allows you to create applications. This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform web console cluster configurations that add a link to the Red Hat Developer Blog - Kubernetes under the menu in the web console, and defines a namespace spring-petclinic on the cluster. Procedure In the Argo CD dashboard, click NEW APP to add a new Argo CD application. For this workflow, create a cluster-configs application with the following configurations: Application Name cluster-configs Project default Sync Policy Manual Repository URL https://github.com/redhat-developer/openshift-gitops-getting-started Revision HEAD Path cluster Destination https://kubernetes.default.svc Namespace spring-petclinic Directory Recurse checked Click CREATE to create your application. Open the Administrator perspective of the web console and navigate to Administration Namespaces in the menu on the left. Search for and select the namespace, then enter argocd.argoproj.io/managed-by=openshift-gitops in the Label field so that the Argo CD instance in the openshift-gitops namespace can manage your namespace. 4.4.1.3. Creating an application by using the oc tool You can create Argo CD applications in your terminal by using the oc tool. Procedure Download the sample application : USD git clone git@github.com:redhat-developer/openshift-gitops-getting-started.git Create the application: USD oc create -f openshift-gitops-getting-started/argo/cluster.yaml Run the oc get command to review the created application: USD oc get application -n openshift-gitops Add a label to the namespace your application is deployed in so that the Argo CD instance in the openshift-gitops namespace can manage it: USD oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops 4.4.1.4. Synchronizing your application with your Git repository Procedure In the Argo CD dashboard, notice that the cluster-configs Argo CD application has the statuses Missing and OutOfSync . Because the application was configured with a manual sync policy, Argo CD does not sync it automatically. Click SYNC on the cluster-configs tile, review the changes, and then click SYNCHRONIZE . Argo CD will detect any changes in the Git repository automatically. If the configurations are changed, Argo CD will change the status of the cluster-configs to OutOfSync . You can modify the synchronization policy for Argo CD to automatically apply changes from your Git repository to the cluster. Notice that the cluster-configs Argo CD application now has the statuses Healthy and Synced . Click the cluster-configs tile to check the details of the synchronized resources and their status on the cluster. Navigate to the OpenShift Container Platform web console and click to verify that a link to the Red Hat Developer Blog - Kubernetes is now present there. Navigate to the Project page and search for the spring-petclinic namespace to verify that it has been added to the cluster. Your cluster configurations have been successfully synchronized to the cluster. 4.4.2. Deploying a Spring Boot application with Argo CD With Argo CD, you can deploy your applications to the OpenShift cluster either by using the Argo CD dashboard or by using the oc tool. Prerequisites Red Hat OpenShift GitOps is installed in your cluster. 4.4.2.1.
Logging in to the Argo CD instance by using your OpenShift credentials Red Hat OpenShift GitOps Operator automatically creates a ready-to-use Argo CD instance that is available in the openshift-gitops namespace. Prerequisites You have installed the Red Hat OpenShift GitOps Operator in your cluster. Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators to verify that the Red Hat OpenShift GitOps Operator is installed. Navigate to the menu OpenShift GitOps Cluster Argo CD . The login page of the Argo CD UI is displayed in a new window. Obtain the password for the Argo CD instance: Navigate to the Developer perspective of the web console. A list of available projects is displayed. Navigate to the openshift-gitops project. Use the left navigation panel to navigate to the Secrets page. Select the openshift-gitops-cluster instance to display the password. Copy the password. Use this password and admin as the username to log in to the Argo CD UI in the new window. 4.4.2.2. Creating an application by using the Argo CD dashboard Argo CD provides a dashboard which allows you to create applications. This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform web console cluster configurations that add a link to the Red Hat Developer Blog - Kubernetes under the menu in the web console, and defines a namespace spring-petclinic on the cluster. Procedure In the Argo CD dashboard, click NEW APP to add a new Argo CD application. For this workflow, create a cluster-configs application with the following configurations: Application Name cluster-configs Project default Sync Policy Manual Repository URL https://github.com/redhat-developer/openshift-gitops-getting-started Revision HEAD Path cluster Destination https://kubernetes.default.svc Namespace spring-petclinic Directory Recurse checked For this workflow, create a spring-petclinic application with the following configurations: Application Name spring-petclinic Project default Sync Policy Automatic Repository URL https://github.com/redhat-developer/openshift-gitops-getting-started Revision HEAD Path app Destination https://kubernetes.default.svc Namespace spring-petclinic Click CREATE to create your application. Open the Administrator perspective of the web console and navigate to Administration Namespaces in the menu on the left. Search for and select the namespace, then enter argocd.argoproj.io/managed-by=openshift-gitops in the Label field so that the Argo CD instance in the openshift-gitops namespace can manage your namespace. 4.4.2.3. Creating an application by using the oc tool You can create Argo CD applications in your terminal by using the oc tool. 
Procedure Download the sample application : USD git clone git@github.com:redhat-developer/openshift-gitops-getting-started.git Create the application: USD oc create -f openshift-gitops-getting-started/argo/app.yaml USD oc create -f openshift-gitops-getting-started/argo/cluster.yaml Run the oc get command to review the created application: USD oc get application -n openshift-gitops Add a label to the namespace your application is deployed in so that the Argo CD instance in the openshift-gitops namespace can manage it: USD oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops 4.4.2.4. Verifying Argo CD self-healing behavior Argo CD constantly monitors the state of deployed applications, detects differences between the specified manifests in Git and live changes in the cluster, and then automatically corrects them. This behavior is referred to as self-healing. You can test and observe the self-healing behavior in Argo CD. Prerequisites The sample app-spring-petclinic application is deployed and configured. Procedure In the Argo CD dashboard, verify that your application has the Synced status. Click the app-spring-petclinic tile in the Argo CD dashboard to view the application resources that are deployed to the cluster. In the OpenShift web console, navigate to the Developer perspective. Modify the Spring PetClinic deployment and commit the changes to the app/ directory of the Git repository. Argo CD will automatically deploy the changes to the cluster. Test the self-healing behavior by modifying the deployment on the cluster and scaling it up to two pods while watching the application in the OpenShift web console. Run the following command to modify the deployment: USD oc scale deployment spring-petclinic --replicas 2 -n spring-petclinic In the OpenShift web console, notice that the deployment scales up to two pods and immediately scales down again to one pod. Argo CD detected a difference from the Git repository and auto-healed the application on the OpenShift cluster. In the Argo CD dashboard, click the app-spring-petclinic tile APP DETAILS EVENTS . The EVENTS tab displays the following events: Argo CD detecting out of sync deployment resources on the cluster and then resyncing the Git repository to correct it. 4.5. Configuring SSO for Argo CD on OpenShift After the Red Hat OpenShift GitOps Operator is installed, Argo CD automatically creates a user with admin permissions. To manage multiple users, Argo CD allows cluster administrators to configure SSO. Note Bundled Dex OIDC provider is not supported. Prerequisites Red Hat SSO is installed on the cluster. 4.5.1. Creating a new client in Keycloak Procedure Log in to your Keycloak server, select the realm you want to use, navigate to the Clients page, and then click Create in the upper-right section of the screen. Specify the following values: Client ID argocd Client Protocol openid-connect Route URL <your-argo-cd-route-url> Access Type confidential Valid Redirect URIs <your-argo-cd-route-url>/auth/callback Base URL /applications Click Save to see the Credentials tab added to the Client page. Copy the secret from the Credentials tab for further configuration. 4.5.2. Configuring the groups claim To manage users in Argo CD, you must configure a groups claim that can be included in the authentication token.
Procedure In the Keycloak dashboard, navigate to Client Scopes and add a new client scope with the following values: Name groups Protocol openid-connect Display On Content Scope On Include to Token Scope On Click Save and navigate to groups Mappers . Add a new token mapper with the following values: Name groups Mapper Type Group Membership Token Claim Name groups The token mapper adds the groups claim to the token when the client requests groups . Navigate to Clients Client Scopes and configure the client to provide the groups scope. Select groups in the Assigned Default Client Scopes table and click Add selected . The groups scope must be in the Available Client Scopes table. Navigate to Users Admin Groups and create a group ArgoCDAdmins . 4.5.3. Configuring Argo CD OIDC To configure Argo CD OpenID Connect (OIDC), you must generate your client secret, encode it, and add it to your custom resource. Prerequisites You have obtained your client secret. Procedure Store the client secret you generated. Encode the client secret in base64: USD echo -n '83083958-8ec6-47b0-a411-a8c55381fbd2' | base64 Edit the secret and add the base64 value to an oidc.keycloak.clientSecret key: USD oc edit secret argocd-secret -n <namespace> Example YAML of the secret apiVersion: v1 kind: Secret metadata: name: argocd-secret data: oidc.keycloak.clientSecret: ODMwODM5NTgtOGVjNi00N2IwLWE0MTEtYThjNTUzODFmYmQy Edit the argocd custom resource and add the OIDC configuration to enable Keycloak authentication: USD oc edit argocd -n <your_namespace> Example of argocd custom resource apiVersion: argoproj.io/v1alpha1 kind: ArgoCD metadata: creationTimestamp: null name: argocd namespace: argocd spec: resourceExclusions: | - apiGroups: - tekton.dev clusters: - '*' kinds: - TaskRun - PipelineRun oidcConfig: | name: OpenShift Single Sign-On issuer: https://keycloak.example.com/auth/realms/myrealm 1 clientID: argocd 2 clientSecret: USDoidc.keycloak.clientSecret 3 requestedScopes: ["openid", "profile", "email", "groups"] 4 server: route: enabled: true 1 issuer must end with the correct realm name (in this example myrealm ). 2 clientID is the Client ID you configured in your Keycloak account. 3 clientSecret points to the right key you created in the argocd-secret secret. 4 requestedScopes contains the groups claim if you did not add it to the Default scope. 4.5.4. Keycloak Identity Brokering with OpenShift You can configure a Keycloak instance to use OpenShift for authentication through Identity Brokering. This allows for Single Sign-On (SSO) between the OpenShift cluster and the Keycloak instance. Prerequisites The jq CLI tool is installed. Procedure Obtain the OpenShift Container Platform API URL: USD curl -s -k -H "Authorization: Bearer USD(oc whoami -t)" https://<openshift-user-facing-api-url>/apis/config.openshift.io/v1/infrastructures/cluster | jq ".status.apiServerURL" Note The address of the OpenShift Container Platform API is often protected by HTTPS. Therefore, you must configure X509_CA_BUNDLE in the container and set it to /var/run/secrets/kubernetes.io/serviceaccount/ca.crt . Otherwise, Keycloak cannot communicate with the API Server. In the Keycloak server dashboard, navigate to Identity Providers and select Openshift v4 . Specify the following values: Base Url OpenShift 4 API URL Client ID keycloak-broker Client Secret A secret that you want to define Now you can log in to Argo CD with your OpenShift credentials through Keycloak as an Identity Broker. 4.5.5.
Registering an additional OAuth client If you need an additional OAuth client to manage authentication for your OpenShift Container Platform cluster, you can register one. Procedure To register your client: USD oc create -f <(echo ' kind: OAuthClient apiVersion: oauth.openshift.io/v1 metadata: name: keycloak-broker 1 secret: "..." 2 redirectURIs: - "https://keycloak-keycloak.apps.dev-svc-4.7-020201.devcluster.openshift.com/auth/realms/myrealm/broker/openshift-v4/endpoint" 3 grantMethod: prompt 4 ') 1 The name of the OAuth client is used as the client_id parameter when making requests to <namespace_route>/oauth/authorize and <namespace_route>/oauth/token . 2 The secret is used as the client_secret parameter when making requests to <namespace_route>/oauth/token . 3 The redirect_uri parameter specified in requests to <namespace_route>/oauth/authorize and <namespace_route>/oauth/token must be equal to or prefixed by one of the URIs listed in the redirectURIs parameter value. 4 If the user has not granted access to this client, the grantMethod determines which action to take when this client requests tokens. Specify auto to automatically approve the grant and retry the request, or prompt to prompt the user to approve or deny the grant. 4.5.6. Configure groups and Argo CD RBAC Role-based access control (RBAC) allows you to provide relevant permissions to users. Prerequisites You have created the ArgoCDAdmins group in Keycloak. The user you want to give permissions to has logged in to Argo CD. Procedure In the Keycloak dashboard, navigate to Users Groups . Add the user to the Keycloak group ArgoCDAdmins . Ensure that the ArgoCDAdmins group has the required permissions in the argocd-rbac config map. Edit the config map: USD oc edit configmap argocd-rbac-cm -n <namespace> Example of a config map that defines admin permissions. apiVersion: v1 kind: ConfigMap metadata: name: argocd-rbac-cm data: policy.csv: | g, /ArgoCDAdmins, role:admin 4.5.7. In-built permissions for Argo CD This section lists the permissions that are granted to ArgoCD to manage specific cluster-scoped resources, which include cluster operators, optional OLM operators, and user management. Note that ArgoCD is not granted cluster-admin permissions. Table 4.4. Permissions granted to Argo CD Resource group What it configures for a user or an administrator operators.coreos.com Optional operators managed by OLM user.openshift.io, rbac.authorization.k8s.io Groups, Users, and their permissions config.openshift.io Control plane operators managed by CVO used to configure cluster-wide build configuration, registry configuration, and scheduler policies storage.k8s.io Storage console.openshift.io Console customization 4.6. Sizing requirements for GitOps Operator The sizing requirements page displays the sizing requirements for installing Red Hat OpenShift GitOps on OpenShift Container Platform. It also provides the sizing details for the default ArgoCD instance that is instantiated by the GitOps Operator. 4.6.1. Sizing requirements for GitOps Red Hat OpenShift GitOps is a declarative way to implement continuous deployment for cloud-native applications. Through GitOps, you can define and configure the CPU and memory requirements of your application. Every time you install the Red Hat OpenShift GitOps Operator, the resources on the namespace are installed within the defined limits. If the default installation does not set any limits or requests, the Operator fails within the namespace with quotas.
Without enough resources, the cluster cannot schedule ArgoCD related pods. The following table details the resource requests and limits for the default workloads: Workload CPU requests CPU limits Memory requests Memory limits argocd-application-controller 1 2 1024M 2048M applicationset-controller 1 2 512M 1024M argocd-server 0.125 0.5 128M 256M argocd-repo-server 0.5 1 256M 1024M argocd-redis 0.25 0.5 128M 256M argocd-dex 0.25 0.5 128M 256M HAProxy 0.25 0.5 128M 256M Optionally, you can also use the ArgoCD custom resource with the oc command to see the specifics and modify them: oc edit argocd <name of argo cd> -n namespace | [
"apiVersion: argoproj.io/v1alpha1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: sso: provider: keycloak server: route: enabled: true",
"apiVersion: argoproj.io/v1alpha1 kind: ArgoCD metadata: name: example-argocd labels: example: basic spec: server: route: enabled: true labels: key1: value1 key2: value2",
"oc patch resourcequota openshift-gitops-compute-resources -n openshift-gitops --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/hard/limits.cpu\", \"value\":\"9000m\"}]'",
"oc patch resourcequota openshift-gitops-compute-resources -n openshift-gitops --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/hard/cpu\", \"value\":\"7000m\"}]",
"kind: ConfigMap apiVersion: v1 metadata: selfLink: /api/v1/namespaces/openshift-gitops/configmaps/argocd-cm resourceVersion: '112532' name: argocd-cm uid: f5226fbc-883d-47db-8b53-b5e363f007af creationTimestamp: '2021-04-16T19:24:08Z' managedFields: namespace: openshift-gitops labels: app.kubernetes.io/managed-by: argocd-cluster app.kubernetes.io/name: argocd-cm app.kubernetes.io/part-of: argocd data: \"\" 1 admin.enabled: 'true' statusbadge.enabled: 'false' resource.exclusions: | - apiGroups: - tekton.dev clusters: - '*' kinds: - TaskRun - PipelineRun ga.trackingid: '' repositories: | - type: git url: https://github.com/user-name/argocd-example-apps ga.anonymizeusers: 'false' help.chatUrl: '' url: >- https://argocd-cluster-server-openshift-gitops.apps.dev-svc-4.7-041614.devcluster.openshift.com \"\" 2 help.chatText: '' kustomize.buildOptions: '' resource.inclusions: '' repository.credentials: '' users.anonymous.enabled: 'false' configManagementPlugins: '' application.instanceLabelKey: ''",
"url: >- https://openshift-gitops-server-openshift-gitops.apps.dev-svc-4.7-041614.devcluster.openshift.com",
"git clone [email protected]:redhat-developer/openshift-gitops-getting-started.git",
"oc create -f openshift-gitops-getting-started/argo/cluster.yaml",
"oc get application -n openshift-gitops",
"oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops",
"git clone [email protected]:redhat-developer/openshift-gitops-getting-started.git",
"oc create -f openshift-gitops-getting-started/argo/app.yaml",
"oc create -f openshift-gitops-getting-started/argo/cluster.yaml",
"oc get application -n openshift-gitops",
"oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops",
"oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops",
"oc scale deployment spring-petclinic --replicas 2 -n spring-petclinic",
"echo -n '83083958-8ec6-47b0-a411-a8c55381fbd2' | base64",
"oc edit secret argocd-secret -n <namespace>",
"apiVersion: v1 kind: Secret metadata: name: argocd-secret data: oidc.keycloak.clientSecret: ODMwODM5NTgtOGVjNi00N2IwLWE0MTEtYThjNTUzODFmYmQy",
"oc edit argocd -n <your_namespace>",
"apiVersion: argoproj.io/v1alpha1 kind: ArgoCD metadata: creationTimestamp: null name: argocd namespace: argocd spec: resourceExclusions: | - apiGroups: - tekton.dev clusters: - '*' kinds: - TaskRun - PipelineRun oidcConfig: | name: OpenShift Single Sign-On issuer: https://keycloak.example.com/auth/realms/myrealm 1 clientID: argocd 2 clientSecret: USDoidc.keycloak.clientSecret 3 requestedScopes: [\"openid\", \"profile\", \"email\", \"groups\"] 4 server: route: enabled: true",
"curl -s -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://<openshift-user-facing-api-url>/apis/config.openshift.io/v1/infrastructures/cluster | jq \".status.apiServerURL\".",
"oc create -f <(echo ' kind: OAuthClient apiVersion: oauth.openshift.io/v1 metadata: name: keycloak-broker 1 secret: \"...\" 2 redirectURIs: - \"https://keycloak-keycloak.apps.dev-svc-4.7-020201.devcluster.openshift.com/auth/realms/myrealm/broker/openshift-v4/endpoint\" 3 grantMethod: prompt 4 ')",
"oc edit configmap argocd-rbac-cm -n <namespace>",
"apiVersion: v1 kind: ConfigMap metadata: name: argocd-rbac-cm data: policy.csv: | g, /ArgoCDAdmins, role:admin",
"edit argocd <name of argo cd> -n namespace"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/cicd/gitops |
5.2. Entitlement subscription-manager component When firstboot is running in text mode, the user can only register through Red Hat Network registration, not with subscription-manager. Both options are available in GUI mode. subscription-manager component If multiple repositories are enabled, subscription-manager installs product certificates from all repositories instead of installing the product certificate only from the repository from which the RPM package was installed. subscription-manager component firstboot fails to provide Red Hat Network registration to a virtual machine in a NAT-based network; for example, in the libvirt environment. Note that this problem only occurs during the first boot after installation. If you run firstboot manually later, the registration finishes successfully. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/entitlement |
Chapter 1. Overview of authentication and authorization 1.1. Glossary of common terms for Red Hat OpenShift Service on AWS authentication and authorization This glossary defines common terms that are used in Red Hat OpenShift Service on AWS authentication and authorization. authentication Authentication determines access to a Red Hat OpenShift Service on AWS cluster and ensures that only authenticated users access the cluster. authorization Authorization determines whether the identified user has permissions to perform the requested action. bearer token A bearer token is used to authenticate to the API with the header Authorization: Bearer <token>. config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap. Applications running in a pod can use this data. containers Lightweight and executable images that consist of software and all its dependencies. Because containers virtualize the operating system, you can run containers in a data center, public or private cloud, or your local host. Custom Resource (CR) A CR is an extension of the Kubernetes API. group A group is a set of users. A group is useful for granting permissions to multiple users at one time. HTPasswd HTPasswd updates the files that store user names and passwords for authentication of HTTP users. Keystone Keystone is a Red Hat OpenStack Platform (RHOSP) project that provides identity, token, catalog, and policy services. Lightweight directory access protocol (LDAP) LDAP is a protocol that queries user information. namespace A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. node A node is a worker machine in the Red Hat OpenShift Service on AWS cluster. A node is either a virtual machine (VM) or a physical machine. OAuth client An OAuth client is used to obtain a bearer token. OAuth server The Red Hat OpenShift Service on AWS control plane includes a built-in OAuth server that determines the user's identity from the configured identity provider and creates an access token. OpenID Connect OpenID Connect is a protocol that authenticates users through single sign-on (SSO) to access sites that use OpenID providers. pod A pod is the smallest logical unit in Kubernetes. A pod consists of one or more containers that run on a worker node. regular users Users that are created automatically in the cluster upon first login or through the API. request header A request header is an HTTP header that is used to provide information about the HTTP request context, so that the server can track the response to the request. role-based access control (RBAC) A key security control to ensure that cluster users and workloads have access to only the resources required to execute their roles. service accounts Service accounts are used by the cluster components or applications. system users Users that are created automatically when the cluster is installed. users A user is an entity that can make requests to the API. 1.2. About authentication in Red Hat OpenShift Service on AWS To control access to a Red Hat OpenShift Service on AWS cluster, an administrator with the dedicated-admin role can configure user authentication and ensure that only approved users access the cluster.
To interact with a Red Hat OpenShift Service on AWS cluster, users must first authenticate to the Red Hat OpenShift Service on AWS API in some way. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the Red Hat OpenShift Service on AWS API. Note If you do not present a valid access token or certificate, your request is unauthenticated and you receive an HTTP 401 error. An administrator can configure authentication by configuring an identity provider. You can define any supported identity provider in Red Hat OpenShift Service on AWS and add it to your cluster. 1.3. About authorization in Red Hat OpenShift Service on AWS Authorization involves determining whether the identified user has permissions to perform the requested action. Administrators can define permissions and assign them to users using the RBAC objects, such as rules, roles, and bindings. To understand how authorization works in Red Hat OpenShift Service on AWS, see Evaluating authorization. You can also control access to a Red Hat OpenShift Service on AWS cluster through projects and namespaces. Along with controlling user access to a cluster, you can also control the actions a pod can perform and the resources it can access using security context constraints (SCCs). You can manage authorization for Red Hat OpenShift Service on AWS through the following tasks: Viewing local and cluster roles and bindings. Creating a local role and assigning it to a user or group. Assigning a cluster role to a user or group: Red Hat OpenShift Service on AWS includes a set of default cluster roles. You can add them to a user or group. Creating cluster-admin and dedicated-admin users: The user who created the Red Hat OpenShift Service on AWS cluster can grant access to other cluster-admin and dedicated-admin users. Creating service accounts: Service accounts provide a flexible way to control API access without sharing a regular user's credentials. A user can create and use a service account in applications and also as an OAuth client. Scoping tokens: A scoped token is a token that identifies as a specific user who can perform only specific operations. You can create scoped tokens to delegate some of your permissions to another user or a service account. Syncing LDAP groups: You can manage user groups in one place by syncing the groups stored in an LDAP server with the Red Hat OpenShift Service on AWS user groups. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/authentication_and_authorization/overview-of-authentication-authorization |
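A short illustration of the bearer token flow described above; this is a sketch rather than a documented procedure, the API server URL is a placeholder, and the users/~ endpoint returns the identity associated with the presented token: USD curl -H "Authorization: Bearer USD(oc whoami -t)" https://<api_server>:6443/apis/user.openshift.io/v1/users/~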
Chapter 1. Configuring secure communication with Redis Using Transport Layer Security (TLS) encryption with Red Hat OpenShift GitOps, you can secure the communication between the Argo CD components and Redis cache and protect possibly sensitive data in transit. You can secure communication with Redis by using one of the following configurations: Enable the autotls setting to issue an appropriate certificate for TLS encryption. Manually configure the TLS encryption by creating the argocd-operator-redis-tls secret with a key and certificate pair. Both configurations are possible with or without High Availability (HA) enabled. 1.1. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Red Hat OpenShift GitOps Operator is installed on your cluster. 1.2. Configuring TLS for Redis with autotls enabled You can configure TLS encryption for Redis by enabling the autotls setting on a new or already existing Argo CD instance. The configuration automatically provisions the argocd-operator-redis-tls secret and does not require further steps. Currently, OpenShift Container Platform is the only supported secret provider. Note By default, the autotls setting is disabled. Procedure Log in to the OpenShift Container Platform web console. Create an Argo CD instance with autotls enabled: In the Administrator perspective of the web console, use the left navigation panel to go to Administration → CustomResourceDefinitions. Search for argocds.argoproj.io and click ArgoCD custom resource definition (CRD). On the CustomResourceDefinition details page, click the Instances tab, and then click Create ArgoCD. Edit or replace the YAML similar to the following example: Example Argo CD CR with autotls enabled apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: argocd 1 namespace: openshift-gitops 2 spec: redis: autotls: openshift 3 ha: enabled: true 4 1 The name of the Argo CD instance. 2 The namespace where you want to run the Argo CD instance. 3 The flag that enables the autotls setting and creates a TLS certificate for Redis. 4 The flag value that enables the HA feature. If you do not want to enable HA, do not include this line or set the flag value as false. Tip Alternatively, you can enable the autotls setting on an already existing Argo CD instance by running the following command: USD oc patch argocds.argoproj.io <instance-name> --type=merge -p '{"spec":{"redis":{"autotls":"openshift"}}}' Click Create. Verify that the Argo CD pods are ready and running: USD oc get pods -n <namespace> 1 1 Specify a namespace where the Argo CD instance is running, for example openshift-gitops. Example output with HA disabled NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 26s argocd-redis-84b77d4f58-vp6zm 1/1 Running 0 37s argocd-repo-server-5b959b57f4-znxjq 1/1 Running 0 37s argocd-server-6b8787d686-wv9zh 1/1 Running 0 37s Note The HA-enabled TLS configuration requires a cluster with at least three worker nodes. It can take a few minutes for the output to appear if you have enabled the Argo CD instances with HA configuration.
Example output with HA enabled NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 10m argocd-redis-ha-haproxy-669757fdb7-5xg8h 1/1 Running 0 10m argocd-redis-ha-server-0 2/2 Running 0 9m9s argocd-redis-ha-server-1 2/2 Running 0 98s argocd-redis-ha-server-2 2/2 Running 0 53s argocd-repo-server-576499d46d-8hgbh 1/1 Running 0 10m argocd-server-9486f88b7-dk2ks 1/1 Running 0 10m Verify that the argocd-operator-redis-tls secret is created: USD oc get secrets argocd-operator-redis-tls -n <namespace> 1 1 Specify a namespace where the Argo CD instance is running, for example openshift-gitops. Example output NAME TYPE DATA AGE argocd-operator-redis-tls kubernetes.io/tls 2 30s The secret must be of the kubernetes.io/tls type and a size of 2. 1.3. Configuring TLS for Redis with autotls disabled You can manually configure TLS encryption for Redis by creating the argocd-operator-redis-tls secret with a key and certificate pair. In addition, you must annotate the secret to indicate that it belongs to the appropriate Argo CD instance. The steps to create a certificate and secret vary for instances with High Availability (HA) enabled. Procedure Log in to the OpenShift Container Platform web console. Create an Argo CD instance: In the Administrator perspective of the web console, use the left navigation panel to go to Administration → CustomResourceDefinitions. Search for argocds.argoproj.io and click ArgoCD custom resource definition (CRD). On the CustomResourceDefinition details page, click the Instances tab, and then click Create ArgoCD. Edit or replace the YAML similar to the following example: Example Argo CD CR with autotls disabled apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: argocd 1 namespace: openshift-gitops 2 spec: ha: enabled: true 3 1 The name of the Argo CD instance. 2 The namespace where you want to run the Argo CD instance. 3 The flag value that enables the HA feature. If you do not want to enable HA, do not include this line or set the flag value as false. Click Create. Verify that the Argo CD pods are ready and running: USD oc get pods -n <namespace> 1 1 Specify a namespace where the Argo CD instance is running, for example openshift-gitops. Example output with HA disabled NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 26s argocd-redis-84b77d4f58-vp6zm 1/1 Running 0 37s argocd-repo-server-5b959b57f4-znxjq 1/1 Running 0 37s argocd-server-6b8787d686-wv9zh 1/1 Running 0 37s Note The HA-enabled TLS configuration requires a cluster with at least three worker nodes. It can take a few minutes for the output to appear if you have enabled the Argo CD instances with HA configuration.
Example output with HA enabled NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 10m argocd-redis-ha-haproxy-669757fdb7-5xg8h 1/1 Running 0 10m argocd-redis-ha-server-0 2/2 Running 0 9m9s argocd-redis-ha-server-1 2/2 Running 0 98s argocd-redis-ha-server-2 2/2 Running 0 53s argocd-repo-server-576499d46d-8hgbh 1/1 Running 0 10m argocd-server-9486f88b7-dk2ks 1/1 Running 0 10m Create a self-signed certificate for the Redis server by using one of the following options depending on your HA configuration: For the Argo CD instance with HA disabled, run the following command: USD openssl req -new -x509 -sha256 \ -subj "/C=XX/ST=XX/O=Testing/CN=redis" \ -reqexts SAN -extensions SAN \ -config <(printf "\n[SAN]\nsubjectAltName=DNS:argocd-redis.<namespace>.svc.cluster.local\n[req]\ndistinguished_name=req") \ 1 -keyout /tmp/redis.key \ -out /tmp/redis.crt \ -newkey rsa:4096 \ -nodes \ -sha256 \ -days 10 1 Specify a namespace where the Argo CD instance is running, for example openshift-gitops . Example output Generating a RSA private key ...............++++ ............................++++ writing new private key to '/tmp/redis.key' For the Argo CD instance with HA enabled, run the following command: USD openssl req -new -x509 -sha256 \ -subj "/C=XX/ST=XX/O=Testing/CN=redis" \ -reqexts SAN -extensions SAN \ -config <(printf "\n[SAN]\nsubjectAltName=DNS:argocd-redis-ha-haproxy.<namespace>.svc.cluster.local\n[req]\ndistinguished_name=req") \ 1 -keyout /tmp/redis-ha.key \ -out /tmp/redis-ha.crt \ -newkey rsa:4096 \ -nodes \ -sha256 \ -days 10 1 Specify a namespace where the Argo CD instance is running, for example openshift-gitops . Example output Generating a RSA private key ...............++++ ............................++++ writing new private key to '/tmp/redis-ha.key' Verify that the generated certificate and key are available in the /tmp directory by running the following commands: USD cd /tmp USD ls Example output with HA disabled ... redis.crt redis.key ... Example output with HA enabled ... redis-ha.crt redis-ha.key ... Create the argocd-operator-redis-tls secret by using one of the following options depending on your HA configuration: For the Argo CD instance with HA disabled, run the following command: USD oc create secret tls argocd-operator-redis-tls --key=/tmp/redis.key --cert=/tmp/redis.crt For the Argo CD instance with HA enabled, run the following command: USD oc create secret tls argocd-operator-redis-tls --key=/tmp/redis-ha.key --cert=/tmp/redis-ha.crt Example output secret/argocd-operator-redis-tls created Annotate the secret to indicate that it belongs to the Argo CD CR: USD oc annotate secret argocd-operator-redis-tls argocds.argoproj.io/name=<instance-name> 1 1 Specify a name of the Argo CD instance, for example argocd . Example output secret/argocd-operator-redis-tls annotated Verify that the Argo CD pods are ready and running: USD oc get pods -n <namespace> 1 1 Specify a namespace where the Argo CD instance is running, for example openshift-gitops . Example output with HA disabled NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 26s argocd-redis-84b77d4f58-vp6zm 1/1 Running 0 37s argocd-repo-server-5b959b57f4-znxjq 1/1 Running 0 37s argocd-server-6b8787d686-wv9zh 1/1 Running 0 37s Note It can take a few minutes for the output to appear if you have enabled the Argo CD instances with HA configuration. 
Example output with HA enabled NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 10m argocd-redis-ha-haproxy-669757fdb7-5xg8h 1/1 Running 0 10m argocd-redis-ha-server-0 2/2 Running 0 9m9s argocd-redis-ha-server-1 2/2 Running 0 98s argocd-redis-ha-server-2 2/2 Running 0 53s argocd-repo-server-576499d46d-8hgbh 1/1 Running 0 10m argocd-server-9486f88b7-dk2ks 1/1 Running 0 10m | [
"apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: argocd 1 namespace: openshift-gitops 2 spec: redis: autotls: openshift 3 ha: enabled: true 4",
"oc patch argocds.argoproj.io <instance-name> --type=merge -p '{\"spec\":{\"redis\":{\"autotls\":\"openshift\"}}}'",
"oc get pods -n <namespace> 1",
"NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 26s argocd-redis-84b77d4f58-vp6zm 1/1 Running 0 37s argocd-repo-server-5b959b57f4-znxjq 1/1 Running 0 37s argocd-server-6b8787d686-wv9zh 1/1 Running 0 37s",
"NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 10m argocd-redis-ha-haproxy-669757fdb7-5xg8h 1/1 Running 0 10m argocd-redis-ha-server-0 2/2 Running 0 9m9s argocd-redis-ha-server-1 2/2 Running 0 98s argocd-redis-ha-server-2 2/2 Running 0 53s argocd-repo-server-576499d46d-8hgbh 1/1 Running 0 10m argocd-server-9486f88b7-dk2ks 1/1 Running 0 10m",
"oc get secrets argocd-operator-redis-tls -n <namespace> 1",
"NAME TYPE DATA AGE argocd-operator-redis-tls kubernetes.io/tls 2 30s",
"apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: argocd 1 namespace: openshift-gitops 2 spec: ha: enabled: true 3",
"oc get pods -n <namespace> 1",
"NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 26s argocd-redis-84b77d4f58-vp6zm 1/1 Running 0 37s argocd-repo-server-5b959b57f4-znxjq 1/1 Running 0 37s argocd-server-6b8787d686-wv9zh 1/1 Running 0 37s",
"NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 10m argocd-redis-ha-haproxy-669757fdb7-5xg8h 1/1 Running 0 10m argocd-redis-ha-server-0 2/2 Running 0 9m9s argocd-redis-ha-server-1 2/2 Running 0 98s argocd-redis-ha-server-2 2/2 Running 0 53s argocd-repo-server-576499d46d-8hgbh 1/1 Running 0 10m argocd-server-9486f88b7-dk2ks 1/1 Running 0 10m",
"openssl req -new -x509 -sha256 -subj \"/C=XX/ST=XX/O=Testing/CN=redis\" -reqexts SAN -extensions SAN -config <(printf \"\\n[SAN]\\nsubjectAltName=DNS:argocd-redis.<namespace>.svc.cluster.local\\n[req]\\ndistinguished_name=req\") \\ 1 -keyout /tmp/redis.key -out /tmp/redis.crt -newkey rsa:4096 -nodes -sha256 -days 10",
"Generating a RSA private key ...............++++ ............................++++ writing new private key to '/tmp/redis.key'",
"openssl req -new -x509 -sha256 -subj \"/C=XX/ST=XX/O=Testing/CN=redis\" -reqexts SAN -extensions SAN -config <(printf \"\\n[SAN]\\nsubjectAltName=DNS:argocd-redis-ha-haproxy.<namespace>.svc.cluster.local\\n[req]\\ndistinguished_name=req\") \\ 1 -keyout /tmp/redis-ha.key -out /tmp/redis-ha.crt -newkey rsa:4096 -nodes -sha256 -days 10",
"Generating a RSA private key ...............++++ ............................++++ writing new private key to '/tmp/redis-ha.key'",
"cd /tmp",
"ls",
"redis.crt redis.key",
"redis-ha.crt redis-ha.key",
"oc create secret tls argocd-operator-redis-tls --key=/tmp/redis.key --cert=/tmp/redis.crt",
"oc create secret tls argocd-operator-redis-tls --key=/tmp/redis-ha.key --cert=/tmp/redis-ha.crt",
"secret/argocd-operator-redis-tls created",
"oc annotate secret argocd-operator-redis-tls argocds.argoproj.io/name=<instance-name> 1",
"secret/argocd-operator-redis-tls annotated",
"oc get pods -n <namespace> 1",
"NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 26s argocd-redis-84b77d4f58-vp6zm 1/1 Running 0 37s argocd-repo-server-5b959b57f4-znxjq 1/1 Running 0 37s argocd-server-6b8787d686-wv9zh 1/1 Running 0 37s",
"NAME READY STATUS RESTARTS AGE argocd-application-controller-0 1/1 Running 0 10m argocd-redis-ha-haproxy-669757fdb7-5xg8h 1/1 Running 0 10m argocd-redis-ha-server-0 2/2 Running 0 9m9s argocd-redis-ha-server-1 2/2 Running 0 98s argocd-redis-ha-server-2 2/2 Running 0 53s argocd-repo-server-576499d46d-8hgbh 1/1 Running 0 10m argocd-server-9486f88b7-dk2ks 1/1 Running 0 10m"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.11/html/security/configuring-secure-communication-with-redis |
Chapter 1. Introduction to System Authentication One of the cornerstones of establishing a secure network environment is making sure that access is restricted to people who have the right to access the network. If access is allowed, users can authenticate to the system, meaning they can verify their identities. On any Red Hat Enterprise Linux system, there are a number of different services available to create and identify user identities. These can be local system files, services which connect to larger identity domains like Kerberos or Samba, or tools to create those domains. This guide reviews some common system services and applications which are available to administrators to manage authentication and identities for a local system. Other guides are available which provide more detailed information on creating Linux domains and integrating a Linux system into a Windows domain. 1.1. Confirming User Identities Authentication is the process of confirming an identity. For network interactions, authentication involves the identification of one party by another party. There are many ways to use authentication over networks: simple passwords, certificates, one-time password (OTP) tokens, biometric scans. Authorization, on the other hand, defines what the authenticated party is allowed to do or access. Authentication requires that a user presents some kind of credential to verify their identity. The kind of credential that is required is defined by the authentication mechanism being used. There are several kinds of authentication for local users on a system: Password-based authentication. Almost all software permits the user to authenticate by providing a recognized name and password. This is also called simple authentication. Certificate-based authentication. Client authentication based on certificates is part of the SSL protocol. The client digitally signs a randomly generated piece of data and sends both the certificate and the signed data across the network. The server validates the signature and confirms the validity of the certificate. Kerberos authentication. Kerberos establishes a system of short-lived credentials, called ticket-granting tickets (TGTs). The user presents credentials, that is, user name and password, that identify the user and indicate to the system that the user can be issued a ticket. The TGT can then be used repeatedly to request access tickets to other services, like websites and email. In this way, authentication with a TGT requires the user to undergo only a single authentication process. Smart card-based authentication. This is a variant of certificate-based authentication. The smart card (or token) stores user certificates; when a user inserts the token into a system, the system can read the certificates and grant access. Single sign-on using smart cards goes through three steps: A user inserts a smart card into the card reader. Pluggable authentication modules (PAMs) on Red Hat Enterprise Linux detect the inserted smart card. The system maps the certificate to the user entry and then compares the presented certificates on the smart card, which are encrypted with a private key as explained under certificate-based authentication, to the certificates stored in the user entry. If the certificate is successfully validated against the key distribution center (KDC), then the user is allowed to log in.
Smart card-based authentication builds on the simple authentication layer established by Kerberos by adding certificates as additional identification mechanisms as well as by adding physical access requirements. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/introduction |
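As a small illustration of the TGT flow described above (a sketch that uses the standard MIT Kerberos client tools; the realm and user name are placeholders): running USD kinit [email protected] obtains the initial ticket-granting ticket after prompting for the user's password, and USD klist then lists the cached tickets, including the TGT that later service-ticket requests reuse without further password prompts.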
Chapter 5. Sending traces and metrics to the OpenTelemetry Collector | Chapter 5. Sending traces and metrics to the OpenTelemetry Collector You can set up and use the Red Hat build of OpenTelemetry to send traces to the OpenTelemetry Collector or the TempoStack instance. Sending traces and metrics to the OpenTelemetry Collector is possible with or without sidecar injection. 5.1. Sending traces and metrics to the OpenTelemetry Collector with sidecar injection You can set up sending telemetry data to an OpenTelemetry Collector instance with sidecar injection. The Red Hat build of OpenTelemetry Operator allows sidecar injection into deployment workloads and automatic configuration of your instrumentation to send telemetry data to the OpenTelemetry Collector. Prerequisites The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed. You have access to the cluster through the web console or the OpenShift CLI ( oc ): You are logged in to the web console as a cluster administrator with the cluster-admin role. An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Procedure Create a project for an OpenTelemetry Collector instance. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability Grant the permissions to the service account for the k8sattributes and resourcedetection processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector as a sidecar. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: {} http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: "tempo-<example>-gateway:8090" 1 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp] 1 This points to the Gateway of the TempoStack instance deployed by using the <example> Tempo Operator. Create your deployment using the otel-collector-sidecar service account. Add the sidecar.opentelemetry.io/inject: "true" annotation to your Deployment object. This will inject all the needed environment variables to send data from your workloads to the OpenTelemetry Collector instance. 5.2. Sending traces and metrics to the OpenTelemetry Collector without sidecar injection You can set up sending telemetry data to an OpenTelemetry Collector instance without sidecar injection, which involves manually setting several environment variables. 
Prerequisites The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed. You have access to the cluster through the web console or the OpenShift CLI ( oc ): You are logged in to the web console as a cluster administrator with the cluster-admin role. An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Procedure Create a project for an OpenTelemetry Collector instance. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability Grant the permissions to the service account for the k8sattributes and resourcedetection processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector instance with the OpenTelemetryCollector custom resource. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-<example>-distributor:4317" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] 1 This points to the distributor of the TempoStack instance deployed by using the <example> Tempo Operator. Set the environment variables in the container that runs your instrumented application:
Name                             Description                                                                           Default value
OTEL_SERVICE_NAME                Sets the value of the service.name resource attribute.                               ""
OTEL_EXPORTER_OTLP_ENDPOINT      Base endpoint URL for any signal type with an optionally specified port number.      https://localhost:4317
OTEL_EXPORTER_OTLP_CERTIFICATE   Path to the certificate file for the TLS credentials of the gRPC client.             https://localhost:4317
OTEL_TRACES_SAMPLER              Sampler to be used for traces.                                                        parentbased_always_on
OTEL_EXPORTER_OTLP_PROTOCOL      Transport protocol for the OTLP exporter.                                             grpc
OTEL_EXPORTER_OTLP_TIMEOUT       Maximum time interval for the OTLP exporter to wait for each batch export.           10s
OTEL_EXPORTER_OTLP_INSECURE      Disables client transport security for gRPC requests. An HTTPS schema overrides it.  False | [
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: {} http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]",
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-<example>-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/red_hat_build_of_opentelemetry/otel-sending-traces-and-metrics-to-otel-collector |
8.67. ibus-hangul | 8.67. ibus-hangul 8.67.1. RHBA-2013:1036 - ibus-hangul bug fix update Updated ibus-hangul packages that fix one bug are now available. The ibus-hangul package is a Korean language input engine platform for the IBus input method (IM). Bug Fix BZ#965554 Previously, the Hangul engine for IBus did not function properly. If a preedit string was available, and the input focus was moved to another window, then the preedit string was committed. After that, when the input focus was moved back to the window, the X Input Method (XIM) could not handle the first key input. This update resolves this issue with a change in the code, and key press inputs after a focus change are no longer lost in the described scenario. Users of ibus-hangul are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/ibus-hangul |
Migrating to Red Hat build of OpenJDK 17 from earlier versions | Migrating to Red Hat build of OpenJDK 17 from earlier versions Red Hat build of OpenJDK 17 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/migrating_to_red_hat_build_of_openjdk_17_from_earlier_versions/index |
Chapter 8. Summary | Chapter 8. Summary This document has provided only a general introduction to security for Red Hat Ceph Storage. Contact the Red Hat Ceph Storage consulting team for additional help. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/data_security_and_hardening_guide/con-sec-summay-sec |
Installing on IBM Power | Installing on IBM Power OpenShift Container Platform 4.18 Installing OpenShift Container Platform on IBM Power Red Hat OpenShift Documentation Team | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.18-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.18-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.18-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.18/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.18-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.18-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.18-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.18-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.18-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.18-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.18-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.18-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.18-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.18/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.18/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.18/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"./openshift-install create manifests --dir <installation_directory>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"bootlist -m normal -o sda",
"bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.18-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.18-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.18-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.18/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.18-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.18-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.18-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.18-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.18-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.18-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.18-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.18-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.18-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.18/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.18/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.18/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"./openshift-install create manifests --dir <installation_directory>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"bootlist -m normal -o sda",
"bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"networking: ovnKubernetesConfig: ipv4: internalJoinSubnet:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/installing_on_ibm_power/index |
Appendix B. Topic configuration parameters | Appendix B. Topic configuration parameters cleanup.policy Type: list Default: delete Valid Values: [compact, delete] Server Default Property: log.cleanup.policy Importance: medium A string that is either "delete" or "compact" or both. This string designates the retention policy to use on old log segments. The default policy ("delete") will discard old segments when their retention time or size limit has been reached. The "compact" setting will enable log compaction on the topic. compression.type Type: string Default: producer Valid Values: [uncompressed, zstd, lz4, snappy, gzip, producer] Server Default Property: compression.type Importance: medium Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer. delete.retention.ms Type: long Default: 86400000 (1 day) Valid Values: [0,... ] Server Default Property: log.cleaner.delete.retention.ms Importance: medium The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage (otherwise delete tombstones may be collected before they complete their scan). file.delete.delay.ms Type: long Default: 60000 (1 minute) Valid Values: [0,... ] Server Default Property: log.segment.delete.delay.ms Importance: medium The time to wait before deleting a file from the filesystem. flush.messages Type: long Default: 9223372036854775807 Valid Values: [0,... ] Server Default Property: log.flush.interval.messages Importance: medium This setting allows specifying an interval at which we will force an fsync of data written to the log. For example if this was set to 1 we would fsync after every message; if it were 5 we would fsync after every five messages. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. This setting can be overridden on a per-topic basis (see the per-topic configuration section ). flush.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Server Default Property: log.flush.interval.ms Importance: medium This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example if this was set to 1000 we would fsync after 1000 ms had passed. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. follower.replication.throttled.replicas Type: list Default: "" Valid Values: [partitionId]:[brokerId],[partitionId]:[brokerId],... Server Default Property: follower.replication.throttled.replicas Importance: medium A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic. index.interval.bytes Type: int Default: 4096 (4 kibibytes) Valid Values: [0,... 
] Server Default Property: log.index.interval.bytes Importance: medium This setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index larger. You probably don't need to change this. leader.replication.throttled.replicas Type: list Default: "" Valid Values: [partitionId]:[brokerId],[partitionId]:[brokerId],... Server Default Property: leader.replication.throttled.replicas Importance: medium A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic. max.compaction.lag.ms Type: long Default: 9223372036854775807 Valid Values: [1,... ] Server Default Property: log.cleaner.max.compaction.lag.ms Importance: medium The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted. max.message.bytes Type: int Default: 1048588 Valid Values: [0,... ] Server Default Property: message.max.bytes Importance: medium The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. message.format.version Type: string Default: 2.7-IV2 Valid Values: [0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2] Server Default Property: log.message.format.version Importance: medium Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller than or equal to the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand. message.timestamp.difference.max.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Server Default Property: log.message.timestamp.difference.max.ms Importance: medium The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if message.timestamp.type=LogAppendTime. message.timestamp.type Type: string Default: CreateTime Valid Values: [CreateTime, LogAppendTime] Server Default Property: log.message.timestamp.type Importance: medium Define whether the timestamp in the message is message create time or log append time. The value should be either CreateTime or LogAppendTime .
min.cleanable.dirty.ratio Type: double Default: 0.5 Valid Values: [0,... ,1] Server Default Property: log.cleaner.min.cleanable.ratio Importance: medium This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled). By default we will avoid cleaning a log where more than 50% of the log has been compacted. This ratio bounds the maximum space wasted in the log by duplicates (at 50% at most 50% of the log could be duplicates). A higher ratio will mean fewer, more efficient cleanings but will mean more wasted space in the log. If the max.compaction.lag.ms or the min.compaction.lag.ms configurations are also specified, then the log compactor considers the log to be eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the min.compaction.lag.ms duration, or (ii) if the log has had dirty (uncompacted) records for at most the max.compaction.lag.ms period. min.compaction.lag.ms Type: long Default: 0 Valid Values: [0,... ] Server Default Property: log.cleaner.min.compaction.lag.ms Importance: medium The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. min.insync.replicas Type: int Default: 1 Valid Values: [1,... ] Server Default Property: min.insync.replicas Importance: medium When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write. preallocate Type: boolean Default: false Server Default Property: log.preallocate Importance: medium True if we should preallocate the file on disk when creating a new log segment. retention.bytes Type: long Default: -1 Server Default Property: log.retention.bytes Importance: medium This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes. retention.ms Type: long Default: 604800000 (7 days) Valid Values: [-1,... ] Server Default Property: log.retention.ms Importance: medium This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data. If set to -1, no time limit is applied. segment.bytes Type: int Default: 1073741824 (1 gibibyte) Valid Values: [14,... ] Server Default Property: log.segment.bytes Importance: medium This configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger segment size means fewer files but less granular control over retention. 
segment.index.bytes Type: int Default: 10485760 (10 mebibytes) Valid Values: [0,... ] Server Default Property: log.index.size.max.bytes Importance: medium This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink it only after log rolls. You generally should not need to change this setting. segment.jitter.ms Type: long Default: 0 Valid Values: [0,... ] Server Default Property: log.roll.jitter.ms Importance: medium The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling. segment.ms Type: long Default: 604800000 (7 days) Valid Values: [1,... ] Server Default Property: log.roll.ms Importance: medium This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data. unclean.leader.election.enable Type: boolean Default: false Server Default Property: unclean.leader.election.enable Importance: medium Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss. message.downconversion.enable Type: boolean Default: true Server Default Property: log.message.downconversion.enable Importance: low This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false , the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with an UNSUPPORTED_VERSION error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_rhel/topic-configuration-parameters-str |
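The topic options above are normally applied with the tooling that ships with AMQ Streams. The following commands are an illustrative sketch only, not part of the original appendix; the topic name my-topic and the broker address localhost:9092 are assumptions. The first command creates a compacted topic that implements the min.insync.replicas durability scenario described above (replication factor 3, two in-sync replicas required); the second tightens the compaction ratio on the same topic afterwards:

bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 3 --replication-factor 3 --config cleanup.policy=compact --config min.insync.replicas=2

bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --entity-type topics --entity-name my-topic --add-config min.cleanable.dirty.ratio=0.3

A producer configured with acks=all then receives a NotEnoughReplicas exception if fewer than two in-sync replicas can acknowledge a write.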
Chapter 36. System and Subscription Management Yum no longer crashes in a certain nss and nspr update scenario Previously, when the yum installer updated a certain combination of nss and nspr package versions, the transaction sometimes terminated prematurely due to the following symbol lookup error: This then caused stale rpm locks. Yum has been updated to correctly deal with this particular nss and nspr update scenario. As a result, yum no longer terminates prematurely in the described scenario. (BZ# 1458841 ) The fastestmirror plug-in now orders mirrors before the metadata download Previously, when the yum installer ran for the first time after a cache cleanup, the fastestmirror plug-in did not select the fastest mirror before metadata download. This sometimes caused a delay if some mirrors were slow or unavailable. With this update, the fastestmirror plug-in has been modified to take effect on mirror selection before metadata download. As a result, the mirrors are polled and arranged before metadata download, which prevents such delays. (BZ# 1428210 ) The package-cleanup script no longer removes package dependencies of non-duplicates Previously, running the package-cleanup script with the --cleandupes option also removed packages that depended on duplicates. Consequently, some packages were removed unintentionally. With this update, the package-cleanup script has been fixed to skip package dependencies of non-duplicates. Instead, the package-cleanup script prints a warning with a suggestion of a workaround. (BZ# 1455318 ) rhnsd.pid is now writable only by the owner In Red Hat Enterprise Linux 7.4, the default permissions of the /var/run/rhnsd.pid file were changed to -rw-rw-rw-. . This setting was not secure. With this update, the change has been reverted, and the default permissions of /var/run/rhnsd.pid are now -rw-r--r--. . (BZ#1480306) rhn_check now correctly reports system reboots to Satellite Previously, if a system reboot of a Satellite client occurred during a rhn_check run, rhn_check did not report its termination to Satellite. Consequently, the status of rhn_check in Satellite did not update. With this update, this incorrect behavior is fixed and rhn_check now handles system reboots and reports the correct status to Satellite. (BZ# 1494389 ) The rpm -qi rhnlib command now refers to the current upstream project website Previously, the RPM information of the rhnlib package incorrectly referred to a deprecated upstream project website. With this update, the rpm -qi rhnlib command displays the URL of the current upstream project website. (BZ# 1503953 ) Kernel installations using rhnsd complete successfully If a scheduled kernel installation was run using the Red Hat Network Daemon (rhnsd), the installation of the kernel sometimes stopped before completion. This issue has been fixed and kernel installations using rhnsd now complete successfully. (BZ#1475039) rhn_check no longer modifies permissions on files in /var/cache/yum/ Previously, when the Red Hat Network Daemon (rhnsd) executed the rhn_check command, the command modified permissions on the files in the /var/cache/yum/ directory incorrectly, resulting in a vulnerability. This bug has been fixed and rhn_check no longer modifies permissions on the files in the /var/cache/yum/ directory.
(BZ# 1489989 ) subscription-manager reports an RPM package if its vendor contains non-UTF8 characters Previously, the subscription-manager utility assumed UTF-8 data in the RPM package vendor field. Consequently, if an RPM installed on the system contained a vendor with non-UTF8 characters, the subscription-manager failed to report the packages. With this update, the subscription-manager has been updated to ignore encoding issues in the RPM package vendor field. As a result, subscription-manager reports a package profile correctly even if the installed RPM has a non-UTF8 vendor. (BZ# 1519512 ) subscription-manager now works with proxies that expect the Host header Previously, the subscription-manager utility was not compatible with proxies that expect the Host header because it did not include the Host header when connecting. With this update, subscription-manager includes the Host header when connecting and is compatible with these proxies. (BZ# 1507158 ) subscription-manager assigns valid IPv4 addresses to network.ipv4_address even if initial DNS resolution fails Previously, when the subscription-manager utility failed to resolve the IPv4 address of a system, it incorrectly assigned the loopback interface address 127.0.0.1 for the network.ipv4_address fact. This occurred even when there was a valid interface with a valid IP address. With this update, if subscription-manager fails to resolve the IPv4 address of a system, it gathers IPv4 addresses from all interfaces except the loopback interface and assigns the valid IPv4 addresses for the network.ipv4_address fact. (BZ#1476817) virt-who ensures that provided options fit the same virtualization type With this update, the virt-who utility ensures that all command-line options provided by the user are compatible with the intended virtualization type. In addition, if virt-who detects an incompatible option, it provides a corresponding error message. (BZ# 1461417 ) virt-who configuration no longer resets on upgrade or reinstall Previously, upgrading or reinstalling virt-who reset the configuration of the /etc/virt-who.conf file to default values. This update changes the packaging of virt-who to prevent overwriting configuration files, which ensures the described problem no longer occurs. (BZ# 1485865 ) virt-who now reads the 'address' field provided by RHEVM to discover and report the correct host name Previously, if the virt-who utility reported on a Red Hat Virtualization (RHV) host and the hypervisor_id=hostname option was used, virt-who displayed an incorrect host name value. This update ensures that virt-who reads the correct field value in the described circumstances and as a result, the proper host name is displayed. (BZ# 1389729 ) | [
"/lib64/libnsssysinit.so: undefined symbol: PR_GetEnvSecure"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/bug_fixes_system_and_subscription_management |
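As an illustrative aside that is not part of the release note above, the package-cleanup behavior can be exercised safely by listing duplicates before removing them; both options come from the yum-utils package:

# List duplicate package versions without changing anything
package-cleanup --dupes

# Remove the older duplicates; with the fix, dependencies of non-duplicates are skipped
package-cleanup --cleandupes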
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.12/proc-providing-feedback-on-redhat-documentation |
5.2. Network Security Recommended Practices Network security is a critical part of a secure virtualization infrastructure. See the following recommended practices for securing the network: Ensure that remote management of the system takes place only over secured network channels. Tools such as SSH and network protocols such as TLS or SSL provide both authentication and data encryption to assist with secure and controlled access to systems. Ensure that guest applications transferring sensitive data do so over secured network channels. If protocols such as TLS or SSL are not available, consider using one like IPsec. Configure firewalls and ensure they are activated at boot. Only network ports needed for the use and management of the system should be allowed. Test and review firewall rules regularly. 5.2.1. Securing Connectivity to SPICE The SPICE remote desktop protocol supports SSL/TLS, which should be enabled for all of the SPICE communication channels (main, display, inputs, cursor, playback, record). 5.2.2. Securing Connectivity to Storage You can connect virtualized systems to networked storage in many different ways. Each approach presents different security benefits and concerns, but the same security principles apply to each: authenticate the remote storage pool before use, and protect the confidentiality and integrity of the data while it is being transferred. The data must also remain secure while it is stored. Red Hat recommends that data be encrypted, digitally signed, or both before it is stored. Note For more information on networked storage, see the Using Storage Pools section of the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_security_guide/sect-virtualization_security_guide-network_security_in_a_virtualized_environment-network_security_recommended_practices |
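To make the firewall recommendation concrete, the following minimal sketch shows one way to apply it on a Red Hat Enterprise Linux 7 host that uses firewalld; the ssh service is only an example, and the allowed services should be reduced to the set your deployment actually needs:

# Start the firewall and ensure it is activated at boot
systemctl start firewalld
systemctl enable firewalld

# Allow only the required services, then review the active rules
firewall-cmd --permanent --add-service=ssh
firewall-cmd --reload
firewall-cmd --list-all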
3.2. Configuring IP Networking with nmtui As a system administrator, you can configure a network interface using NetworkManager's tool, nmtui . See Section 2.5, "NetworkManager Tools" . This procedure describes how to configure networking using the text user interface tool, nmtui . Prerequisites The nmtui tool is used in a terminal window. It is contained in the NetworkManager-tui package, but it is not installed along with NetworkManager by default. To install NetworkManager-tui : To verify that NetworkManager is running, see Section 2.3, "Checking the Status of NetworkManager" . Procedure Start the nmtui tool: The text user interface appears. Figure 3.1. The NetworkManager Text User Interface starting menu To navigate, use the arrow keys or press Tab to step forwards and press Shift + Tab to step back through the options. Press Enter to select an option. The Space bar toggles the status of a check box. To apply changes to a modified connection that is already active, you must reactivate the connection. In this case, follow the procedure below: Procedure Select the Activate a connection menu entry. Figure 3.2. Activate a Connection Select the modified connection. On the right, click the Deactivate button. Figure 3.3. Deactivate the Modified Connection Choose the connection again and click the Activate button. Figure 3.4. Reactivate the Modified Connection The following commands are also available: nmtui edit connection-name If no connection name is supplied, the selection menu appears. If the connection name is supplied and correctly identified, the relevant Edit connection screen appears. nmtui connect connection-name If no connection name is supplied, the selection menu appears. If the connection name is supplied and correctly identified, the relevant connection is activated. Any invalid command prints a usage message. Note that nmtui does not support all types of connections. In particular, you cannot edit VPNs, wireless network connections using WPA Enterprise, or Ethernet connections using 802.1X .
"~]# yum install NetworkManager-tui",
"~]USD nmtui"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configuring_ip_networking_with_nmtui |
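As a usage sketch that goes beyond the commands listed above, the nmtui sub-commands described in this section can be called with an explicit connection name; the name 'Wired connection 1' is only an example and must match a connection that exists on your system:

~]$ nmtui edit 'Wired connection 1'
~]$ nmtui connect 'Wired connection 1'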
Chapter 10. Verifying connectivity to an endpoint The Cluster Network Operator (CNO) runs a controller, the connectivity check controller, that performs a connection health check between resources within your cluster. By reviewing the results of the health checks, you can diagnose connection problems or eliminate network connectivity as the cause of an issue that you are investigating. 10.1. Connection health checks performed To verify that cluster resources are reachable, a TCP connection is made to each of the following cluster API services: Kubernetes API server service Kubernetes API server endpoints OpenShift API server service OpenShift API server endpoints Load balancers To verify that services and service endpoints are reachable on every node in the cluster, a TCP connection is made to each of the following targets: Health check target service Health check target endpoints 10.2. Implementation of connection health checks The connectivity check controller orchestrates connection verification checks in your cluster. The results for the connection tests are stored in PodNetworkConnectivityCheck objects in the openshift-network-diagnostics namespace. Connection tests are performed every minute in parallel. The Cluster Network Operator (CNO) deploys several resources to the cluster to send and receive connectivity health checks: Health check source This program deploys in a single pod replica set managed by a Deployment object. The program consumes PodNetworkConnectivityCheck objects and connects to the spec.targetEndpoint specified in each object. Health check target A pod deployed as part of a daemon set on every node in the cluster. The pod listens for inbound health checks. The presence of this pod on every node allows for the testing of connectivity to each node. 10.3. PodNetworkConnectivityCheck object fields The PodNetworkConnectivityCheck object fields are described in the following tables. Table 10.1. PodNetworkConnectivityCheck object fields Field Type Description metadata.name string The name of the object in the following format: <source>-to-<target> . The destination described by <target> includes one of the following strings: load-balancer-api-external load-balancer-api-internal kubernetes-apiserver-endpoint kubernetes-apiserver-service-cluster network-check-target openshift-apiserver-endpoint openshift-apiserver-service-cluster metadata.namespace string The namespace that the object is associated with. This value is always openshift-network-diagnostics . spec.sourcePod string The name of the pod where the connection check originates, such as network-check-source-596b4c6566-rgh92 . spec.targetEndpoint string The target of the connection check, such as api.devcluster.example.com:6443 . spec.tlsClientCert object Configuration for the TLS certificate to use. spec.tlsClientCert.name string The name of the TLS certificate used, if any. The default value is an empty string. status object An object representing the condition of the connection test and logs of recent connection successes and failures. status.conditions array The latest status of the connection check and any statuses. status.failures array Connection test logs from unsuccessful attempts. status.outages array Connection test logs covering the time periods of any outages. status.successes array Connection test logs from successful attempts. The following table describes the fields for objects in the status.conditions array: Table 10.2.
status.conditions Field Type Description lastTransitionTime string The time that the condition of the connection transitioned from one status to another. message string The details about the last transition in a human readable format. reason string The last status of the transition in a machine readable format. status string The status of the condition. type string The type of the condition. The following table describes the fields for objects in the status.outages array: Table 10.3. status.outages Field Type Description end string The timestamp from when the connection failure is resolved. endLogs array Connection log entries, including the log entry related to the successful end of the outage. message string A summary of outage details in a human readable format. start string The timestamp from when the connection failure is first detected. startLogs array Connection log entries, including the original failure. Connection log fields The fields for a connection log entry are described in the following table. The object is used in the following fields: status.failures[] status.successes[] status.outages[].startLogs[] status.outages[].endLogs[] Table 10.4. Connection log object Field Type Description latency string Records the duration of the action. message string Provides the status in a human readable format. reason string Provides the reason for status in a machine readable format. The value is one of TCPConnect , TCPConnectError , DNSResolve , DNSError . success boolean Indicates if the log entry is a success or failure. time string The start time of the connection check. 10.4. Verifying network connectivity for an endpoint As a cluster administrator, you can verify the connectivity of an endpoint, such as an API server, load balancer, service, or pod. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role.
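Optionally, before running the procedure, you can confirm that the connectivity check source and target pods are deployed. This sanity check is not part of the documented procedure:

$ oc get pods -n openshift-network-diagnostics

The output should list the single network-check-source pod and one network-check-target pod for each node in the cluster.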
Procedure To list the current PodNetworkConnectivityCheck objects, enter the following command: USD oc get podnetworkconnectivitycheck -n openshift-network-diagnostics Example output NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m View the connection test logs: From the output of the command, identify the endpoint that you want to review the connectivity logs for. To view the object, enter the following command: USD oc get podnetworkconnectivitycheck <name> \ -n openshift-network-diagnostics -o yaml where <name> specifies the name of the PodNetworkConnectivityCheck object. Example output apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics ... 
spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: "" status: conditions: - lastTransitionTime: "2021-01-13T20:11:34Z" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: "True" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:10:34Z" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:09:34Z" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" outages: - end: "2021-01-13T20:11:34Z" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T20:11:34Z" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:10:34Z" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:09:34Z" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" message: Connectivity restored after 2m59.999789186s start: "2021-01-13T20:08:34Z" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: "2021-01-13T20:08:34Z" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:14:34Z" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:13:34Z" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:12:34Z" - latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:11:34Z" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp 
connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:10:34Z" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:09:34Z" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:08:34Z" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:07:34Z" - latency: 2.156994ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:06:34Z" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: "2021-01-13T21:05:34Z" | [
"oc get podnetworkconnectivitycheck -n openshift-network-diagnostics",
"NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m",
"oc get podnetworkconnectivitycheck <name> -n openshift-network-diagnostics -o yaml",
"apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: \"\" status: conditions: - lastTransitionTime: \"2021-01-13T20:11:34Z\" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: \"True\" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" outages: - end: \"2021-01-13T20:11:34Z\" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T20:11:34Z\" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" message: Connectivity restored after 2m59.999789186s start: \"2021-01-13T20:08:34Z\" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:14:34Z\" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:13:34Z\" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:12:34Z\" - 
latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:11:34Z\" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:10:34Z\" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:09:34Z\" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:08:34Z\" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:07:34Z\" - latency: 2.156994ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:06:34Z\" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:05:34Z\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/verifying-connectivity-endpoint |
Chapter 1. Support policy for Red Hat build of OpenJDK | Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these versions remain similar to Oracle JDK versions that are designated as long-term support (LTS). A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, RHEL 6 is not a supported configuration for Red Hat build of OpenJDK. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.5/rn-openjdk-support-policy
Chapter 3. Creating an IBM Power Virtual Server workspace | Chapter 3. Creating an IBM Power Virtual Server workspace Important IBM Power(R) Virtual Server using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.1. Creating an IBM Power Virtual Server workspace Use the following procedure to create an IBM Power(R) Virtual Server workspace. Procedure To create an IBM Power(R) Virtual Server workspace, complete steps 1 to 5 from the IBM Cloud(R) documentation for Creating an IBM Power(R) Virtual Server . After it has finished provisioning, retrieve the 32-character alphanumeric Globally Unique Identifier (GUID) of your new workspace by entering the following command (a scripted, JSON-based lookup is sketched after the command listing): USD ibmcloud resource service-instance <workspace name> 3.2. Next steps Installing a cluster on IBM Power(R) Virtual Server with customizations | [
"ibmcloud resource service-instance <workspace name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_power_virtual_server/creating-ibm-power-vs-workspace |
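For scripted use, the same lookup can return the GUID directly from JSON output. A minimal sketch, assuming the jq tool is installed and that exactly one service instance matches the name; the --output json flag and the .guid field reflect standard ibmcloud CLI behavior, but verify them against your CLI version:

# Print only the 32-character GUID of the matching workspace.
ibmcloud resource service-instance <workspace_name> --output json | jq -r '.[0].guid'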
Chapter 3. Documenting your RHOSP environment | Chapter 3. Documenting your RHOSP environment Documenting the system components, networks, services, and software is important in identifying security concerns, attack vectors, and possible security zone bridging points. The documentation for your Red Hat OpenStack Platform (RHOSP) deployment should include the following information: A description of the system components, networks, services, and software in your RHOSP production, development, and test environments. An inventory of any ephemeral resources, such as virtual machines or virtual disk volumes. 3.1. Documenting the system roles Each node in your Red Hat OpenStack Platform (RHOSP) deployment serves a specific role, either contributing to the infrastructure of the cloud, or providing cloud resources. Nodes that contribute to the infrastructure run the cloud-related services, such as the message queuing service, storage management, monitoring, networking, and other services required to support the operation and provisioning of the cloud. Examples of infrastructure roles include the following: Controller Networker Database Telemetry Nodes that provide cloud resources offer compute or storage capacity for instances running on your cloud. Examples of resource roles include the following: CephStorage Compute ComputeOvsDpdk ObjectStorage Document the system roles that are used in your environment. These roles can be identified within the templates used to deploy RHOSP. For example, there is a NIC configuration file for each role in use in your environment. Procedure Check the existing templates for your deployment for files that specify the roles currently in use. There is a NIC configuration file for each role in use in your environment. In the following example, the RHOSP environment includes the ComputeHCI role, the Compute role, and the Controller role: Each role for your RHOSP environment performs many interrelated services. You can document the services used by each role by inspecting a roles file. If a roles file was generated for your templates, you can find it in the ~/templates directory: If a roles file was not generated for your templates, you can generate one for the roles you currently use to inspect for documentation purposes: 3.2. Creating a hardware inventory You can retrieve hardware information about your Red Hat OpenStack Platform deployment by viewing data that is collected during introspection. Introspection gathers hardware information from the nodes about the CPU, memory, disks, and so on. Prerequisites You have an installed Red Hat OpenStack Platform director environment. You have introspected nodes for your Red Hat OpenStack Platform deployment. You are logged into the director as stack. Procedure From the undercloud, source the stackrc file: List the nodes in your environment: For each baremetal node from which to gather information, run the following command to retrieve the introspection data: Replace <node> with the name of the node from the list you retrieved in step 1. Optional: To limit the output to a specific type of hardware, you can retrieve a list of the inventory keys and view introspection data for a specific key: Run the following command to get a list of top level keys from introspection data: Select a key, for example disks , and run the following to get more information: 3.3. Creating a software inventory Document the software components in use on nodes deployed in your Red Hat OpenStack Platform (RHOSP) infrastructure. 
System databases, RHOSP software services, and supporting components such as load balancers, DNS, or DHCP services are critical when assessing the impact of a compromise or vulnerability in a library, application, or class of software. Prerequisites You have an installed Red Hat OpenStack Platform environment. You are logged into the director as stack. Procedure Ensure that you know the entry points for systems and services that can be subject to malicious activity. Run the following commands on the undercloud: RHOSP is deployed in containerized services; therefore, you can view the software components on an overcloud node by checking the running containers on that node. Use ssh to connect to an overcloud node and list the running containers. For example, to view the overcloud services on compute-0 , run a command similar to the following (a loop that captures this container inventory for every overcloud node is sketched after the command listing): | [
"cd ~/templates tree . ├── environments │ └── network-environment.yaml ├── hci.yaml ├── network │ └── config │ └── multiple-nics │ ├── computehci.yaml │ ├── compute.yaml │ └── controller.yaml ├── network_data.yaml ├── plan-environment.yaml └── roles_data_hci.yaml",
"cd ~/templates find . -name *role* > ./templates/roles_data_hci.yaml",
"openstack overcloud roles generate > --roles-path /usr/share/openstack-tripleo-heat-templates/roles > -o roles_data.yaml Controller Compute",
"source ~/stackrc",
"openstack baremetal node list -c Name +--------------+ | Name | +--------------+ | controller-0 | | controller-1 | | controller-2 | | compute-0 | | compute-1 | | compute-2 | +--------------+",
"openstack baremetal introspection data save <node> | jq",
"openstack baremetal introspection data save controller-0 | jq '.inventory | keys' [ \"bmc_address\", \"bmc_v6address\", \"boot\", \"cpu\", \"disks\", \"hostname\", \"interfaces\", \"memory\", \"system_vendor\" ]",
"openstack baremetal introspection data save controller-1 | jq '.inventory.disks' [ { \"name\": \"/dev/sda\", \"model\": \"QEMU HARDDISK\", \"size\": 85899345920, \"rotational\": true, \"wwn\": null, \"serial\": \"QM00001\", \"vendor\": \"ATA\", \"wwn_with_extension\": null, \"wwn_vendor_extension\": null, \"hctl\": \"0:0:0:0\", \"by_path\": \"/dev/disk/by-path/pci-0000:00:01.1-ata-1\" } ]",
"cat /etc/hosts source stackrc ; openstack endpoint list source overcloudrc ; openstack endpoint list",
"ssh tripleo-admin@compute-0 podman ps"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/security_and_hardening_guide/assembly_documenting-your-rhosp-environment_security_and_hardening |
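As referenced in the procedure above, the per-node container listing can be scripted to build a complete software inventory. A minimal sketch, assuming the stackrc credentials are sourced, that baremetal node names resolve as <name>.ctlplane hostnames, and that passwordless SSH as the tripleo-admin user is configured; adjust the naming for your environment:

source ~/stackrc
for node in $(openstack baremetal node list -f value -c Name); do
    # Record the running containers of each overcloud node in its own file.
    ssh "tripleo-admin@${node}.ctlplane" podman ps --format '{{.Names}} {{.Image}}' \
        > "software-inventory-${node}.txt"
done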
8.113. mcelog | 8.113. mcelog 8.113.1. RHBA-2013:1658 - mcelog bug fix and enhancement update Updated mcelog packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The mcelog packages contain a daemon that collects and decodes Machine Check Exception (MCE) data on AMD64 and Intel 64 machines. Bug Fixes BZ# 875824 Previously, mcelog packages installed a cron job to report the status of mce logs, which conflicted with running the mclogd service as default mode. Consequently, mcelog competed with the cron job and did not collect complete data. With this update, cron job is not installed in case mcelogd is running, thus fixing this bug. BZ# 919999 Due to a bug in mcelog packages, the AMD Family 15 architecture was not supported. The bug has been fixed and mcelog now supports AMD Family 15 as expected. BZ# 996634 Previously, support for extended logging was enabled by default in mcelog packages. Consequently, on systems with processors without support for extended logging, the mcelog service terminated unexpectedly with the following message: mcelog: Cannot open /dev/cpu/0/msr to set imc_log: Permission denied With this update, extended logging is disabled by default in mcelog packages, and the mcelog service no longer crashes in the aforementioned scenario. Enhancement BZ# 881555 , BZ# 922873 , BZ# 991079 With this update, mcelog packags support Intel Xeon Processor E5-XXXX v3, Intel Xeon Processor E5-XXXX, and Intel Xeon Processor E3-XXXX v3 architectures. Users of mcelog are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/mcelog |
Providing feedback on Red Hat build of Quarkus documentation | Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/openid_connect_oidc_client_and_token_propagation/proc_providing-feedback-on-red-hat-documentation_security-oidc-client-and-token-propagation |
Chapter 7. Renewing the AMQ Interconnect certificate | Chapter 7. Renewing the AMQ Interconnect certificate Periodically, you must renew the CA certificate that secures the AMQ Interconnect connection between Red Hat OpenStack Platform (RHOSP) and Service Telemetry Framework (STF) when the certificate expires. The renewal is handled automatically by the cert-manager component in Red Hat OpenShift Container Platform, but you must manually copy the renewed certificate to your RHOSP nodes. 7.1. Checking for an expired AMQ Interconnect CA certificate When the CA certificate expires, the AMQ Interconnect connections remain up, but cannot reconnect if they are interrupted. Eventually, some or all of the connections from your Red Hat OpenStack Platform (RHOSP) dispatch routers fail, showing errors on both sides, and the expiry or Not After field in your CA certificate is in the past. Procedure Log in to Red Hat OpenShift Container Platform. Change to the service-telemetry namespace: USD oc project service-telemetry Verify that some or all dispatch router connections have failed: USD oc exec -it deploy/default-interconnect -- qdstat --connections | grep Router | wc 0 0 0 Check for this error in the Red Hat OpenShift Container Platform-hosted AMQ Interconnect logs: USD oc logs -l application=default-interconnect | tail [...] 2022-11-10 20:51:22.863466 +0000 SERVER (info) [C261] Connection from 10.10.10.10:34570 (to 0.0.0.0:5671) failed: amqp:connection:framing-error SSL Failure: error:140940E5:SSL routines:ssl3_read_bytes:ssl handshake failure Log into your RHOSP undercloud. Check for this error in the RHOSP-hosted AMQ Interconnect logs of a node with a failed connection: USD ssh controller-0.ctlplane -- sudo tail /var/log/containers/metrics_qdr/metrics_qdr.log [...] 2022-11-10 20:50:44.311646 +0000 SERVER (info) [C137] Connection to default-interconnect-5671-service-telemetry.apps.mycluster.com:443 failed: amqp:connection:framing-error SSL Failure: error:0A000086:SSL routines::certificate verify failed Confirm that the CA certificate has expired by examining the file on an RHOSP node: USD ssh controller-0.ctlplane -- cat /var/lib/config-data/puppet-generated/metrics_qdr/etc/pki/tls/certs/CA_sslProfile.pem | openssl x509 -text | grep "Not After" Not After : Nov 10 20:31:16 2022 GMT USD date Mon Nov 14 11:10:40 EST 2022 7.2. Updating the AMQ Interconnect CA certificate To update the AMQ Interconnect certificate, you must export it from Red Hat OpenShift Container Platform and copy it to your Red Hat OpenStack Platform (RHOSP) nodes. Procedure Log in to Red Hat OpenShift Container Platform. Change to the service-telemetry namespace: USD oc project service-telemetry Export the CA certificate to STFCA.pem : USD oc get secret/default-interconnect-selfsigned -o jsonpath='{.data.ca\.crt}' | base64 -d > STFCA.pem Copy STFCA.pem to your RHOSP undercloud. Log into your RHOSP undercloud. Edit the stf-connectors.yaml file to contain the new caCertFileContent. For more information, see Section 4.1.5, "Configuring the STF connection for the overcloud" . 
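Optionally, before distributing the exported file, confirm that it carries the renewed expiry date. This is a quick local check with the standard openssl tool rather than a step from the original procedure:

# Prints a "notAfter=..." line; the date should now be in the future.
openssl x509 -in STFCA.pem -noout -enddate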
Copy the STFCA.pem file to each RHOSP overcloud node: [stack@undercloud-0 ~]USD ansible -i overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml allovercloud -b -m copy -a "src=STFCA.pem dest=/var/lib/config-data/puppet-generated/metrics_qdr/etc/pki/tls/certs/CA_sslProfile.pem" Restart the metrics_qdr container on each RHOSP overcloud node: [stack@undercloud-0 ~]USD ansible -i overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml allovercloud -m shell -a "sudo podman restart metrics_qdr" Note You do not need to deploy the overcloud after you copy the STFCA.pem file and restart the metrics_qdr container. You edit the stf-connectors.yaml file so that future deployments do not overwrite the new CA certificate. | [
"oc project service-telemetry",
"oc exec -it deploy/default-interconnect -- qdstat --connections | grep Router | wc 0 0 0",
"oc logs -l application=default-interconnect | tail [...] 2022-11-10 20:51:22.863466 +0000 SERVER (info) [C261] Connection from 10.10.10.10:34570 (to 0.0.0.0:5671) failed: amqp:connection:framing-error SSL Failure: error:140940E5:SSL routines:ssl3_read_bytes:ssl handshake failure",
"ssh controller-0.ctlplane -- sudo tail /var/log/containers/metrics_qdr/metrics_qdr.log [...] 2022-11-10 20:50:44.311646 +0000 SERVER (info) [C137] Connection to default-interconnect-5671-service-telemetry.apps.mycluster.com:443 failed: amqp:connection:framing-error SSL Failure: error:0A000086:SSL routines::certificate verify failed",
"ssh controller-0.ctlplane -- cat /var/lib/config-data/puppet-generated/metrics_qdr/etc/pki/tls/certs/CA_sslProfile.pem | openssl x509 -text | grep \"Not After\" Not After : Nov 10 20:31:16 2022 GMT date Mon Nov 14 11:10:40 EST 2022",
"oc project service-telemetry",
"oc get secret/default-interconnect-selfsigned -o jsonpath='{.data.ca\\.crt}' | base64 -d > STFCA.pem",
"[stack@undercloud-0 ~]USD ansible -i overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml allovercloud -b -m copy -a \"src=STFCA.pem dest=/var/lib/config-data/puppet-generated/metrics_qdr/etc/pki/tls/certs/CA_sslProfile.pem\"",
"[stack@undercloud-0 ~]USD ansible -i overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml allovercloud -m shell -a \"sudo podman restart metrics_qdr\""
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/service_telemetry_framework_1.5/assembly-renewing-the-amq-interconnect-certificate_assembly |
4.12. Configuring ACPI For Use with Integrated Fence Devices | 4.12. Configuring ACPI For Use with Integrated Fence Devices If your cluster uses integrated fence devices, you must configure ACPI (Advanced Configuration and Power Interface) to ensure immediate and complete fencing. If a cluster node is configured to be fenced by an integrated fence device, disable ACPI Soft-Off for that node. Disabling ACPI Soft-Off allows an integrated fence device to turn off a node immediately and completely rather than attempting a clean shutdown (for example, shutdown -h now ). Otherwise, if ACPI Soft-Off is enabled, an integrated fence device can take four or more seconds to turn off a node (refer to the note that follows). In addition, if ACPI Soft-Off is enabled and a node panics or freezes during shutdown, an integrated fence device may not be able to turn off the node. Under those circumstances, fencing is delayed or unsuccessful. Consequently, when a node is fenced with an integrated fence device and ACPI Soft-Off is enabled, a cluster recovers slowly or requires administrative intervention to recover. Note The amount of time required to fence a node depends on the integrated fence device used. Some integrated fence devices perform the equivalent of pressing and holding the power button; therefore, the fence device turns off the node in four to five seconds. Other integrated fence devices perform the equivalent of pressing the power button momentarily, relying on the operating system to turn off the node; therefore, the fence device turns off the node in a time span much longer than four to five seconds. To disable ACPI Soft-Off, use chkconfig management and verify that the node turns off immediately when fenced. The preferred way to disable ACPI Soft-Off is with chkconfig management; however, if that method is not satisfactory for your cluster, you can disable ACPI Soft-Off with one of the following alternate methods: Changing the BIOS setting to "instant-off" or an equivalent setting that turns off the node without delay Note Disabling ACPI Soft-Off with the BIOS may not be possible with some computers. Appending acpi=off to the kernel boot command line of the /boot/grub/grub.conf file Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster. The following sections provide procedures for the preferred method and alternate methods of disabling ACPI Soft-Off: Section 4.12.1, "Disabling ACPI Soft-Off with chkconfig Management" - Preferred method Section 4.12.2, "Disabling ACPI Soft-Off with the BIOS" - First alternate method Section 4.12.3, "Disabling ACPI Completely in the grub.conf File" - Second alternate method 4.12.1. Disabling ACPI Soft-Off with chkconfig Management You can use chkconfig management to disable ACPI Soft-Off either by removing the ACPI daemon ( acpid ) from chkconfig management or by turning off acpid . Note This is the preferred method of disabling ACPI Soft-Off. Disable ACPI Soft-Off with chkconfig management at each cluster node as follows: Run either of the following commands: chkconfig --del acpid - This command removes acpid from chkconfig management. - OR - chkconfig --level 345 acpid off - This command turns off acpid . Reboot the node. When the cluster is configured and running, verify that the node turns off immediately when fenced. 
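Before rebooting, you can confirm that the change took effect. An illustrative check with the expected outcomes noted as comments; the exact wording of the output varies by RHEL 6 minor release:

chkconfig --list acpid
# If acpid was removed from chkconfig management, this reports an error such as
# "error reading information on service acpid". If acpid was turned off instead,
# runlevels 3, 4, and 5 should all show "off".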
For information on testing a fence device, see How to test fence devices and fencing configuration in a RHEL 5, 6, or 7 High Availability cluster? . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-acpi-CA |
28.4.7. Configuring Automatic Reporting for Specific Types of Crashes | 28.4.7. Configuring Automatic Reporting for Specific Types of Crashes ABRT can be configured to report any detected issues or crashes automatically without any user interaction. This can be achieved by specifying an analyze-and-report rule as a post-create rule. For example, you can instruct ABRT to report Python crashes to Bugzilla immediately without any user interaction by enabling the rule and replacing the EVENT=report_Bugzilla condition with the EVENT=post-create condition in the /etc/libreport/events.d/python_event.conf file. The new rule will look as follows: EVENT=post-create analyzer=Python test -f component || abrt-action-save-package-data reporter-bugzilla -c /etc/abrt/plugins/bugzilla.conf Warning Please note that the post-create event is run by abrtd , which usually runs with root privileges. | [
"EVENT=post-create analyzer=Python test -f component || abrt-action-save-package-data reporter-bugzilla -c /etc/abrt/plugins/bugzilla.conf"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-abrt-configuration-automatic_reporting |
Chapter 5. Overriding the cluster-wide default node selector for OpenShift Data Foundation post deployment | Chapter 5. Overriding the cluster-wide default node selector for OpenShift Data Foundation post deployment When a cluster-wide default node selector is used for OpenShift Data Foundation, the pods generated by container storage interface (CSI) daemonsets are able to start only on the nodes that match the selector. To be able to use OpenShift Data Foundation from nodes which do not match the selector, override the cluster-wide default node selector by performing the following steps in the command-line interface: Procedure Specify a blank node selector for the openshift-storage namespace. Delete the original pods generated by the DaemonSets. (A verification check is sketched after the command listing.) | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"delete pod -l app=csi-cephfsplugin -n openshift-storage delete pod -l app=csi-rbdplugin -n openshift-storage"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/troubleshooting_openshift_data_foundation/overriding-the-cluster-wide-default-node-selector-for-openshift-data-foundation-post-deployment_rhodf |
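As referenced above, once the stale pods are deleted, you can verify that the replacement CSI pods are scheduled on the previously excluded nodes. An illustrative check with standard oc commands; node placement is environment-specific:

# The NODE column should now include nodes that do not match the default selector.
oc get pods -n openshift-storage -l app=csi-rbdplugin -o wide
oc get pods -n openshift-storage -l app=csi-cephfsplugin -o wide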
Chapter 36. System and Subscription Management | Chapter 36. System and Subscription Management The yum updateinfo commands now respect skip_if_unavailable option If a repository was configured with the skip_if_unavailable=1 option, the yum commands operating on the updateinfo metadata, such as yum updateinfo or yum check-update --security , did not work correctly. Consequently, yum terminated with an error instead of skipping the repository. With this update, the underlying source code has been fixed to respect the skip_if_unavailable option. As a result, the affected yum commands now skip the unavailable repository as expected under the described circumstances. (BZ#1528608) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/bug_fixes_system_and_subscription_management |
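For reference, the skip_if_unavailable option discussed in this note is set per repository in a .repo file under /etc/yum.repos.d/. An illustrative stanza; the repository ID and URL are hypothetical:

[example-repo]
name=Example Repository
baseurl=http://repo.example.com/rhel7/
enabled=1
gpgcheck=1
skip_if_unavailable=1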
Chapter 1. About the Migration Toolkit for Virtualization | Chapter 1. About the Migration Toolkit for Virtualization You can use the Migration Toolkit for Virtualization (MTV) to migrate virtual machines from the following source providers to OpenShift Virtualization destination providers: VMware vSphere Red Hat Virtualization (RHV) OpenStack Open Virtual Appliances (OVAs) that were created by VMware vSphere Remote OpenShift Virtualization clusters Additional resources Performance recommendations for migrating from VMware vSphere to OpenShift Virtualization . Performance recommendations for migrating from Red Hat Virtualization to OpenShift Virtualization . 1.1. About cold and warm migration MTV supports cold migration from: VMware vSphere Red Hat Virtualization (RHV) OpenStack Remote OpenShift Virtualization clusters MTV supports warm migration from VMware vSphere and from RHV. 1.1.1. Cold migration Cold migration is the default migration type. The source virtual machines are shut down while the data is copied. 1.1.2. Warm migration Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running. Then the VMs are shut down and the remaining data is copied during the cutover stage. Precopy stage The VMs are not shut down during the precopy stage. The VM disks are copied incrementally using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by updating the forklift-controller deployment. Important You must enable CBT for each source VM and each VM disk. A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. The Migration Controller service deletes each snapshot when the snapshot is no longer required. The precopy stage runs until the cutover stage is started manually or is scheduled to start. Cutover stage The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated. You can start the cutover stage manually by using the MTV console or you can schedule a cutover time in the Migration manifest. | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.6/html/installing_and_using_the_migration_toolkit_for_virtualization/about-mtv_mtv |
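The snapshot interval mentioned above is exposed as a parameter of the ForkliftController custom resource. A minimal sketch that sets it to 30 minutes; the resource name, namespace, and controller_precopy_interval parameter reflect a default MTV Operator installation, so verify them against your deployment before use:

# Change the precopy snapshot interval from the default (hypothetical value shown).
oc patch forkliftcontroller/forklift-controller -n openshift-mtv --type merge -p '{"spec": {"controller_precopy_interval": 30}}'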
17.9. Managing a Virtual Network | 17.9. Managing a Virtual Network To configure a virtual network on your system: From the Edit menu, select Connection Details . This will open the Connection Details menu. Click the Virtual Networks tab. Figure 17.10. Virtual network configuration All available virtual networks are listed on the left of the menu. You can edit the configuration of a virtual network by selecting it from this box and editing as you see fit. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-managing_a_virtual_network |
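If you prefer the command line to the virt-manager dialog described above, the same virtual network definitions can be inspected with the standard virsh tool ("default" is simply the name of the stock libvirt network and may differ on your system):

# List all defined virtual networks, then dump one network's XML definition.
virsh net-list --all
virsh net-dumpxml default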
Chapter 4. Installing a cluster on OpenStack on your own infrastructure | Chapter 4. Installing a cluster on OpenStack on your own infrastructure In OpenShift Container Platform version 4.18, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.18 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have an RHOSP account where you want to install OpenShift Container Platform. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . On the machine from which you run the installation program, you have: A single directory in which you can keep the files you create during the installation process Python 3 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements:
Table 4.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
Floating IP addresses: 3
Ports: 15
Routers: 1
Subnets: 1
RAM: 88 GB
vCPUs: 22
Volume storage: 275 GB
Instances: 7
Security groups: 3
Security group rules: 60
Server groups: 2, plus 1 for each additional availability zone in each machine pool
A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. 
In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 4.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 4.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 4.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 4.4. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 4.5. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. 
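Before continuing, you can sanity-check the Python modules and Ansible installation from the previous section. A quick, optional check that is not part of the original procedure; a silent import means the OpenStack SDK and netaddr modules are usable:

python3 -c 'import openstack, netaddr'
ansible --version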
Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-containers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/update-network-resources.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 4.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. 
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.18 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 4.9. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. 
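Equivalently, you can ask the Networking service for external networks only; an empty result means that none is defined. This filtered form uses the standard --external flag of the same command:

openstack network list --external -c ID -c Name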
If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 4.10. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 4.10.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 4.10.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. 
In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 4.11. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 4.12. 
Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure For a dual stack cluster deployment, edit the inventory.yaml file and uncomment the os_subnet6 attribute. To ensure that your network resources have unique names on the RHOSP deployment, create an environment variable and JSON file for use in the Ansible playbooks: Create an environment variable that has a unique name value by running the following command: USD export OS_NET_ID="openshift-USD(dd if=/dev/urandom count=4 bs=1 2>/dev/null |hexdump -e '"%02x"')" Verify that the variable is set by running the following command on a command line: USD echo USDOS_NET_ID Create a JSON object that includes the variable in a file called netid.json by running the following command: USD echo "{\"os_net_id\": \"USDOS_NET_ID\"}" | tee netid.json On a command line, create the network resources by running the following command: USD ansible-playbook -i inventory.yaml network.yaml Note The API and Ingress VIP fields will be overwritten in the inventory.yaml playbook with the IP addresses assigned to the network ports. Note The resources created by the network.yaml playbook are deleted by the down-network.yaml playbook. 4.13. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. 
All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. Additional resources Installation configuration parameters for OpenStack 4.13.1. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 4.13.2. Sample customized install-config.yaml file for RHOSP The following example install-config.yaml files demonstrate all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. Example 4.1. 
Example single stack install-config.yaml file apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Example 4.2. Example dual stack install-config.yaml file apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 4.13.3. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. You have Python 3 installed. Procedure On a command line, browse to the directory that contains the install-config.yaml and inventory.yaml files. From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run the following command: USD python -c 'import os import sys import yaml import re re_os_net_id = re.compile(r"{{\s*os_net_id\s*}}") os_net_id = os.getenv("OS_NET_ID") path = "common.yaml" facts = None for _dict in yaml.safe_load(open(path))[0]["tasks"]: if "os_network" in _dict.get("set_fact", {}): facts = _dict["set_fact"] break if not facts: print("Cannot find `os_network` in common.yaml file. 
Make sure OpenStack resource names are defined in one of the tasks.") sys.exit(1) os_network = re_os_net_id.sub(os_net_id, facts["os_network"]) os_subnet = re_os_net_id.sub(os_net_id, facts["os_subnet"]) path = "install-config.yaml" data = yaml.safe_load(open(path)) inventory = yaml.safe_load(open("inventory.yaml"))["all"]["hosts"]["localhost"] machine_net = [{"cidr": inventory["os_subnet_range"]}] api_vips = [inventory["os_apiVIP"]] ingress_vips = [inventory["os_ingressVIP"]] ctrl_plane_port = {"network": {"name": os_network}, "fixedIPs": [{"subnet": {"name": os_subnet}}]} if inventory.get("os_subnet6_range"): 1 os_subnet6 = re_os_net_id.sub(os_net_id, facts["os_subnet6"]) machine_net.append({"cidr": inventory["os_subnet6_range"]}) api_vips.append(inventory["os_apiVIP6"]) ingress_vips.append(inventory["os_ingressVIP6"]) data["networking"]["networkType"] = "OVNKubernetes" data["networking"]["clusterNetwork"].append({"cidr": inventory["cluster_network6_cidr"], "hostPrefix": inventory["cluster_network6_prefix"]}) data["networking"]["serviceNetwork"].append(inventory["service_subnet6_range"]) ctrl_plane_port["fixedIPs"].append({"subnet": {"name": os_subnet6}}) data["networking"]["machineNetwork"] = machine_net data["platform"]["openstack"]["apiVIPs"] = api_vips data["platform"]["openstack"]["ingressVIPs"] = ingress_vips data["platform"]["openstack"]["controlPlanePort"] = ctrl_plane_port del data["platform"]["openstack"]["externalDNS"] open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Applies to dual stack (IPv4/IPv6) environments. 4.13.4. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 4.13.5. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. 
For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 4.13.5.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 4.13.5.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. 
Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 4.14. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines, compute machine sets, and control plane machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the compute machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. 
Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 4.15. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . 
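At this point it can help to capture the storage location and related values as environment variables for the steps that follow. This is a minimal sketch, not part of the documented procedure; the image name bootstrap-ign and the endpoint value are placeholders for your own values:

IMAGE_ID=$(openstack image show bootstrap-ign -f value -c id)
GLANCE_URL="https://glance.example.com:9292"   # placeholder: the public endpoint reported by 'openstack catalog show image'
STORAGE_URL="${GLANCE_URL}/v2/images/${IMAGE_ID}/file"
echo "$STORAGE_URL"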
Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 4.16. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 4.17. Updating network resources on RHOSP Update the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible Playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. 
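Before running the playbooks, you can confirm that the named external network actually exists. A quick sketch, assuming the example name external used in the snippet above:

openstack network list --external
openstack network show external -f value -c id   # fails if the network does not exist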
Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible Playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform postinstallation network configuration. If you do not define a value for os_bootstrap_fip , the installation program cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, update the network resources by running the update-network-resources.yaml playbook: USD ansible-playbook -i inventory.yaml update-network-resources.yaml 1 1 This playbook will add tags to the network, subnets, ports, and router. It also attaches floating IP addresses to the API and Ingress ports and sets the security groups for those ports. Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" Optional: You can use the inventory.yaml file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines. 4.17.1. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process. Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. Change the value of os_flavor_worker to a bare metal flavor. 
An example bare metal inventory.yaml file all: hosts: localhost: ansible_connection: local ansible_python_interpreter: "{{ansible_playbook_python}}" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external' ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 4.18. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 4.19. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files are not already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 4.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. 
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.21. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 4.22. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml steps Approve the certificate signing requests for the machines. 4.23. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
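Because compute nodes only appear after their CSRs are approved, it can be convenient to poll both resources while machines join the cluster. A minimal sketch using watch ; this is a convenience, not part of the documented procedure:

watch -n 30 "oc get csr; echo; oc get nodes"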
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 4.24. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 4.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 4.26. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . | [
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-containers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/update-network-resources.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"export OS_NET_ID=\"openshift-USD(dd if=/dev/urandom count=4 bs=1 2>/dev/null |hexdump -e '\"%02x\"')\"",
"echo USDOS_NET_ID",
"echo \"{\\\"os_net_id\\\": \\\"USDOS_NET_ID\\\"}\" | tee netid.json",
"ansible-playbook -i inventory.yaml network.yaml",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"python -c 'import os import sys import yaml import re re_os_net_id = re.compile(r\"{{\\s*os_net_id\\s*}}\") os_net_id = os.getenv(\"OS_NET_ID\") path = \"common.yaml\" facts = None for _dict in yaml.safe_load(open(path))[0][\"tasks\"]: if \"os_network\" in _dict.get(\"set_fact\", {}): facts = _dict[\"set_fact\"] break if not facts: print(\"Cannot find `os_network` in common.yaml file. Make sure OpenStack resource names are defined in one of the tasks.\") sys.exit(1) os_network = re_os_net_id.sub(os_net_id, facts[\"os_network\"]) os_subnet = re_os_net_id.sub(os_net_id, facts[\"os_subnet\"]) path = \"install-config.yaml\" data = yaml.safe_load(open(path)) inventory = yaml.safe_load(open(\"inventory.yaml\"))[\"all\"][\"hosts\"][\"localhost\"] machine_net = [{\"cidr\": inventory[\"os_subnet_range\"]}] api_vips = [inventory[\"os_apiVIP\"]] ingress_vips = [inventory[\"os_ingressVIP\"]] ctrl_plane_port = {\"network\": {\"name\": os_network}, \"fixedIPs\": [{\"subnet\": {\"name\": os_subnet}}]} if inventory.get(\"os_subnet6_range\"): 1 os_subnet6 = re_os_net_id.sub(os_net_id, facts[\"os_subnet6\"]) machine_net.append({\"cidr\": inventory[\"os_subnet6_range\"]}) api_vips.append(inventory[\"os_apiVIP6\"]) ingress_vips.append(inventory[\"os_ingressVIP6\"]) data[\"networking\"][\"networkType\"] = \"OVNKubernetes\" data[\"networking\"][\"clusterNetwork\"].append({\"cidr\": inventory[\"cluster_network6_cidr\"], \"hostPrefix\": inventory[\"cluster_network6_prefix\"]}) data[\"networking\"][\"serviceNetwork\"].append(inventory[\"service_subnet6_range\"]) ctrl_plane_port[\"fixedIPs\"].append({\"subnet\": {\"name\": os_subnet6}}) data[\"networking\"][\"machineNetwork\"] = machine_net data[\"platform\"][\"openstack\"][\"apiVIPs\"] = api_vips data[\"platform\"][\"openstack\"][\"ingressVIPs\"] = ingress_vips data[\"platform\"][\"openstack\"][\"controlPlanePort\"] = ctrl_plane_port del data[\"platform\"][\"openstack\"][\"externalDNS\"] open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml update-network-resources.yaml 1",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'",
"./openshift-install wait-for install-complete --log-level debug",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"openshift-install --log-level debug wait-for install-complete"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_openstack/installing-openstack-user |
4.6. Securing Virtual Private Networks (VPNs) Using Libreswan In Red Hat Enterprise Linux 7, a Virtual Private Network ( VPN ) can be configured using the IPsec protocol, which is supported by the Libreswan application. Libreswan is a continuation of the Openswan application, and many examples from the Openswan documentation are interchangeable with Libreswan . The NetworkManager IPsec plug-in is called NetworkManager-libreswan . Users of GNOME Shell should install the NetworkManager-libreswan-gnome package, which has NetworkManager-libreswan as a dependency. Note that the NetworkManager-libreswan-gnome package is only available from the Optional channel. See Enabling Supplementary and Optional Repositories . The IPsec protocol for VPN is itself configured using the Internet Key Exchange ( IKE ) protocol. The terms IPsec and IKE are used interchangeably. An IPsec VPN is also called an IKE VPN, IKEv2 VPN, XAUTH VPN, Cisco VPN, or IKE/IPsec VPN. A variant of an IPsec VPN that also uses the Level 2 Tunneling Protocol ( L2TP ) is usually called an L2TP/IPsec VPN, which requires the Optional channel xl2tpd application. Libreswan is an open-source, user-space IKE implementation available in Red Hat Enterprise Linux 7. IKE versions 1 and 2 are implemented as a user-level daemon. The IKE protocol itself is also encrypted. The IPsec protocol is implemented by the Linux kernel, and Libreswan configures the kernel to add and remove VPN tunnel configurations. The IKE protocol uses UDP ports 500 and 4500. The IPsec protocol consists of two different protocols: Encapsulated Security Payload ( ESP ), which has protocol number 50, and Authenticated Header ( AH ), which has protocol number 51. The AH protocol is not recommended for use. Users of AH are recommended to migrate to ESP with null encryption. The IPsec protocol has two different modes of operation: Tunnel Mode (the default) and Transport Mode . It is possible to configure the kernel with IPsec without IKE. This is called Manual Keying . It is possible to configure manual keying using the ip xfrm commands; however, this is strongly discouraged for security reasons. Libreswan interfaces with the Linux kernel using netlink. Packet encryption and decryption happen in the Linux kernel. Libreswan uses the Network Security Services ( NSS ) cryptographic library. Both libreswan and NSS are certified for use with the Federal Information Processing Standard ( FIPS ) Publication 140-2. Important IKE / IPsec VPNs, implemented by Libreswan and the Linux kernel, are the only VPN technology recommended for use in Red Hat Enterprise Linux 7. Do not use any other VPN technology without understanding the risks of doing so. 4.6.1. Installing Libreswan To install Libreswan , enter the following command as root : To check that Libreswan is installed: After a new installation of Libreswan , the NSS database should be initialized as part of the installation process. Before you start a new database, remove the old database as follows: Then, to initialize a new NSS database, enter the following command as root :
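The individual commands are not reproduced inline here. As a sketch of the usual sequence for this section on Red Hat Enterprise Linux 7, consolidated in one place and to be run as root where noted; verify against your installed libreswan version:

# Install Libreswan and check the installed version:
yum install libreswan
yum info libreswan

# Remove any old NSS database, then initialize a new one:
systemctl stop ipsec
rm /etc/ipsec.d/*db
ipsec initnss

# Later steps in this section: start the ipsec daemon, confirm that it is
# running, and enable it so that it starts when the system boots:
systemctl start ipsec
systemctl status ipsec
systemctl enable ipsec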
Only when operating in FIPS mode is it necessary to protect the NSS database with a password. To initialize the database for FIPS mode, instead of the preceding command, use: To start the ipsec daemon provided by Libreswan , issue the following command as root : To confirm that the daemon is now running: To ensure that Libreswan will start when the system starts, issue the following command as root : Configure any intermediate as well as host-based firewalls to permit the ipsec service. See Chapter 5, Using Firewalls for information on firewalls and allowing specific services to pass through. Libreswan requires the firewall to allow the following packets: UDP ports 500 and 4500 for the Internet Key Exchange ( IKE ) protocol Protocol 50 for Encapsulated Security Payload ( ESP ) IPsec packets Protocol 51 for Authenticated Header ( AH ) IPsec packets (uncommon) We present three examples of using Libreswan to set up an IPsec VPN. The first example connects two hosts together so that they can communicate securely. The second example connects two sites together to form one network. The third example supports remote users, known as road warriors in this context. 4.6.2. Creating VPN Configurations Using Libreswan Libreswan does not use the terms " source " and " destination " or " server " and " client " since IKE/IPsec are peer-to-peer protocols. Instead, it uses the terms " left " and " right " to refer to end points (the hosts). This also allows the same configuration to be used on both end points in most cases, although many administrators choose to always use " left " for the local host and " right " for the remote host. There are four commonly used methods for authentication of endpoints: Pre-Shared Keys ( PSK ) is the simplest authentication method. PSKs should consist of random characters and have a length of at least 20 characters. In FIPS mode, PSKs need to comply with a minimum strength requirement depending on the integrity algorithm used. It is recommended not to use PSKs shorter than 64 random characters. Raw RSA keys are commonly used for static host-to-host or subnet-to-subnet IPsec configurations. The hosts are manually configured with each other's public RSA key. This method does not scale well when dozens or more hosts all need to set up IPsec tunnels to each other. X.509 certificates are commonly used for large-scale deployments where there are many hosts that need to connect to a common IPsec gateway. A central certificate authority ( CA ) is used to sign RSA certificates for hosts or users. This central CA is responsible for relaying trust, including the revocations of individual hosts or users. NULL Authentication is used to gain mesh encryption without authentication. It protects against passive attacks but does not protect against active attacks. However, since IKEv2 allows asymmetrical authentication methods, NULL Authentication can also be used for internet-scale Opportunistic IPsec, where clients authenticate the server, but servers do not authenticate the client. This model is similar to secure websites using TLS (also known as https:// websites). In addition to these authentication methods, an additional authentication can be added to protect against possible attacks by quantum computers. This additional authentication method is called Postquantum Preshared Keys ( PPK ). Individual clients or groups of clients can use their own PPK by specifying a PPK ID ( PPKID ) that corresponds to an out-of-band configured PreShared Key. See Section 4.6.9, "Using the Protection against Quantum Computers" . 4.6.3.
Creating Host-To-Host VPN Using Libreswan To configure Libreswan to create a host-to-host IPsec VPN, between two hosts referred to as " left " and " right " , enter the following commands as root on both of the hosts ( " left " and " right " ) to create new raw RSA key pairs: This generates an RSA key pair for the host. The process of generating RSA keys can take many minutes, especially on virtual machines with low entropy. To view the host public key so it can be specified in a configuration as the " left " side, issue the following command as root on the host where the new hostkey was added, using the CKAID returned by the " newhostkey " command: You will need this key to add to the configuration file on both hosts as explained below. If you forgot the CKAID, you can obtain a list of all host keys on a machine using: The secret part of the keypair is stored inside the " NSS database " which resides in /etc/ipsec.d/*.db . To make a configuration file for this host-to-host tunnel, the lines leftrsasigkey= and rightrsasigkey= from above are added to a custom configuration file placed in the /etc/ipsec.d/ directory. Using an editor running as root , create a file with a suitable name in the following format: /etc/ipsec.d/my_host-to-host.conf Edit the file as follows: Public keys can also be configured by their CKAID instead of by their RSAID. In that case use " leftckaid= " instead of " leftrsasigkey= " You can use the identical configuration file on both left and right hosts. Libreswan automatically detects if it is " left " or " right " based on the specified IP addresses or hostnames. If one of the hosts is a mobile host, which implies the IP address is not known in advance, then on the mobile client use %defaultroute as its IP address. This will pick up the dynamic IP address automatically. On the static server host that accepts connections from incoming mobile hosts, specify the mobile host using %any for its IP address. Ensure the leftrsasigkey value is obtained from the " left " host and the rightrsasigkey value is obtained from the " right " host. The same applies when using leftckaid and rightckaid . Restart ipsec to ensure it reads the new configuration and if configured to start on boot, to confirm that the tunnels establish: When using the auto=start option, the IPsec tunnel should be established within a few seconds. You can manually load and start the tunnel by entering the following commands as root : 4.6.3.1. Verifying Host-To-Host VPN Using Libreswan The IKE negotiation takes place on UDP ports 500 and 4500. IPsec packets show up as Encapsulated Security Payload (ESP) packets. The ESP protocol has no ports. When the VPN connection needs to pass through a NAT router, the ESP packets are encapsulated in UDP packets on port 4500. To verify that packets are being sent through the VPN tunnel, issue a command as root in the following format: Where interface is the interface known to carry the traffic. To end the capture with tcpdump , press Ctrl + C . Note The tcpdump command interacts a little unexpectedly with IPsec . It only sees the outgoing encrypted packet, not the outgoing plaintext packet. It does see the encrypted incoming packet, as well as the decrypted incoming packet. If possible, run tcpdump on a router between the two machines and not on one of the endpoints itself. When using the Virtual Tunnel Interface (VTI), tcpdump on the physical interface shows ESP packets, while tcpdump on the VTI interface shows the cleartext traffic. 
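The exact commands for the capture described above and for the traffic status check that the next paragraph calls for are not shown inline here. As a sketch for a typical Libreswan host, run as root, with eth0 standing in for the interface that carries the traffic:

tcpdump -n -i eth0 esp or udp port 500 or udp port 4500
ipsec whack --trafficstatus   # shows established tunnels and their byte counters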
To check that the tunnel is successfully established, and additionally see how much traffic has gone through the tunnel, enter the following command as root : 4.6.4. Configuring Site-to-Site VPN Using Libreswan In order for Libreswan to create a site-to-site IPsec VPN, joining together two networks, an IPsec tunnel is created between two hosts (the endpoints), which are configured to permit traffic from one or more subnets to pass through. They can therefore be thought of as gateways to the remote portion of the network. The configuration of the site-to-site VPN only differs from the host-to-host VPN in that one or more networks or subnets must be specified in the configuration file. To configure Libreswan to create a site-to-site IPsec VPN, first configure a host-to-host IPsec VPN as described in Section 4.6.3, "Creating Host-To-Host VPN Using Libreswan" and then copy or move the file to a file with a suitable name, such as /etc/ipsec.d/my_site-to-site.conf . Using an editor running as root , edit the custom configuration file /etc/ipsec.d/my_site-to-site.conf as follows: To bring the tunnels up, restart Libreswan or manually load and initiate all the connections using the following commands as root : 4.6.4.1. Verifying Site-to-Site VPN Using Libreswan Verifying that packets are being sent through the VPN tunnel is the same procedure as explained in Section 4.6.3.1, "Verifying Host-To-Host VPN Using Libreswan" . 4.6.5. Configuring Site-to-Site Single Tunnel VPN Using Libreswan Often, when a site-to-site tunnel is built, the gateways need to communicate with each other using their internal IP addresses instead of their public IP addresses. This can be accomplished using a single tunnel. If the left host, with host name west , has internal IP address 192.0.1.254 and the right host, with host name east , has internal IP address 192.0.2.254 , store the following configuration using a single tunnel to the /etc/ipsec.d/myvpn.conf file on both servers: 4.6.6. Configuring Subnet Extrusion Using Libreswan IPsec is often deployed in a hub-and-spoke architecture. Each leaf node has an IP range that is part of a larger range. Leaves communicate with each other through the hub. This is called subnet extrusion . Example 4.2. Configuring Simple Subnet Extrusion Setup In the following example, we configure the head office with 10.0.0.0/8 and two branches that use a smaller /24 subnet. At the head office: At the " branch1 " office, we use the same connection. Additionally, we use a pass-through connection to exclude our local LAN traffic from being sent through the tunnel: 4.6.7. Configuring IKEv2 Remote Access VPN Using Libreswan Road warriors are traveling users with mobile clients with a dynamically assigned IP address, such as laptops. These are authenticated using certificates. To avoid needing to use the old IKEv1 XAUTH protocol, IKEv2 is used in the following example: On the server: Where: left= 1.2.3.4 The 1.2.3.4 value specifies the actual IP address or host name of your server. leftcert=vpn-server.example.com This option specifies a certificate referring to its friendly name or nickname that has been used to import the certificate. Usually, the name is generated as a part of a PKCS #12 certificate bundle in the form of a .p12 file. See the pkcs12(1) and pk12util(1) man pages for more information. On the mobile client, the road warrior's device, use a slight variation of the configuration: Where: auto=start This option enables the user to connect to the VPN whenever the ipsec system service is started.
Replace it with auto=add if you want to establish the connection later. 4.6.8. Configuring IKEv1 Remote Access VPN Using Libreswan and XAUTH with X.509 Libreswan offers a method to natively assign an IP address and DNS information to roaming VPN clients as the connection is established by using the XAUTH IPsec extension. Extended authentication (XAUTH) can be deployed using PSK or X.509 certificates. Deploying using X.509 is more secure. Client certificates can be revoked by a certificate revocation list or by the Online Certificate Status Protocol ( OCSP ). With X.509 certificates, individual clients cannot impersonate the server. With a PSK, also called Group Password, this is theoretically possible. XAUTH requires the VPN client to additionally identify itself with a user name and password. For One-Time Passwords (OTP), such as Google Authenticator or RSA SecurID tokens, the one-time token is appended to the user password. There are three possible back ends for XAUTH: xauthby=pam This uses the configuration in /etc/pam.d/pluto to authenticate the user. Pluggable Authentication Modules (PAM) can be configured to use various back ends by itself. It can use the system account user-password scheme, an LDAP directory, a RADIUS server, or a custom password authentication module. See the Using Pluggable Authentication Modules (PAM) chapter for more information. xauthby=file This uses the /etc/ipsec.d/passwd configuration file (it should not be confused with the /etc/ipsec.d/nsspassword file). The format of this file is similar to the Apache .htpasswd file and the Apache htpasswd command can be used to create entries in this file. However, after the user name and password, a third column is required with the connection name of the IPsec connection used, for example when using a conn remoteusers to offer VPN to remote users, a password file entry should look as follows: user1:$apr1$MIwQ3DHb$1I69LzTnZhnCT2DPQmAOK.:remoteusers Note When using the htpasswd command, the connection name has to be manually added after the user:password part on each line. xauthby=alwaysok The server always pretends the XAUTH user and password combination is correct. The client still has to specify a user name and a password, although the server ignores these. This should only be used when users are already identified by X.509 certificates, or when testing the VPN without needing an XAUTH back end. An example server configuration with X.509 certificates: When xauthfail is set to soft, instead of hard, authentication failures are ignored, and the VPN is set up as if the user authenticated properly. A custom updown script can be used to check for the environment variable XAUTH_FAILED . Such users can then be redirected, for example, using iptables DNAT, to a " walled garden " where they can contact the administrator or renew a paid subscription to the service. VPN clients use the modecfgdomain value and the DNS entries to redirect queries for the specified domain to these specified nameservers. This allows roaming users to access internal-only resources using the internal DNS names. Note that while IKEv2 supports a comma-separated list of domain names and nameserver IP addresses using modecfgdomains and modecfgdns , the IKEv1 protocol only supports one domain name, and Libreswan only supports up to two nameserver IP addresses. Optionally, to send a banner text to VPN clients, use the modecfgbanner option. If leftsubnet is not 0.0.0.0/0 , split tunneling configuration requests are sent automatically to the client.
For example, when using leftsubnet=10.0.0.0/8 , the VPN client would only send traffic for 10.0.0.0/8 through the VPN. On the client, the user has to input a user password, which depends on the back end used. For example: xauthby=file The administrator generated the password and stored it in the /etc/ipsec.d/passwd file. xauthby=pam The password is obtained at the location specified in the PAM configuration in the /etc/pam.d/pluto file. xauthby=alwaysok The password is not checked and always accepted. Use this option for testing purposes or if you want to ensure compatibility for xauth-only clients. Additional Resources For more information about XAUTH, see the Extended Authentication within ISAKMP/Oakley (XAUTH) Internet-Draft document. 4.6.9. Using the Protection against Quantum Computers Using IKEv1 with PreShared Keys provided protection against quantum attackers. The redesign of IKEv2 does not offer this protection natively. Libreswan offers the use of Postquantum Preshared Keys ( PPK ) to protect IKEv2 connections against quantum attacks. To enable optional PPK support, add ppk=yes to the connection definition. To require PPK, add ppk=insist . Then, each client can be given a PPK ID with a secret value that is communicated out-of-band (and preferably quantum safe). The PPKs should be very strong in randomness and not be based on dictionary words. The PPK ID and PPK data itself are stored in ipsec.secrets , for example: The PPKS option refers to static PPKs. There is an experimental function to use one-time-pad-based Dynamic PPKs. Upon each connection, a new part of a one-time pad is used as the PPK. When used, that part of the dynamic PPK inside the file is overwritten with zeroes to prevent re-use. If there is no more one-time pad material left, the connection fails. See the ipsec.secrets(5) man page for more information. Warning The implementation of dynamic PPKs is provided as a Technology Preview and this functionality should be used with caution. See the 7.5 Release Notes for more information. 4.6.10. Additional Resources The following sources of information provide additional resources regarding Libreswan and the ipsec daemon. 4.6.10.1. Installed Documentation ipsec(8) man page - Describes command options for ipsec . ipsec.conf(5) man page - Contains information on configuring ipsec . ipsec.secrets(5) man page - Describes the format of the ipsec.secrets file. ipsec_auto(8) man page - Describes the use of the auto command line client for manipulating Libreswan IPsec connections established using automatic exchanges of keys. ipsec_rsasigkey(8) man page - Describes the tool used to generate RSA signature keys. /usr/share/doc/libreswan- version / 4.6.10.2. Online Documentation https://libreswan.org The website of the upstream project. https://libreswan.org/wiki The Libreswan Project Wiki. https://libreswan.org/man/ All Libreswan man pages. NIST Special Publication 800-77: Guide to IPsec VPNs Practical guidance to organizations on implementing security services based on IPsec. | [
"~]# yum install libreswan",
"~]USD yum info libreswan",
"~]# systemctl stop ipsec ~]# rm /etc/ipsec.d/*db",
"~]# ipsec initnss Initializing NSS database",
"~]# certutil -N -d sql:/etc/ipsec.d Enter a password which will be used to encrypt your keys. The password should be at least 8 characters long, and should contain at least one non-alphabetic character. Enter new password: Re-enter password:",
"~]# systemctl start ipsec",
"~]USD systemctl status ipsec * ipsec.service - Internet Key Exchange (IKE) Protocol Daemon for IPsec Loaded: loaded (/usr/lib/systemd/system/ipsec.service; disabled; vendor preset: disabled) Active: active (running) since Sun 2018-03-18 18:44:43 EDT; 3s ago Docs: man:ipsec(8) man:pluto(8) man:ipsec.conf(5) Process: 20358 ExecStopPost=/usr/sbin/ipsec --stopnflog (code=exited, status=0/SUCCESS) Process: 20355 ExecStopPost=/sbin/ip xfrm state flush (code=exited, status=0/SUCCESS) Process: 20352 ExecStopPost=/sbin/ip xfrm policy flush (code=exited, status=0/SUCCESS) Process: 20347 ExecStop=/usr/libexec/ipsec/whack --shutdown (code=exited, status=0/SUCCESS) Process: 20634 ExecStartPre=/usr/sbin/ipsec --checknflog (code=exited, status=0/SUCCESS) Process: 20631 ExecStartPre=/usr/sbin/ipsec --checknss (code=exited, status=0/SUCCESS) Process: 20369 ExecStartPre=/usr/libexec/ipsec/_stackmanager start (code=exited, status=0/SUCCESS) Process: 20366 ExecStartPre=/usr/libexec/ipsec/addconn --config /etc/ipsec.conf --checkconfig (code=exited, status=0/SUCCESS) Main PID: 20646 (pluto) Status: \"Startup completed.\" CGroup: /system.slice/ipsec.service └─20646 /usr/libexec/ipsec/pluto --leak-detective --config /etc/ipsec.conf --nofork",
"~]# systemctl enable ipsec",
"~]# ipsec newhostkey --output /etc/ipsec.d/hostkey.secrets Generated RSA key pair with CKAID 14936e48e756eb107fa1438e25a345b46d80433f was stored in the NSS database",
"~]# ipsec showhostkey --left --ckaid 14936e48e756eb107fa1438e25a345b46d80433f # rsakey AQPFKElpV leftrsasigkey=0sAQPFKElpV2GdCF0Ux9Kqhcap53Kaa+uCgduoT2I3x6LkRK8N+GiVGkRH4Xg+WMrzRb94kDDD8m/BO/Md+A30u0NjDk724jWuUU215rnpwvbdAob8pxYc4ReSgjQ/DkqQvsemoeF4kimMU1OBPNU7lBw4hTBFzu+iVUYMELwQSXpremLXHBNIamUbe5R1+ibgxO19l/PAbZwxyGX/ueBMBvSQ+H0UqdGKbq7UgSEQTFa4/gqdYZDDzx55tpZk2Z3es+EWdURwJOgGiiiIFuBagasHFpeu9Teb1VzRyytnyNiJCBVhWVqsB4h6eaQ9RpAMmqBdBeNHfXwb6/hg+JIKJgjidXvGtgWBYNDpG40fEFh9USaFlSdiHO+dmGyZQ74Rg9sWLtiVdlH1YEBUtQb8f8FVry9wSn6AZqPlpGgUdtkTYUCaaifsYH4hoIA0nku4Fy/Ugej89ZdrSN7Lt+igns4FysMmBOl9Wi9+LWnfl+dm4Nc6UNgLE8kZc+8vMJGkLi4SYjk2/MFYgqGX/COxSCPBFUZFiNK7Wda0kWea/FqE1heem7rvKAPIiqMymjSmytZI9hhkCD16pCdgrO3fJXsfAUChYYSPyPQClkavvBL/wNK9zlaOwssTaKTj4Xn90SrZaxTEjpqUeQ==",
"~]# ipsec showhostkey --list < 1 > RSA keyid: AQPFKElpV ckaid: 14936e48e756eb107fa1438e25a345b46d80433f",
"conn mytunnel [email protected] left=192.1.2.23 leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ== [email protected] right=192.1.2.45 rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== authby=rsasig # load and initiate automatically auto=start",
"~]# systemctl restart ipsec",
"~]# ipsec auto --add mytunnel ~]# ipsec auto --up mytunnel",
"~]# tcpdump -n -i interface esp or udp port 500 or udp port 4500 00:32:32.632165 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1a), length 132 00:32:32.632592 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1a), length 132 00:32:32.632592 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 7, length 64 00:32:33.632221 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1b), length 132 00:32:33.632731 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1b), length 132 00:32:33.632731 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 8, length 64 00:32:34.632183 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1c), length 132 00:32:34.632607 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1c), length 132 00:32:34.632607 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 9, length 64 00:32:35.632233 IP 192.1.2.45 > 192.1.2.23: ESP(spi=0x63ad7e17,seq=0x1d), length 132 00:32:35.632685 IP 192.1.2.23 > 192.1.2.45: ESP(spi=0x4841b647,seq=0x1d), length 132 00:32:35.632685 IP 192.0.2.254 > 192.0.1.254: ICMP echo reply, id 2489, seq 10, length 64",
"~]# ipsec whack --trafficstatus 006 #2: \"mytunnel\", type=ESP, add_time=1234567890, inBytes=336, outBytes=336, id='@east'",
"conn mysubnet also=mytunnel leftsubnet=192.0.1.0/24 rightsubnet=192.0.2.0/24 auto=start conn mysubnet6 also=mytunnel connaddrfamily=ipv6 leftsubnet=2001:db8:0:1::/64 rightsubnet=2001:db8:0:2::/64 auto=start conn mytunnel [email protected] left=192.1.2.23 leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ== [email protected] right=192.1.2.45 rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== authby=rsasig",
"~]# ipsec auto --add mysubnet",
"~]# ipsec auto --add mysubnet6",
"~]# ipsec auto --up mysubnet 104 \"mysubnet\" #1: STATE_MAIN_I1: initiate 003 \"mysubnet\" #1: received Vendor ID payload [Dead Peer Detection] 003 \"mytunnel\" #1: received Vendor ID payload [FRAGMENTATION] 106 \"mysubnet\" #1: STATE_MAIN_I2: sent MI2, expecting MR2 108 \"mysubnet\" #1: STATE_MAIN_I3: sent MI3, expecting MR3 003 \"mysubnet\" #1: received Vendor ID payload [CAN-IKEv2] 004 \"mysubnet\" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_RSA_SIG cipher=aes_128 prf=oakley_sha group=modp2048} 117 \"mysubnet\" #2: STATE_QUICK_I1: initiate 004 \"mysubnet\" #2: STATE_QUICK_I2: sent QI2, IPsec SA established tunnel mode {ESP=>0x9414a615 <0x1a8eb4ef xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none}",
"~]# ipsec auto --up mysubnet6 003 \"mytunnel\" #1: received Vendor ID payload [FRAGMENTATION] 117 \"mysubnet\" #2: STATE_QUICK_I1: initiate 004 \"mysubnet\" #2: STATE_QUICK_I2: sent QI2, IPsec SA established tunnel mode {ESP=>0x06fe2099 <0x75eaa862 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none}",
"conn mysubnet [email protected] leftrsasigkey=0sAQOrlo+hOafUZDlCQmXFrje/oZm [...] W2n417C/4urYHQkCvuIQ== left=192.1.2.23 leftsourceip=192.0.1.254 leftsubnet=192.0.1.0/24 [email protected] rightrsasigkey=0sAQO3fwC6nSSGgt64DWiYZzuHbc4 [...] D/v8t5YTQ== right=192.1.2.45 rightsourceip=192.0.2.254 rightsubnet=192.0.2.0/24 auto=start authby=rsasig",
"conn branch1 left=1.2.3.4 leftid=@headoffice leftsubnet=0.0.0.0/0 leftrsasigkey=0sA[...] # right=5.6.7.8 rightid=@branch1 rightsubnet=10.0.1.0/24 rightrsasigkey=0sAXXXX[...] # auto=start authby=rsasig conn branch2 left=1.2.3.4 leftid=@headoffice leftsubnet=0.0.0.0/0 leftrsasigkey=0sA[...] # right=10.11.12.13 rightid=@branch2 rightsubnet=10.0.2.0/24 rightrsasigkey=0sAYYYY[...] # auto=start authby=rsasig",
"conn branch1 left=1.2.3.4 leftid=@headoffice leftsubnet=0.0.0.0/0 leftrsasigkey=0sA[...] # right=10.11.12.13 rightid=@branch2 rightsubnet=10.0.1.0/24 rightrsasigkey=0sAYYYY[...] # auto=start authby=rsasig conn passthrough left=1.2.3.4 right=0.0.0.0 leftsubnet=10.0.1.0/24 rightsubnet=10.0.1.0/24 authby=never type=passthrough auto=route",
"conn roadwarriors ikev2=insist # Support (roaming) MOBIKE clients (RFC 4555) mobike=yes fragmentation=yes left=1.2.3.4 # if access to the LAN is given, enable this, otherwise use 0.0.0.0/0 # leftsubnet=10.10.0.0/16 leftsubnet=0.0.0.0/0 leftcert=vpn-server.example.com leftid=%fromcert leftxauthserver=yes leftmodecfgserver=yes right=%any # trust our own Certificate Agency rightca=%same # pick an IP address pool to assign to remote users # 100.64.0.0/16 prevents RFC1918 clashes when remote users are behind NAT rightaddresspool=100.64.13.100-100.64.13.254 # if you want remote clients to use some local DNS zones and servers modecfgdns=\"1.2.3.4, 5.6.7.8\" modecfgdomains=\"internal.company.com, corp\" rightxauthclient=yes rightmodecfgclient=yes authby=rsasig # optionally, run the client X.509 ID through pam to allow/deny client # pam-authorize=yes # load connection, don't initiate auto=add # kill vanished roadwarriors dpddelay=1m dpdtimeout=5m dpdaction=%clear",
"conn to-vpn-server ikev2=insist # pick up our dynamic IP left=%defaultroute leftsubnet=0.0.0.0/0 leftcert=myname.example.com leftid=%fromcert leftmodecfgclient=yes # right can also be a DNS hostname right=1.2.3.4 # if access to the remote LAN is required, enable this, otherwise use 0.0.0.0/0 # rightsubnet=10.10.0.0/16 rightsubnet=0.0.0.0/0 # trust our own Certificate Agency rightca=%same authby=rsasig # allow narrowing to the server's suggested assigned IP and remote subnet narrowing=yes # Support (roaming) MOBIKE clients (RFC 4555) mobike=yes # Initiate connection auto=start",
"conn xauth-rsa ikev2=never auto=add authby=rsasig pfs=no rekey=no left=ServerIP leftcert=vpn.example.com #leftid=%fromcert leftid=vpn.example.com leftsendcert=always leftsubnet=0.0.0.0/0 rightaddresspool=10.234.123.2-10.234.123.254 right=%any rightrsasigkey=%cert modecfgdns=\"1.2.3.4,8.8.8.8\" modecfgdomains=example.com modecfgbanner=\"Authorized access is allowed\" leftxauthserver=yes rightxauthclient=yes leftmodecfgserver=yes rightmodecfgclient=yes modecfgpull=yes xauthby=pam dpddelay=30 dpdtimeout=120 dpdaction=clear ike_frag=yes # for walled-garden on xauth failure # xauthfail=soft # leftupdown=/custom/_updown",
"@west @east : PPKS \"user1\" \"thestringismeanttobearandomstr\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-Securing_Virtual_Private_Networks |
Chapter 10. Multiple regions and zones configuration for a cluster on VMware vSphere | Chapter 10. Multiple regions and zones configuration for a cluster on VMware vSphere As an administrator, you can specify multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. This configuration reduces the risk of a hardware failure or network outage causing your cluster to fail. A failure domain configuration lists parameters that create a topology. The following list states some of these parameters: computeCluster datacenter datastore networks resourcePool After you define multiple regions and zones for your OpenShift Container Platform cluster, you can create or migrate nodes to another failure domain. Important If you want to migrate pre-existing OpenShift Container Platform cluster compute nodes to a failure domain, you must define a new compute machine set for the compute node. This new machine set can scale up a compute node according to the topology of the failure domain, and scale down the pre-existing compute node. The cloud provider adds topology.kubernetes.io/zone and topology.kubernetes.io/region labels to any compute node provisioned by a machine set resource. For more information, see Creating a compute machine set . 10.1. Specifying multiple regions and zones for your cluster on vSphere You can configure the infrastructures.config.openshift.io configuration resource to specify multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. Topology-aware features for the cloud controller manager and the vSphere Container Storage Interface (CSI) Operator Driver require information about the vSphere topology where you host your OpenShift Container Platform cluster. This topology information exists in the infrastructures.config.openshift.io configuration resource. Before you specify regions and zones for your cluster, you must ensure that all data centers and compute clusters contain tags, so that the cloud provider can add labels to your node. For example, if data-center-1 represents region-a and compute-cluster-1 represents zone-1 , the cloud provider adds an openshift-region category label with a value of region-a to data-center-1 . Additionally, the cloud provider adds an openshift-zone category tag with a value of zone-1 to compute-cluster-1 . Note You can migrate control plane nodes with vMotion capabilities to a failure domain. After you add these nodes to a failure domain, the cloud provider adds topology.kubernetes.io/zone and topology.kubernetes.io/region labels to these nodes. Prerequisites You created the openshift-region and openshift-zone tag categories on the vCenter server. You ensured that each data center and compute cluster contains tags that represent the name of their associated region or zone, or both. Optional: If you defined API and Ingress static IP addresses to the installation program, you must ensure that all regions and zones share a common layer 2 network. This configuration ensures that API and Ingress Virtual IP (VIP) addresses can interact with your cluster. Important If you do not supply tags to all data centers and compute clusters before you create a node or migrate a node, the cloud provider cannot add the topology.kubernetes.io/zone and topology.kubernetes.io/region labels to the node. This means that services cannot route traffic to your node. 
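For example, if you use the govc CLI to manage vCenter, a minimal sketch of creating the tag categories and attaching the tags to the example objects might look like the following. All names and inventory paths here are illustrative and must match your own vCenter inventory: $ govc tags.category.create -d "OpenShift region" openshift-region $ govc tags.category.create -d "OpenShift zone" openshift-zone $ govc tags.create -c openshift-region region-a $ govc tags.create -c openshift-zone zone-1 $ govc tags.attach -c openshift-region region-a /data-center-1 $ govc tags.attach -c openshift-zone zone-1 /data-center-1/host/compute-cluster-1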
Procedure Edit the infrastructures.config.openshift.io custom resource definition (CRD) of your cluster to specify multiple regions and zones in the failureDomains section of the resource by running the following command: $ oc edit infrastructures.config.openshift.io cluster Example infrastructures.config.openshift.io CRD for an instance named cluster with multiple regions and zones defined in its configuration spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: vSphere vsphere: vcenters: - datacenters: - <region_a_data_center> - <region_b_data_center> port: 443 server: <your_vcenter_server> failureDomains: - name: <failure_domain_1> region: <region_a> zone: <zone_a> server: <your_vcenter_server> topology: datacenter: <region_a_dc> computeCluster: "</region_a_dc/host/zone_a_cluster>" resourcePool: "</region_a_dc/host/zone_a_cluster/Resources/resource_pool>" datastore: "</region_a_dc/datastore/datastore_a>" networks: - port-group - name: <failure_domain_2> region: <region_a> zone: <zone_b> server: <your_vcenter_server> topology: computeCluster: </region_a_dc/host/zone_b_cluster> datacenter: <region_a_dc> datastore: </region_a_dc/datastore/datastore_a> networks: - port-group - name: <failure_domain_3> region: <region_b> zone: <zone_a> server: <your_vcenter_server> topology: computeCluster: </region_b_dc/host/zone_a_cluster> datacenter: <region_b_dc> datastore: </region_b_dc/datastore/datastore_b> networks: - port-group nodeNetworking: external: {} internal: {} Important After you create a failure domain and you define it in a CRD for a VMware vSphere cluster, you must not modify or delete the failure domain. Doing any of these actions with this configuration can impact the availability and fault tolerance of a control plane machine. Save the resource file to apply the changes. Additional resources Parameters for the cluster-wide infrastructure CRD 10.2. Enabling a multiple layer 2 network for your cluster You can configure your cluster to use a multiple layer 2 network configuration so that data transfer among nodes can span across multiple networks. Prerequisites You configured network connectivity among machines so that cluster components can communicate with each other. Procedure If you installed your cluster with installer-provisioned infrastructure, you must ensure that all control plane nodes share a common layer 2 network. Additionally, ensure that compute nodes that are configured for Ingress pod scheduling share a common layer 2 network. If you need compute nodes to span multiple layer 2 networks, you can create infrastructure nodes that can host Ingress pods. If you need to provision workloads across additional layer 2 networks, you can create compute machine sets on vSphere and then move these workloads to your target layer 2 networks. If you installed your cluster on infrastructure that you provided, which is defined as a user-provisioned infrastructure, complete the following actions to meet your needs: Configure your API load balancer and network so that the load balancer can reach the API and Machine Config Server on the control plane nodes. Configure your Ingress load balancer and network so that the load balancer can reach the Ingress pods on the compute or infrastructure nodes. Additional resources Installing a cluster on vSphere with network customizations Creating infrastructure machine sets for production environments Creating a compute machine set
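After the machine sets scale up nodes in the target failure domains, you can confirm that the cloud provider applied the expected labels to your nodes. The following check is a minimal sketch that lists the standard topology labels described earlier in this chapter: $ oc get nodes -L topology.kubernetes.io/region -L topology.kubernetes.io/zone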
10.3. Parameters for the cluster-wide infrastructure CRD You must set values for specific parameters in the cluster-wide infrastructure, infrastructures.config.openshift.io , Custom Resource Definition (CRD) to define multiple regions and zones for your OpenShift Container Platform cluster that runs on a VMware vSphere instance. The following table lists mandatory parameters for defining multiple regions and zones for your OpenShift Container Platform cluster: Parameter Description vcenters The vCenter servers for your OpenShift Container Platform cluster. You can specify either a single vCenter or up to three vCenters, which is currently a Technology Preview feature. datacenters vCenter data centers where VMs associated with the OpenShift Container Platform cluster will be created or presently exist. port The TCP port of the vCenter server. server The fully qualified domain name (FQDN) of the vCenter server. failureDomains The list of failure domains. name The name of the failure domain. region The value of the openshift-region tag assigned to the topology for the failure domain. zone The value of the openshift-zone tag assigned to the topology for the failure domain. topology The vCenter resources associated with the failure domain. datacenter The data center associated with the failure domain. computeCluster The full path of the compute cluster associated with the failure domain. resourcePool The full path of the resource pool associated with the failure domain. datastore The full path of the datastore associated with the failure domain. networks A list of port groups associated with the failure domain. Only one port group may be defined. Additional resources Specifying multiple regions and zones for your cluster on vSphere | [
"oc edit infrastructures.config.openshift.io cluster",
"spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: vSphere vsphere: vcenters: - datacenters: - <region_a_data_center> - <region_b_data_center> port: 443 server: <your_vcenter_server> failureDomains: - name: <failure_domain_1> region: <region_a> zone: <zone_a> server: <your_vcenter_server> topology: datacenter: <region_a_dc> computeCluster: \"</region_a_dc/host/zone_a_cluster>\" resourcePool: \"</region_a_dc/host/zone_a_cluster/Resources/resource_pool>\" datastore: \"</region_a_dc/datastore/datastore_a>\" networks: - port-group - name: <failure_domain_2> region: <region_a> zone: <zone_b> server: <your_vcenter_server> topology: computeCluster: </region_a_dc/host/zone_b_cluster> datacenter: <region_a_dc> datastore: </region_a_dc/datastore/datastore_a> networks: - port-group - name: <failure_domain_3> region: <region_b> zone: <zone_a> server: <your_vcenter_server> topology: computeCluster: </region_b_dc/host/zone_a_cluster> datacenter: <region_b_dc> datastore: </region_b_dc/datastore/datastore_b> networks: - port-group nodeNetworking: external: {} internal: {}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_vmware_vsphere/post-install-vsphere-zones-regions-configuration |
Chapter 2. Credentials | Chapter 2. Credentials You can use credentials to store secrets that can be used for authentication purposes with resources, such as decision environments, rulebook activations and projects for Event-Driven Ansible controller, and projects for automation controller. Credentials authenticate users when launching jobs against machines and importing project content from a version control system. You can grant users and teams the ability to use these credentials without exposing the credential to the user. If a user moves to a different team or leaves the organization, you do not have to rekey all of your systems just because that credential was previously available. 2.1. Credentials list view When you log in to the Ansible Automation Platform and select Automation Decisions Infrastructure Credentials , the Credentials page has a pre-loaded Decision Environment Container Registry credential. When you create your own credentials, they will be added to this list view. From the menu bar, you can search for credentials in the Name search field. You also have the following options in the menu bar: Choose how fields are shown in the list view by clicking the Manage columns icon. You have four options for arranging your fields: Column - Shows the column in the table. Description - Shows the column when the item is expanded as a full width description. Expanded - Shows the column when the item is expanded as a detail. Hidden - Hides the column. Choose between a List view and a Card view by clicking the icons. 2.2. Setting up credentials You can create a credential to use with a source plugin or a private container registry that you select. You can make your credential available to a team or individuals. Procedure Log in to the Ansible Automation Platform Dashboard. From the navigation panel, select Automation Decisions Infrastructure Credentials . Click Create credential . Insert the following: Name Insert the name. Description This field is optional. Organization Click the list to select an organization or select Default . Credential type Click the list to select your Credential type. Note When you select the credential type, the Type Details section is displayed with fields that are applicable for the credential type you chose. Complete the fields that are applicable to the credential type you selected. Click Create credential . After saving the credential, the credentials details page is displayed. From there or the Credentials list view, you can edit or delete it. 2.3. Editing a credential You can edit existing credentials to ensure the appropriate level of access for your organization. Procedure Edit the credential by using one of these methods: From the Credentials list view, click the Edit credential icon next to the desired credential. From the Credentials list view, select the name of the credential, and click Edit credential . Edit the appropriate details and click Save credential . 2.4. Copying a credential When setting up a new credential with field inputs that are similar to your existing credentials, you can use the Copy credential feature in the Details tab to duplicate information instead of manually entering it. While setting up credentials can be a lengthy process, the ability to copy the required fields from an existing credential saves time and, in some cases, reduces the possibility of human error. Procedure On the Credentials list page, click the name of the credential that you want to copy. This takes you to the Details tab of the credential.
Click Copy credential in the upper right of the Details tab. Note You can also click the Copy credential icon next to the desired credential on the Credentials list page. A message is displayed confirming that your selected credential has been copied: "<Name of credential> copied." Click the Back to credentials tab to view the credential you just copied. The copied credential is displayed with the same name as the original credential followed by a time stamp in 24-hour format (for example, <Name of credential> @ 17:26:30 ). Edit the details you prefer for your copied credential. Click Save credential . 2.5. Deleting a credential You can delete credentials if they are no longer needed for your organization. Procedure Delete the credential by using one of these methods: From the Credentials list view, click the More Actions icon ... next to the desired credential and click Delete credential . From the Credentials list view, select the name of the credential, click the More Actions icon ... next to Edit credential , and click Delete credential . In the pop-up window, select Yes, I confirm that I want to delete this credential . Note If your credential is still in use by other resources in your organization, a warning message is displayed letting you know that the credential cannot be deleted. Also, if your credential is being used in an event stream, you cannot delete it until the event stream is deleted or attached to a different credential. In general, avoid deleting a credential that is in use because it can lead to broken activations. Click Delete credential . You can delete multiple credentials at a time by selecting the checkbox next to each credential and clicking the More Actions icon ... in the menu bar and then clicking Delete selected credentials . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_decisions/eda-credentials
Uninstall | Uninstall builds for Red Hat OpenShift 1.3 Uninstalling Builds Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.3/html/uninstall/index |
7.14. bind | 7.14. bind 7.14.1. RHSA-2013:0550 - Moderate: bind security and enhancement update Updated bind packages that fix one security issue and add one enhancement are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly. DNS64 is used to automatically generate DNS records so IPv6-based clients can access IPv4 systems through a NAT64 server. Security Fix CVE-2012-5689 A flaw was found in the DNS64 implementation in BIND when using Response Policy Zones (RPZ). If a remote attacker sent a specially-crafted query to a named server that is using RPZ rewrite rules, named could exit unexpectedly with an assertion failure. Note that DNS64 support is not enabled by default. Enhancement BZ# 906312 Previously, it was impossible to configure the maximum number of responses sent per second to one client. This allowed remote attackers to conduct traffic amplification attacks using DNS queries with spoofed source IP addresses. With this update, it is possible to use the new "rate-limit" configuration option in named.conf and configure the maximum number of queries which the server responds to. Refer to the BIND documentation for more details about the "rate-limit" option. All bind users are advised to upgrade to these updated packages, which contain patches to correct this issue and add this enhancement. After installing the update, the BIND daemon (named) will be restarted automatically. 7.14.2. RHBA-2013:0475 - bind bug fix update Updated bind packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. BIND (Berkeley Internet Name Domain) is an implementation of the DNS (Domain Name System) protocols. BIND includes a DNS server (named), which resolves host names to IP addresses; a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating properly. Bug Fixes BZ#827282 Previously, the init script sometimes reported a spurious error message "named.pid: No such file or directory" due to a race condition when the DNS server (named) was stopped. This spurious error message has been suppressed and is no longer reported in this scenario. BZ# 837165 Due to a race condition in the rbtdb.c source file, the named daemon could terminate unexpectedly with the INSIST error code. This bug has been fixed in the code and the named daemon no longer crashes in the described scenario. BZ#853806 Previously, BIND rejected "forward" and "forwarders" statements in static-stub zones. Consequently, it was impossible to forward certain queries to specified servers. With this update, BIND accepts those options for static-stub zones properly, thus fixing this bug. All users of bind are advised to upgrade to these updated packages, which fix these bugs. 7.14.3. RHSA-2013:0689 - Important: bind security and bug fix update Updated bind packages that fix one security issue and one bug are now available for Red Hat Enterprise Linux 6.
The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly. Security Fix CVE-2013-2266 A denial of service flaw was found in the libdns library. A remote attacker could use this flaw to send a specially-crafted DNS query to named that, when processed, would cause named to use an excessive amount of memory, or possibly crash. Note: This update disables the syntax checking of NAPTR (Naming Authority Pointer) resource records. Bug Fix BZ# 928439 Previously, rebuilding the bind-dyndb-ldap source RPM failed with a "/usr/include/dns/view.h:76:21: error: dns/rrl.h: No such file or directory" error. All bind users are advised to upgrade to these updated packages, which contain patches to correct these issues. After installing the update, the BIND daemon (named) will be restarted automatically. 7.14.4. RHBA-2013:1177 - bind bug fix update Updated bind packages that fix one bug are now available for Red Hat Enterprise Linux 6. BIND (Berkeley Internet Name Domain) is an implementation of the DNS (Domain Name System) protocols. BIND includes a DNS server (named), which resolves host names to IP addresses; a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating properly. Bug Fix BZ# 996955 Due to a missing gss_release_name() call, the BIND DNS server leaked memory when the "tkey-gssapi-credential" option was used in the BIND configuration. This update properly frees all memory in case the "tkey-gssapi-credential" option is used, and BIND no longer leaks memory when GSSAPI credentials are used internally by the server for authentication. Users of bind are advised to upgrade to these updated packages, which fix this bug. After installing the update, the BIND daemon (named) will be restarted automatically. 7.14.5. RHSA-2013:1114 - Important: bind security update Updated bind packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly. Security Fix CVE-2013-4854 A denial of service flaw was found in BIND. A remote attacker could use this flaw to send a specially-crafted DNS query to named that, when processed, would cause named to crash when rejecting the malformed query. All bind users are advised to upgrade to these updated packages, which contain a backported patch to correct this issue. After installing the update, the BIND daemon (named) will be restarted automatically.
20.39. Managing Snapshots | 20.39. Managing Snapshots The sections that follow describe actions that can be done to manipulate guest virtual machine snapshots. Snapshots take the disk, memory, and device state of a guest virtual machine at a specified point in time, and save it for future use. Snapshots have many uses, from saving a "clean" copy of an OS image to saving a guest virtual machine's state before what may be a potentially destructive operation. Snapshots are identified with a unique name. See the libvirt upstream website for documentation of the XML format used to represent properties of snapshots. Important Red Hat Enterprise Linux 7 only supports creating snapshots while the guest virtual machine is paused or powered down. Creating snapshots of running guests (also known as live snapshots ) is available on Red Hat Virtualization . For details, call your service representative. 20.39.1. Creating Snapshots The virsh snapshot-create command creates a snapshot for a guest virtual machine with the properties specified in the guest virtual machine's XML file (such as <name> and <description> elements, as well as <disks> ). To create a snapshot, run: The guest virtual machine name, id, or uid may be used as the guest virtual machine requirement. The XML requirement is a string that must at the very least contain the name , description , and disks elements. The remaining optional arguments are as follows: --disk-only - the memory state of the guest virtual machine is not included in the snapshot. If the XML file string is completely omitted, libvirt will choose a value for all fields. The new snapshot will become current, as listed by snapshot-current. In addition, the snapshot will only include the disk state rather than the usual system checkpoint with guest virtual machine state. Disk snapshots are faster than full system checkpoints, but reverting to a disk snapshot may require fsck or journal replays, since it is like the disk state at the point when the power cord is abruptly pulled. Note that mixing --halt and --disk-only loses any data that was not flushed to disk at the time. --halt - causes the guest virtual machine to be left in an inactive state after the snapshot is created. Mixing --halt and --disk-only loses any data that was not flushed to disk at the time as well as the memory state. --redefine specifies that all XML elements produced by virsh snapshot-dumpxml are valid; it can be used to migrate snapshot hierarchy from one machine to another, to recreate hierarchy for the case of a transient guest virtual machine that goes away and is later recreated with the same name and UUID, or to make slight alterations in the snapshot metadata (such as host-specific aspects of the guest virtual machine XML embedded in the snapshot). When this flag is supplied, the xmlfile argument is mandatory, and the guest virtual machine's current snapshot will not be altered unless the --current flag is also given. --no-metadata creates the snapshot, but any metadata is immediately discarded (that is, libvirt does not treat the snapshot as current, and cannot revert to the snapshot unless --redefine is later used to teach libvirt about the metadata again). --reuse-external , if used, and the snapshot XML requests an external snapshot with a destination of an existing file, the destination must exist, and is reused; otherwise, a snapshot is refused to avoid losing contents of the existing files.
--quiesce libvirt will try to freeze and unfreeze the guest virtual machine's mounted file system(s), using the guest agent. However, if the guest virtual machine does not have a guest agent, snapshot creation will fail. The snapshot can contain the memory state of the guest virtual machine. The snapshot must be external. --atomic causes libvirt to guarantee that the snapshot either succeeds, or fails with no changes. Note that not all hypervisors support this. If this flag is not specified, then some hypervisors may fail after partially performing the action, and virsh dumpxml must be used to see whether any partial changes occurred. Existence of snapshot metadata will prevent attempts to undefine a persistent guest virtual machine. However, for transient guest virtual machines, snapshot metadata is silently lost when the guest virtual machine quits running (whether by a command such as destroy or by an internal guest action). 20.39.2. Creating a Snapshot for the Current Guest Virtual Machine The virsh snapshot-create-as command creates a snapshot for a guest virtual machine with the properties specified in the domain XML file (such as name and description elements). If these values are not included in the XML string, libvirt will choose a value. To create a snapshot, run: The remaining optional arguments are as follows: --print-xml creates appropriate XML for snapshot-create as output, rather than actually creating a snapshot. --halt keeps the guest virtual machine in an inactive state after the snapshot is created. --disk-only creates a snapshot that does not include the guest virtual machine state. --memspec can be used to control whether a checkpoint is internal or external. The flag is mandatory, followed by a memspec of the form [file=]name[,snapshot=type] , where type can be none, internal, or external. To include a literal comma in file=name, escape it with a second comma. The --diskspec option can be used to control how --disk-only and external checkpoints create external files. This option can occur multiple times, according to the number of <disk> elements in the domain XML. Each <diskspec> is in the form disk [,snapshot=type][,driver=type][,file=name] . If --diskspec is omitted for a specific disk, the default behavior in the virtual machine configuration is used. To include a literal comma in disk or in file=name , escape it with a second comma. A literal --diskspec must precede each diskspec unless all three of domain , name , and description are also present. For example, a diskspec of vda,snapshot=external,file=/path/to,,new results in the following XML: Important Red Hat recommends the use of external snapshots, as they are more flexible and reliable when handled by other virtualization tools. To create an external snapshot, use the virsh snapshot-create-as command with the --diskspec vda,snapshot=external option. If this option is not used, virsh creates internal snapshots, which are not recommended for use due to their lack of stability and optimization. For more information, see Section A.13, "Workaround for Creating External Snapshots with libvirt" . If --reuse-external is specified, and the domain XML or diskspec option requests an external snapshot with a destination of an existing file, then the destination must exist, and is reused; otherwise, a snapshot is refused to avoid losing contents of the existing files. If --quiesce is specified, libvirt will try to use the guest agent to freeze and unfreeze the guest virtual machine's mounted file systems.
However, if the domain has no guest agent, snapshot creation will fail. Currently, this requires --disk-only to be passed as well. --no-metadata creates snapshot data but any metadata is immediately discarded (that is, libvirt does not treat the snapshot as current, and cannot revert to the snapshot unless snapshot-create is later used to teach libvirt about the metadata again). This flag is incompatible with --print-xml . --atomic will cause libvirt to guarantee that the snapshot either succeeds, or fails with no changes. Note that not all hypervisors support this. If this flag is not specified, then some hypervisors may fail after partially performing the action, and virsh dumpxml must be used to see whether any partial changes occurred. Warning Creating snapshots of KVM guests running on a 64-bit ARM platform host currently does not work. Note that KVM on 64-bit ARM is not supported by Red Hat. 20.39.3. Displaying the Snapshot Currently in Use The virsh snapshot-current command is used to query which snapshot is currently in use. If snapshotname is not used, snapshot XML for the guest virtual machine's current snapshot (if there is one) will be displayed as output. If --name is specified, just the current snapshot name instead of the full XML will be sent as output. If --security-info is supplied, security-sensitive information will be included in the XML. Using snapshotname generates a request to make the existing named snapshot become the current snapshot, without reverting it to the guest virtual machine. 20.39.4. snapshot-edit This command is used to edit the snapshot that is currently in use: If both snapshotname and --current are specified, it forces the edited snapshot to become the current snapshot. If snapshotname is omitted, then --current must be supplied, in order to edit the current snapshot. This is equivalent to the command sequence below, but it also includes some error checking: If --rename is specified, then the snapshot is renamed. If --clone is specified, then changing the snapshot name will create a clone of the snapshot metadata. If neither is specified, then the edits will not change the snapshot name. Note that changing a snapshot name must be done with care, since the contents of some snapshots, such as internal snapshots within a single qcow2 file, are accessible only from the original snapshot name. 20.39.5. snapshot-info The snapshot-info domain command displays information about the snapshots. To use, run: Outputs basic information about a specified snapshot , or the current snapshot with --current . 20.39.6. snapshot-list List all of the available snapshots for the given guest virtual machine, defaulting to show columns for the snapshot name, creation time, and guest virtual machine state. To use, run: The optional arguments are as follows: --parent adds a column to the output table giving the name of the parent of each snapshot. This option may not be used with --roots or --tree . --roots filters the list to show only the snapshots that have no parents. This option may not be used with --parent or --tree . --tree displays output in a tree format, listing just snapshot names. This option may not be used with --roots or --parent . --from filters the list to snapshots which are children of the given snapshot or, if --current is provided, will cause the list to start at the current snapshot. When used in isolation or with --parent , the list is limited to direct children unless --descendants is also present.
When used with --tree , the use of --descendants is implied. This option is not compatible with --roots . Note that the starting point of --from or --current is not included in the list unless the --tree option is also present. If --leaves is specified, the list will be filtered to just snapshots that have no children. Likewise, if --no-leaves is specified, the list will be filtered to just snapshots with children. (Note that omitting both options does no filtering, while providing both options will either produce the same list or error out, depending on whether the server recognizes the flags.) Filtering options are not compatible with --tree . If --metadata is specified, the list will be filtered to just snapshots that involve libvirt metadata, and thus would prevent the undefining of a persistent guest virtual machine, or be lost on destroy of a transient guest virtual machine. Likewise, if --no-metadata is specified, the list will be filtered to just snapshots that exist without the need for libvirt metadata. If --inactive is specified, the list will be filtered to snapshots that were taken when the guest virtual machine was shut off. If --active is specified, the list will be filtered to snapshots that were taken when the guest virtual machine was running, and where the snapshot includes the memory state to revert to that running state. If --disk-only is specified, the list will be filtered to snapshots that were taken when the guest virtual machine was running, but where the snapshot includes only disk state. If --internal is specified, the list will be filtered to snapshots that use internal storage of existing disk images. If --external is specified, the list will be filtered to snapshots that use external files for disk images or memory state. 20.39.7. snapshot-dumpxml The virsh snapshot-dumpxml domain snapshot command outputs the snapshot XML for the guest virtual machine's snapshot named snapshot. To use, run: The --security-info option will also include security-sensitive information. Use virsh snapshot-current to easily access the XML of the current snapshot. 20.39.8. snapshot-parent Outputs the name of the parent snapshot, if any, for the given snapshot, or for the current snapshot with --current . To use, run: 20.39.9. snapshot-revert Reverts the given domain to the snapshot specified by snapshot , or to the current snapshot with --current . Warning Be aware that this is a destructive action; any changes in the domain since the last snapshot was taken will be lost. Also note that the state of the domain after snapshot-revert is complete will be the state of the domain at the time the original snapshot was taken. To revert the snapshot, run: Normally, reverting to a snapshot leaves the domain in the state it was at the time the snapshot was created, except that a disk snapshot with no guest virtual machine state leaves the domain in an inactive state. Passing either the --running or --paused option will perform additional state changes (such as booting an inactive domain, or pausing a running domain). Since transient domains cannot be inactive, it is required to use one of these flags when reverting to a disk snapshot of a transient domain. There are two cases where a snapshot revert involves extra risk, which requires the use of --force to proceed.
One is the case of a snapshot that lacks full domain information for reverting configuration; since libvirt cannot prove that the current configuration matches what was in use at the time of the snapshot, supplying --force assures libvirt that the snapshot is compatible with the current configuration (and if it is not, the domain will likely fail to run). The other is the case of reverting from a running domain to an active state where a new hypervisor has to be created rather than reusing the existing hypervisor, because it implies drawbacks such as breaking any existing VNC or Spice connections; this condition happens with an active snapshot that uses a provably incompatible configuration, as well as with an inactive snapshot that is combined with the --start or --pause flag. 20.39.10. snapshot-delete The virsh snapshot-delete domain command deletes the snapshot for the specified domain. To do this, run: This command deletes the snapshot for the domain named snapshot , or the current snapshot with --current . If this snapshot has child snapshots, changes from this snapshot will be merged into the children. If the option --children is used, then it will delete this snapshot and any children of this snapshot. If --children-only is used, then it will delete any children of this snapshot, but leave this snapshot intact. These two flags are mutually exclusive. If --metadata is used, it will delete the snapshot's metadata maintained by libvirt , while leaving the snapshot contents intact for access by external tools; otherwise deleting a snapshot also removes its data contents from that point in time. | [
"virsh snapshot-create domain XML file [--redefine [--current] [--no-metadata] [--halt] [--disk-only] [--reuse-external] [--quiesce] [--atomic]",
"snapshot-create-as domain {[--print-xml] | [--no-metadata] [--halt] [--reuse-external]} [name] [description] [--disk-only [--quiesce]] [--atomic] [[--memspec memspec]] [--diskspec] diskspec]",
"<disk name='vda' snapshot='external'> <source file='/path/to,new'/> </disk>",
"virsh snapshot-current domain {[--name] | [--security-info] | [snapshotname]}",
"virsh snapshot-edit domain [snapshotname] [--current] {[--rename] [--clone]}",
"virsh snapshot-dumpxml dom name > snapshot.xml vi snapshot.xml [note - this can be any editor] virsh snapshot-create dom snapshot.xml --redefine [--current]",
"snapshot-info domain { snapshot | --current}",
"virsh snapshot-list domain [{--parent | --roots | --tree}] [{[--from] snapshot | --current} [--descendants]] [--metadata] [--no-metadata] [--leaves] [--no-leaves] [--inactive] [--active] [--disk-only] [--internal] [--external]",
"virsh snapshot-dumpxml domain snapshot [--security-info]",
"virsh snapshot-parent domain { snapshot | --current}",
"virsh snapshot-revert domain { snapshot | --current} [{--running | --paused}] [--force]",
"virsh snapshot-delete domain { snapshot | --current} [--metadata] [{--children | --children-only}]"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guest_virtual_machines_with_virsh-managing_snapshots |
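A combined illustration of the virsh snapshot commands described above follows. This is a hypothetical session; the guest name rhel7 and the snapshot names s1 and s2 are placeholders, not values from the reference text:
# Show the snapshot hierarchy, then list only leaf snapshots taken while the guest was shut off
virsh snapshot-list rhel7 --tree
virsh snapshot-list rhel7 --leaves --inactive
# Revert to snapshot s1, ensuring the domain ends up running afterwards
virsh snapshot-revert rhel7 s1 --running
# Delete s2 together with all of its children, then drop only libvirt's metadata for s1
virsh snapshot-delete rhel7 s2 --children
virsh snapshot-delete rhel7 s1 --metadata
Note that --tree is kept separate from the filtering flags, since the two are not compatible, as described above.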
Using systemd unit files to customize and optimize your system | Using systemd unit files to customize and optimize your system Red Hat Enterprise Linux 8 Optimize system performance and extend configuration with systemd Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_systemd_unit_files_to_customize_and_optimize_your_system/index |
27.2. Defining Self-Service Settings | 27.2. Defining Self-Service Settings Self-service access control rules define the operations that an entity can perform on itself. These rules define only what attributes a user (or other IdM entity) can edit on their personal entries. Three self-service rules exist by default: A rule for editing some general attributes in the personal entry, including given name and surname, phone numbers, and addresses. A rule to edit personal passwords, including two Samba passwords, the Kerberos password, and the general user password. A rule to manage personal SSH keys. 27.2.1. Creating Self-Service Rules from the Web UI Open the IPA Server tab in the top menu, and select the Self Service Permissions subtab. Click the Add link at the top of the list of self-service ACIs. Enter the name of the rule in the pop-up window. Spaces are allowed. Select the checkboxes next to the attributes that this ACI will permit users to edit. Click the Add button to save the new self-service ACI. 27.2.2. Creating Self-Service Rules from the Command Line A new self-service rule can be added using the selfservice-add command. There are two required options: --permissions to set whether the ACI grants write, add, or delete permission, and --attrs to give the full list of attributes to which this ACI grants permission. 27.2.3. Editing Self-Service Rules In the self-service entry in the web UI, the only element that can be edited is the list of attributes that are included in the ACI. The checkboxes can be selected or deselected. Figure 27.1. Self-Service Edit Page With the command line, self-service rules are edited using the ipa selfservice-mod command. The --attrs option overwrites the existing attribute list entirely, so always include the complete list of attributes along with any new attributes. Important Include all of the attributes when modifying a self-service rule, including existing ones. | [
"ipa selfservice-add \"Users can manage their own name details\" --permissions=write --attrs=givenname,displayname,title,initials ----------------------------------------------------------- Added selfservice \"Users can manage their own name details\" ----------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials",
"ipa selfservice-mod \"Users can manage their own name details\" --attrs=givenname,displayname,title,initials,surname -------------------------------------------------------------- Modified selfservice \"Users can manage their own name details\" -------------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/self-service |
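To complement the selfservice-add and selfservice-mod examples above, existing rules can be inspected before editing. This is a minimal sketch, assuming an enrolled client with the ipa command-line tools configured; the rule name is the one used in the examples above:
# List all self-service permissions, then display one rule in detail
ipa selfservice-find
ipa selfservice-show "Users can manage their own name details"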
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_single_sign-on_with_jboss_eap/con_making-open-source-more-inclusive |
Chapter 29. Configuring a custom PKI | Chapter 29. Configuring a custom PKI Some platform components, such as the web console, use Routes for communication and must trust other components' certificates to interact with them. If you are using a custom public key infrastructure (PKI), you must configure it so its privately signed CA certificates are recognized across the cluster. You can leverage the Proxy API to add cluster-wide trusted CA certificates. You must do this either during installation or at runtime. During installation , configure the cluster-wide proxy . You must define your privately signed CA certificates in the install-config.yaml file's additionalTrustBundle setting. The installation program generates a ConfigMap named user-ca-bundle that contains the additional CA certificates you defined. The Cluster Network Operator then creates a trusted-ca-bundle ConfigMap that merges these CA certificates with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle; this ConfigMap is referenced in the Proxy object's trustedCA field. At runtime , modify the default Proxy object to include your privately signed CA certificates (part of the cluster's proxy enablement workflow). This involves creating a ConfigMap that contains the privately signed CA certificates that should be trusted by the cluster, and then modifying the proxy resource with the trustedCA referencing the privately signed certificates' ConfigMap. Note The installer configuration's additionalTrustBundle field and the proxy resource's trustedCA field are used to manage the cluster-wide trust bundle; additionalTrustBundle is used at install time and the proxy's trustedCA is used at runtime. The trustedCA field is a reference to a ConfigMap containing the custom certificate and key pair used by the cluster component. 29.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings.
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: $ ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 29.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Warning Enabling the cluster-wide proxy causes the Machine Config Operator (MCO) to trigger node reboot.
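Before modifying the Proxy object, it can be useful to inspect its current state and the trust bundle that the installation program generated. A quick read-only check, assuming the oc CLI is logged in to the cluster:
$ oc get proxy/cluster -o yaml
$ oc get configmap user-ca-bundle -n openshift-config -o yaml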
Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a config map that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The config map name that will be referenced from the Proxy object. 4 The config map must be in the openshift-config namespace. Create the config map from this file: $ oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: $ oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https . Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http . This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses. 3 A comma-separated list of destination domain names, domains, IP addresses (or other network CIDRs), and port numbers to exclude from proxying. Note Port numbers are only supported when configuring IPv6 addresses. Port numbers are not supported when configuring IPv4 addresses. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy fields are set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 29.3.
Certificate injection using Operators Once your custom CA certificate is added to the cluster via ConfigMap, the Cluster Network Operator merges the user-provided and system CA certificates into a single bundle and injects the merged bundle into the Operator requesting the trust bundle injection. Important After adding a config.openshift.io/inject-trusted-cabundle="true" label to the config map, existing data in it is deleted. The Cluster Network Operator takes ownership of a config map and only accepts ca-bundle as data. You must use a separate config map to store service-ca.crt by using the service.beta.openshift.io/inject-cabundle=true annotation or a similar configuration. Adding a config.openshift.io/inject-trusted-cabundle="true" label and service.beta.openshift.io/inject-cabundle=true annotation on the same config map can cause issues. Operators request this injection by creating an empty ConfigMap with the following label: config.openshift.io/inject-trusted-cabundle="true" An example of the empty ConfigMap: apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: "true" name: ca-inject 1 namespace: apache 1 Specifies the empty ConfigMap name. The Operator mounts this ConfigMap into the container's local trust store. Note Adding a trusted CA certificate is only needed if the certificate is not included in the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. Certificate injection is not limited to Operators. The Cluster Network Operator injects certificates across any namespace when an empty ConfigMap is created with the config.openshift.io/inject-trusted-cabundle=true label. The ConfigMap can reside in any namespace, but the ConfigMap must be mounted as a volume to each container within a pod that requires a custom CA. For example: apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: ... spec: ... containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: ca-inject items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2 1 ca-bundle.crt is required as the ConfigMap key. 2 tls-ca-bundle.pem is required as the ConfigMap path. | [
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:",
"apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4",
"oc create -f user-ca-bundle.yaml",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5",
"config.openshift.io/inject-trusted-cabundle=\"true\"",
"apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: \"true\" name: ca-inject 1 namespace: apache",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: spec: containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: ca-inject items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/configuring-a-custom-pki |
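As a follow-up to the injection example above, you can confirm that the Cluster Network Operator populated the labeled config map. This is a minimal check, reusing the ca-inject name and apache namespace from the example; the merged bundle should appear under the ca-bundle.crt key in place of the empty data section:
$ oc get configmap ca-inject -n apache -o yaml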
E.6. Source Editor | E.6. Source Editor The Source Editor is a simple text editor which is aware of XML Schema formatting rules. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/source_editor |
Chapter 1. Overview | Chapter 1. Overview AMQ C++ is a library for developing messaging applications. It enables you to write C++ applications that send and receive AMQP messages. AMQ C++ is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.8 Release Notes . AMQ C++ is based on the Proton API from Apache Qpid . For detailed API documentation, see the AMQ C++ API reference . 1.1. Key features An event-driven API that simplifies integration with existing applications SSL/TLS for secure communication Flexible SASL authentication Automatic reconnect and failover Seamless conversion between AMQP and language-native data types Access to all the features and capabilities of AMQP 1.0 1.2. Supported standards and protocols AMQ C++ supports the following industry-recognized standards and network protocols: Version 1.0 of the Advanced Message Queueing Protocol (AMQP) Versions 1.0, 1.1, 1.2, and 1.3 of the Transport Layer Security (TLS) protocol, the successor to SSL Simple Authentication and Security Layer (SASL) mechanisms supported by Cyrus SASL , including ANONYMOUS, PLAIN, SCRAM, EXTERNAL, and GSSAPI (Kerberos) Modern TCP with IPv6 1.3. Supported configurations AMQ C++ supports the OS and language versions listed below. For more information, see Red Hat AMQ 7 Supported Configurations . Red Hat Enterprise Linux 6 with GNU C++, compiling as C++03 or C++0x (partial C++11 support) Red Hat Enterprise Linux 7 and 8 with GNU C++, compiling as C++03 or C++11 Microsoft Windows 10 Pro with Microsoft Visual Studio 2015 or newer Microsoft Windows Server 2012 R2 and 2016 with Microsoft Visual Studio 2015 or newer AMQ C++ is supported in combination with the following AMQ components and versions: All versions of AMQ Broker All versions of AMQ Interconnect A-MQ 6 versions 6.2.1 and newer 1.4. Terms and concepts This section introduces the core API entities and describes how they operate together. Table 1.1. API terms Entity Description Container A top-level container of connections. Connection A channel for communication between two peers on a network. It contains sessions. Session A context for sending and receiving messages. It contains senders and receivers. Sender A channel for sending messages to a target. It has a target. Receiver A channel for receiving messages from a source. It has a source. Source A named point of origin for messages. Target A named destination for messages. Message An application-specific piece of information. Delivery A message transfer. AMQ C++ sends and receives messages . Messages are transferred between connected peers over senders and receivers . Senders and receivers are established over sessions . Sessions are established over connections . Connections are established between two uniquely identified containers . Though a connection can have multiple sessions, often this is not needed. The API allows you to ignore sessions unless you require them. A sending peer creates a sender to send messages. The sender has a target that identifies a queue or topic at the remote peer. A receiving peer creates a receiver to receive messages. The receiver has a source that identifies a queue or topic at the remote peer. The sending of a message is called a delivery . The message is the content sent, including all metadata such as headers and annotations. 
The delivery is the protocol exchange associated with the transfer of that content. To indicate that a delivery is complete, either the sender or the receiver settles it. When the other side learns that it has been settled, it will no longer communicate about that delivery. The receiver can also indicate whether it accepts or rejects the message. 1.5. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: $ cd <project-dir> | [
"cd <project-dir>"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_cpp_client/overview |
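As a practical sketch of getting started with the library (the source file name send.cpp is hypothetical, and the linker flag assumes the Qpid Proton C++ development package is installed), a program written against this API is typically compiled and run like this on Linux:
$ g++ send.cpp -o send -std=c++11 -lqpid-proton-cpp
$ ./send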
Chapter 6. Working with nodes | Chapter 6. Working with nodes 6.1. Viewing and listing the nodes in your Red Hat OpenShift Service on AWS cluster You can list all the nodes in your cluster to obtain information such as status, age, memory usage, and details about the nodes. When you perform node management operations, the CLI interacts with node objects that are representations of actual node hosts. The master uses the information from node objects to validate nodes with health checks. 6.1.1. About listing all the nodes in a cluster You can get detailed information on the nodes in the cluster. The following command lists all nodes: $ oc get nodes The following example is a cluster with healthy nodes: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.31.3 node1.example.com Ready worker 7h v1.31.3 node2.example.com Ready worker 7h v1.31.3 The following example is a cluster with one unhealthy node: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.31.3 node1.example.com NotReady,SchedulingDisabled worker 7h v1.31.3 node2.example.com Ready worker 7h v1.31.3 The conditions that trigger a NotReady status are shown later in this section. The -o wide option provides additional information on nodes. $ oc get nodes -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.31.3 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.31.3 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.31.3 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev The following command lists information about a single node: $ oc get node <node> For example: $ oc get node node1.example.com Example output NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.31.3 The following command provides more detailed information about a specific node, including the reason for the current condition: $ oc describe node <node> For example: $ oc describe node node1.example.com Note The following example contains some values that are specific to Red Hat OpenShift Service on AWS.
Example output Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.31.3-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.31.3 Kube-Proxy Version: v1.31.3 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ovn-kubernetes ovnkube-node-t4dsn 
80m (0%) 0 (0%) 1630Mi (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #... 1 The name of the node. 2 The role of the node, either master or worker . 3 The labels applied to the node. 4 The annotations applied to the node. 5 The taints applied to the node. 6 The node conditions and status. The conditions stanza lists the Ready , PIDPressure , MemoryPressure , DiskPressure , and OutOfDisk status. These conditions are described later in this section. 7 The IP address and hostname of the node. 8 The pod resources and allocatable resources. 9 Information about the node host. 10 The pods on the node. 11 The events reported by the node. Among the information shown for nodes, the following node conditions appear in the output of the commands shown in this section: Table 6.1. Node Conditions Condition Description Ready If true , the node is healthy and ready to accept pods. If false , the node is not healthy and is not accepting pods. If unknown , the node controller has not received a heartbeat from the node for the node-monitor-grace-period (the default is 40 seconds). DiskPressure If true , the disk capacity is low. MemoryPressure If true , the node memory is low. PIDPressure If true , there are too many processes on the node. OutOfDisk If true , the node has insufficient free space for adding new pods. NetworkUnavailable If true , the network for the node is not correctly configured. NotReady If true , one of the underlying components, such as the container runtime or network, is experiencing issues or is not yet configured. SchedulingDisabled Pods cannot be scheduled for placement on the node. 6.1.2. Listing pods on a node in your cluster You can list all the pods on a specific node. Procedure To list all or selected pods on selected nodes: $ oc get pod --selector=<nodeSelector> $ oc get pod --selector=kubernetes.io/os Or: $ oc get pod -l=<nodeSelector> $ oc get pod -l kubernetes.io/os=linux To list all pods on a specific node, including terminated pods: $ oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename> 6.1.3. Viewing memory and CPU usage statistics on your nodes You can display usage statistics about nodes, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. Metrics must be installed to view the usage statistics.
Procedure To view the usage statistics: $ oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72% To view the usage statistics for nodes with labels: $ oc adm top node --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != . 6.2. Working with nodes As an administrator, you can perform several tasks to make your clusters more efficient. You can use the oc adm command to cordon, uncordon, and drain a specific node. Note Cordoning and draining are only allowed on worker nodes that are part of Red Hat OpenShift Cluster Manager machine pools. 6.2.1. Understanding how to evacuate pods on nodes Evacuating pods allows you to migrate all or selected pods from a given node or nodes. You can only evacuate pods backed by a replication controller. The replication controller creates new pods on other nodes and removes the existing pods from the specified node(s). Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod-selector. Pod selectors are based on labels, so all the pods with the specified label will be evacuated. Procedure Mark the nodes unschedulable before performing the pod evacuation. Mark the node as unschedulable: $ oc adm cordon <node1> Example output node/<node1> cordoned Check that the node status is Ready,SchedulingDisabled : $ oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.31.3 Evacuate the pods using one of the following methods: Evacuate all or selected pods on one or more nodes: $ oc adm drain <node1> <node2> [--pod-selector=<pod_selector>] Force the deletion of bare pods using the --force option. When set to true , deletion continues even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set: $ oc adm drain <node1> <node2> --force=true To set a period of time in seconds for each pod to terminate gracefully, use --grace-period . If negative, the default value specified in the pod will be used: $ oc adm drain <node1> <node2> --grace-period=-1 Ignore pods managed by daemon sets using the --ignore-daemonsets flag set to true : $ oc adm drain <node1> <node2> --ignore-daemonsets=true Set the length of time to wait before giving up using the --timeout flag. A value of 0 sets an infinite length of time: $ oc adm drain <node1> <node2> --timeout=5s Delete pods even if there are pods using emptyDir volumes by setting the --delete-emptydir-data flag to true . Local data is deleted when the node is drained: $ oc adm drain <node1> <node2> --delete-emptydir-data=true List objects that will be migrated without actually performing the evacuation, using the --dry-run option set to true : $ oc adm drain <node1> <node2> --dry-run=true Instead of specifying specific node names (for example, <node1> <node2> ), you can use the --selector=<node_selector> option to evacuate pods on selected nodes. Mark the node as schedulable when done. $ oc adm uncordon <node1> 6.3.
Using the Node Tuning Operator Learn about the Node Tuning Operator and how you can use it to manage node-level tuning by orchestrating the tuned daemon. Purpose The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for Red Hat OpenShift Service on AWS as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for Red Hat OpenShift Service on AWS applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. The Node Tuning Operator is part of a standard Red Hat OpenShift Service on AWS installation in version 4.1 and later. Note In earlier versions of Red Hat OpenShift Service on AWS, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In Red Hat OpenShift Service on AWS 4.11 and later, this functionality is part of the Node Tuning Operator. 6.3.1. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. Procedure Run the following command to access an example Node Tuning Operator specification: oc get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for the Red Hat OpenShift Service on AWS platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to Red Hat OpenShift Service on AWS nodes based on node or pod labels and profile priorities. Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality will be deprecated in future versions of the Node Tuning Operator. 6.3.2. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of TuneD profiles and their names. The second, recommend: , defines the profile selection logic. 
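Before layering custom CRs on top of the default, it can help to see which TuneD profile the Operator currently applies to each node. One way to check, assuming the oc CLI is logged in to the cluster, is to list the Profile objects that the Operator maintains in its namespace:
$ oc get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator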
Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists TuneD profiles and their names. profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD # ... - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on a selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . 9 Turn reapply_sysctl functionality on or off for the TuneD daemon. Options are true for on and false for off. <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . 
This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. Example: Node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. This profile is applied if the containerized TuneD pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. Example: Machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. Cloud provider-specific TuneD profiles With this functionality, all Cloud provider-specific nodes can conveniently be assigned a TuneD profile specifically tailored to a given Cloud provider on a Red Hat OpenShift Service on AWS cluster. 
This can be accomplished without adding additional node labels or grouping nodes into machine config pools. This functionality takes advantage of spec.providerID node object values in the form of <cloud-provider>://<cloud-provider-specific-id> and writes the file /var/lib/ocp-tuned/provider with the value <cloud-provider> in NTO operand containers. The content of this file is then used by TuneD to load the provider-<cloud-provider> profile if such a profile exists. The openshift profile that both openshift-control-plane and openshift-node profiles inherit settings from is now updated to use this functionality through the use of conditional profile loading. Neither NTO nor TuneD currently include any Cloud provider-specific profiles. However, it is possible to create a custom profile provider-<cloud-provider> that will be applied to all Cloud provider-specific cluster nodes. Example GCE Cloud provider profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce Note Due to profile inheritance, any setting specified in the provider-<cloud-provider> profile will be overwritten by the openshift profile and its child profiles. 6.3.3. Default profiles set on a cluster The following are the default profiles set on a cluster. apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-${f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40 Starting with Red Hat OpenShift Service on AWS 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. You can use the oc exec command to view the contents of these profiles: $ oc exec $tuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \; 6.3.4. Supported TuneD daemon plugins Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm bootloader There is some dynamic tuning functionality provided by some of these plugins that is not supported. The following TuneD plugins are currently not supported: script systemd Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Additional resources Available TuneD Plugins Getting Started with TuneD | [
"oc get nodes",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.31.3 node1.example.com Ready worker 7h v1.31.3 node2.example.com Ready worker 7h v1.31.3",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.31.3 node1.example.com NotReady,SchedulingDisabled worker 7h v1.31.3 node2.example.com Ready worker 7h v1.31.3",
"oc get nodes -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.31.3 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.31.3 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.31.3 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev",
"oc get node <node>",
"oc get node node1.example.com",
"NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.31.3",
"oc describe node <node>",
"oc describe node node1.example.com",
"Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.31.3-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.31.3 Kube-Proxy Version: v1.31.3 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ovn-kubernetes ovnkube-node-t4dsn 80m (0%) 0 (0%) 
1630Mi (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #",
"oc get pod --selector=<nodeSelector>",
"oc get pod --selector=kubernetes.io/os",
"oc get pod -l=<nodeSelector>",
"oc get pod -l kubernetes.io/os=linux",
"oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename>",
"oc adm top nodes",
"NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72%",
"oc adm top node --selector=''",
"oc adm cordon <node1>",
"node/<node1> cordoned",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.31.3",
"oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]",
"oc adm drain <node1> <node2> --force=true",
"oc adm drain <node1> <node2> --grace-period=-1",
"oc adm drain <node1> <node2> --ignore-daemonsets=true",
"oc adm drain <node1> <node2> --timeout=5s",
"oc adm drain <node1> <node2> --delete-emptydir-data=true",
"oc adm drain <node1> <node2> --dry-run=true",
"oc adm uncordon <node1>",
"get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator",
"profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/nodes/working-with-nodes |
Chapter 30. Customizing project preferences | Chapter 30. Customizing project preferences In Business Central, a project is a part of your space and stores the related assets. You can add multiple projects in a space. For example, an organization includes various departments, such as HR, Payroll, Engineering, and R&D. You can map each department to a space in Business Central, along with adding respective projects. You can customize the project settings in Business Central. Also, you can create a new project or clone projects from an existing Git repository. Procedure In Business Central, select the Admin icon in the upper-right corner and select Projects . In the Project Preferences panel, select the preference you want to modify. The project preferences include: Project Importing : This preference consists of the following property: Select the Allow multiple projects to be imported on cluster to import multiple projects on a cluster. File exporting : This preference consists of the following properties: Table 30.1. File exporting properties Field Description PDF orientation Determines whether the PDF orientation is portrait or landscape. PDF units Determines whether the PDF unit is PT , MM , CN , or IN . PDF page format Determines whether the PDF page format is A[0-10] , B[0-10] , or C[0-10] . Spaces : This preference consists of the following properties: Table 30.2. Spaces properties Field Description Name The default name of the space that is created automatically if none exists. Owner The default owner of the space that is created automatically if none exists. Group ID The default group ID of the space that is created automatically if none exists. Alias (in singular) Determines the customized alias (singular) of the space. Alias (in plural) Determines the customized alias (plural) of the space. Default values : This preference consists of the following properties: Table 30.3. Default values properties Field Description Version The default version number of a project when creating projects. Description The default description of a project when creating projects. Branch The default branch to be used when using a Git repository. Assets Per Page Used to customize the number of assets per page in the project. The default value is 15 . Advanced GAV preferences : This preference consists of the following properties: Table 30.4. Advanced GAV preference properties Field Description Disable GAV conflict check? Determines whether to enable or disable the GAV conflict check. Disabling this checkbox enables the projects to contain the same GAV (group ID, artifact, and version). Allow child GAV edition? Determines whether to allow child or subprojects to contain GAV edition. Note Duplicate GAV detection is disabled for projects in the development mode. To enable duplicate GAV detection for a project in Business Central, go to project Settings General Settings Version and toggle the Development Mode option to OFF (if applicable). Click Save . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/customizing-project-preferences-proc_configuring-central |
Appendix B. Revision History | Appendix B. Revision History Revision History Revision 1.3-7 Fri Mar 7 2014 Eliska Slobodova Updated a note about pNFS. Revision 1.3-4 Tue Feb 18 2014 Eliska Slobodova Added a note about Mellanox SR-IOV support. Revision 1.3-3 Wed Jan 15 2014 Eliska Slobodova Updated a note about the Hyper-V balloon driver. Revision 1.3-2 Mon Feb 25 2013 Martin Prpic Added Subscription Asset Manager release notes. Revision 1.2-1 Thu Feb 21 2013 Martin Prpic Release of the Red Hat Enterprise Linux 6.4 Release Notes. Revision 1.1-14 Wed Dec 4 2012 Martin Prpic Release of the Red Hat Enterprise Linux 6.4 Beta Release Notes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_release_notes/appe-6.4_release_notes-revision_history |
Chapter 26. Load balancing with MetalLB | Chapter 26. Load balancing with MetalLB 26.1. Configuring MetalLB address pools As a cluster administrator, you can add, modify, and delete address pools. The MetalLB Operator uses the address pool custom resources to set the IP addresses that MetalLB can assign to services. The examples in this section assume the namespace is metallb-system . For more information about how to install the MetalLB Operator, see About MetalLB and the MetalLB Operator . 26.1.1. About the IPAddressPool custom resource The fields for the IPAddressPool custom resource are described in the following tables. Table 26.1. MetalLB IPAddressPool pool custom resource Field Type Description metadata.name string Specifies the name for the address pool. When you add a service, you can specify this pool name in the metallb.io/address-pool annotation to select an IP address from a specific pool. The names doc-example , silver , and gold are used throughout the documentation. metadata.namespace string Specifies the namespace for the address pool. Specify the same namespace that the MetalLB Operator uses. metadata.label string Optional: Specifies the key value pair assigned to the IPAddressPool . This can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement and L2Advertisement CRD to associate the IPAddressPool with the advertisement. spec.addresses string Specifies a list of IP addresses for the MetalLB Operator to assign to services. You can specify multiple ranges in a single pool; they will all share the same settings. Specify each range in CIDR notation or as starting and ending IP addresses separated with a hyphen. spec.autoAssign boolean Optional: Specifies whether MetalLB automatically assigns IP addresses from this pool. Specify false if you want to explicitly request an IP address from this pool with the metallb.io/address-pool annotation. The default value is true . spec.avoidBuggyIPs boolean Optional: When enabled, this ensures that IP addresses ending in .0 and .255 are not allocated from the pool. The default value is false . Some older consumer network equipment mistakenly blocks IP addresses ending in .0 and .255. You can assign IP addresses from an IPAddressPool to services and namespaces by configuring the spec.serviceAllocation specification. Table 26.2. MetalLB IPAddressPool custom resource spec.serviceAllocation subfields Field Type Description priority int Optional: Defines the priority between IP address pools when more than one IP address pool matches a service or namespace. A lower number indicates a higher priority. namespaces array (string) Optional: Specifies a list of namespaces that you can assign to IP addresses in an IP address pool. namespaceSelectors array (LabelSelector) Optional: Specifies namespace labels that you can assign to IP addresses from an IP address pool by using label selectors in a list format. serviceSelectors array (LabelSelector) Optional: Specifies service labels that you can assign to IP addresses from an address pool by using label selectors in a list format. 26.1.2. Configuring an address pool As a cluster administrator, you can add address pools to your cluster to control the IP addresses that MetalLB can assign to load-balancer services. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges.
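For reference before the procedure, the following is a minimal sketch that illustrates the autoAssign and avoidBuggyIPs fields from Table 26.1; the pool name and the address range here are hypothetical and are not part of the documented procedure:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: sketch-pool                # hypothetical name
  namespace: metallb-system
spec:
  addresses:
  - 192.0.2.0/24                   # documentation range, for illustration only
  autoAssign: true                 # MetalLB can assign from this pool automatically
  avoidBuggyIPs: true              # do not allocate addresses ending in .0 or .255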
Procedure Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75 1 This label assigned to the IPAddressPool can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement CRD to associate the IPAddressPool with the advertisement. Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Verification View the address pool: USD oc describe -n metallb-system IPAddressPool doc-example Example output Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: ... Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none> Confirm that the address pool name, such as doc-example , and the IP address ranges appear in the output. 26.1.3. Configure MetalLB address pool for VLAN As a cluster administrator, you can add address pools to your cluster to control the IP addresses on a created VLAN that MetalLB can assign to load-balancer services. Prerequisites Install the OpenShift CLI ( oc ). Configure a separate VLAN. Log in as a user with cluster-admin privileges. Procedure Create a file, such as ipaddresspool-vlan.yaml , that is similar to the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-vlan labels: zone: east 1 spec: addresses: - 192.168.100.1-192.168.100.254 2 1 This label assigned to the IPAddressPool can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement CRD to associate the IPAddressPool with the advertisement. 2 This IP range must match the subnet assigned to the VLAN on your network. To support layer 2 (L2) mode, the IP address range must be within the same subnet as the cluster nodes. Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool-vlan.yaml To ensure that this configuration applies to the VLAN, you must set gatewayConfig.ipForwarding in the spec to Global . Run the following command to edit the network configuration custom resource (CR): USD oc edit network.config.openshift.io/cluster Update the spec.defaultNetwork.ovnKubernetesConfig section to set gatewayConfig.ipForwarding to Global . The updated section should look like the following example: Example ... spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: gatewayConfig: ipForwarding: Global ...
apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false 26.1.4.3. Example: IPv4 and IPv6 addresses You can add address pools that use IPv4 and IPv6. You can specify multiple ranges in the addresses list, just as in the preceding IPv4 examples. Whether the service is assigned a single IPv4 address, a single IPv6 address, or both is determined by how you add the service. The spec.ipFamilies and spec.ipFamilyPolicy fields control how IP addresses are assigned to the service. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100 26.1.4.4. Example: Assign IP address pools to services or namespaces You can assign IP addresses from an IPAddressPool to services and namespaces that you specify. If you assign a service or namespace to more than one IP address pool, MetalLB uses an available IP address from the higher-priority IP address pool. If no IP addresses are available from the assigned IP address pools with a high priority, MetalLB uses available IP addresses from an IP address pool with lower priority or no priority. Note You can use the matchLabels label selector, the matchExpressions label selector, or both, for the namespaceSelectors and serviceSelectors specifications. This example demonstrates one label selector for each specification. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-service-allocation namespace: metallb-system spec: addresses: - 192.168.20.0/24 serviceAllocation: priority: 50 1 namespaces: 2 - namespace-a - namespace-b namespaceSelectors: 3 - matchLabels: zone: east serviceSelectors: 4 - matchExpressions: - key: security operator: In values: - S1 1 Assign a priority to the address pool. A lower number indicates a higher priority. 2 Assign one or more namespaces to the IP address pool in a list format. 3 Assign one or more namespace labels to the IP address pool by using label selectors in a list format. 4 Assign one or more service labels to the IP address pool by using label selectors in a list format. 26.1.5. Next steps Configuring MetalLB with an L2 advertisement and label Configuring MetalLB BGP peers Configuring services to use MetalLB 26.2. About advertising for the IP address pools You can configure MetalLB so that the IP address is advertised with layer 2 protocols, the BGP protocol, or both. With layer 2, MetalLB provides a fault-tolerant external IP address. With BGP, MetalLB provides fault-tolerance for the external IP address and load balancing. MetalLB supports advertising using L2 and BGP for the same set of IP addresses. MetalLB provides the flexibility to assign address pools to specific BGP peers, effectively to a subset of nodes on the network. This allows for more complex configurations, for example facilitating the isolation of nodes or the segmentation of the network. 26.2.1. About the BGPAdvertisement custom resource The fields for the BGPAdvertisements object are defined in the following table: Table 26.3. BGPAdvertisements configuration Field Type Description metadata.name string Specifies the name for the BGP advertisement. metadata.namespace string Specifies the namespace for the BGP advertisement. Specify the same namespace that the MetalLB Operator uses. spec.aggregationLength integer Optional: Specifies the number of bits to include in a 32-bit CIDR mask.
To aggregate the routes that the speaker advertises to BGP peers, the mask is applied to the routes for several service IP addresses and the speaker advertises the aggregated route. For example, with an aggregation length of 24 , the speaker can aggregate several 10.0.1.x/32 service IP addresses and advertise a single 10.0.1.0/24 route. spec.aggregationLengthV6 integer Optional: Specifies the number of bits to include in a 128-bit CIDR mask. For example, with an aggregation length of 124 , the speaker can aggregate several fc00:f853:0ccd:e799::x/128 service IP addresses and advertise a single fc00:f853:0ccd:e799::0/124 route. spec.communities string Optional: Specifies one or more BGP communities. Each community is specified as two 16-bit values separated by the colon character. Well-known communities must be specified as 16-bit values: NO_EXPORT : 65535:65281 NO_ADVERTISE : 65535:65282 NO_EXPORT_SUBCONFED : 65535:65283 Note You can also use community objects that are created along with the strings. spec.localPref integer Optional: Specifies the local preference for this advertisement. This BGP attribute applies to BGP sessions within the Autonomous System. spec.ipAddressPools string Optional: The list of IPAddressPools to advertise with this advertisement, selected by name. spec.ipAddressPoolSelectors string Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools . spec.nodeSelectors string Optional: NodeSelectors allows you to limit the nodes to announce as hops for the load balancer IP. When empty, all the nodes are announced as hops. spec.peers string Optional: Use a list to specify the metadata.name values for each BGPPeer resource that receives advertisements for the MetalLB service IP address. The MetalLB service IP address is assigned from the IP address pool. By default, the MetalLB service IP address is advertised to all configured BGPPeer resources. Use this field to limit the advertisement to specific BGPPeer resources. 26.2.2. Configuring MetalLB with a BGP advertisement and a basic use case Configure MetalLB as follows so that the peer BGP routers receive one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. Because the localPref and communities fields are not specified, the routes are advertised with localPref set to zero and no BGP communities. 26.2.2.1. Example: Advertise a basic address pool configuration with BGP Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a BGP advertisement.
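Before the next step, note that the spec.ipAddressPoolSelectors field from Table 26.3 can select pools by label instead of by name. The following is a minimal sketch, assuming a pool that carries the zone: east label shown earlier; the advertisement name is hypothetical:
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: bgpadvertisement-by-label  # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPoolSelectors:          # selects pools by label rather than by name
  - matchLabels:
      zone: east
The documented procedure continues with the name-based form: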
Create a file, such as bgpadvertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic Apply the configuration: USD oc apply -f bgpadvertisement.yaml 26.2.3. Configuring MetalLB with a BGP advertisement and an advanced use case Configure MetalLB as follows so that MetalLB assigns IP addresses to load-balancer services in the ranges between 203.0.113.200 and 203.0.113.203 and between fc00:f853:ccd:e799::0 and fc00:f853:ccd:e799::f . To explain the two BGP advertisements, consider an instance when MetalLB assigns the IP address of 203.0.113.200 to a service. With that IP address as an example, the speaker advertises two routes to BGP peers: 203.0.113.200/32 , with localPref set to 100 and the community set to the numeric value of the NO_ADVERTISE community. This specification indicates to the peer routers that they can use this route but they should not propagate information about this route to BGP peers. 203.0.113.200/30 , aggregates the load-balancer IP addresses assigned by MetalLB into a single route. MetalLB advertises the aggregated route to BGP peers with the community attribute set to 8000:800 . BGP peers propagate the 203.0.113.200/30 route to other BGP peers. When traffic is routed to a node with a speaker, the 203.0.113.200/32 route is used to forward the traffic into the cluster and to a pod that is associated with the service. As you add more services and MetalLB assigns more load-balancer IP addresses from the pool, peer routers receive one local route, 203.0.113.20x/32 , for each service, as well as the 203.0.113.200/30 aggregate route. Each service that you add generates the /30 route, but MetalLB deduplicates the routes to one BGP advertisement before communicating with peer routers. 26.2.3.1. Example: Advertise an advanced address pool configuration with BGP Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a BGP advertisement. Create a file, such as bgpadvertisement1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement1.yaml Create a file, such as bgpadvertisement2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124 Apply the configuration: USD oc apply -f bgpadvertisement2.yaml 26.2.4. 
Advertising an IP address pool from a subset of nodes To advertise an IP address from an IP address pool from a specific set of nodes only, use the .spec.nodeSelector specification in the BGPAdvertisement custom resource. This specification associates a pool of IP addresses with a set of nodes in the cluster. This is useful when you have nodes on different subnets in a cluster and you want to advertise an IP address from an address pool from a specific subnet, for example a public-facing subnet only. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool by using a custom resource: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400 Control which nodes in the cluster advertise the IP address from pool1 by defining the .spec.nodeSelector value in the BGPAdvertisement custom resource: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example spec: ipAddressPools: - pool1 nodeSelector: - matchLabels: kubernetes.io/hostname: NodeA - matchLabels: kubernetes.io/hostname: NodeB In this example, the IP address from pool1 is advertised from NodeA and NodeB only. 26.2.5. About the L2Advertisement custom resource The fields for the l2Advertisements object are defined in the following table: Table 26.4. L2 advertisements configuration Field Type Description metadata.name string Specifies the name for the L2 advertisement. metadata.namespace string Specifies the namespace for the L2 advertisement. Specify the same namespace that the MetalLB Operator uses. spec.ipAddressPools string Optional: The list of IPAddressPools to advertise with this advertisement, selected by name. spec.ipAddressPoolSelectors string Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools . spec.nodeSelectors string Optional: NodeSelectors allows you to limit the nodes to announce as hops for the load balancer IP. When empty, all the nodes are announced as hops. Important Limiting the nodes to announce as hops is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . spec.interfaces string Optional: The list of interfaces that are used to announce the load balancer IP. 26.2.6. Configuring MetalLB with an L2 advertisement Configure MetalLB as follows so that the IPAddressPool is advertised with the L2 protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool.
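For comparison with the BGP nodeSelector example above, the following is a minimal sketch of the Technology Preview spec.nodeSelectors field from Table 26.4; the advertisement name and host name are hypothetical, and the sketch assumes the pool1 pool created earlier:
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement-nodesel    # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
  - pool1                          # assumes the pool1 example above
  nodeSelectors:                   # Technology Preview: limit the announcing nodes
  - matchLabels:
      kubernetes.io/hostname: NodeA
The basic documented procedure follows: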
Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create an L2 advertisement. Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 Apply the configuration: USD oc apply -f l2advertisement.yaml 26.2.7. Configuring MetalLB with an L2 advertisement and label The ipAddressPoolSelectors field in the BGPAdvertisement and L2Advertisement custom resource definitions is used to associate the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. This example shows how to configure MetalLB so that the IPAddressPool is advertised with the L2 protocol by configuring the ipAddressPoolSelectors field. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create an L2 advertisement that advertises the IP by using ipAddressPoolSelectors . Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east Apply the configuration: USD oc apply -f l2advertisement.yaml 26.2.8. Configuring MetalLB with an L2 advertisement for selected interfaces By default, an IP address from the IP address pool that is assigned to the service is advertised from all the network interfaces. The interfaces field in the L2Advertisement custom resource definition is used to restrict those network interfaces that advertise the IP address pool. This example shows how to configure MetalLB so that the IP address pool is advertised only from the network interfaces listed in the interfaces field of all nodes. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , and enter the configuration details like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false Apply the configuration for the IP address pool like the following example: USD oc apply -f ipaddresspool.yaml Create an L2 advertisement that advertises the IP with the interfaces selector.
Create a YAML file, such as l2advertisement.yaml , and enter the configuration details like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 interfaces: - interfaceA - interfaceB Apply the configuration for the advertisement like the following example: USD oc apply -f l2advertisement.yaml Important The interface selector does not affect how MetalLB chooses the node to announce a given IP by using L2. The chosen node does not announce the service if the node does not have the selected interface. 26.2.9. Configuring MetalLB with secondary networks From OpenShift Container Platform 4.14 the default network behavior is to not allow forwarding of IP packets between network interfaces. Therefore, when MetalLB is configured on a secondary interface, you need to add a machine configuration to enable IP forwarding for only the required interfaces. Note OpenShift Container Platform clusters upgraded from 4.13 are not affected because a global parameter is set during upgrade to enable global IP forwarding. To enable IP forwarding for the secondary interface, you have two options: Enable IP forwarding for a specific interface. Enable IP forwarding for all interfaces. Note Enabling IP forwarding for a specific interface provides more granular control, while enabling it for all interfaces applies a global setting. 26.2.9.1. Enabling IP forwarding for a specific interface Procedure Patch the Cluster Network Operator, setting the parameter routingViaHost to true , by running the following command: USD oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig": {"routingViaHost": true} }}}}' --type=merge Enable forwarding for a specific secondary interface, such as bridge-net by creating and applying a MachineConfig CR: Base64-encode the string that is used to configure network kernel parameters by running the following command on your local machine: USD echo -e "net.ipv4.conf.bridge-net.forwarding = 1\nnet.ipv6.conf.bridge-net.forwarding = 1\nnet.ipv4.conf.bridge-net.rp_filter = 0\nnet.ipv6.conf.bridge-net.rp_filter = 0" | base64 -w0 Example output bmV0LmlwdjQuY29uZi5icmlkZ2UtbmV0LmZvcndhcmRpbmcgPSAxCm5ldC5pcHY2LmNvbmYuYnJpZGdlLW5ldC5mb3J3YXJkaW5nID0gMQpuZXQuaXB2NC5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMApuZXQuaXB2Ni5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMAo= Create the MachineConfig CR to enable IP forwarding for the specified secondary interface named bridge-net . 
Save the following YAML in the enable-ip-forward.yaml file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: 81-enable-global-forwarding spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,bmV0LmlwdjQuY29uZi5icmlkZ2UtbmV0LmZvcndhcmRpbmcgPSAxCm5ldC5pcHY2LmNvbmYuYnJpZGdlLW5ldC5mb3J3YXJkaW5nID0gMQpuZXQuaXB2NC5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMApuZXQuaXB2Ni5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMAo= 2 verification: {} filesystem: root mode: 644 path: /etc/sysctl.d/enable-global-forwarding.conf osImageURL: "" 1 Node role where you want to enable IP forwarding, for example, worker 2 Populate with the generated base64 string Apply the configuration by running the following command: USD oc apply -f enable-ip-forward.yaml Verification After you apply the machine config, verify the changes by following this procedure: Enter into a debug session on the target node by running the following command: USD oc debug node/<node-name> This step instantiates a debug pod called <node-name>-debug . Set /host as the root directory within the debug shell by running the following command: USD chroot /host The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths. Verify that IP forwarding is enabled by running the following command: USD cat /etc/sysctl.d/enable-global-forwarding.conf Expected output net.ipv4.conf.bridge-net.forwarding = 1 net.ipv6.conf.bridge-net.forwarding = 1 net.ipv4.conf.bridge-net.rp_filter = 0 net.ipv6.conf.bridge-net.rp_filter = 0 The output indicates that IPv4 and IPv6 packet forwarding is enabled on the bridge-net interface. 26.2.9.2. Enabling IP forwarding globally Enable IP forwarding globally by running the following command: USD oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' --type=merge 26.2.10. Additional resources Configuring a community alias . 26.3. Configuring MetalLB BGP peers As a cluster administrator, you can add, modify, and delete Border Gateway Protocol (BGP) peers. The MetalLB Operator uses the BGP peer custom resources to identify which peers the MetalLB speaker pods contact to start BGP sessions. The peers receive the route advertisements for the load-balancer IP addresses that MetalLB assigns to services. 26.3.1. About the BGP peer custom resource The fields for the BGP peer custom resource are described in the following table. Table 26.5. MetalLB BGP peer custom resource Field Type Description metadata.name string Specifies the name for the BGP peer custom resource. metadata.namespace string Specifies the namespace for the BGP peer custom resource. spec.myASN integer Specifies the Autonomous System number for the local end of the BGP session. Specify the same value in all BGP peer custom resources that you add. The range is 0 to 4294967295 . spec.peerASN integer Specifies the Autonomous System number for the remote end of the BGP session. The range is 0 to 4294967295 . spec.peerAddress string Specifies the IP address of the peer to contact for establishing the BGP session. spec.sourceAddress string Optional: Specifies the IP address to use when establishing the BGP session. The value must be an IPv4 address.
spec.peerPort integer Optional: Specifies the network port of the peer to contact for establishing the BGP session. The range is 0 to 16384 . spec.holdTime string Optional: Specifies the duration for the hold time to propose to the BGP peer. The minimum value is 3 seconds ( 3s ). The common units are seconds and minutes, such as 3s , 1m , and 5m30s . To detect path failures more quickly, also configure BFD. spec.keepaliveTime string Optional: Specifies the maximum interval between sending keep-alive messages to the BGP peer. If you specify this field, you must also specify a value for the holdTime field. The specified value must be less than the value for the holdTime field. spec.routerID string Optional: Specifies the router ID to advertise to the BGP peer. If you specify this field, you must specify the same value in every BGP peer custom resource that you add. spec.password string Optional: Specifies the MD5 password to send to the peer for routers that enforce TCP MD5 authenticated BGP sessions. spec.passwordSecret string Optional: Specifies name of the authentication secret for the BGP Peer. The secret must live in the metallb namespace and be of type basic-auth. spec.bfdProfile string Optional: Specifies the name of a BFD profile. spec.nodeSelectors object[] Optional: Specifies a selector, using match expressions and match labels, to control which nodes can connect to the BGP peer. spec.ebgpMultiHop boolean Optional: Specifies that the BGP peer is multiple network hops away. If the BGP peer is not directly connected to the same network, the speaker cannot establish a BGP session unless this field is set to true . This field applies to external BGP . External BGP is the term that is used to describe when a BGP peer belongs to a different Autonomous System. connectTime duration Specifies how long BGP waits between connection attempts to a neighbor. Note The passwordSecret field is mutually exclusive with the password field, and contains a reference to a secret containing the password to use. Setting both fields results in a failure of the parsing. 26.3.2. Configuring a BGP peer As a cluster administrator, you can add a BGP peer custom resource to exchange routing information with network routers and advertise the IP addresses for services. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Configure MetalLB with a BGP advertisement. Procedure Create a file, such as bgppeer.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer.yaml 26.3.3. Configure a specific set of BGP peers for a given address pool This procedure illustrates how to: Configure a set of address pools ( pool1 and pool2 ). Configure a set of BGP peers ( peer1 and peer2 ). Configure BGP advertisement to assign pool1 to peer1 and pool2 to peer2 . Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create address pool pool1 . 
Create a file, such as ipaddresspool1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400 Apply the configuration for the IP address pool pool1 : USD oc apply -f ipaddresspool1.yaml Create address pool pool2 . Create a file, such as ipaddresspool2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400 Apply the configuration for the IP address pool pool2 : USD oc apply -f ipaddresspool2.yaml Create BGP peer1 . Create a file, such as bgppeer1.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer1.yaml Create BGP peer2 . Create a file, such as bgppeer2.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer2: USD oc apply -f bgppeer2.yaml Create BGP advertisement 1. Create a file, such as bgpadvertisement1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement1.yaml Create BGP advertisement 2. Create a file, such as bgpadvertisement2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement2.yaml 26.3.4. Exposing a service through a network VRF You can expose a service through a virtual routing and forwarding (VRF) instance by associating a VRF on a network interface with a BGP peer. Important Exposing a service through a VRF on a BGP peer is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . By using a VRF on a network interface to expose a service through a BGP peer, you can segregate traffic to the service, configure independent routing decisions, and enable multi-tenancy support on a network interface. Note By establishing a BGP session through an interface belonging to a network VRF, MetalLB can advertise services through that interface and enable external traffic to reach the service through this interface. 
However, the network VRF routing table is different from the default VRF routing table used by OVN-Kubernetes. Therefore, the traffic cannot reach the OVN-Kubernetes network infrastructure. To enable the traffic directed to the service to reach the OVN-Kubernetes network infrastructure, you must configure routing rules to define the hops for network traffic. See the NodeNetworkConfigurationPolicy resource in "Managing symmetric routing with MetalLB" in the Additional resources section for more information. These are the high-level steps to expose a service through a network VRF with a BGP peer: Define a BGP peer and add a network VRF instance. Specify an IP address pool for MetalLB. Configure a BGP route advertisement for MetalLB to advertise a route using the specified IP address pool and the BGP peer associated with the VRF instance. Deploy a service to test the configuration. Prerequisites You installed the OpenShift CLI ( oc ). You logged in as a user with cluster-admin privileges. You defined a NodeNetworkConfigurationPolicy to associate a Virtual Routing and Forwarding (VRF) instance with a network interface. For more information about completing this prerequisite, see the Additional resources section. You installed MetalLB on your cluster. Procedure Create a BGPPeer custom resources (CR): Create a file, such as frrviavrf.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1 1 Specifies the network VRF instance to associate with the BGP peer. MetalLB can advertise services and make routing decisions based on the routing information in the VRF. Note You must configure this network VRF instance in a NodeNetworkConfigurationPolicy CR. See the Additional resources for more information. Apply the configuration for the BGP peer by running the following command: USD oc apply -f frrviavrf.yaml Create an IPAddressPool CR: Create a file, such as first-pool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32 Apply the configuration for the IP address pool by running the following command: USD oc apply -f first-pool.yaml Create a BGPAdvertisement CR: Create a file, such as first-adv.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1 1 In this example, MetalLB advertises a range of IP addresses from the first-pool IP address pool to the frrviavrf BGP peer. 
Apply the configuration for the BGP advertisement by running the following command: USD oc apply -f first-adv.yaml Create a Namespace , Deployment , and Service CR: Create a file, such as deploy-service.yaml , with content like the following example: apiVersion: v1 kind: Namespace metadata: name: test --- apiVersion: apps/v1 kind: Deployment metadata: name: server namespace: test spec: selector: matchLabels: app: server template: metadata: labels: app: server spec: containers: - name: server image: registry.redhat.io/ubi9/ubi ports: - name: http containerPort: 30100 command: ["/bin/sh", "-c"] args: ["sleep INF"] --- apiVersion: v1 kind: Service metadata: name: server1 namespace: test spec: ports: - name: http port: 30100 protocol: TCP targetPort: 30100 selector: app: server type: LoadBalancer Apply the configuration for the namespace, deployment, and service by running the following command: USD oc apply -f deploy-service.yaml Verification Identify a MetalLB speaker pod by running the following command: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-c6c5f 6/6 Running 0 69m Verify that the state of the BGP session is Established in the speaker pod by running the following command, replacing the variables to match your configuration: USD oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c "show bgp vrf <vrf_name> neigh" Example output BGP neighbor is 192.168.30.1, remote AS 200, local AS 100, external link BGP version 4, remote router ID 192.168.30.1, local router ID 192.168.30.71 BGP state = Established, up for 04:20:09 ... Verify that the service is advertised correctly by running the following command: USD oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c "show bgp vrf <vrf_name> ipv4" Additional resources About virtual routing and forwarding Example: Network interface with a VRF instance node network configuration policy Configuring an egress service Managing symmetric routing with MetalLB 26.3.5. Example BGP peer configurations 26.3.5.1. Example: Limit which nodes connect to a BGP peer You can specify the node selectors field to control which nodes can connect to a BGP peer. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com] 26.3.5.2. Example: Specify a BFD profile for a BGP peer You can specify a BFD profile to associate with BGP peers. BFD compliments BGP by providing more rapid detection of communication failures between peers than BGP alone. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: "10s" bfdProfile: doc-example-bfd-profile-full Note Deleting the bidirectional forwarding detection (BFD) profile and removing the bfdProfile added to the border gateway protocol (BGP) peer resource does not disable the BFD. Instead, the BGP peer starts using the default BFD profile. To disable BFD from a BGP peer resource, delete the BGP peer configuration and recreate it without a BFD profile. For more information, see BZ#2050824 . 26.3.5.3. Example: Specify BGP peers for dual-stack networking To support dual-stack networking, add one BGP peer custom resource for IPv4 and one BGP peer custom resource for IPv6. 
apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500 26.3.6. Next steps Configuring services to use MetalLB 26.4. Configuring community alias As a cluster administrator, you can configure a community alias and use it across different advertisements. 26.4.1. About the community custom resource The community custom resource is a collection of aliases for communities. Users can define named aliases to be used when advertising ipAddressPools using the BGPAdvertisement . The fields for the community custom resource are described in the following table. Note The community CRD applies only to BGPAdvertisement. Table 26.6. MetalLB community custom resource Field Type Description metadata.name string Specifies the name for the community . metadata.namespace string Specifies the namespace for the community . Specify the same namespace that the MetalLB Operator uses. spec.communities string Specifies a list of BGP community aliases that can be used in BGPAdvertisements. A community alias consists of a pair of name (alias) and value (number:number). Link the BGPAdvertisement to a community alias by referring to the alias name in its spec.communities field. Table 26.7. CommunityAlias Field Type Description name string The name of the alias for the community . value string The BGP community value corresponding to the given name. 26.4.2. Configuring MetalLB with a BGP advertisement and community alias Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol and the community alias set to the numeric value of the NO_ADVERTISE community. In the following example, the peer BGP router doc-example-bgp-peer receives one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. A community alias is configured with the NO_ADVERTISE community. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a community alias named community1 . apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE value: '65535:65282' Create a BGP peer named doc-example-bgp-peer . Create a file, such as bgppeer.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer.yaml Create a BGP advertisement with the community alias.
Create a file, such as bgpadvertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - NO_ADVERTISE 1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-bgp-peer 1 Specify the CommunityAlias.name here and not the community custom resource (CR) name. Apply the configuration: USD oc apply -f bgpadvertisement.yaml 26.5. Configuring MetalLB BFD profiles As a cluster administrator, you can add, modify, and delete Bidirectional Forwarding Detection (BFD) profiles. The MetalLB Operator uses the BFD profile custom resources to identify which BGP sessions use BFD to provide faster path failure detection than BGP alone provides. 26.5.1. About the BFD profile custom resource The fields for the BFD profile custom resource are described in the following table. Table 26.8. BFD profile custom resource Field Type Description metadata.name string Specifies the name for the BFD profile custom resource. metadata.namespace string Specifies the namespace for the BFD profile custom resource. spec.detectMultiplier integer Specifies the detection multiplier to determine packet loss. The remote transmission interval is multiplied by this value to determine the connection loss detection timer. For example, when the local system has the detect multiplier set to 3 and the remote system has the transmission interval set to 300 , the local system detects failures only after 900 ms without receiving packets. The range is 2 to 255 . The default value is 3 . spec.echoMode boolean Specifies the echo transmission mode. If you are not using distributed BFD, echo transmission mode works only when the peer is also FRR. The default value is false and echo transmission mode is disabled. When echo transmission mode is enabled, consider increasing the transmission interval of control packets to reduce bandwidth usage. For example, consider increasing the transmit interval to 2000 ms. spec.echoInterval integer Specifies the minimum transmission interval, less jitter, that this system uses to send and receive echo packets. The range is 10 to 60000 . The default value is 50 ms. spec.minimumTtl integer Specifies the minimum expected TTL for an incoming control packet. This field applies to multi-hop sessions only. The purpose of setting a minimum TTL is to make the packet validation requirements more stringent and avoid receiving control packets from other sessions. The default value is 254 and indicates that the system expects only one hop between this system and the peer. spec.passiveMode boolean Specifies whether a session is marked as active or passive. A passive session does not attempt to start the connection. Instead, a passive session waits for control packets from a peer before it begins to reply. Marking a session as passive is useful when you have a router that acts as the central node of a star network and you want to avoid sending control packets that you do not need the system to send. The default value is false and marks the session as active. spec.receiveInterval integer Specifies the minimum interval that this system is capable of receiving control packets. The range is 10 to 60000 . The default value is 300 ms. spec.transmitInterval integer Specifies the minimum transmission interval, less jitter, that this system uses to send control packets. The range is 10 to 60000 . The default value is 300 ms. 26.5.2.
Configuring a BFD profile
As a cluster administrator, you can add a BFD profile and configure a BGP peer to use the profile. BFD provides faster path failure detection than BGP alone.
Prerequisites
Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges.
Procedure
Create a file, such as bfdprofile.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254
Apply the configuration for the BFD profile: USD oc apply -f bfdprofile.yaml
26.5.3. Next steps
Configure a BGP peer to use the BFD profile.
26.6. Configuring services to use MetalLB
As a cluster administrator, when you add a service of type LoadBalancer , you can control how MetalLB assigns an IP address.
26.6.1. Request a specific IP address
Like some other load-balancer implementations, MetalLB accepts the spec.loadBalancerIP field in the service specification. If the requested IP address is within a range from any address pool, MetalLB assigns the requested IP address. If the requested IP address is not within any range, MetalLB reports a warning.
Example service YAML for a specific IP address
apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.io/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address>
If MetalLB cannot assign the requested IP address, the EXTERNAL-IP for the service reports <pending> and running oc describe service <service_name> includes an event like the following example.
Example event when MetalLB cannot assign a requested IP address
... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for "default/invalid-request": "4.3.2.1" is not allowed in config
26.6.2. Request an IP address from a specific pool
If you want to assign an IP address from a specific range but are not concerned with the specific IP address, you can use the metallb.io/address-pool annotation to request an IP address from the specified address pool.
Example service YAML for an IP address from a specific pool
apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.io/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer
If the address pool that you specify for <address_pool_name> does not exist, MetalLB attempts to assign an IP address from any pool that permits automatic assignment.
26.6.3. Accept any IP address
By default, address pools are configured to permit automatic assignment. MetalLB assigns an IP address from these address pools. To accept any IP address from any pool that is configured for automatic assignment, no special annotation or configuration is required.
Example service YAML for accepting any IP address
apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer
26.6.4. Share a specific IP address
By default, services do not share IP addresses. However, if you need to colocate services on a single IP address, you can enable selective IP sharing by adding the metallb.io/allow-shared-ip annotation to the services.
apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.io/address-pool: doc-example metallb.io/allow-shared-ip: "web-server-svc" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.io/address-pool: doc-example metallb.io/allow-shared-ip: "web-server-svc" spec: ports: - name: https port: 443 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> type: LoadBalancer loadBalancerIP: 172.31.249.7 1 Specify the same value for the metallb.io/allow-shared-ip annotation. This value is referred to as the sharing key . 2 Specify different port numbers for the services. 3 Specify identical pod selectors if you must specify externalTrafficPolicy: local so the services send traffic to the same set of pods. If you use the cluster external traffic policy, then the pod selectors do not need to be identical. 4 Optional: If you specify the three preceding items, MetalLB might colocate the services on the same IP address. To ensure that services share an IP address, specify the IP address to share. By default, Kubernetes does not allow multiprotocol load balancer services. This limitation would normally make it impossible to run a service like DNS that needs to listen on both TCP and UDP. To work around this limitation of Kubernetes with MetalLB, create two services: For one service, specify TCP and for the second service, specify UDP. In both services, specify the same pod selector. Specify the same sharing key and spec.loadBalancerIP value to colocate the TCP and UDP services on the same IP address. 26.6.5. Configuring a service with MetalLB You can configure a load-balancing service to use an external IP address from an address pool. Prerequisites Install the OpenShift CLI ( oc ). Install the MetalLB Operator and start MetalLB. Configure at least one address pool. Configure your network to route traffic from the clients to the host network for the cluster. Procedure Create a <service_name>.yaml file. In the file, ensure that the spec.type field is set to LoadBalancer . Refer to the examples for information about how to request the external IP address that MetalLB assigns to the service. Create the service: USD oc apply -f <service_name>.yaml Example output service/<service_name> created Verification Describe the service: USD oc describe service <service_name> Example output 1 The annotation is present if you request an IP address from a specific pool. 2 The service type must indicate LoadBalancer . 3 The load-balancer ingress field indicates the external IP address if the service is assigned correctly. 4 The events field indicates the node name that is assigned to announce the external IP address. If you experience an error, the events field indicates the reason for the error. 26.7. Managing symmetric routing with MetalLB As a cluster administrator, you can effectively manage traffic for pods behind a MetalLB load-balancer service with multiple host interfaces by implementing features from MetalLB, NMState, and OVN-Kubernetes. By combining these features in this context, you can provide symmetric routing, traffic segregation, and support clients on different networks with overlapping CIDR addresses. To achieve this functionality, learn how to implement virtual routing and forwarding (VRF) instances with MetalLB, and configure egress services. 
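The following is a minimal sketch of the multiprotocol workaround described in "Share a specific IP address" earlier: two hypothetical services for a DNS workload, one TCP and one UDP, that use the same pod selector, the same sharing key, and the same spec.loadBalancerIP so that MetalLB colocates them on one IP address. The service names, namespace, selector, ports, and IP address are all illustrative:
apiVersion: v1
kind: Service
metadata:
  name: dns-service-tcp
  namespace: dns-namespace
  annotations:
    metallb.io/allow-shared-ip: "dns-svc"
spec:
  selector:
    app: dns
  ports:
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 5353
  type: LoadBalancer
  loadBalancerIP: 172.31.249.9
---
apiVersion: v1
kind: Service
metadata:
  name: dns-service-udp
  namespace: dns-namespace
  annotations:
    metallb.io/allow-shared-ip: "dns-svc"
spec:
  selector:
    app: dns
  ports:
  - name: dns-udp
    port: 53
    protocol: UDP
    targetPort: 5353
  type: LoadBalancer
  loadBalancerIP: 172.31.249.9
Because the sharing key, pod selector, and requested IP address match, MetalLB can serve both protocols from the single shared address.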
Important Configuring symmetric traffic by using a VRF instance with MetalLB and an egress service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 26.7.1. Challenges of managing symmetric routing with MetalLB When you use MetalLB with multiple host interfaces, MetalLB exposes and announces a service through all available interfaces on the host. This can present challenges relating to network isolation, asymmetric return traffic and overlapping CIDR addresses. One option to ensure that return traffic reaches the correct client is to use static routes. However, with this solution, MetalLB cannot isolate the services and then announce each service through a different interface. Additionally, static routing requires manual configuration and requires maintenance if remote sites are added. A further challenge of symmetric routing when implementing a MetalLB service is scenarios where external systems expect the source and destination IP address for an application to be the same. The default behavior for OpenShift Container Platform is to assign the IP address of the host network interface as the source IP address for traffic originating from pods. This is problematic with multiple host interfaces. You can overcome these challenges by implementing a configuration that combines features from MetalLB, NMState, and OVN-Kubernetes. 26.7.2. Overview of managing symmetric routing by using VRFs with MetalLB You can overcome the challenges of implementing symmetric routing by using NMState to configure a VRF instance on a host, associating the VRF instance with a MetalLB BGPPeer resource, and configuring an egress service for egress traffic with OVN-Kubernetes. Figure 26.1. Network overview of managing symmetric routing by using VRFs with MetalLB The configuration process involves three stages: 1. Define a VRF and routing rules Configure a NodeNetworkConfigurationPolicy custom resource (CR) to associate a VRF instance with a network interface. Use the VRF routing table to direct ingress and egress traffic. 2. Link the VRF to a MetalLB BGPPeer Configure a MetalLB BGPPeer resource to use the VRF instance on a network interface. By associating the BGPPeer resource with the VRF instance, the designated network interface becomes the primary interface for the BGP session, and MetalLB advertises the services through this interface. 3. Configure an egress service Configure an egress service to choose the network associated with the VRF instance for egress traffic. Optional: Configure an egress service to use the IP address of the MetalLB load-balancer service as the source IP for egress traffic. 26.7.3. Configuring symmetric routing by using VRFs with MetalLB You can configure symmetric network routing for applications behind a MetalLB service that require the same ingress and egress network paths. This example associates a VRF routing table with MetalLB and an egress service to enable symmetric routing for ingress and egress traffic for pods behind a LoadBalancer service. 
Note If you use the sourceIPBy: "LoadBalancerIP" setting in the EgressService CR, you must specify the load-balancer node in the BGPAdvertisement custom resource (CR). You can use the sourceIPBy: "Network" setting only on clusters that use OVN-Kubernetes configured with the gatewayConfig.routingViaHost specification set to true . Additionally, if you use the sourceIPBy: "Network" setting, you must schedule the application workload on nodes configured with the network VRF instance.
Prerequisites
Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the Kubernetes NMState Operator. Install the MetalLB Operator.
Procedure
Create a NodeNetworkConfigurationPolicy CR to define the VRF instance: Create a file, such as node-network-vrf.yaml , with content like the following example: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vrfpolicy 1 spec: nodeSelector: vrf: "true" 2 maxUnavailable: 3 desiredState: interfaces: - name: ens4vrf 3 type: vrf 4 state: up vrf: port: - ens4 5 route-table-id: 2 6 - name: ens4 7 type: ethernet state: up ipv4: address: - ip: 192.168.130.130 prefix-length: 24 dhcp: false enabled: true routes: 8 config: - destination: 0.0.0.0/0 metric: 150 next-hop-address: 192.168.130.1 next-hop-interface: ens4 table-id: 2 route-rules: 9 config: - ip-to: 172.30.0.0/16 priority: 998 route-table: 254 10 - ip-to: 10.132.0.0/14 priority: 998 route-table: 254 - ip-to: 169.254.0.0/17 priority: 998 route-table: 254
1 The name of the policy.
2 This example applies the policy to all nodes with the label vrf:true .
3 The name of the interface.
4 The type of interface. This example creates a VRF instance.
5 The node interface that the VRF attaches to.
6 The route table ID for the VRF.
7 The IPv4 address of the interface associated with the VRF.
8 Defines the configuration for network routes. The next-hop-address field defines the IP address of the next hop for the route. The next-hop-interface field defines the outgoing interface for the route. In this example, the VRF routing table is 2 , which references the ID that you define in the EgressService CR.
9 Defines additional route rules. The ip-to fields must match the Cluster Network CIDR, Service Network CIDR, and Internal Masquerade subnet CIDR. You can view the values for these CIDR address specifications by running the following command: oc describe network.operator/cluster .
10 The main routing table that the Linux kernel uses when calculating routes has the ID 254 .
Apply the policy by running the following command: USD oc apply -f node-network-vrf.yaml
Create a BGPPeer custom resource (CR): Create a file, such as frr-via-vrf.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1
1 Specifies the VRF instance to associate with the BGP peer. MetalLB can advertise services and make routing decisions based on the routing information in the VRF.
Apply the configuration for the BGP peer by running the following command: USD oc apply -f frr-via-vrf.yaml
Create an IPAddressPool CR: Create a file, such as first-pool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32
Apply the configuration for the IP address pool by running the following command: USD oc apply -f first-pool.yaml
Create a BGPAdvertisement CR: Create a file, such as first-adv.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1 nodeSelectors: - matchLabels: egress-service.k8s.ovn.org/test-server1: "" 2
1 In this example, MetalLB advertises a range of IP addresses from the first-pool IP address pool to the frrviavrf BGP peer.
2 In this example, the EgressService CR configures the source IP address for egress traffic to use the load-balancer service IP address. Therefore, you must specify the load-balancer node for return traffic to use the same return path for the traffic originating from the pod.
Apply the configuration for the BGP advertisement by running the following command: USD oc apply -f first-adv.yaml
Create an EgressService CR: Create a file, such as egress-service.yaml , with content like the following example: apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: server1 1 namespace: test 2 spec: sourceIPBy: "LoadBalancerIP" 3 nodeSelector: matchLabels: vrf: "true" 4 network: "2" 5
1 Specify the name for the egress service. The name of the EgressService resource must match the name of the load-balancer service that you want to modify.
2 Specify the namespace for the egress service. The namespace for the EgressService must match the namespace of the load-balancer service that you want to modify. The egress service is namespace-scoped.
3 This example assigns the LoadBalancer service ingress IP address as the source IP address for egress traffic.
4 If you specify LoadBalancerIP for the sourceIPBy specification, a single node handles the LoadBalancer service traffic. In this example, only a node with the label vrf: "true" can handle the service traffic. If you do not specify a node, OVN-Kubernetes selects a worker node to handle the service traffic. When a node is selected, OVN-Kubernetes labels the node in the following format: egress-service.k8s.ovn.org/<svc_namespace>-<svc_name>: "" .
5 Specify the routing table ID for egress traffic. Ensure that the value matches the route-table-id ID defined in the NodeNetworkConfigurationPolicy resource, for example, route-table-id: 2 .
Apply the configuration for the egress service by running the following command: USD oc apply -f egress-service.yaml
Verification
Verify that you can access the application endpoint of the pods running behind the MetalLB service by running the following command: USD curl <external_ip_address>:<port_number> 1
1 Update the external IP address and port number to suit your application endpoint.
Optional: If you assigned the LoadBalancer service ingress IP address as the source IP address for egress traffic, verify this configuration by using tools such as tcpdump to analyze packets received at the external client, as in the sketch that follows.
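For example, the following capture, run on the external client, filters for traffic involving the load-balancer IP address that first-pool assigns. This is a minimal sketch: the interface name eth0 is an assumption about the client host, and 192.169.10.0 is the address from the example pool above:
USD tcpdump -nn -i eth0 host 192.169.10.0
If the egress service is working as intended, packets that originate from the pod arrive with the load-balancer IP address, not the node IP address, as the source address.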
Additional resources About virtual routing and forwarding Exposing a service through a network VRF Example: Network interface with a VRF instance node network configuration policy Configuring an egress service 26.8. Configuring the integration of MetalLB and FRR-K8s FRRouting (FRR) is a free, open source internet routing protocol suite for Linux and UNIX platforms. FRR-K8s is a Kubernetes based DaemonSet that exposes a subset of the FRR API in a Kubernetes-compliant manner. As a cluster administrator, you can use the FRRConfiguration custom resource (CR) to access some of the FRR services not provided by MetalLB, for example, receiving routes. MetalLB generates the FRR-K8s configuration corresponding to the MetalLB configuration applied. 26.8.1. FRR configurations You can create multiple FRRConfiguration CRs to use FRR services in MetalLB . MetalLB generates an FRRConfiguration object which FRR-K8s merges with all other configurations that all users have created. For example, you can configure FRR-K8s to receive all of the prefixes advertised by a given neighbor. The following example configures FRR-K8s to receive all of the prefixes advertised by a BGPPeer with host 172.18.0.5 : Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: metallb-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 toReceive: allowed: mode: all You can also configure FRR-K8s to always block a set of prefixes, regardless of the configuration applied. This can be useful to avoid routes towards the pods or ClusterIPs CIDRs that might result in cluster malfunctions. The following example blocks the set of prefixes 192.168.1.0/24 : Example MetalLB CR apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: bgpBackend: frr-k8s frrk8sConfig: alwaysBlock: - 192.168.1.0/24 You can set FRR-K8s to block the Cluster Network CIDR and Service Network CIDR. You can view the values for these CIDR address specifications by running the following command: USD oc describe network.config/cluster 26.8.2. Configuring the FRRConfiguration CRD The following section provides reference examples that use the FRRConfiguration custom resource (CR). 26.8.2.1. The routers field You can use the routers field to configure multiple routers, one for each Virtual Routing and Forwarding (VRF) resource. For each router, you must define the Autonomous System Number (ASN). You can also define a list of Border Gateway Protocol (BGP) neighbors to connect to, as in the following example: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 - address: 172.18.0.6 asn: 4200000000 port: 179 26.8.2.2. The toAdvertise field By default, FRR-K8s does not advertise the prefixes configured as part of a router configuration. In order to advertise them, you use the toAdvertise field. You can advertise a subset of the prefixes, as in the following example: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 toAdvertise: allowed: prefixes: 1 - 192.168.2.0/24 prefixes: - 192.168.2.0/24 - 192.169.2.0/24 1 Advertises a subset of prefixes. 
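In addition to selecting which prefixes to advertise, the toAdvertise field can associate BGP attributes with the advertised prefixes. The following is a minimal sketch based on the withLocalPref fields described in the reference table later in this section; it advertises 192.168.2.0/24 with a local preference of 100, and the addresses and ASNs mirror the examples above. Note that a prefix listed under withLocalPref must also appear in the allowed prefixes:
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: test
  namespace: frr-k8s-system
spec:
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 172.30.0.3
        asn: 4200000000
        ebgpMultiHop: true
        port: 180
        toAdvertise:
          allowed:
            prefixes:
            - 192.168.2.0/24
          withLocalPref:
          - localPref: 100
            prefixes:
            - 192.168.2.0/24
      prefixes:
      - 192.168.2.0/24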
The following example shows you how to advertise all of the prefixes: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 toAdvertise: allowed: mode: all 1 prefixes: - 192.168.2.0/24 - 192.169.2.0/24 1 Advertises all prefixes. 26.8.2.3. The toReceive field By default, FRR-K8s does not process any prefixes advertised by a neighbor. You can use the toReceive field to process such addresses. You can configure for a subset of the prefixes, as in this example: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 port: 179 toReceive: allowed: prefixes: - prefix: 192.168.1.0/24 - prefix: 192.169.2.0/24 ge: 25 1 le: 28 2 1 2 The prefix is applied if the prefix length is less than or equal to the le prefix length and greater than or equal to the ge prefix length. The following example configures FRR to handle all the prefixes announced: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 port: 179 toReceive: allowed: mode: all 26.8.2.4. The bgp field You can use the bgp field to define various BFD profiles and associate them with a neighbor. In the following example, BFD backs up the BGP session and FRR can detect link failures: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 64512 port: 180 bfdProfile: defaultprofile bfdProfiles: - name: defaultprofile 26.8.2.5. The nodeSelector field By default, FRR-K8s applies the configuration to all nodes where the daemon is running. You can use the nodeSelector field to specify the nodes to which you want to apply the configuration. For example: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 nodeSelector: labelSelector: foo: "bar" The fields for the FRRConfiguration custom resource are described in the following table: Table 26.9. MetalLB FRRConfiguration custom resource Field Type Description spec.bgp.routers array Specifies the routers that FRR is to configure (one per VRF). spec.bgp.routers.asn integer The autonomous system number to use for the local end of the session. spec.bgp.routers.id string Specifies the ID of the bgp router. spec.bgp.routers.vrf string Specifies the host vrf used to establish sessions from this router. spec.bgp.routers.neighbors array Specifies the neighbors to establish BGP sessions with. spec.bgp.routers.neighbors.asn integer Specifies the autonomous system number to use for the local end of the session. spec.bgp.routers.neighbors.address string Specifies the IP address to establish the session with. spec.bgp.routers.neighbors.port integer Specifies the port to dial when establishing the session. Defaults to 179. spec.bgp.routers.neighbors.password string Specifies the password to use for establishing the BGP session. Password and PasswordSecret are mutually exclusive. 
spec.bgp.routers.neighbors.passwordSecret string Specifies the name of the authentication secret for the neighbor. The secret must be of type "kubernetes.io/basic-auth", and in the same namespace as the FRR-K8s daemon. The key "password" stores the password in the secret. Password and PasswordSecret are mutually exclusive. spec.bgp.routers.neighbors.holdTime duration Specifies the requested BGP hold time, per RFC4271. Defaults to 180s. spec.bgp.routers.neighbors.keepaliveTime duration Specifies the requested BGP keepalive time, per RFC4271. Defaults to 60s . spec.bgp.routers.neighbors.connectTime duration Specifies how long BGP waits between connection attempts to a neighbor. spec.bgp.routers.neighbors.ebgpMultiHop boolean Indicates if the BGPPeer is multi-hops away. spec.bgp.routers.neighbors.bfdProfile string Specifies the name of the BFD Profile to use for the BFD session associated with the BGP session. If not set, the BFD session is not set up. spec.bgp.routers.neighbors.toAdvertise.allowed array Represents the list of prefixes to advertise to a neighbor, and the associated properties. spec.bgp.routers.neighbors.toAdvertise.allowed.prefixes string array Specifies the list of prefixes to advertise to a neighbor. This list must match the prefixes that you define in the router. spec.bgp.routers.neighbors.toAdvertise.allowed.mode string Specifies the mode to use when handling the prefixes. You can set to filtered to allow only the prefixes in the prefixes list. You can set to all to allow all the prefixes configured on the router. spec.bgp.routers.neighbors.toAdvertise.withLocalPref array Specifies the prefixes associated with an advertised local preference. You must specify the prefixes associated with a local preference in the prefixes allowed to be advertised. spec.bgp.routers.neighbors.toAdvertise.withLocalPref.prefixes string array Specifies the prefixes associated with the local preference. spec.bgp.routers.neighbors.toAdvertise.withLocalPref.localPref integer Specifies the local preference associated with the prefixes. spec.bgp.routers.neighbors.toAdvertise.withCommunity array Specifies the prefixes associated with an advertised BGP community. You must include the prefixes associated with a local preference in the list of prefixes that you want to advertise. spec.bgp.routers.neighbors.toAdvertise.withCommunity.prefixes string array Specifies the prefixes associated with the community. spec.bgp.routers.neighbors.toAdvertise.withCommunity.community string Specifies the community associated with the prefixes. spec.bgp.routers.neighbors.toReceive array Specifies the prefixes to receive from a neighbor. spec.bgp.routers.neighbors.toReceive.allowed array Specifies the information that you want to receive from a neighbor. spec.bgp.routers.neighbors.toReceive.allowed.prefixes array Specifies the prefixes allowed from a neighbor. spec.bgp.routers.neighbors.toReceive.allowed.mode string Specifies the mode to use when handling the prefixes. When set to filtered , only the prefixes in the prefixes list are allowed. When set to all , all the prefixes configured on the router are allowed. spec.bgp.routers.neighbors.disableMP boolean Disables MP BGP to prevent it from separating IPv4 and IPv6 route exchanges into distinct BGP sessions. spec.bgp.routers.prefixes string array Specifies all prefixes to advertise from this router instance. spec.bgp.bfdProfiles array Specifies the list of bfd profiles to use when configuring the neighbors. 
spec.bgp.bfdProfiles.name string The name of the BFD Profile to be referenced in other parts of the configuration. spec.bgp.bfdProfiles.receiveInterval integer Specifies the minimum interval at which this system can receive control packets, in milliseconds. Defaults to 300ms . spec.bgp.bfdProfiles.transmitInterval integer Specifies the minimum transmission interval, excluding jitter, that this system wants to use to send BFD control packets, in milliseconds. Defaults to 300ms . spec.bgp.bfdProfiles.detectMultiplier integer Configures the detection multiplier to determine packet loss. To determine the connection loss-detection timer, multiply the remote transmission interval by this value. spec.bgp.bfdProfiles.echoInterval integer Configures the minimal echo receive transmission-interval that this system can handle, in milliseconds. Defaults to 50ms . spec.bgp.bfdProfiles.echoMode boolean Enables or disables the echo transmission mode. This mode is disabled by default, and not supported on multihop setups. spec.bgp.bfdProfiles.passiveMode boolean Mark session as passive. A passive session does not attempt to start the connection and waits for control packets from peers before it begins replying. spec.bgp.bfdProfiles.MinimumTtl integer For multihop sessions only. Configures the minimum expected TTL for an incoming BFD control packet. spec.nodeSelector string Limits the nodes that attempt to apply this configuration. If specified, only those nodes whose labels match the specified selectors attempt to apply the configuration. If it is not specified, all nodes attempt to apply this configuration. status string Defines the observed state of FRRConfiguration. 26.8.3. How FRR-K8s merges multiple configurations In a case where multiple users add configurations that select the same node, FRR-K8s merges the configurations. Each configuration can only extend others. This means that it is possible to add a new neighbor to a router, or to advertise an additional prefix to a neighbor, but not possible to remove a component added by another configuration. 26.8.3.1. Configuration conflicts Certain configurations can cause conflicts, leading to errors, for example: different ASN for the same router (in the same VRF) different ASN for the same neighbor (with the same IP / port) multiple BFD profiles with the same name but different values When the daemon finds an invalid configuration for a node, it reports the configuration as invalid and reverts to the valid FRR configuration. 26.8.3.2. Merging When merging, it is possible to do the following actions: Extend the set of IPs that you want to advertise to a neighbor. Add an extra neighbor with its set of IPs. Extend the set of IPs to which you want to associate a community. Allow incoming routes for a neighbor. Each configuration must be self contained. This means, for example, that it is not possible to allow prefixes that are not defined in the router section by leveraging prefixes coming from another configuration. If the configurations to be applied are compatible, merging works as follows: FRR-K8s combines all the routers. FRR-K8s merges all prefixes and neighbors for each router. FRR-K8s merges all filters for each neighbor. Note A less restrictive filter has precedence over a stricter one. For example, a filter accepting some prefixes has precedence over a filter not accepting any, and a filter accepting all prefixes has precedence over one that accepts some. 26.9. 
MetalLB logging, troubleshooting, and support
If you need to troubleshoot MetalLB configuration, see the following sections for commonly used commands.
26.9.1. Setting the MetalLB logging levels
MetalLB uses FRRouting (FRR) in a container, and the default logging setting of info generates a lot of log output. You can control the verbosity of the logs generated by setting the logLevel as illustrated in this example. Gain a deeper insight into MetalLB by setting the logLevel to debug as follows:
Prerequisites
You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ).
Procedure
Create a file, such as setdebugloglevel.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: ""
Apply the configuration: USD oc replace -f setdebugloglevel.yaml
Note Use oc replace because the metallb CR already exists and you are changing only the log level.
Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker
Example output NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s
Note Speaker and controller pods are recreated to ensure the updated logging level is applied. The logging level is modified for all the components of MetalLB.
View the speaker logs: USD oc logs -n metallb-system speaker-7m4qw -c speaker
Example output
View the FRR logs: USD oc logs -n metallb-system speaker-7m4qw -c frr
Example output
26.9.1.1. FRRouting (FRR) log levels
The following table describes the FRR logging levels.
Table 26.10. Log levels
Log level Description
all Supplies all logging information for all logging levels.
debug Information that is diagnostically helpful to people. Set to debug to give detailed troubleshooting information.
info Provides information that always should be logged but under normal circumstances does not require user intervention. This is the default logging level.
warn Anything that can potentially cause inconsistent MetalLB behavior. Usually MetalLB automatically recovers from this type of error.
error Any error that is fatal to the functioning of MetalLB . These errors usually require administrator intervention to fix.
none Turn off all logging.
26.9.2. Troubleshooting BGP issues
The BGP implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker pods. As a cluster administrator, if you need to troubleshoot BGP configuration issues, you need to run commands in the FRR container.
Prerequisites
You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ).
Procedure
Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker
Example output NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m ...
Display the running configuration for FRR: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show running-config"
Example output
1 The router bgp section indicates the ASN for MetalLB.
2 Confirm that a neighbor <ip-address> remote-as <peer-ASN> line exists for each BGP peer custom resource that you added.
3 If you configured BFD, confirm that the BFD profile is associated with the correct BGP peer and that the BFD profile appears in the command output.
4 Confirm that the network <ip-address-range> lines match the IP address ranges that you specified in address pool custom resources that you added.
Display the BGP summary: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp summary"
Example output
1 Confirm that the output includes a line for each BGP peer custom resource that you added.
2 Output that shows 0 messages received and messages sent indicates a BGP peer that does not have a BGP session. Check network connectivity and the BGP configuration of the BGP peer.
Display the BGP peers that received an address pool: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp ipv4 unicast 203.0.113.200/30"
Replace ipv4 with ipv6 to display the BGP peers that received an IPv6 address pool. Replace 203.0.113.200/30 with an IPv4 or IPv6 IP address range from an address pool.
Example output
1 Confirm that the output includes an IP address for a BGP peer.
26.9.3. Troubleshooting BFD issues
The Bidirectional Forwarding Detection (BFD) implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker pods. The BFD implementation relies on BFD peers also being configured as BGP peers with an established BGP session. As a cluster administrator, if you need to troubleshoot BFD configuration issues, you need to run commands in the FRR container.
Prerequisites
You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ).
Procedure
Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker
Example output NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m ...
Display the BFD peers: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bfd peers brief"
Example output
Confirm that the PeerAddress column includes each BFD peer. If the output does not list a BFD peer IP address that you expected the output to include, troubleshoot BGP connectivity with the peer. If the status field indicates down , check for connectivity on the links and equipment between the node and the peer. You can determine the node name for the speaker pod with a command like oc get pods -n metallb-system speaker-66bth -o jsonpath='{.spec.nodeName}' .
26.9.4. MetalLB metrics for BGP and BFD
OpenShift Container Platform captures the following Prometheus metrics for MetalLB that relate to BGP peers and BFD profiles.
Table 26.11. MetalLB BFD metrics
Name Description
frrk8s_bfd_control_packet_input Counts the number of BFD control packets received from each BFD peer.
frrk8s_bfd_control_packet_output Counts the number of BFD control packets sent to each BFD peer.
frrk8s_bfd_echo_packet_input Counts the number of BFD echo packets received from each BFD peer.
frrk8s_bfd_echo_packet_output Counts the number of BFD echo packets sent to each BFD peer.
frrk8s_bfd_session_down_events Counts the number of times the BFD session with a peer entered the down state.
frrk8s_bfd_session_up Indicates the connection state with a BFD peer. 1 indicates the session is up and 0 indicates the session is down .
frrk8s_bfd_session_up_events Counts the number of times the BFD session with a peer entered the up state.
frrk8s_bfd_zebra_notifications Counts the number of BFD Zebra notifications for each BFD peer.
Table 26.12. MetalLB BGP metrics
Name Description
frrk8s_bgp_announced_prefixes_total Counts the number of load balancer IP address prefixes that are advertised to BGP peers.
The terms prefix and aggregated route have the same meaning. frrk8s_bgp_session_up Indicates the connection state with a BGP peer. 1 indicates the session is up and 0 indicates the session is down . frrk8s_bgp_updates_total Counts the number of BGP update messages sent to each BGP peer. frrk8s_bgp_opens_sent Counts the number of BGP open messages sent to each BGP peer. frrk8s_bgp_opens_received Counts the number of BGP open messages received from each BGP peer. frrk8s_bgp_notifications_sent Counts the number of BGP notification messages sent to each BGP peer. frrk8s_bgp_updates_total_received Counts the number of BGP update messages received from each BGP peer. frrk8s_bgp_keepalives_sent Counts the number of BGP keepalive messages sent to each BGP peer. frrk8s_bgp_keepalives_received Counts the number of BGP keepalive messages received from each BGP peer. frrk8s_bgp_route_refresh_sent Counts the number of BGP route refresh messages sent to each BGP peer. frrk8s_bgp_total_sent Counts the number of total BGP messages sent to each BGP peer. frrk8s_bgp_total_received Counts the number of total BGP messages received from each BGP peer. Additional resources See Querying metrics for all projects with the monitoring dashboard for information about using the monitoring dashboard. 26.9.5. About collecting MetalLB data You can use the oc adm must-gather CLI command to collect information about your cluster, your MetalLB configuration, and the MetalLB Operator. The following features and objects are associated with MetalLB and the MetalLB Operator: The namespace and child objects that the MetalLB Operator is deployed in All MetalLB Operator custom resource definitions (CRDs) The oc adm must-gather CLI command collects the following information from FRRouting (FRR) that Red Hat uses to implement BGP and BFD: /etc/frr/frr.conf /etc/frr/frr.log /etc/frr/daemons configuration file /etc/frr/vtysh.conf The log and configuration files in the preceding list are collected from the frr container in each speaker pod. In addition to the log and configuration files, the oc adm must-gather CLI command collects the output from the following vtysh commands: show running-config show bgp ipv4 show bgp ipv6 show bgp neighbor show bfd peer No additional configuration is required when you run the oc adm must-gather CLI command. Additional resources Gathering data about your cluster | [
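For example, the following invocation collects the MetalLB and FRR data described in this section into a local directory; the --dest-dir value is illustrative:
USD oc adm must-gather --dest-dir=/tmp/metallb-must-gather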
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75",
"oc apply -f ipaddresspool.yaml",
"oc describe -n metallb-system IPAddressPool doc-example",
"Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none>",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-vlan labels: zone: east 1 spec: addresses: - 192.168.100.1-192.168.100.254 2",
"oc apply -f ipaddresspool-vlan.yaml",
"oc edit network.config.openshift/cluster",
"spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: gatewayConfig: ipForwarding: Global",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-service-allocation namespace: metallb-system spec: addresses: - 192.168.20.0/24 serviceAllocation: priority: 50 1 namespaces: 2 - namespace-a - namespace-b namespaceSelectors: 3 - matchLabels: zone: east serviceSelectors: 4 - matchExpressions: - key: security operator: In values: - S1",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic",
"oc apply -f bgpadvertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100",
"oc apply -f bgpadvertisement1.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124",
"oc apply -f bgpadvertisement2.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example spec: ipAddressPools: - pool1 nodeSelector: - matchLabels: kubernetes.io/hostname: NodeA - matchLabels: kubernetes.io/hostname: NodeB",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2",
"oc apply -f l2advertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east",
"oc apply -f l2advertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 interfaces: - interfaceA - interfaceB",
"oc apply -f l2advertisement.yaml",
"oc patch network.operator cluster -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"gatewayConfig\": {\"routingViaHost\": true} }}}}' --type=merge",
"echo -e \"net.ipv4.conf.bridge-net.forwarding = 1\\nnet.ipv6.conf.bridge-net.forwarding = 1\\nnet.ipv4.conf.bridge-net.rp_filter = 0\\nnet.ipv6.conf.bridge-net.rp_filter = 0\" | base64 -w0",
"bmV0LmlwdjQuY29uZi5icmlkZ2UtbmV0LmZvcndhcmRpbmcgPSAxCm5ldC5pcHY2LmNvbmYuYnJpZGdlLW5ldC5mb3J3YXJkaW5nID0gMQpuZXQuaXB2NC5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMApuZXQuaXB2Ni5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMAo=",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: 81-enable-global-forwarding spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,bmV0LmlwdjQuY29uZi5icmlkZ2UtbmV0LmZvcndhcmRpbmcgPSAxCm5ldC5pcHY2LmNvbmYuYnJpZGdlLW5ldC5mb3J3YXJkaW5nID0gMQpuZXQuaXB2NC5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMApuZXQuaXB2Ni5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMAo= 2 verification: {} filesystem: root mode: 644 path: /etc/sysctl.d/enable-global-forwarding.conf osImageURL: \"\"",
"oc apply -f enable-ip-forward.yaml",
"oc debug node/<node-name>",
"chroot /host",
"cat /etc/sysctl.d/enable-global-forwarding.conf",
"net.ipv4.conf.bridge-net.forwarding = 1 net.ipv6.conf.bridge-net.forwarding = 1 net.ipv4.conf.bridge-net.rp_filter = 0 net.ipv6.conf.bridge-net.rp_filter = 0",
"oc patch network.operator cluster -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"gatewayConfig\":{\"ipForwarding\": \"Global\"}}}}}",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400",
"oc apply -f ipaddresspool1.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400",
"oc apply -f ipaddresspool2.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer1.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer2.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100",
"oc apply -f bgpadvertisement1.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100",
"oc apply -f bgpadvertisement2.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1",
"oc apply -f frrviavrf.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32",
"oc apply -f first-pool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1",
"oc apply -f first-adv.yaml",
"apiVersion: v1 kind: Namespace metadata: name: test --- apiVersion: apps/v1 kind: Deployment metadata: name: server namespace: test spec: selector: matchLabels: app: server template: metadata: labels: app: server spec: containers: - name: server image: registry.redhat.io/ubi9/ubi ports: - name: http containerPort: 30100 command: [\"/bin/sh\", \"-c\"] args: [\"sleep INF\"] --- apiVersion: v1 kind: Service metadata: name: server1 namespace: test spec: ports: - name: http port: 30100 protocol: TCP targetPort: 30100 selector: app: server type: LoadBalancer",
"oc apply -f deploy-service.yaml",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-c6c5f 6/6 Running 0 69m",
"oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c \"show bgp vrf <vrf_name> neigh\"",
"BGP neighbor is 192.168.30.1, remote AS 200, local AS 100, external link BGP version 4, remote router ID 192.168.30.1, local router ID 192.168.30.71 BGP state = Established, up for 04:20:09",
"oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c \"show bgp vrf <vrf_name> ipv4\"",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com]",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: \"10s\" bfdProfile: doc-example-bfd-profile-full",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE value: '65535:65282'",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - NO_ADVERTISE 1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-peer",
"oc apply -f bgpadvertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254",
"oc apply -f bfdprofile.yaml",
"apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.io/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address>",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for \"default/invalid-request\": \"4.3.2.1\" is not allowed in config",
"apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.io/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer",
"apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer",
"apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.io/address-pool: doc-example metallb.io/allow-shared-ip: \"web-server-svc\" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.io/address-pool: doc-example metallb.io/allow-shared-ip: \"web-server-svc\" spec: ports: - name: https port: 443 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> type: LoadBalancer loadBalancerIP: 172.31.249.7",
"oc apply -f <service_name>.yaml",
"service/<service_name> created",
"oc describe service <service_name>",
"Name: <service_name> Namespace: default Labels: <none> Annotations: metallb.io/address-pool: doc-example 1 Selector: app=service_name Type: LoadBalancer 2 IP Family Policy: SingleStack IP Families: IPv4 IP: 10.105.237.254 IPs: 10.105.237.254 LoadBalancer Ingress: 192.168.100.5 3 Port: <unset> 80/TCP TargetPort: 8080/TCP NodePort: <unset> 30550/TCP Endpoints: 10.244.0.50:8080 Session Affinity: None External Traffic Policy: Cluster Events: 4 Type Reason Age From Message ---- ------ ---- ---- ------- Normal nodeAssigned 32m (x2 over 32m) metallb-speaker announcing from node \"<node_name>\"",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vrfpolicy 1 spec: nodeSelector: vrf: \"true\" 2 maxUnavailable: 3 desiredState: interfaces: - name: ens4vrf 3 type: vrf 4 state: up vrf: port: - ens4 5 route-table-id: 2 6 - name: ens4 7 type: ethernet state: up ipv4: address: - ip: 192.168.130.130 prefix-length: 24 dhcp: false enabled: true routes: 8 config: - destination: 0.0.0.0/0 metric: 150 next-hop-address: 192.168.130.1 next-hop-interface: ens4 table-id: 2 route-rules: 9 config: - ip-to: 172.30.0.0/16 priority: 998 route-table: 254 10 - ip-to: 10.132.0.0/14 priority: 998 route-table: 254 - ip-to: 169.254.0.0/17 priority: 998 route-table: 254",
"oc apply -f node-network-vrf.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1",
"oc apply -f frr-via-vrf.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32",
"oc apply -f first-pool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1 nodeSelectors: - matchLabels: egress-service.k8s.ovn.org/test-server1: \"\" 2",
"oc apply -f first-adv.yaml",
"apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: server1 1 namespace: test 2 spec: sourceIPBy: \"LoadBalancerIP\" 3 nodeSelector: matchLabels: vrf: \"true\" 4 network: \"2\" 5",
"oc apply -f egress-service.yaml",
"curl <external_ip_address>:<port_number> 1",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: metallb-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 toReceive: allowed: mode: all",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: bgpBackend: frr-k8s frrk8sConfig: alwaysBlock: - 192.168.1.0/24",
"oc describe network.config/cluster",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 - address: 172.18.0.6 asn: 4200000000 port: 179",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 toAdvertise: allowed: prefixes: 1 - 192.168.2.0/24 prefixes: - 192.168.2.0/24 - 192.169.2.0/24",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 toAdvertise: allowed: mode: all 1 prefixes: - 192.168.2.0/24 - 192.169.2.0/24",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 port: 179 toReceive: allowed: prefixes: - prefix: 192.168.1.0/24 - prefix: 192.169.2.0/24 ge: 25 1 le: 28 2",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 port: 179 toReceive: allowed: mode: all",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 64512 port: 180 bfdProfile: defaultprofile bfdProfiles: - name: defaultprofile",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 nodeSelector: labelSelector: foo: \"bar\"",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc replace -f setdebugloglevel.yaml",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s",
"oc logs -n metallb-system speaker-7m4qw -c speaker",
"{\"branch\":\"main\",\"caller\":\"main.go:92\",\"commit\":\"3d052535\",\"goversion\":\"gc / go1.17.1 / amd64\",\"level\":\"info\",\"msg\":\"MetalLB speaker starting (commit 3d052535, branch main)\",\"ts\":\"2022-05-17T09:55:05Z\",\"version\":\"\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} I0517 09:55:06.515686 95 request.go:665] Waited for 1.026500832s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/operators.coreos.com/v1alpha1?timeout=32s {\"Starting Manager\":\"(MISSING)\",\"caller\":\"k8s.go:389\",\"level\":\"info\",\"ts\":\"2022-05-17T09:55:08Z\"} {\"caller\":\"speakerlist.go:310\",\"level\":\"info\",\"msg\":\"node event - forcing sync\",\"node addr\":\"10.0.128.4\",\"node event\":\"NodeJoin\",\"node name\":\"ci-ln-qb8t3mb-72292-7s7rh-worker-a-vvznj\",\"ts\":\"2022-05-17T09:55:08Z\"} {\"caller\":\"service_controller.go:113\",\"controller\":\"ServiceReconciler\",\"enqueueing\":\"openshift-kube-controller-manager-operator/metrics\",\"epslice\":\"{\\\"metadata\\\":{\\\"name\\\":\\\"metrics-xtsxr\\\",\\\"generateName\\\":\\\"metrics-\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"uid\\\":\\\"ac6766d7-8504-492c-9d1e-4ae8897990ad\\\",\\\"resourceVersion\\\":\\\"9041\\\",\\\"generation\\\":4,\\\"creationTimestamp\\\":\\\"2022-05-17T07:16:53Z\\\",\\\"labels\\\":{\\\"app\\\":\\\"kube-controller-manager-operator\\\",\\\"endpointslice.kubernetes.io/managed-by\\\":\\\"endpointslice-controller.k8s.io\\\",\\\"kubernetes.io/service-name\\\":\\\"metrics\\\"},\\\"annotations\\\":{\\\"endpoints.kubernetes.io/last-change-trigger-time\\\":\\\"2022-05-17T07:21:34Z\\\"},\\\"ownerReferences\\\":[{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"name\\\":\\\"metrics\\\",\\\"uid\\\":\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\",\\\"controller\\\":true,\\\"blockOwnerDeletion\\\":true}],\\\"managedFields\\\":[{\\\"manager\\\":\\\"kube-controller-manager\\\",\\\"operation\\\":\\\"Update\\\",\\\"apiVersion\\\":\\\"discovery.k8s.io/v1\\\",\\\"time\\\":\\\"2022-05-17T07:20:02Z\\\",\\\"fieldsType\\\":\\\"FieldsV1\\\",\\\"fieldsV1\\\":{\\\"f:addressType\\\":{},\\\"f:endpoints\\\":{},\\\"f:metadata\\\":{\\\"f:annotations\\\":{\\\".\\\":{},\\\"f:endpoints.kubernetes.io/last-change-trigger-time\\\":{}},\\\"f:generateName\\\":{},\\\"f:labels\\\":{\\\".\\\":{},\\\"f:app\\\":{},\\\"f:endpointslice.kubernetes.io/managed-by\\\":{},\\\"f:kubernetes.io/service-name\\\":{}},\\\"f:ownerReferences\\\":{\\\".\\\":{},\\\"k:{\\\\\\\"uid\\\\\\\":\\\\\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\\\\\"}\\\":{}}},\\\"f:ports\\\":{}}}]},\\\"addressType\\\":\\\"IPv4\\\",\\\"endpoints\\\":[{\\\"addresses\\\":[\\\"10.129.0.7\\\"],\\\"conditions\\\":{\\\"ready\\\":true,\\\"serving\\\":true,\\\"terminating\\\":false},\\\"targetRef\\\":{\\\"kind\\\":\\\"
Pod\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"name\\\":\\\"kube-controller-manager-operator-6b98b89ddd-8d4nf\\\",\\\"uid\\\":\\\"dd5139b8-e41c-4946-a31b-1a629314e844\\\",\\\"resourceVersion\\\":\\\"9038\\\"},\\\"nodeName\\\":\\\"ci-ln-qb8t3mb-72292-7s7rh-master-0\\\",\\\"zone\\\":\\\"us-central1-a\\\"}],\\\"ports\\\":[{\\\"name\\\":\\\"https\\\",\\\"protocol\\\":\\\"TCP\\\",\\\"port\\\":8443}]}\",\"level\":\"debug\",\"ts\":\"2022-05-17T09:55:08Z\"}",
"oc logs -n metallb-system speaker-7m4qw -c frr",
"Started watchfrr 2022/05/17 09:55:05 ZEBRA: client 16 says hello and bids fair to announce only bgp routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 31 says hello and bids fair to announce only vnc routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 38 says hello and bids fair to announce only static routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 43 says hello and bids fair to announce only bfd routes vrf=0 2022/05/17 09:57:25.089 BGP: Creating Default VRF, AS 64500 2022/05/17 09:57:25.090 BGP: dup addr detect enable max_moves 5 time 180 freeze disable freeze_time 0 2022/05/17 09:57:25.090 BGP: bgp_get: Registering BGP instance (null) to zebra 2022/05/17 09:57:25.090 BGP: Registering VRF 0 2022/05/17 09:57:25.091 BGP: Rx Router Id update VRF 0 Id 10.131.0.1/32 2022/05/17 09:57:25.091 BGP: RID change : vrf VRF default(0), RTR ID 10.131.0.1 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF br0 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ens4 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr 10.0.128.4/32 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr fe80::c9d:84da:4d86:5618/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF lo 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ovs-system 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF tun0 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr 10.131.0.1/23 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr fe80::40f1:d1ff:feb6:5322/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2da49fed 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2da49fed addr fe80::24bd:d1ff:fec1:d88/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2fa08c8c 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2fa08c8c addr fe80::6870:ff:fe96:efc8/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth41e356b7 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth41e356b7 addr fe80::48ff:37ff:fede:eb4b/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth1295c6e2 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth1295c6e2 addr fe80::b827:a2ff:feed:637/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth9733c6dc 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth9733c6dc addr fe80::3cf4:15ff:fe11:e541/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth336680ea 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth336680ea addr fe80::94b1:8bff:fe7e:488c/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vetha0a907b7 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vetha0a907b7 addr fe80::3855:a6ff:fe73:46c3/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf35a4398 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf35a4398 addr fe80::40ef:2fff:fe57:4c4d/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf831b7f4 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf831b7f4 addr fe80::f0d9:89ff:fe7c:1d32/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vxlan_sys_4789 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vxlan_sys_4789 addr fe80::80c1:82ff:fe4b:f078/64 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Timer (start timer expire). 
2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] BGP_Start (Idle->Connect), fd -1 2022/05/17 09:57:26.094 BGP: Allocated bnc 10.0.0.1/32(0)(VRF default) peer 0x7f807f7631a0 2022/05/17 09:57:26.094 BGP: sendmsg_zebra_rnh: sending cmd ZEBRA_NEXTHOP_REGISTER for 10.0.0.1/32 (vrf VRF default) 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Waiting for NHT 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Connect established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Idle to Connect 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] TCP_connection_open_failed (Connect->Active), fd -1 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Active established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Connect to Active 2022/05/17 09:57:26.094 ZEBRA: rnh_register msg from client bgp: hdr->length=8, type=nexthop vrf=0 2022/05/17 09:57:26.094 ZEBRA: 0: Add RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: Evaluate RNH, type Nexthop (force) 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: NH has become unresolved 2022/05/17 09:57:26.094 ZEBRA: 0: Client bgp registers for RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 BGP: VRF default(0): Rcvd NH update 10.0.0.1/32(0) - metric 0/0 #nhops 0/0 flags 0x6 2022/05/17 09:57:26.094 BGP: NH update for 10.0.0.1/32(0)(VRF default) - flags 0x6 chgflags 0x0 - evaluate paths 2022/05/17 09:57:26.094 BGP: evaluate_paths: Updating peer (10.0.0.1(VRF default)) status with NHT 2022/05/17 09:57:30.081 ZEBRA: Event driven route-map update triggered 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-out 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-in 2022/05/17 09:57:31.104 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.104 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring 2022/05/17 09:57:31.105 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.105 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show running-config\"",
"Building configuration Current configuration: ! frr version 7.5.1_git frr defaults traditional hostname some-hostname log file /etc/frr/frr.log informational log timestamp precision 3 service integrated-vtysh-config ! router bgp 64500 1 bgp router-id 10.0.1.2 no bgp ebgp-requires-policy no bgp default ipv4-unicast no bgp network import-check neighbor 10.0.2.3 remote-as 64500 2 neighbor 10.0.2.3 bfd profile doc-example-bfd-profile-full 3 neighbor 10.0.2.3 timers 5 15 neighbor 10.0.2.4 remote-as 64500 neighbor 10.0.2.4 bfd profile doc-example-bfd-profile-full neighbor 10.0.2.4 timers 5 15 ! address-family ipv4 unicast network 203.0.113.200/30 4 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! address-family ipv6 unicast network fc00:f853:ccd:e799::/124 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! route-map 10.0.2.3-in deny 20 ! route-map 10.0.2.4-in deny 20 ! ip nht resolve-via-default ! ipv6 nht resolve-via-default ! line vty ! bfd profile doc-example-bfd-profile-full transmit-interval 35 receive-interval 35 passive-mode echo-mode echo-interval 35 minimum-ttl 10 ! ! end",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp summary\"",
"IPv4 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 0 1 1 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 2 Total number of neighbors 2 IPv6 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 NoNeg 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 Total number of neighbors 2",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp ipv4 unicast 203.0.113.200/30\"",
"BGP routing table entry for 203.0.113.200/30 Paths: (1 available, best #1, table default) Advertised to non peer-group peers: 10.0.2.3 1 Local 0.0.0.0 from 0.0.0.0 (10.0.1.2) Origin IGP, metric 0, weight 32768, valid, sourced, local, best (First path received) Last update: Mon Jan 10 19:49:07 2022",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bfd peers brief\"",
"Session count: 2 SessionId LocalAddress PeerAddress Status ========= ============ =========== ====== 3909139637 10.0.1.2 10.0.2.3 up <.>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/networking/load-balancing-with-metallb |
Metadata APIs | Metadata APIs OpenShift Container Platform 4.15 Reference guide for metadata APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/metadata_apis/index |
Chapter 13. AWS S3 Source | Chapter 13. AWS S3 Source Receive data from AWS S3. 13.1. Configuration Options The following table summarizes the configuration options available for the aws-s3-source Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key obtained from AWS string bucketNameOrArn * Bucket Name The S3 Bucket name or ARN string region * AWS Region The AWS region to connect to string "eu-west-1" secretKey * Secret Key The secret key obtained from AWS string autoCreateBucket Autocreate Bucket Specifies whether to automatically create the S3 bucket bucketName. boolean false deleteAfterRead Auto-delete Objects Delete objects after consuming them. boolean true Note Fields marked with an asterisk (*) are mandatory. 13.2. Dependencies At runtime, the aws-s3-source Kamelet relies upon the presence of the following dependencies: camel:kamelet camel:aws2-s3 13.3. Usage This section describes how you can use the aws-s3-source . 13.3.1. Knative Source You can use the aws-s3-source Kamelet as a Knative source by binding it to a Knative object. aws-s3-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-source properties: accessKey: "The Access Key" bucketNameOrArn: "The Bucket Name" region: "eu-west-1" secretKey: "The Secret Key" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 13.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 13.3.1.2. Procedure for using the cluster CLI Save the aws-s3-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f aws-s3-source-binding.yaml 13.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind aws-s3-source -p "source.accessKey=The Access Key" -p "source.bucketNameOrArn=The Bucket Name" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 13.3.2. Kafka Source You can use the aws-s3-source Kamelet as a Kafka source by binding it to a Kafka topic. aws-s3-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-source properties: accessKey: "The Access Key" bucketNameOrArn: "The Bucket Name" region: "eu-west-1" secretKey: "The Secret Key" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 13.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 13.3.2.2. Procedure for using the cluster CLI Save the aws-s3-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f aws-s3-source-binding.yaml 13.3.2.3.
Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind aws-s3-source -p "source.accessKey=The Access Key" -p "source.bucketNameOrArn=The Bucket Name" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 13.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-s3-source.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-source properties: accessKey: \"The Access Key\" bucketNameOrArn: \"The Bucket Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f aws-s3-source-binding.yaml",
"kamel bind aws-s3-source -p \"source.accessKey=The Access Key\" -p \"source.bucketNameOrArn=The Bucket Name\" -p \"source.region=eu-west-1\" -p \"source.secretKey=The Secret Key\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-s3-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-s3-source properties: accessKey: \"The Access Key\" bucketNameOrArn: \"The Bucket Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f aws-s3-source-binding.yaml",
"kamel bind aws-s3-source -p \"source.accessKey=The Access Key\" -p \"source.bucketNameOrArn=The Bucket Name\" -p \"source.region=eu-west-1\" -p \"source.secretKey=The Secret Key\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/aws-s3-source |
2.2.9. Verifying Which Ports Are Listening | 2.2.9. Verifying Which Ports Are Listening Unnecessary open ports should be avoided because they increase the attack surface of your system. If, after the system has been in service, you find unexpected ports in the listening state, that might be a sign of intrusion and should be investigated. Issue the following command, as root, from the console to determine which ports are listening for connections from the network: Review the output of the command against the services needed on the system, turn off anything that is not specifically required or authorized, and then repeat the check. Proceed to make external checks using nmap from another system connected via the network to the first system. This can be used to verify the rules in iptables . Scan every IP address shown in the netstat output (except for the localhost 127.0.0.0 or ::1 range) from an external system. Use the -6 option for scanning an IPv6 address. See man nmap(1) for more information. The following is an example of the command to be issued from the console of another system to determine which ports are listening for TCP connections from the network: See the netstat (8) , nmap (1) , and services (5) manual pages for more information. | [
"~]# netstat -tanp | grep LISTEN tcp 0 0 0.0.0.0:45876 0.0.0.0:* LISTEN 1193/rpc.statd tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN 1241/dnsmasq tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1783/cupsd tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 7696/sendmail tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1167/rpcbind tcp 0 0 127.0.0.1:30003 0.0.0.0:* LISTEN 1118/tcsd tcp 0 0 :::631 :::* LISTEN 1/init tcp 0 0 :::35018 :::* LISTEN 1193/rpc.statd tcp 0 0 :::111 :::* LISTEN 1167/rpcbind",
"~]# nmap -sT -O 192.168.122.1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-server_security-verifying_which_ports_are_listening |
Chapter 5. Sending and receiving messages from a topic | Chapter 5. Sending and receiving messages from a topic Send messages to and receive messages from a Kafka cluster installed on OpenShift. This procedure describes how to use Kafka clients to produce and consume messages. You can deploy clients to OpenShift or connect local Kafka clients to the OpenShift cluster. You can use either or both options to test your Kafka cluster installation. For the local clients, you access the Kafka cluster using an OpenShift route connection. You will use the oc command-line tool to deploy and run the Kafka clients. Prerequisites You have created a Kafka cluster on OpenShift . For a local producer and consumer: You have created a route for external access to the Kafka cluster running in OpenShift . You can access the latest Kafka client binaries from the Streams for Apache Kafka software downloads page . Sending and receiving messages from Kafka clients deployed to the OpenShift cluster Deploy producer and consumer clients to the OpenShift cluster. You can then use the clients to send and receive messages from the Kafka cluster in the same namespace. The deployment uses the Streams for Apache Kafka container image for running Kafka. Use the oc command-line interface to deploy a Kafka producer. This example deploys a Kafka producer that connects to the Kafka cluster my-cluster . A topic named my-topic is created. Deploying a Kafka producer to OpenShift oc run kafka-producer -ti \ --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 \ --rm=true \ --restart=Never \ -- bin/kafka-console-producer.sh \ --bootstrap-server my-cluster-kafka-bootstrap:9092 \ --topic my-topic Note If the connection fails, check that the Kafka cluster is running and the correct cluster name is specified as the bootstrap-server . From the command prompt, enter a number of messages. Navigate in the OpenShift web console to the Home > Projects page and select the amq-streams-kafka project you created. From the list of pods, click kafka-producer to view the producer pod details. Select the Logs page to check that the messages you entered are present. Use the oc command-line interface to deploy a Kafka consumer. Deploying a Kafka consumer to OpenShift oc run kafka-consumer -ti \ --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 \ --rm=true \ --restart=Never \ -- bin/kafka-console-consumer.sh \ --bootstrap-server my-cluster-kafka-bootstrap:9092 \ --topic my-topic \ --from-beginning The consumer consumed messages produced to my-topic . From the command prompt, confirm that you see the incoming messages in the consumer console. Navigate in the OpenShift web console to the Home > Projects page and select the amq-streams-kafka project you created. From the list of pods, click kafka-consumer to view the consumer pod details. Select the Logs page to check that the messages you consumed are present. Sending and receiving messages from Kafka clients running locally Use a command-line interface to run a Kafka producer and consumer on a local machine. Download and extract the Streams for Apache Kafka <version> binaries from the Streams for Apache Kafka software downloads page . Unzip the amq-streams- <version> -bin.zip file to any destination. Open a command-line interface, and start the Kafka console producer with the topic my-topic and the authentication properties for TLS. Add the properties that are required for accessing the Kafka broker with an OpenShift route . Use the hostname and port 443 for the OpenShift route you are using.
Use the password and a reference to the truststore you created for the broker certificate. Starting a local Kafka producer kafka-console-producer.sh \ --bootstrap-server my-cluster-kafka-listener1-bootstrap-amq-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 \ --producer-property security.protocol=SSL \ --producer-property ssl.truststore.password=password \ --producer-property ssl.truststore.location=client.truststore.jks \ --topic my-topic Type your message into the command-line interface where the producer is running. Press Enter to send the message. Open a new command-line interface tab or window, and start the Kafka console consumer to receive the messages. Use the same connection details as the producer. Starting a local Kafka consumer kafka-console-consumer.sh \ --bootstrap-server my-cluster-kafka-listener1-bootstrap-amq-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 \ --consumer-property security.protocol=SSL \ --consumer-property ssl.truststore.password=password \ --consumer-property ssl.truststore.location=client.truststore.jks \ --topic my-topic --from-beginning Confirm that you see the incoming messages in the consumer console. Press Ctrl+C to exit the Kafka console producer and consumer. | [
"run kafka-producer -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic",
"run kafka-consumer -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning",
"kafka-console-producer.sh --bootstrap-server my-cluster-kafka-listener1-bootstrap-amq-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 --producer-property security.protocol=SSL --producer-property ssl.truststore.password=password --producer-property ssl.truststore.location=client.truststore.jks --topic my-topic",
"kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-listener1-bootstrap-amq-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 --consumer-property security.protocol=SSL --consumer-property ssl.truststore.password=password --consumer-property ssl.truststore.location=client.truststore.jks --topic my-topic --from-beginning"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/getting_started_with_streams_for_apache_kafka_on_openshift/proc-using-amq-streams-str |
Chapter 1. Accessing Red Hat Satellite | Chapter 1. Accessing Red Hat Satellite After Red Hat Satellite has been installed and configured, use the Satellite web UI to log in to Satellite for further configuration. 1.1. Satellite web UI overview You can manage and monitor your Satellite infrastructure from a browser with the Satellite web UI. For example, you can use the following navigation features in the Satellite web UI: Navigation feature Description Organization dropdown Choose the organization you want to manage. Location dropdown Choose the location you want to manage. Monitor Provides summary dashboards and reports. Content Provides content management tools. This includes content views, activation keys, and lifecycle environments. Hosts Provides host inventory and provisioning configuration tools. Configure Provides general configuration tools and data, including host groups and Ansible content. Infrastructure Provides tools for configuring how Satellite interacts with the environment. Notifications Provides event notifications to keep administrators informed of important environment changes. Administer Provides advanced configuration for settings such as users, role-based access control (RBAC), and general settings. 1.2. Importing the Katello root CA certificate The first time you log in to Satellite, you might see a warning informing you that you are using the default self-signed certificate and you might not be able to connect this browser to Satellite until the root CA certificate is imported in the browser. Use the following procedure to locate the root CA certificate on Satellite and to import it into your browser. To use the CLI instead of the Satellite web UI, see CLI Procedure . Prerequisites Your Red Hat Satellite is installed and configured. Procedure Identify the fully qualified domain name of your Satellite Server: Access the pub directory on your Satellite Server using a web browser pointed to the fully qualified domain name: When you access Satellite for the first time, an untrusted connection warning displays in your web browser. Accept the self-signed certificate and add the Satellite URL as a security exception to override the settings. This procedure might differ depending on the browser being used. Ensure that the Satellite URL is valid before you accept the security exception. Select katello-server-ca.crt . Import the certificate into your browser as a certificate authority and trust it to identify websites. CLI procedure From the Satellite CLI, copy the katello-server-ca.crt file to the machine you use to access the Satellite web UI: In the browser, import the katello-server-ca.crt certificate as a certificate authority and trust it to identify websites. 1.3. Logging in to Satellite Use the web user interface to log in to Satellite for further configuration. Prerequisites Ensure that the Katello root CA certificate is installed in your browser. For more information, see Section 1.2, "Importing the Katello root CA certificate" . Procedure Access Satellite Server using a web browser pointed to the fully qualified domain name: Enter the user name and password created during the configuration process. If a user was not created during the configuration process, the default user name is admin . If you have problems logging in, you can reset the password. For more information, see Section 1.8, "Resetting the administrative user password" . 1.4.
Using Red Hat Identity Management credentials to log in to the Satellite Hammer CLI This section describes how to log in to your Satellite Hammer CLI with your Red Hat Identity Management (IdM) login and password. Prerequisites You have enrolled your Satellite Server into Red Hat Identity Management and configured it to use Red Hat Identity Management for authentication. More specifically, you have enabled access both to the Satellite web UI and the Satellite API. For more information, see Using Red Hat Identity Management in Installing Satellite Server in a connected network environment . The host on which you run this procedure is configured to use Red Hat Identity Management credentials to log users in to your Satellite Hammer CLI. For more information, see Configuring the Hammer CLI to Use Red Hat Identity Management User Authentication in Installing Satellite Server in a connected network environment . The host is a Red Hat Identity Management client. A Red Hat Identity Management server is running and reachable by the host. Procedure Obtain a Kerberos ticket-granting ticket (TGT) on behalf of a Satellite user: Warning If, when you were setting Red Hat Identity Management to be the authentication provider, you enabled access to both the Satellite API and the Satellite web UI, an attacker can now obtain an API session after the user receives the Kerberos TGT. The attack is possible even if the user did not previously enter the Satellite login credentials anywhere, for example in the browser. If automatic negotiate authentication is not enabled, use the TGT to authenticate to Hammer manually: Optional: Destroy all cached Kerberos tickets in the collection: You are still logged in, even after destroying the Kerberos ticket. Verification Use any hammer command to ensure that the system does not ask you to authenticate again: Note To log out of Hammer, enter: hammer auth logout . 1.5. Using Red Hat Identity Management credentials to log in to the Satellite web UI with a Firefox browser This section describes how to use the Firefox browser to log in to your Satellite web UI with your Red Hat Identity Management (IdM) login and password. Prerequisites You have enrolled your Satellite Server into Red Hat Identity Management and configured the server to use Red Hat Identity Management for authentication. For more information, see Using Red Hat Identity Management in Installing Satellite Server in a connected network environment . The host on which you are using a Firefox browser to log in to the Satellite web UI is a Red Hat Identity Management client. You have a valid Red Hat Identity Management login and password. Red Hat recommends using the latest stable Firefox browser. Your Firefox browser is configured for Single Sign-On (SSO). For more information, see Configuring Firefox to use Kerberos for single sign-on in Configuring authentication and authorization in Red Hat Enterprise Linux . A Red Hat Identity Management server is running and reachable by the host. Procedure Obtain the Kerberos ticket-granting ticket (TGT) for yourself using your Red Hat Identity Management credentials: In your browser address bar, enter the URL of your Satellite Server. You are logged in automatically. Note Alternatively, you can skip the first two steps and enter your login and password in the fields displayed on the Satellite web UI. This is also the only option if the host from which you are accessing the Satellite web UI is not a Red Hat Identity Management client. 1.6.
Using Red Hat Identity Management credentials to log in to the Satellite web UI with a Chrome browser This section describes how to use a Chrome browser to log in to your Satellite web UI with your Red Hat Identity Management login and password. Prerequisites You have enrolled your Satellite Server into Red Hat Identity Management and configured the server to use Red Hat Identity Management for authentication. For more information, see Using Red Hat Identity Management in Installing Satellite Server in a connected network environment . The host on which you are using the Chrome browser to log in to the Satellite web UI is a Red Hat Identity Management client. You have a valid Red Hat Identity Management login and password. Red Hat recommends using the latest stable Chrome browser. A Red Hat Identity Management server is running and reachable by the host. Procedure Enable the Chrome browser to use Kerberos authentication: Note Instead of allowlisting the whole domain, you can also allowlist a specific Satellite Server. Obtain the Kerberos ticket-granting ticket (TGT) for yourself using your Red Hat Identity Management credentials: In your browser address bar, enter the URL of your Satellite Server. You are logged in automatically. Note Alternatively, you can skip the first three steps and enter your login and password in the fields displayed on the Satellite web UI. This is also the only option if the host from which you are accessing the Satellite web UI is not a Red Hat Identity Management client. 1.7. Changing the password These steps show how to change your password. Procedure In the Satellite web UI, click your user name at the top right corner. Select My Account from the menu. In the Current Password field, enter the current password. In the Password field, enter a new password. In the Verify field, enter the new password again. Click Submit to save your new password. 1.8. Resetting the administrative user password Use the following procedures to reset the administrative password to randomly generated characters or to set a new administrative password. To reset the administrative user password Log in to the base operating system where Satellite Server is installed. Enter the following command to reset the password: Use this password to reset the password in the Satellite web UI. Edit the ~/.hammer/cli.modules.d/foreman.yml file on Satellite Server to add the new password: Unless you update the ~/.hammer/cli.modules.d/foreman.yml file, you cannot use the new password with Hammer CLI. To set a new administrative user password Log in to the base operating system where Satellite Server is installed. To set the password, enter the following command: Edit the ~/.hammer/cli.modules.d/foreman.yml file on Satellite Server to add the new password: Unless you update the ~/.hammer/cli.modules.d/foreman.yml file, you cannot use the new password with Hammer CLI. 1.9. Setting a custom message on the Login page Procedure In the Satellite web UI, navigate to Administer > Settings , and click the General tab. Click the edit button next to Login page footer text , and enter the desired text to be displayed on the login page. For example, this text may be a warning message required by your company. Click Save . Log out of the Satellite web UI and verify that the custom text is now displayed on the login page below the Satellite version number. | [
"hostname -f",
"https:// satellite.example.com /pub",
"scp /var/www/html/pub/katello-server-ca.crt username@hostname:remotefile",
"https:// satellite.example.com /",
"kinit idm_user",
"hammer auth login negotiate",
"kdestroy -A",
"hammer host list",
"kinit idm_user Password for idm_user@ EXAMPLE.COM :",
"google-chrome --auth-server-whitelist=\"*. example.com \" --auth-negotiate-delegate-whitelist=\"*. example.com \"",
"kinit idm_user Password for idm_user@_EXAMPLE.COM :",
"foreman-rake permissions:reset Reset to user: admin, password: qwJxBptxb7Gfcjj5",
"vi ~/.hammer/cli.modules.d/foreman.yml",
"foreman-rake permissions:reset password= new_password",
"vi ~/.hammer/cli.modules.d/foreman.yml"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/Accessing_Server_admin |
Chapter 13. Configuring seccomp profiles | Chapter 13. Configuring seccomp profiles An OpenShift Container Platform container or a pod runs a single application that performs one or more well-defined tasks. The application usually requires only a small subset of the underlying operating system kernel APIs. Secure computing mode, seccomp, is a Linux kernel feature that can be used to limit the process running in a container to only using a subset of the available system calls. The restricted-v2 SCC applies to all newly created pods in 4.15. The default seccomp profile runtime/default is applied to these pods. Seccomp profiles are stored as JSON files on the disk. Important Seccomp profiles cannot be applied to privileged containers. 13.1. Verifying the default seccomp profile applied to a pod OpenShift Container Platform ships with a default seccomp profile that is referenced as runtime/default . In 4.15, newly created pods have the Security Context Constraint (SCC) set to restricted-v2 and the default seccomp profile applies to the pod. Procedure You can verify the Security Context Constraint (SCC) and the default seccomp profile set on a pod by running the following commands: Verify what pods are running in the namespace: USD oc get pods -n <namespace> For example, to verify what pods are running in the workshop namespace, run the following: USD oc get pods -n workshop Example output NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s parksmap-1-deploy 0/1 Completed 0 2m22s Inspect the pods: USD oc get pod parksmap-1-4xkwf -n workshop -o yaml Example output apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.131.0.18" ], "default": true, "dns": {} }] k8s.v1.cni.cncf.io/network-status: |- [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.131.0.18" ], "default": true, "dns": {} }] openshift.io/deployment-config.latest-version: "1" openshift.io/deployment-config.name: parksmap openshift.io/deployment.name: parksmap-1 openshift.io/generated-by: OpenShiftWebConsole openshift.io/scc: restricted-v2 1 seccomp.security.alpha.kubernetes.io/pod: runtime/default 2 1 The restricted-v2 SCC is added by default if your workload does not have access to a different SCC. 2 Newly created pods in 4.15 will have the seccomp profile configured to runtime/default as mandated by the SCC. 13.1.1. Upgraded cluster In clusters upgraded to 4.15, all authenticated users have access to the restricted and restricted-v2 SCCs. A workload admitted by the restricted SCC, for example on an OpenShift Container Platform v4.10 cluster, may be admitted by restricted-v2 after the upgrade. This is because restricted-v2 is the more restrictive SCC between restricted and restricted-v2 . Note The workload must be able to run with restricted-v2 . Conversely, a workload that requires privilegeEscalation: true will continue to have the restricted SCC available for any authenticated user. This is because restricted-v2 does not allow privilegeEscalation . 13.1.2. Newly installed cluster For newly installed OpenShift Container Platform 4.11 or later clusters, the restricted-v2 replaces the restricted SCC as an SCC that is available to be used by any authenticated user. A workload with privilegeEscalation: true is not admitted into the cluster since restricted-v2 is the only SCC available for authenticated users by default. The feature privilegeEscalation is allowed by restricted but not by restricted-v2 .
More features are denied by restricted-v2 than were allowed by the restricted SCC. A workload with privilegeEscalation: true can be admitted into a newly installed OpenShift Container Platform 4.11 or later cluster only if you grant the workload access to an SCC that allows it. To give access to the restricted SCC to the ServiceAccount running the workload (or any other SCC that can admit this workload) using a RoleBinding, run the following command: USD oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name> In OpenShift Container Platform 4.15, the ability to add the pod annotations seccomp.security.alpha.kubernetes.io/pod: runtime/default and container.seccomp.security.alpha.kubernetes.io/<container_name>: runtime/default is deprecated. 13.2. Configuring a custom seccomp profile You can configure a custom seccomp profile, which allows you to update the filters based on the application requirements. This allows cluster administrators to have greater control over the security of workloads running in OpenShift Container Platform. Seccomp security profiles list the system calls (syscalls) a process can make. Permissions are broader than those of SELinux, which restricts operations, such as write , system-wide. 13.2.1. Creating seccomp profiles You can use the MachineConfig object to create profiles. Seccomp can restrict system calls (syscalls) within a container, limiting the access of your application. Prerequisites You have cluster admin permissions. You have created a custom security context constraint (SCC). For more information, see Additional resources . Procedure Create the MachineConfig object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: custom-seccomp spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<hash> filesystem: root mode: 0644 path: /var/lib/kubelet/seccomp/seccomp-nostat.json 13.2.2. Setting up the custom seccomp profile Prerequisite You have cluster administrator permissions. You have created a custom security context constraint (SCC). For more information, see "Additional resources". You have created a custom seccomp profile. Procedure Upload your custom seccomp profile to /var/lib/kubelet/seccomp/<custom-name>.json by using the Machine Config. See "Additional resources" for detailed steps. Update the custom SCC by providing a reference to the created custom seccomp profile: seccompProfiles: - localhost/<custom-name>.json 1 1 Provide the name of your custom seccomp profile. 13.2.3. Applying the custom seccomp profile to the workload Prerequisite The cluster administrator has set up the custom seccomp profile. For more details, see "Setting up the custom seccomp profile". Procedure Apply the seccomp profile to the workload by setting the securityContext.seccompProfile.type field as follows: Example spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1 1 Provide the name of your custom seccomp profile. Alternatively, you can use the pod annotations seccomp.security.alpha.kubernetes.io/pod: localhost/<custom-name>.json . However, this method is deprecated in OpenShift Container Platform 4.15. During deployment, the admission controller validates the following: The annotations against the current SCCs allowed by the user role. The SCC, which includes the seccomp profile, is allowed for the pod. If the SCC is allowed for the pod, the kubelet runs the pod with the specified seccomp profile.
Important Ensure that the seccomp profile is deployed to all worker nodes. Note The custom SCC must have the appropriate priority to be automatically assigned to the pod or meet other conditions required by the pod, such as allowing CAP_NET_ADMIN. 13.3. Additional resources Managing security context constraints Postinstallation machine configuration tasks | [
"oc get pods -n <namespace>",
"oc get pods -n workshop",
"NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s parksmap-1-deploy 0/1 Completed 0 2m22s",
"oc get pod parksmap-1-4xkwf -n workshop -o yaml",
"apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] openshift.io/deployment-config.latest-version: \"1\" openshift.io/deployment-config.name: parksmap openshift.io/deployment.name: parksmap-1 openshift.io/generated-by: OpenShiftWebConsole openshift.io/scc: restricted-v2 1 seccomp.security.alpha.kubernetes.io/pod: runtime/default 2",
"oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: custom-seccomp spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<hash> filesystem: root mode: 0644 path: /var/lib/kubelet/seccomp/seccomp-nostat.json",
"seccompProfiles: - localhost/<custom-name>.json 1",
"spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_and_compliance/seccomp-profiles |
Chapter 10. GenericSecretSource schema reference | Chapter 10. GenericSecretSource schema reference Used in: KafkaClientAuthenticationOAuth , KafkaListenerAuthenticationCustom , KafkaListenerAuthenticationOAuth Property Property type Description key string The key under which the secret value is stored in the OpenShift Secret. secretName string The name of the OpenShift Secret containing the secret value. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-GenericSecretSource-reference |
6.3. Resizing a btrfs File System | 6.3. Resizing a btrfs File System It is not possible to resize a btrfs file system, but it is possible to resize each of the devices it uses. If there is only one device in use, then this works the same as resizing the file system. If there are multiple devices in use, then they must be manually resized to achieve the desired result. Note The unit size is not case-specific; it accepts both G and g for GiB. The command does not accept t for terabytes or p for petabytes. It only accepts k , m , and g . Enlarging a btrfs File System To enlarge the file system on a single device, use the command: For example: To enlarge a multi-device file system, the device to be enlarged must be specified. First, show all devices that have a btrfs file system at a specified mount point: For example: Then, after identifying the devid of the device to be enlarged, use the following command: For example: Note The amount can also be max instead of a specified amount. This will use all remaining free space on the device. Shrinking a btrfs File System To shrink the file system on a single device, use the command: For example: To shrink a multi-device file system, the device to be shrunk must be specified. First, show all devices that have a btrfs file system at a specified mount point: For example: Then, after identifying the devid of the device to be shrunk, use the following command: For example: Set the File System Size To set the file system to a specific size on a single device, use the command: For example: To set the file system size of a multi-device file system, the device to be changed must be specified. First, show all devices that have a btrfs file system at the specified mount point: For example: Then, after identifying the devid of the device to be changed, use the following command: For example: | [
"btrfs filesystem resize amount / mount-point",
"btrfs filesystem resize +200M /btrfssingle Resize '/btrfssingle' of '+200M'",
"btrfs filesystem show /mount-point",
"btrfs filesystem show /btrfstest Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39 Total devices 4 FS bytes used 192.00KiB devid 1 size 1.00GiB used 224.75MiB path /dev/vdc devid 2 size 524.00MiB used 204.75MiB path /dev/vdd devid 3 size 1.00GiB used 8.00MiB path /dev/vde devid 4 size 1.00GiB used 8.00MiB path /dev/vdf Btrfs v3.16.2",
"btrfs filesystem resize devid : amount /mount-point",
"btrfs filesystem resize 2:+200M /btrfstest Resize '/btrfstest/' of '2:+200M'",
"btrfs filesystem resize amount / mount-point",
"btrfs filesystem resize -200M /btrfssingle Resize '/btrfssingle' of '-200M'",
"btrfs filesystem show /mount-point",
"btrfs filesystem show /btrfstest Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39 Total devices 4 FS bytes used 192.00KiB devid 1 size 1.00GiB used 224.75MiB path /dev/vdc devid 2 size 524.00MiB used 204.75MiB path /dev/vdd devid 3 size 1.00GiB used 8.00MiB path /dev/vde devid 4 size 1.00GiB used 8.00MiB path /dev/vdf Btrfs v3.16.2",
"btrfs filesystem resize devid : amount /mount-point",
"btrfs filesystem resize 2:-200M /btrfstest Resize '/btrfstest' of '2:-200M'",
"btrfs filesystem resize amount / mount-point",
"btrfs filesystem resize 700M /btrfssingle Resize '/btrfssingle' of '700M'",
"btrfs filesystem show / mount-point",
"btrfs filesystem show /btrfstest Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39 Total devices 4 FS bytes used 192.00KiB devid 1 size 1.00GiB used 224.75MiB path /dev/vdc devid 2 size 724.00MiB used 204.75MiB path /dev/vdd devid 3 size 1.00GiB used 8.00MiB path /dev/vde devid 4 size 1.00GiB used 8.00MiB path /dev/vdf Btrfs v3.16.2",
"btrfs filesystem resize devid : amount /mount-point",
"btrfs filesystem resize 2:300M /btrfstest Resize '/btrfstest' of '2:300M'"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/resizing-btrfs |
Chapter 8. Config map reference for the Cluster Monitoring Operator | Chapter 8. Config map reference for the Cluster Monitoring Operator 8.1. Cluster Monitoring Operator configuration reference Parts of OpenShift Container Platform cluster monitoring are configurable. The API is accessible by setting parameters defined in various config maps. To configure monitoring components, edit the ConfigMap object named cluster-monitoring-config in the openshift-monitoring namespace. These configurations are defined by ClusterMonitoringConfiguration . To configure monitoring components that monitor user-defined projects, edit the ConfigMap object named user-workload-monitoring-config in the openshift-user-workload-monitoring namespace. These configurations are defined by UserWorkloadConfiguration . The configuration file is always defined under the config.yaml key in the config map data. Important Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in this reference are supported for configuration. For more information about supported configurations, see Maintenance and support for monitoring Configuring cluster monitoring is optional. If a configuration does not exist or is empty, default values are used. If the configuration has invalid YAML data, or if it contains unsupported or duplicated fields that bypassed early validation, the Cluster Monitoring Operator stops reconciling the resources and reports the Degraded=True status in the status conditions of the Operator. 8.2. AdditionalAlertmanagerConfig 8.2.1. Description The AdditionalAlertmanagerConfig resource defines settings for how a component communicates with additional Alertmanager instances. 8.2.2. Required apiVersion Appears in: PrometheusK8sConfig , PrometheusRestrictedConfig , ThanosRulerConfig Property Type Description apiVersion string Defines the API version of Alertmanager. Possible values are v1 or v2 . The default is v2 . bearerToken *v1.SecretKeySelector Defines the secret key reference containing the bearer token to use when authenticating to Alertmanager. pathPrefix string Defines the path prefix to add in front of the push endpoint path. scheme string Defines the URL scheme to use when communicating with Alertmanager instances. Possible values are http or https . The default value is http . staticConfigs []string A list of statically configured Alertmanager endpoints in the form of <hosts>:<port> . timeout *string Defines the timeout value used when sending alerts. tlsConfig TLSConfig Defines the TLS settings to use for Alertmanager connections. 8.3. AlertmanagerMainConfig 8.3.1. Description The AlertmanagerMainConfig resource defines settings for the Alertmanager component in the openshift-monitoring namespace. Appears in: ClusterMonitoringConfiguration Property Type Description enabled *bool A Boolean flag that enables or disables the main Alertmanager instance in the openshift-monitoring namespace. The default value is true . enableUserAlertmanagerConfig bool A Boolean flag that enables or disables user-defined namespaces to be selected for AlertmanagerConfig lookups. This setting only applies if the user workload monitoring instance of Alertmanager is not enabled. The default value is false . logLevel string Defines the log level setting for Alertmanager. The possible values are: error , warn , info , debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the Pods are scheduled. 
resources *v1.ResourceRequirements Defines resource requests and limits for the Alertmanager container. secrets []string Defines a list of secrets to be mounted into Alertmanager. The secrets must reside within the same namespace as the Alertmanager object. They are added as volumes named secret-<secret-name> and mounted at /etc/alertmanager/secrets/<secret-name> in the alertmanager container of the Alertmanager pods. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines a pod's topology spread constraints. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Alertmanager. Use this setting to configure the persistent volume claim, including storage class, volume size, and name. 8.4. AlertmanagerUserWorkloadConfig 8.4.1. Description The AlertmanagerUserWorkloadConfig resource defines the settings for the Alertmanager instance used for user-defined projects. Appears in: UserWorkloadConfiguration Property Type Description enabled bool A Boolean flag that enables or disables a dedicated instance of Alertmanager for user-defined alerts in the openshift-user-workload-monitoring namespace. The default value is false . enableAlertmanagerConfig bool A Boolean flag to enable or disable user-defined namespaces to be selected for AlertmanagerConfig lookup. The default value is false . logLevel string Defines the log level setting for Alertmanager for user workload monitoring. The possible values are error , warn , info , and debug . The default value is info . resources *v1.ResourceRequirements Defines resource requests and limits for the Alertmanager container. secrets []string Defines a list of secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object. They are added as volumes named secret-<secret-name> and mounted at /etc/alertmanager/secrets/<secret-name> in the alertmanager container of the Alertmanager pods. nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines a pod's topology spread constraints. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Alertmanager. Use this setting to configure the persistent volume claim, including storage class, volume size and name. 8.5. ClusterMonitoringConfiguration 8.5.1. Description The ClusterMonitoringConfiguration resource defines settings that customize the default platform monitoring stack through the cluster-monitoring-config config map in the openshift-monitoring namespace. Property Type Description alertmanagerMain * AlertmanagerMainConfig AlertmanagerMainConfig defines settings for the Alertmanager component in the openshift-monitoring namespace. enableUserWorkload *bool UserWorkloadEnabled is a Boolean flag that enables monitoring for user-defined projects. userWorkload * UserWorkloadConfig UserWorkload defines settings for the monitoring of user-defined projects. kubeStateMetrics * KubeStateMetricsConfig KubeStateMetricsConfig defines settings for the kube-state-metrics agent. metricsServer * MetricsServerConfig MetricsServer defines settings for the Metrics Server component. prometheusK8s * PrometheusK8sConfig PrometheusK8sConfig defines settings for the Prometheus component. 
prometheusOperator * PrometheusOperatorConfig PrometheusOperatorConfig defines settings for the Prometheus Operator component. prometheusOperatorAdmissionWebhook * PrometheusOperatorAdmissionWebhookConfig PrometheusOperatorAdmissionWebhookConfig defines settings for the admission webhook component of Prometheus Operator. openshiftStateMetrics * OpenShiftStateMetricsConfig OpenShiftStateMetricsConfig defines settings for the openshift-state-metrics agent. telemeterClient * TelemeterClientConfig TelemeterClientConfig defines settings for the Telemeter Client component. thanosQuerier * ThanosQuerierConfig ThanosQuerierConfig defines settings for the Thanos Querier component. nodeExporter NodeExporterConfig NodeExporterConfig defines settings for the node-exporter agent. monitoringPlugin * MonitoringPluginConfig MonitoringPluginConfig defines settings for the monitoring console-plugin component. 8.6. KubeStateMetricsConfig 8.6.1. Description The KubeStateMetricsConfig resource defines settings for the kube-state-metrics agent. Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the KubeStateMetrics container. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines a pod's topology spread constraints. 8.7. MetricsServerConfig 8.7.1. Description The MetricsServerConfig resource defines settings for the Metrics Server component. Appears in: ClusterMonitoringConfiguration Property Type Description audit *Audit Defines the audit configuration used by the Metrics Server instance. Possible profile values are Metadata , Request , RequestResponse , and None . The default value is Metadata . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. resources *v1.ResourceRequirements Defines resource requests and limits for the Metrics Server container. topologySpreadConstraints []v1.TopologySpreadConstraint Defines a pod's topology spread constraints. 8.8. MonitoringPluginConfig 8.8.1. Description The MonitoringPluginConfig resource defines settings for the web console plugin component in the openshift-monitoring namespace. Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the console-plugin container. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines a pod's topology spread constraints. 8.9. NodeExporterCollectorBuddyInfoConfig 8.9.1. Description The NodeExporterCollectorBuddyInfoConfig resource works as an on/off switch for the buddyinfo collector of the node-exporter agent. By default, the buddyinfo collector is disabled. Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the buddyinfo collector. 8.10. NodeExporterCollectorConfig 8.10.1. Description The NodeExporterCollectorConfig resource defines settings for individual collectors of the node-exporter agent. Appears in: NodeExporterConfig Property Type Description cpufreq NodeExporterCollectorCpufreqConfig Defines the configuration of the cpufreq collector, which collects CPU frequency statistics. 
Disabled by default. tcpstat NodeExporterCollectorTcpStatConfig Defines the configuration of the tcpstat collector, which collects TCP connection statistics. Disabled by default. netdev NodeExporterCollectorNetDevConfig Defines the configuration of the netdev collector, which collects network device statistics. Enabled by default. netclass NodeExporterCollectorNetClassConfig Defines the configuration of the netclass collector, which collects information about network devices. Enabled by default. buddyinfo NodeExporterCollectorBuddyInfoConfig Defines the configuration of the buddyinfo collector, which collects statistics about memory fragmentation from the node_buddyinfo_blocks metric. This metric collects data from /proc/buddyinfo . Disabled by default. mountstats NodeExporterCollectorMountStatsConfig Defines the configuration of the mountstats collector, which collects statistics about NFS volume I/O activities. Disabled by default. ksmd NodeExporterCollectorKSMDConfig Defines the configuration of the ksmd collector, which collects statistics from the kernel same-page merger daemon. Disabled by default. processes NodeExporterCollectorProcessesConfig Defines the configuration of the processes collector, which collects statistics from processes and threads running in the system. Disabled by default. systemd NodeExporterCollectorSystemdConfig Defines the configuration of the systemd collector, which collects statistics on the systemd daemon and its managed services. Disabled by default. 8.11. NodeExporterCollectorCpufreqConfig 8.11.1. Description Use the NodeExporterCollectorCpufreqConfig resource to enable or disable the cpufreq collector of the node-exporter agent. By default, the cpufreq collector is disabled. Under certain circumstances, enabling the cpufreq collector increases CPU usage on machines with many cores. If you enable this collector and have machines with many cores, monitor your systems closely for excessive CPU usage. Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the cpufreq collector. 8.12. NodeExporterCollectorKSMDConfig 8.12.1. Description Use the NodeExporterCollectorKSMDConfig resource to enable or disable the ksmd collector of the node-exporter agent. By default, the ksmd collector is disabled. Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the ksmd collector. 8.13. NodeExporterCollectorMountStatsConfig 8.13.1. Description Use the NodeExporterCollectorMountStatsConfig resource to enable or disable the mountstats collector of the node-exporter agent. By default, the mountstats collector is disabled. If you enable the collector, the following metrics become available: node_mountstats_nfs_read_bytes_total , node_mountstats_nfs_write_bytes_total , and node_mountstats_nfs_operations_requests_total . Be aware that these metrics can have a high cardinality. If you enable this collector, closely monitor any increases in memory usage for the prometheus-k8s pods. Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the mountstats collector. 8.14. NodeExporterCollectorNetClassConfig 8.14.1. Description Use the NodeExporterCollectorNetClassConfig resource to enable or disable the netclass collector of the node-exporter agent. By default, the netclass collector is enabled. 
If you disable this collector, these metrics become unavailable: node_network_info , node_network_address_assign_type , node_network_carrier , node_network_carrier_changes_total , node_network_carrier_up_changes_total , node_network_carrier_down_changes_total , node_network_device_id , node_network_dormant , node_network_flags , node_network_iface_id , node_network_iface_link , node_network_iface_link_mode , node_network_mtu_bytes , node_network_name_assign_type , node_network_net_dev_group , node_network_speed_bytes , node_network_transmit_queue_length , and node_network_protocol_type . Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the netclass collector. useNetlink bool A Boolean flag that activates the netlink implementation of the netclass collector. The default value is true , which activates the netlink mode. This implementation improves the performance of the netclass collector. 8.15. NodeExporterCollectorNetDevConfig 8.15.1. Description Use the NodeExporterCollectorNetDevConfig resource to enable or disable the netdev collector of the node-exporter agent. By default, the netdev collector is enabled. If disabled, these metrics become unavailable: node_network_receive_bytes_total , node_network_receive_compressed_total , node_network_receive_drop_total , node_network_receive_errs_total , node_network_receive_fifo_total , node_network_receive_frame_total , node_network_receive_multicast_total , node_network_receive_nohandler_total , node_network_receive_packets_total , node_network_transmit_bytes_total , node_network_transmit_carrier_total , node_network_transmit_colls_total , node_network_transmit_compressed_total , node_network_transmit_drop_total , node_network_transmit_errs_total , node_network_transmit_fifo_total , and node_network_transmit_packets_total . Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the netdev collector. 8.16. NodeExporterCollectorProcessesConfig 8.16.1. Description Use the NodeExporterCollectorProcessesConfig resource to enable or disable the processes collector of the node-exporter agent. If the collector is enabled, the following metrics become available: node_processes_max_processes , node_processes_pids , node_processes_state , node_processes_threads , node_processes_threads_state . The metrics node_processes_state and node_processes_threads_state can have up to five series each, depending on the state of the processes and threads. The possible states of a process or a thread are: D (UNINTERRUPTABLE_SLEEP), R (RUNNING & RUNNABLE), S (INTERRUPTABLE_SLEEP), T (STOPPED), or Z (ZOMBIE). By default, the processes collector is disabled. Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the processes collector. 8.17. NodeExporterCollectorSystemdConfig 8.17.1. Description Use the NodeExporterCollectorSystemdConfig resource to enable or disable the systemd collector of the node-exporter agent. By default, the systemd collector is disabled. If enabled, the following metrics become available: node_systemd_system_running , node_systemd_units , node_systemd_version . If the unit uses a socket, it also generates the following metrics: node_systemd_socket_accepted_connections_total , node_systemd_socket_current_connections , node_systemd_socket_refused_connections_total . 
You can use the units parameter to select the systemd units to be included by the systemd collector. The selected units are used to generate the node_systemd_unit_state metric, which shows the state of each systemd unit. However, this metric's cardinality might be high (at least five series per unit per node). If you enable this collector with a long list of selected units, closely monitor the prometheus-k8s deployment for excessive memory usage. Note that the node_systemd_timer_last_trigger_seconds metric is only shown if you have configured the value of the units parameter as logrotate.timer . Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the systemd collector. units []string A list of regular expression (regex) patterns that match systemd units to be included by the systemd collector. By default, the list is empty, so the collector exposes no metrics for systemd units. 8.18. NodeExporterCollectorTcpStatConfig 8.18.1. Description The NodeExporterCollectorTcpStatConfig resource works as an on/off switch for the tcpstat collector of the node-exporter agent. By default, the tcpstat collector is disabled. Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the tcpstat collector. 8.19. NodeExporterConfig 8.19.1. Description The NodeExporterConfig resource defines settings for the node-exporter agent. Appears in: ClusterMonitoringConfiguration Property Type Description collectors NodeExporterCollectorConfig Defines which collectors are enabled and their additional configuration parameters. maxProcs uint32 The target number of CPUs on which the node-exporter's process will run. The default value is 0 , which means that node-exporter runs on all CPUs. If a kernel deadlock occurs or if performance degrades when reading from sysfs concurrently, you can change this value to 1 , which limits node-exporter to running on one CPU. For nodes with a high CPU count, you can set the limit to a low number, which saves resources by preventing Go routines from being scheduled to run on all CPUs. However, I/O performance degrades if the maxProcs value is set too low and there are many metrics to collect. ignoredNetworkDevices *[]string A list of network devices, defined as regular expressions, that you want to exclude from the relevant collector configuration such as netdev and netclass . If no list is specified, the Cluster Monitoring Operator uses a predefined list of devices to be excluded to minimize the impact on memory usage. If the list is empty, no devices are excluded. If you modify this setting, monitor the prometheus-k8s deployment closely for excessive memory usage. resources *v1.ResourceRequirements Defines resource requests and limits for the NodeExporter container. 8.20. OpenShiftStateMetricsConfig 8.20.1. Description The OpenShiftStateMetricsConfig resource defines settings for the openshift-state-metrics agent. Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the OpenShiftStateMetrics container. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. 8.21. PrometheusK8sConfig 8.21.1. Description The PrometheusK8sConfig resource defines settings for the Prometheus component. 
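For orientation, the following is a minimal sketch of how PrometheusK8sConfig settings are typically expressed under the config.yaml key of the cluster-monitoring-config config map; the storage class name, label value, and sizes are illustrative assumptions, not recommendations:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 15d                 # duration pattern [0-9]+(ms|s|m|h|d|w|y)
      retentionSize: 50GiB           # caps data blocks plus the WAL
      externalLabels:
        cluster: example-cluster     # hypothetical label value
      volumeClaimTemplate:
        spec:
          storageClassName: fast-ssd # hypothetical storage class name
          resources:
            requests:
              storage: 100Gi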
Appears in: ClusterMonitoringConfiguration Property Type Description additionalAlertmanagerConfigs [] AdditionalAlertmanagerConfig Configures additional Alertmanager instances that receive alerts from the Prometheus component. By default, no additional Alertmanager instances are configured. enforcedBodySizeLimit string Enforces a body size limit for Prometheus scraped metrics. If a scraped target's body response is larger than the limit, the scrape will fail. The following values are valid: an empty value to specify no limit, a numeric value in Prometheus size format (such as 64MB ), or the string automatic , which indicates that the limit will be automatically calculated based on cluster capacity. The default value is empty, which indicates no limit. externalLabels map[string]string Defines labels to be added to any time series or alerts when communicating with external systems such as federation, remote storage, and Alertmanager. By default, no labels are added. logLevel string Defines the log level setting for Prometheus. The possible values are: error , warn , info , and debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. queryLogFile string Specifies the file to which PromQL queries are logged. This setting can be either a filename, in which case the queries are saved to an emptyDir volume at /var/log/prometheus , or a full path to a location where an emptyDir volume will be mounted and the queries saved. Writing to /dev/stderr , /dev/stdout or /dev/null is supported, but writing to any other /dev/ path is not supported. Relative paths are also not supported. By default, PromQL queries are not logged. remoteWrite [] RemoteWriteSpec Defines the remote write configuration, including URL, authentication, and relabeling settings. resources *v1.ResourceRequirements Defines resource requests and limits for the Prometheus container. retention string Defines the duration for which Prometheus retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s = seconds, m = minutes, h = hours, d = days, w = weeks, y = years). The default value is 15d . retentionSize string Defines the maximum amount of disk space used by data blocks plus the write-ahead log (WAL). Supported values are B , KB , KiB , MB , MiB , GB , GiB , TB , TiB , PB , PiB , EB , and EiB . By default, no limit is defined. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. collectionProfile CollectionProfile Defines the metrics collection profile that Prometheus uses to collect metrics from the platform components. Supported values are full or minimal . In the full profile (default), Prometheus collects all metrics that are exposed by the platform components. In the minimal profile, Prometheus only collects metrics necessary for the default platform alerts, recording rules, telemetry, and console dashboards. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Prometheus. Use this setting to configure the persistent volume claim, including storage class, volume size and name. 8.22. PrometheusOperatorConfig 8.22.1. Description The PrometheusOperatorConfig resource defines settings for the Prometheus Operator component. 
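As a hedged illustration only, the same config.yaml data can carry PrometheusOperatorConfig settings; the infra node label and toleration below follow a common scheduling pattern and are assumptions, not requirements:

    prometheusOperator:
      logLevel: debug
      nodeSelector:
        node-role.kubernetes.io/infra: ""   # hypothetical placement label
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule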
Appears in: ClusterMonitoringConfiguration , UserWorkloadConfiguration Property Type Description logLevel string Defines the log level settings for Prometheus Operator. The possible values are error , warn , info , and debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the PrometheusOperator container. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. 8.23. PrometheusOperatorAdmissionWebhookConfig 8.23.1. Description The PrometheusOperatorAdmissionWebhookConfig resource defines settings for the admission webhook workload for Prometheus Operator. Appears in: ClusterMonitoringConfiguration Property Type Description resources *v1.ResourceRequirements Defines resource requests and limits for the prometheus-operator-admission-webhook container. topologySpreadConstraints []v1.TopologySpreadConstraint Defines a pod's topology spread constraints. 8.24. PrometheusRestrictedConfig 8.24.1. Description The PrometheusRestrictedConfig resource defines the settings for the Prometheus component that monitors user-defined projects. Appears in: UserWorkloadConfiguration Property Type Description scrapeInterval string Configures the default interval between consecutive scrapes in case the ServiceMonitor or PodMonitor resource does not specify any value. The interval must be set between 5 seconds and 5 minutes. The value can be expressed in: seconds (for example 30s ), minutes (for example 1m ) or a mix of minutes and seconds (for example 1m30s ). The default value is 30s . evaluationInterval string Configures the default interval between rule evaluations in case the PrometheusRule resource does not specify any value. The interval must be set between 5 seconds and 5 minutes. The value can be expressed in: seconds (for example 30s ), minutes (for example 1m ) or a mix of minutes and seconds (for example 1m30s ). It only applies to PrometheusRule resources with the openshift.io/prometheus-rule-evaluation-scope="leaf-prometheus" label. The default value is 30s . additionalAlertmanagerConfigs [] AdditionalAlertmanagerConfig Configures additional Alertmanager instances that receive alerts from the Prometheus component. By default, no additional Alertmanager instances are configured. enforcedLabelLimit *uint64 Specifies a per-scrape limit on the number of labels accepted for a sample. If the number of labels exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0 , which means that no limit is set. enforcedLabelNameLengthLimit *uint64 Specifies a per-scrape limit on the length of a label name for a sample. If the length of a label name exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0 , which means that no limit is set. enforcedLabelValueLengthLimit *uint64 Specifies a per-scrape limit on the length of a label value for a sample. If the length of a label value exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0 , which means that no limit is set. enforcedSampleLimit *uint64 Specifies a global limit on the number of scraped samples that will be accepted. 
This setting overrides the SampleLimit value set in any user-defined ServiceMonitor or PodMonitor object if the value is greater than enforcedSampleLimit . Administrators can use this setting to keep the overall number of samples under control. The default value is 0 , which means that no limit is set. enforcedTargetLimit *uint64 Specifies a global limit on the number of scraped targets. This setting overrides the TargetLimit value set in any user-defined ServiceMonitor or PodMonitor object if the value is greater than enforcedTargetLimit . Administrators can use this setting to keep the overall number of targets under control. The default value is 0 . externalLabels map[string]string Defines labels to be added to any time series or alerts when communicating with external systems such as federation, remote storage, and Alertmanager. By default, no labels are added. logLevel string Defines the log level setting for Prometheus. The possible values are error , warn , info , and debug . The default setting is info . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. queryLogFile string Specifies the file to which PromQL queries are logged. This setting can be either a filename, in which case the queries are saved to an emptyDir volume at /var/log/prometheus , or a full path to a location where an emptyDir volume will be mounted and the queries saved. Writing to /dev/stderr , /dev/stdout or /dev/null is supported, but writing to any other /dev/ path is not supported. Relative paths are also not supported. By default, PromQL queries are not logged. remoteWrite [] RemoteWriteSpec Defines the remote write configuration, including URL, authentication, and relabeling settings. resources *v1.ResourceRequirements Defines resource requests and limits for the Prometheus container. retention string Defines the duration for which Prometheus retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s = seconds, m = minutes, h = hours, d = days, w = weeks, y = years). The default value is 24h . retentionSize string Defines the maximum amount of disk space used by data blocks plus the write-ahead log (WAL). Supported values are B , KB , KiB , MB , MiB , GB , GiB , TB , TiB , PB , PiB , EB , and EiB . The default value is nil . tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Prometheus. Use this setting to configure the storage class and size of a volume. 8.25. RemoteWriteSpec 8.25.1. Description The RemoteWriteSpec resource defines the settings for remote write storage. 8.25.2. Required url Appears in: PrometheusK8sConfig , PrometheusRestrictedConfig Property Type Description authorization *monv1.SafeAuthorization Defines the authorization settings for remote write storage. basicAuth *monv1.BasicAuth Defines Basic authentication settings for the remote write endpoint URL. bearerTokenFile string Defines the file that contains the bearer token for the remote write endpoint. However, because you cannot mount secrets in a pod, in practice you can only reference the token of the service account. headers map[string]string Specifies the custom HTTP headers to be sent along with each remote write request. Headers set by Prometheus cannot be overwritten. 
metadataConfig *monv1.MetadataConfig Defines settings for sending series metadata to remote write storage. name string Defines the name of the remote write queue. This name is used in metrics and logging to differentiate queues. If specified, this name must be unique. oauth2 *monv1.OAuth2 Defines OAuth2 authentication settings for the remote write endpoint. proxyUrl string Defines an optional proxy URL. If the cluster-wide proxy is enabled, it replaces the proxyUrl setting. The cluster-wide proxy supports both HTTP and HTTPS proxies, with HTTPS taking precedence. queueConfig *monv1.QueueConfig Allows tuning configuration for remote write queue parameters. remoteTimeout string Defines the timeout value for requests to the remote write endpoint. sendExemplars *bool Enables sending exemplars via remote write. When enabled, this setting configures Prometheus to store a maximum of 100,000 exemplars in memory. This setting only applies to user-defined monitoring and is not applicable to core platform monitoring. sigv4 *monv1.Sigv4 Defines AWS Signature Version 4 authentication settings. tlsConfig *monv1.SafeTLSConfig Defines TLS authentication settings for the remote write endpoint. url string Defines the URL of the remote write endpoint to which samples will be sent. writeRelabelConfigs []monv1.RelabelConfig Defines the list of remote write relabel configurations. 8.26. TLSConfig 8.26.1. Description The TLSConfig resource configures the settings for TLS connections. 8.26.2. Required insecureSkipVerify Appears in: AdditionalAlertmanagerConfig Property Type Description ca *v1.SecretKeySelector Defines the secret key reference containing the Certificate Authority (CA) to use for the remote host. cert *v1.SecretKeySelector Defines the secret key reference containing the public certificate to use for the remote host. key *v1.SecretKeySelector Defines the secret key reference containing the private key to use for the remote host. serverName string Used to verify the hostname on the returned certificate. insecureSkipVerify bool When set to true , disables the verification of the remote host's certificate and name. 8.27. TelemeterClientConfig 8.27.1. Description TelemeterClientConfig defines settings for the Telemeter Client component. 8.27.2. Required nodeSelector tolerations Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the TelemeterClient container. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. 8.28. ThanosQuerierConfig 8.28.1. Description The ThanosQuerierConfig resource defines settings for the Thanos Querier component. Appears in: ClusterMonitoringConfiguration Property Type Description enableRequestLogging bool A Boolean flag that enables or disables request logging. The default value is false . logLevel string Defines the log level setting for Thanos Querier. The possible values are error , warn , info , and debug . The default value is info . enableCORS bool A Boolean flag that enables setting CORS headers. The headers allow access from any origin. The default value is false . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the Thanos Querier container. 
tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. 8.29. ThanosRulerConfig 8.29.1. Description The ThanosRulerConfig resource defines configuration for the Thanos Ruler instance for user-defined projects. Appears in: UserWorkloadConfiguration Property Type Description additionalAlertmanagerConfigs [] AdditionalAlertmanagerConfig Configures how the Thanos Ruler component communicates with additional Alertmanager instances. The default value is nil . evaluationInterval string Configures the default interval between Prometheus rule evaluations in case the PrometheusRule resource does not specify any value. The interval must be set between 5 seconds and 5 minutes. The value can be expressed in: seconds (for example 30s ), minutes (for example 1m ) or a mix of minutes and seconds (for example 1m30s ). It applies to PrometheusRule resources without the openshift.io/prometheus-rule-evaluation-scope="leaf-prometheus" label. The default value is 15s . logLevel string Defines the log level setting for Thanos Ruler. The possible values are error , warn , info , and debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the Thanos Ruler container. retention string Defines the duration for which Thanos Ruler retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s = seconds, m = minutes, h = hours, d = days, w = weeks, y = years). The default value is 15d . tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Thanos Ruler. Use this setting to configure the storage class and size of a volume. 8.30. UserWorkloadConfig 8.30.1. Description The UserWorkloadConfig resource defines settings for the monitoring of user-defined projects. Appears in: ClusterMonitoringConfiguration Property Type Description rulesWithoutLabelEnforcementAllowed *bool A Boolean flag that enables or disables the ability to deploy user-defined PrometheusRules objects for which the namespace label is not enforced to the namespace of the object. Such objects should be created in a namespace configured under the namespacesWithoutLabelEnforcement property of the UserWorkloadConfiguration resource. The default value is true . 8.31. UserWorkloadConfiguration 8.31.1. Description The UserWorkloadConfiguration resource defines the settings responsible for user-defined projects in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. You can only enable UserWorkloadConfiguration after you have set enableUserWorkload to true in the cluster-monitoring-config config map under the openshift-monitoring namespace. Property Type Description alertmanager * AlertmanagerUserWorkloadConfig Defines the settings for the Alertmanager component in user workload monitoring. prometheus * PrometheusRestrictedConfig Defines the settings for the Prometheus component in user workload monitoring. prometheusOperator * PrometheusOperatorConfig Defines the settings for the Prometheus Operator component in user workload monitoring. 
thanosRuler * ThanosRulerConfig Defines the settings for the Thanos Ruler component in user workload monitoring. namespacesWithoutLabelEnforcement []string Defines the list of namespaces for which Prometheus and Thanos Ruler in user-defined monitoring do not enforce the namespace label value in PrometheusRule objects. The namespacesWithoutLabelEnforcement property allows users to define recording and alerting rules that can query across multiple projects (not limited to user-defined projects) instead of deploying identical PrometheusRule objects in each user project. To make the resulting alerts and metrics visible to project users, the query expressions should return a namespace label with a non-empty value. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring/config-map-reference-for-the-cluster-monitoring-operator |
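To tie the UserWorkloadConfiguration reference above together, here is a hedged sketch of a user-workload-monitoring-config config map; it assumes enableUserWorkload has already been set to true in cluster-monitoring-config, and the namespace name cross-project-rules is hypothetical:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    namespacesWithoutLabelEnforcement:
    - cross-project-rules        # hypothetical namespace for cross-project rules
    prometheus:
      retention: 24h
    thanosRuler:
      retention: 15d
    alertmanager:
      enabled: true
      enableAlertmanagerConfig: true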
Chapter 125. KafkaBridgeSpec schema reference | Chapter 125. KafkaBridgeSpec schema reference Used in: KafkaBridge Full list of KafkaBridgeSpec schema properties Configures a Kafka Bridge cluster. Configuration options relate to: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer configuration Producer configuration HTTP configuration 125.1. logging Kafka Bridge has its own configurable loggers: rootLogger.level logger. <operation-id> You can replace <operation-id> in the logger. <operation-id> logger to set log levels for specific operations: createConsumer deleteConsumer subscribe unsubscribe poll assign commit send sendToPartition seekToBeginning seekToEnd seek healthy ready openapi Each operation is defined according to the OpenAPI specification and has a corresponding API endpoint through which the bridge receives requests from HTTP clients. You can change the log level on each endpoint to create fine-grained logging information about the incoming and outgoing HTTP requests. Each logger has to be configured by assigning it a name in the form http.openapi.operation. <operation-id> . For example, configuring the logging level for the send operation logger means defining the following: Kafka Bridge uses the Apache log4j2 logger implementation. Loggers are defined in the log4j2.properties file, which has the following default configuration for healthy and ready endpoints: The log level of all other operations is set to INFO by default. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. The logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. Default logging is used if the name or key is not set. Inside the ConfigMap, the logging configuration is described using log4j.properties . For more information about log levels, see Apache logging services . Here are examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # ... logging: type: inline loggers: rootLogger.level: INFO # enabling DEBUG just for send operation logger.send.name: "http.openapi.operation.send" logger.send.level: DEBUG # ... External logging apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: bridge-logj42.properties # ... Any available loggers that are not configured have their level set to OFF . If the Kafka Bridge was deployed using the Cluster Operator, changes to Kafka Bridge logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 125.2. KafkaBridgeSpec schema properties Property Property type Description replicas integer The number of pods in the Deployment . Defaults to 1 . image string The container image used for Kafka Bridge pods. If no image name is explicitly specified, the image name corresponds to the image specified in the Cluster Operator configuration. If an image name is not defined in the Cluster Operator configuration, a default value is used. 
bootstrapServers string A list of host:port pairs for establishing the initial connection to the Kafka cluster. tls ClientTls TLS configuration for connecting Kafka Bridge to the cluster. authentication KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth Authentication configuration for connecting to the cluster. http KafkaBridgeHttpConfig The HTTP related configuration. adminClient KafkaBridgeAdminClientSpec Kafka AdminClient related configuration. consumer KafkaBridgeConsumerSpec Kafka consumer related configuration. producer KafkaBridgeProducerSpec Kafka producer related configuration. resources ResourceRequirements CPU and memory resources to reserve. jvmOptions JvmOptions JVM options for pods (currently not supported). logging InlineLogging , ExternalLogging Logging configuration for Kafka Bridge. clientRackInitImage string The image of the init container used for initializing the client.rack . rack Rack Configuration of the node label which will be used as the client.rack consumer configuration. enableMetrics boolean Enable the metrics for the Kafka Bridge. Default is false. livenessProbe Probe Pod liveness checking. readinessProbe Probe Pod readiness checking. template KafkaBridgeTemplate Template for Kafka Bridge resources. The template allows users to specify how a Deployment and Pod are generated. tracing JaegerTracing , OpenTelemetryTracing The configuration of tracing in Kafka Bridge. | [
"logger.send.name = http.openapi.operation.send logger.send.level = DEBUG",
"logger.healthy.name = http.openapi.operation.healthy logger.healthy.level = WARN logger.ready.name = http.openapi.operation.ready logger.ready.level = WARN",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # logging: type: inline loggers: rootLogger.level: INFO # enabling DEBUG just for send operation logger.send.name: \"http.openapi.operation.send\" logger.send.level: DEBUG #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: bridge-logj42.properties #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkabridgespec-reference |
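Pulling the schema properties above into one place, a minimal KafkaBridge resource might look like the following sketch; the resource name, bootstrap address, and port are assumptions for illustration only:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge                                     # hypothetical name
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092   # hypothetical host:port pair
  http:
    port: 8080
  logging:
    type: inline
    loggers:
      rootLogger.level: INFO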
1.3. Setting Up Hardware | 1.3. Setting Up Hardware Setting up hardware consists of connecting cluster nodes to other hardware required to run the Red Hat High Availability Add-On. The amount and type of hardware varies according to the purpose and availability requirements of the cluster. Typically, an enterprise-level cluster requires the following type of hardware (see Figure 1.1, "Red Hat High Availability Add-On Hardware Overview" ). For considerations about hardware and other cluster configuration concerns, see Chapter 3, Before Configuring the Red Hat High Availability Add-On or check with an authorized Red Hat representative. Cluster nodes - Computers that are capable of running Red Hat Enterprise Linux 6 software, with at least 1GB of RAM. Network switches for public network - This is required for client access to the cluster. Network switches for private network - This is required for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches. Fencing device - A fencing device is required. A network power switch is recommended to perform fencing in an enterprise-level cluster. For information about supported fencing devices, see Appendix A, Fence Device Parameters . Storage - Some type of storage is required for a cluster. Figure 1.1, "Red Hat High Availability Add-On Hardware Overview" shows shared storage, but shared storage may not be required for your specific use. Figure 1.1. Red Hat High Availability Add-On Hardware Overview | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-hw-setup-CA |
Chapter 7. Templates and Pools | Chapter 7. Templates and Pools 7.1. Templates and Pools The Red Hat Virtualization environment provides administrators with tools to simplify the provisioning of virtual machines to users. These are templates and pools. A template is a shortcut that allows an administrator to quickly create a new virtual machine based on an existing, pre-configured virtual machine, bypassing operating system installation and configuration. This is especially helpful for virtual machines that will be used like appliances, for example web server virtual machines. If an organization uses many instances of a particular web server, an administrator can create a virtual machine that will be used as a template, installing an operating system, the web server, any supporting packages, and applying unique configuration changes. The administrator can then create a template based on the working virtual machine that will be used to create new, identical virtual machines as they are required. Virtual machine pools are groups of virtual machines based on a given template that can be rapidly provisioned to users. Permission to use virtual machines in a pool is granted at the pool level; a user who is granted permission to use the pool will be assigned any virtual machine from the pool. Inherent in a virtual machine pool is the transitory nature of the virtual machines within it. Because users are assigned virtual machines without regard for which virtual machine in the pool they have used in the past, pools are not suited for purposes which require data persistence. Virtual machine pools are best suited for scenarios where either user data is stored in a central location and the virtual machine is a means to accessing and using that data, or data persistence is not important. The creation of a pool results in the creation of the virtual machines that populate the pool, in a stopped state. These are then started on user request. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/chap-templates_and_pools |