Chapter 2. Deploy OpenShift Data Foundation using local storage devices
Chapter 2. Deploy OpenShift Data Foundation using local storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. Follow this deployment method to use local storage to back persistent volumes for your OpenShift Container Platform applications. Use this section to deploy OpenShift Data Foundation on IBM Z infrastructure where OpenShift Container Platform is already installed. 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . 
Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.3. Finding available storage devices (optional) This step is additional information and can be skipped as the disks are automatically discovered during storage cluster creation. Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage='' before creating Persistent Volumes (PV) for IBM Z. Procedure List and verify the name of the worker nodes with the OpenShift Data Foundation label. Example output: Log in to each worker node that is used for OpenShift Data Foundation resources and find the unique by-id device name for each available raw block device. Example output: In this example, for bmworker01 , the available local device is sdb . Identify the unique ID for each of the devices selected in Step 2. In the above example, the ID for the local device sdb is scsi-0x60050763808104bc2800000000000259 . Repeat the above step to identify the device ID for all the other nodes that have the storage devices to be used by OpenShift Data Foundation. See this Knowledge Base article for more details. 2.4. Enabling DASD devices If you are using DASD devices, you must enable them before creating an OpenShift Data Foundation cluster on IBM Z. Once the DASD devices are available to z/VM guests, complete the following steps from the compute or infrastructure node on which an OpenShift Data Foundation storage node is being installed. Procedure To enable the DASD device, run the following command: 1 For <device_bus_id>, specify the device bus ID of the DASD device. For example, 0.0.b100 . To verify the status of the DASD device, you can use the lsdasd and lsblk commands. To low-level format the device and specify the disk name, run the following command: 1 For <device_name>, specify the disk name. For example, dasdb . Important Quick-formatting of Extent Space Efficient (ESE) DASDs is not supported on OpenShift Data Foundation. If you are using ESE DASDs, make sure to disable quick-formatting with the --mode=full parameter. To auto-create one partition using the whole disk, run the following command: 1 For <device_name>, enter the disk name that you specified in the previous step. For example, dasdb . Once these steps are completed, the device is available during OpenShift Data Foundation deployment as /dev/dasdb1 . Important During LocalVolumeSet creation, make sure to select only the Part option as device type. Additional resources For details on the commands, see Commands for Linux on IBM Z in IBM documentation. 2.5. Creating OpenShift Data Foundation cluster on IBM Z Use this procedure to create an OpenShift Data Foundation cluster on IBM Z. Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have at least three worker nodes with the same storage type and size attached to each node (for example, 200 GB) to use local storage devices on IBM Z or IBM(R) LinuxONE. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem .
In the Backing storage page, perform the following: Select the Create a new StorageClass using the local storage devices for Backing storage type option. Select Full Deployment for the Deployment type option. Click Next . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes is spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see the knowledge base article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling is enabled at the time of deployment and cannot be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVME . Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device types from the dropdown list. Note For a multi-path device, select the Mpath option from the drop-down exclusively. For a DASD-based cluster, ensure that only the Part option is included in the Device Type and remove the Disk option. Disk Size Set a minimum size of 100 GB for the device and the maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click Next . A pop-up to confirm the creation of the LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. You can check the box to select Taint nodes. Click Next . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Choose one or both of the following Encryption level : Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. Enter the Vault Service Name , host Address of the Vault server ('https://<hostname or ip>'), Port number, and Token .
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide CA Certificate , Client Certificate and Client Private Key . Click Save . Select Default (SDN) , as Multus is not yet supported on OpenShift Data Foundation on IBM Z. Click Next . In the Data Protection page, if you are configuring the Regional-DR solution for OpenShift Data Foundation, select the Prepare cluster for disaster recovery (Regional-DR only) checkbox; otherwise, click Next . In the Review and create page: Review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark next to it. To verify if flexible scaling is enabled on your storage cluster, perform the following steps: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the key flexibleScaling in the spec section and failureDomain in the status section. If flexibleScaling is set to true and failureDomain is set to host, the flexible scaling feature is enabled. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide.
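As an alternative to checking the YAML tab in the web console, the flexible scaling verification can also be done from the command line. The following is a minimal sketch, assuming the default ocs-storagecluster name in the openshift-storage namespace and the spec.flexibleScaling and status.failureDomain keys shown in the YAML snippet below; adjust the names if your cluster uses different ones.

# Print the flexibleScaling spec key and the failureDomain status key of the storage cluster
oc get storagecluster ocs-storagecluster -n openshift-storage \
  -o jsonpath='flexibleScaling={.spec.flexibleScaling}{"\n"}failureDomain={.status.failureDomain}{"\n"}'

If the output reports flexibleScaling=true and failureDomain=host, the flexible scaling feature is enabled, matching the web console check described above.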
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "oc get nodes -l=cluster.ocs.openshift.io/openshift-storage=", "NAME STATUS ROLES AGE VERSION bmworker01 Ready worker 6h45m v1.16.2 bmworker02 Ready worker 6h45m v1.16.2 bmworker03 Ready worker 6h45m v1.16.2", "oc debug node/<node name>", "oc debug node/bmworker01 Starting pod/bmworker01-debug To use host binaries, run `chroot /host` Pod IP: 10.0.135.71 If you don't see a command prompt, try pressing enter. sh-4.2# chroot /host sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 500G 0 loop sda 8:0 0 120G 0 disk |-sda1 8:1 0 384M 0 part /boot `-sda4 8:4 0 119.6G 0 part `-coreos-luks-root-nocrypt 253:0 0 119.6G 0 dm /sysroot sdb 8:16 0 500G 0 disk", "sh-4.4#ls -l /dev/disk/by-id/ | grep sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-360050763808104bc2800000000000259 -> ../../sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-SIBM_2145_00e020412f0aXX00 -> ../../sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-0x60050763808104bc2800000000000259 -> ../../sdb", "scsi-0x60050763808104bc2800000000000259", "sudo chzdev -e <device_bus_id> 1", "sudo dasdfmt /dev/<device_name> -b 4096 -p --mode=full 1", "sudo fdasd -a /dev/<device_name> 1", "spec: flexibleScaling: true [...] status: failureDomain: host" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_ibm_z/deploy-using-local-storage-devices-ibmz
Chapter 1. System requirements and supported architectures
Chapter 1. System requirements and supported architectures Red Hat Enterprise Linux 9 delivers a stable, secure, consistent foundation across hybrid cloud deployments with the tools needed to deliver workloads faster with less effort. You can deploy RHEL as a guest on supported hypervisors and Cloud provider environments as well as on physical infrastructure, so your applications can take advantage of innovations in the leading hardware architecture platforms. Review the guidelines provided for system, hardware, security, memory, and RAID before installing. If you want to use your system as a virtualization host, review the necessary hardware requirements for virtualization . Red Hat Enterprise Linux supports the following architectures: AMD and Intel 64-bit architectures The 64-bit ARM architecture IBM Power Systems, Little Endian 64-bit IBM Z architectures 1.1. Supported installation targets An installation target is a storage device that stores Red Hat Enterprise Linux and boots the system. Red Hat Enterprise Linux supports the following installation targets for IBM Z , IBM Power, AMD64, Intel 64, and 64-bit ARM systems: Storage connected by a standard internal interface, such as DASD, SCSI, SATA, or SAS BIOS/firmware RAID devices on the Intel 64, AMD64, and 64-bit ARM architectures NVDIMM devices in sector mode on the Intel 64 and AMD64 architectures, supported by the nd_pmem driver. Storage connected via Fibre Channel Host Bus Adapters, such as DASDs (IBM Z architecture only) and SCSI LUNs, including multipath devices. Some might require vendor-provided drivers. Xen block devices on Intel processors in Xen virtual machines. VirtIO block devices on Intel processors in KVM virtual machines. Red Hat does not support installation to USB drives or SD memory cards. For information about support for third-party virtualization technologies, see the Red Hat Hardware Compatibility List . 1.2. Disk and memory requirements If several operating systems are installed, it is important that you verify that the allocated disk space is separate from the disk space required by Red Hat Enterprise Linux. In some cases, it is important to dedicate specific partitions to Red Hat Enterprise Linux, for example, for AMD64, Intel 64, and 64-bit ARM, at least two partitions ( / and swap ) must be dedicated to RHEL, and for IBM Power Systems servers, at least three partitions ( / , swap , and a PReP boot partition) must be dedicated to RHEL. Additionally, you must have a minimum of 10 GiB of available disk space. To install Red Hat Enterprise Linux, you must have a minimum of 10 GiB of space in either unpartitioned disk space or in partitions that can be deleted. For more information, see Partitioning reference . Table 1.1. Minimum RAM requirements Installation type Minimum RAM Local media installation (USB, DVD) 1.5 GiB for aarch64, IBM Z and x86_64 architectures 3 GiB for ppc64le architecture NFS network installation 1.5 GiB for aarch64, IBM Z and x86_64 architectures 3 GiB for ppc64le architecture HTTP, HTTPS or FTP network installation 3 GiB for IBM Z and x86_64 architectures 4 GiB for aarch64 and ppc64le architectures It is possible to complete the installation with less memory than the minimum requirements. The exact requirements depend on your environment and installation path. Test various configurations to determine the minimum required RAM for your environment. Installing Red Hat Enterprise Linux using a Kickstart file has the same minimum RAM requirements as a standard installation. 
However, additional RAM may be required if your Kickstart file includes commands that require additional memory or that write data to the RAM disk. For more information, see Automatically installing RHEL . 1.3. Graphics display resolution requirements Your system must have the following minimum resolution to ensure a smooth and error-free installation of Red Hat Enterprise Linux. Table 1.2. Display resolution Product version Resolution Red Hat Enterprise Linux 9 Minimum : 800 x 600 Recommended : 1024 x 768 1.4. UEFI Secure Boot and Beta release requirements If you plan to install a Beta release of Red Hat Enterprise Linux on systems that have UEFI Secure Boot enabled, first disable the UEFI Secure Boot option and then begin the installation. UEFI Secure Boot requires that the operating system kernel is signed with a recognized private key, which the system's firmware verifies using the corresponding public key. For Red Hat Enterprise Linux Beta releases, the kernel is signed with a Red Hat Beta-specific private key, and the corresponding public key is not recognized by the system by default. As a result, the system fails to boot the installation media. Additional resources For information about installing RHEL on IBM, see IBM installation documentation Security hardening Composing a customized RHEL system image Red Hat ecosystem catalog RHEL technology capabilities and limits
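Before you begin the installation, the requirements above can be checked from an existing Linux environment booted on the target machine, for example a live system. This is a minimal sketch using common utilities; /dev/sda is a hypothetical device name, so replace it with your intended installation target.

# Check whether UEFI Secure Boot is currently enabled (relevant before installing a Beta release)
mokutil --sb-state
# Compare installed memory against the minimum RAM requirements in Table 1.1
free -h
# Review available disks and free, unpartitioned space against the 10 GiB minimum
lsblk
sudo parted /dev/sda unit GiB print free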
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/system-requirements-and-supported-architectures_rhel-installer
Chapter 20. Configuring Web Services
Chapter 20. Configuring Web Services JBoss EAP offers the ability to configure the behavior of deployed web services through the webservices subsystem using the management console or the management CLI. You can configure published endpoint addresses and handler chains. You can also enable the collection of runtime statistics for web services. For more information, see Configuring the Web Services Subsystem in Developing Web Services Applications for JBoss EAP.
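As a brief illustration of the management CLI approach, the following sketch enables runtime statistics collection and overrides the host used in published endpoint (WSDL) addresses. It assumes a standalone server reachable on the default management interface and uses the statistics-enabled and wsdl-host attributes of the webservices subsystem; confirm the attribute names against the subsystem resource description for your JBoss EAP version.

# Connect to the running standalone server
$EAP_HOME/bin/jboss-cli.sh --connect
# Enable collection of runtime statistics for web services
/subsystem=webservices:write-attribute(name=statistics-enabled, value=true)
# Override the host published in generated WSDL endpoint addresses (services.example.com is a placeholder)
/subsystem=webservices:write-attribute(name=wsdl-host, value=services.example.com)
# Reload the server so that the changes take effect
reload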
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/configuring_web_services_subsystem
Chapter 4. Configuring the overcloud
Chapter 4. Configuring the overcloud Use Red Hat OpenStack Platform (RHOSP) director to install and configure spine-leaf networking in the RHOSP overcloud. The high-level steps are: Define the overcloud networks for each leaf . Create a composable role for each leaf and attach the composable network to each respective role . Create a unique NIC configuration for each role . Set the control plane parameters and change the bridge mappings so that each leaf routes traffic through the specific bridge or VLAN on that leaf . Define virtual IPs (VIPs) for your overcloud endpoints, and identify the subnet for each VIP . Provision your overcloud networks and overcloud VIPs . Register the bare metal nodes in your overcloud . Note Skip steps 7, 8, and 9 if you are using pre-provisioned bare metal nodes. Introspect the bare metal nodes in your overcloud . Provision bare metal nodes . Deploy your overcloud using the configuration you set in the earlier steps . 4.1. Defining the leaf networks The Red Hat OpenStack Platform (RHOSP) director creates the overcloud leaf networks from a YAML-formatted, custom network definition file that you construct. This custom network definition file lists each composable network and its attributes and also defines the subnets needed for each leaf. Complete the following steps to create a YAML-formatted, custom network definition file that contains the specifications for your spine-leaf network on the overcloud. Later, the provisioning process creates a heat environment file from your network definition file that you will include when you deploy your RHOSP overcloud. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credential file: Create a templates directory under /home/stack : Use the default routed-networks.yaml template as a basis to create a custom network definition template for your environment by copying it to your templates directory: Example Edit your copy of the network definition template to define each base network and respective leaf subnets as a composable network item. Tip For more information, see Network definition file configuration options in the Director Installation and Usage guide. Example The following example demonstrates how to define the Internal API network and its leaf networks: Note You do not define the Control Plane networks in your custom network definition template because the undercloud has already created these networks. However, you must set the parameters manually so that the overcloud can configure the NICs accordingly. For more information, see Configuring routed spine-leaf in the undercloud . Note There is currently no automatic validation for the network subnet and allocation_pools values. Ensure that you define these values consistently and that there is no conflict with existing networks. Note Add the vip parameter and set the value to true for the networks that host the Controller-based services. In this example, the InternalApi network contains these services. Next steps Note the path and file name of the custom network definition file that you have created. You will need this information later when you provision your networks for the RHOSP overcloud. Proceed to the step Defining leaf roles and attaching networks . Additional resources Network definition file configuration options in the Director Installation and Usage guide 4.2. 
Defining leaf roles and attaching networks The Red Hat OpenStack Platform (RHOSP) director creates a composable role for each leaf and attaches the composable network to each respective role from a roles template that you construct. Start by copying the default Controller, Compute, and Ceph Storage roles from the director core templates, and modifying these to meet your environment's needs. After you have created all of the individual roles, you run the openstack overcloud roles generate command to concatenate them into one large custom roles data file. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credential file: Copy the default Controller, Compute, and Ceph Storage roles that ship with RHOSP to the home directory of the stack user. Rename the files to reflect that they are leaf 0: Copy the leaf 0 files as a basis for your leaf 1 and leaf 2 files: Edit the parameters in each file to align with their respective leaf parameters. Tip For information about the various parameters in a roles data template, see Examining role parameters in the Director Installation and Usage guide. Example - ComputeLeaf0 Example - CephStorageLeaf0 Edit the network parameter in the leaf 1 and leaf 2 files so that they align with the respective leaf network parameters. Example - ComputeLeaf1 Example - CephStorageLeaf1 Note This applies only to leaf 1 and leaf 2. The network parameter for leaf 0 retains the base subnet values, which are the lowercase names of each subnet combined with a _subnet suffix. For example, the Internal API for leaf 0 is internal_api_subnet . When your role configuration is complete, run the overcloud roles generate command to generate the full roles data file. Example This creates one custom roles data file that includes all of the custom roles for each respective leaf network. Next steps Note the path and file name of the custom roles data file that the overcloud roles generate command has output. You will need this information later when you deploy your overcloud. Proceed to the step Creating a custom NIC configuration for leaf roles . Additional resources Examining role parameters in the Director Installation and Usage guide 4.3. Creating a custom NIC configuration for leaf roles Each role that the Red Hat OpenStack Platform (RHOSP) director creates requires a unique NIC configuration. Complete the following steps to create a custom set of NIC templates and a custom environment file that maps the custom templates to the respective role. Prerequisites Access to the undercloud host and credentials for the stack user. You have a custom network definition file. You have a custom roles data file. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credential file: Copy the content from one of the default NIC templates to use as a basis for a custom template for your NIC configuration. Example In this example, the single-nic-vlans NIC template is being copied and will be used as the basis for a custom template for your NIC configuration: Edit each NIC configuration in the NIC templates that you copied in the earlier step to reflect the specifics for your spine-leaf topology. Example Tip For more information, see Custom network interface templates in the Director Installation and Usage guide. 
Create a custom environment file, such as spine-leaf-nic-roles-map.yaml , that contains a parameter_defaults section that maps the custom NIC templates to each custom role. Example Next steps Note the path and file name of your custom NIC templates and the custom environment file that maps the custom NIC templates to each custom role. You will need this information later when you deploy your overcloud. Proceed to the step Mapping separate networks and setting control plane parameters . Additional resources Custom network interface templates in the Director Installation and Usage guide 4.4. Mapping separate networks and setting control plane parameters In a spine-leaf architecture, each leaf routes traffic through the specific bridge or VLAN on that leaf, which is often the case with edge computing scenarios. So, you must change the default mappings where the Red Hat OpenStack Platform (RHOSP) Controller and Compute network configurations use a br-ex bridge. The RHOSP director creates the control plane network during undercloud creation. However, the overcloud requires access to the control plane for each leaf. To enable this access, you must define additional parameters in your deployment. Complete the following steps to create a custom network environment file that contains the separate network mappings and sets access to the control plane networks for the overcloud. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credential file: In a new custom environment file, such as spine-leaf-ctlplane.yaml , create a parameter_defaults section and set the NeutronBridgeMappings parameter for each leaf that uses the default br-ex bridge. Important The name of the custom environment file that you create to contain your network definition must end in either .yaml or .template . For flat network mappings, list each leaf in the NeutronFlatNetworks parameter and set the NeutronBridgeMappings parameter for each leaf: Example Tip For more information, see Chapter 17. Networking (neutron) Parameters in the Overcloud Parameters guide. For VLAN network mappings, add vlan to NeutronNetworkType , and by using NeutronNetworkVLANRanges , map VLANs for the leaf networks: Example Note You can use both flat networks and VLANs in your spine-leaf topology. Add the control plane subnet mapping for each spine-leaf network by using the <role>ControlPlaneSubnet parameter: Example Next steps Note the path and file name of the custom network environment file that you have created. You will need this information later when you deploy your overcloud. Proceed to the step Setting the subnet for virtual IP addresses . Additional resources Chapter 17. Networking (neutron) Parameters in the Overcloud Parameters guide 4.5. Setting the subnet for virtual IP addresses The Red Hat OpenStack Platform (RHOSP) Controller role typically hosts virtual IP (VIP) addresses for each network. By default, the RHOSP overcloud takes the VIPs from the base subnet of each network except for the control plane. The control plane uses ctlplane-subnet , which is the default subnet name created during a standard undercloud installation. In this spine-leaf scenario, the default base provisioning network is leaf0 instead of ctlplane-subnet . This means that you must add overriding values to the VipSubnetMap parameter to change the subnet that the control plane VIP uses. 
Additionally, if the VIPs for one or more networks do not use the base subnet of those networks, you must add additional overrides to the VipSubnetMap parameter to ensure that the RHOSP director creates VIPs on the subnet associated with the L2 network segment that connects the Controller nodes. Complete the following steps to create a YAML-formatted, custom network VIP definition file that contains the overrides for your VIPs on the overcloud. Later, the provisioning process creates a heat environment file from your network VIP definition file that you will include when you deploy your RHOSP overcloud. You will also use your network VIP definition file when you run the overcloud deploy command. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credential file: In a new custom network VIP definition template, such as spine-leaf-vip-data.yaml , create a parameter_defaults section and add the VipSubnetMap parameter based on your requirements. If you use leaf0 for the provisioning-control plane network, set the ctlplane VIP remapping to leaf0 : Tip For more information, see Configuring and provisioning network VIPs for the overcloud in the Director Installation and Usage guide. If you use a different leaf for multiple VIPs, set the VIP remapping to suit these requirements. For example, use the following snippet to configure the VipSubnetMap parameter to use leaf1 for all VIPs: Next steps Note the path and file name of the custom network VIP definition template that you have created. You will need this information later when you provision your network VIPs for the RHOSP overcloud. Proceed to the step Provisioning networks and VIPs for the overcloud . Additional resources Chapter 17. Networking (neutron) Parameters in the Overcloud Parameters guide 4.6. Provisioning networks and VIPs for the overcloud The Red Hat OpenStack Platform (RHOSP) provisioning process creates a heat environment file from your network definition file that contains your network specifications. If you are using VIPs, the RHOSP provisioning process works the same way: RHOSP creates a heat environment file from your VIP definition file that contains your VIP specifications. After you provision your networks and VIPs, you have two heat environment files that you will use later to deploy your overcloud. Prerequisites Access to the undercloud host and credentials for the stack user. You have a network configuration template. If you are using VIPs, you have a VIP definition template. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credential file: Using the network configuration template that was created earlier, provision your overcloud networks, using the --output option to name the file that the overcloud network provision command outputs: Tip For more information, see Configuring and provisioning overcloud network definitions in the Director Installation and Usage guide. Example Important The name of the output file that you specify must end in either .yaml or .template . Using the VIP definition file created earlier, provision your overcloud VIPs, using the --output option to name the file that the overcloud network vip provision command outputs: Tip For more information, see Configuring and provisioning network VIPs for the overcloud in the Director Installation and Usage guide. Important The name of the output file that you specify must end in either .yaml or .template . 
Note the path and file names of the generated output files. You will need this information later when you deploy your overcloud. Verification You can use the following commands to confirm that the command created your overcloud networks and subnets: Replace <network>, <subnet>, and <port> with the name or UUID of the network, subnet, and port that you want to check. Next steps If you are using pre-provisioned nodes, skip to Running the overcloud deployment command . Otherwise, proceed to the step Registering bare metal nodes on the overcloud . Additional resources Configuring and provisioning overcloud network definitions in the Director Installation and Usage guide Configuring and provisioning network VIPs for the overcloud in the Director Installation and Usage guide overcloud network provision in the Command Line Interface Reference overcloud network vip provision in the Command Line Interface Reference 4.7. Registering bare metal nodes on the overcloud Registering your physical machines is the first of three steps for provisioning bare metal nodes. Red Hat OpenStack Platform (RHOSP) director requires a custom node definition template that specifies the hardware and power management details of your physical machines. You can create this template in JSON or YAML format. After you register your physical machines as bare metal nodes, you introspect them, and then you finally provision them. Note If you are using pre-provisioned bare metal nodes, then you can skip registering and introspecting bare metal nodes on the overcloud. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credential file: Inside a new node definition template, such as baremetal-nodes.yaml , create a list of your physical machines that specifies their hardware and power management details. Example Tip For more information about template parameter values and for a JSON example, see Registering nodes for the overcloud in the Director Installation and Usage guide. Verify the template formatting and syntax. Example Correct any errors and save your node definition template. Import your node definition template to RHOSP director to register each node from your template into director: Example Verification When the node registration and configuration is complete, confirm that director has successfully registered the nodes: The baremetal node list command should include the imported nodes and the status should be manageable . Next steps Proceed to the step Introspecting bare metal nodes on the overcloud . Additional resources Registering nodes for the overcloud in the Director Installation and Usage guide. overcloud node import in the Command Line Interface Reference 4.8. Introspecting bare metal nodes on the overcloud After you register a physical machine as a bare metal node, you can automatically add its hardware details and create ports for each of its Ethernet MAC addresses by using Red Hat OpenStack Platform (RHOSP) director introspection. After you perform introspection on your bare metal nodes, the final step is to provision them. Note If you are using pre-provisioned bare metal nodes, then you can skip registering and introspecting bare metal nodes on the overcloud. Prerequisites Access to the undercloud host and credentials for the stack user. You have registered your bare metal nodes for your overcloud with RHOSP. Procedure Log in to the undercloud host as the stack user. 
Source the undercloud credentials file: Run the pre-introspection validation group to check the introspection requirements: Review the results of the validation report. (Optional) Review detailed output from a specific validation: Replace <UUID> with the UUID of the specific validation from the report that you want to review. Important A FAILED validation does not prevent you from deploying or running RHOSP. However, a FAILED validation can indicate a potential issue with a production environment. Inspect the hardware attributes of all nodes: Tip For more information, see Using director introspection to collect bare metal node hardware information in the Director Installation and Usage guide. Monitor the introspection progress logs in a separate terminal window: Verification After the introspection completes, all nodes change to an available state. Next steps Proceed to the step Provisioning bare metal nodes for the overcloud . Additional resources Using director introspection to collect bare metal node hardware information in the Director Installation and Usage guide overcloud node introspect in the Command Line Interface Reference 4.9. Provisioning bare metal nodes for the overcloud To provision your bare metal nodes for Red Hat OpenStack Platform (RHOSP), you define the quantity and attributes of the bare metal nodes that you want to deploy in a node definition file in YAML format, and assign overcloud roles to these nodes. You also define the network layout of the nodes. The provisioning process creates a heat environment file from your node definition file. This heat environment file contains the node specifications you configured in your node definition file, including node count, predictive node placement, custom images, and custom NICs. When you deploy your overcloud, include this file in the deployment command. The provisioning process also provisions the port resources for all networks defined for each node or role in the node definition file. Note If you are using pre-provisioned bare metal nodes, then you can skip provisioning bare metal nodes on the overcloud. Prerequisites Access to the undercloud host and credentials for the stack user. The bare metal nodes are registered, introspected, and available for provisioning and deployment. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credential file: Create a bare metal node definition file, such as spine-leaf-baremetal-nodes.yaml , and define the node count for each role that you want to provision. Example Tip For more information about the properties that you can set in the bare metal node definition file, see Provisioning bare metal nodes for the overcloud in the Director Installation and Usage guide. Provision the overcloud bare metal nodes, using the overcloud node provision command. Example Important The name of the output file that you specify must end in either .yaml or .template . Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active : Use the metalsmith tool to obtain a unified view of your nodes, including allocations and ports: Note the path and file name of the generated output file. You will need this information later when you deploy your overcloud. Verification Confirm the association of nodes to hostnames: Next steps Proceed to the step Deploying a spine-leaf enabled overcloud . Additional resources Provisioning bare metal nodes for the overcloud in the Director Installation and Usage guide 4.10. 
Deploying a spine-leaf enabled overcloud The last step in deploying your Red Hat OpenStack Platform (RHOSP) overcloud is to run the overcloud deploy command. This command takes as inputs all of the various overcloud templates and environment files that you have constructed, which together represent the blueprint of your overcloud. Using these templates and environment files, the RHOSP director installs and configures your overcloud. Prerequisites Access to the undercloud host and credentials for the stack user. You have performed all of the steps listed in the earlier procedures in this section and have assembled all of the various heat templates and environment files to use as inputs for the overcloud deploy command. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credential file: Collate the custom environment files and custom templates that you need for your overcloud environment, both the unedited heat template files provided with your director installation, and the custom files you created. This should include the following files: Your custom network definition file that contains the specifications for your spine-leaf network on the overcloud, for example, spine-leaf-networks-data.yaml . For more information, see Defining the leaf networks . Your custom roles data file that defines a role for each leaf, for example, spine-leaf-roles.yaml . For more information, see Defining leaf roles and attaching networks . Your custom environment file that contains the roles and the custom NIC template mappings for each role, for example, spine-leaf-nic-roles-map.yaml . For more information, see Creating a custom NIC configuration for leaf roles . Your custom network environment file that contains the separate network mappings and sets access to the control plane networks for the overcloud, for example, spine-leaf-ctlplane.yaml . For more information, see Mapping separate networks and setting control plane parameters . Your custom network VIP definition file that contains the overrides for your VIPs on the overcloud, for example, spine-leaf-vip-data.yaml . For more information, see Setting the subnet for virtual IP addresses . The output file from provisioning your overcloud networks, for example, spine-leaf-networks-provisioned.yaml . For more information, see Provisioning networks and VIPs for the overcloud . The output file from provisioning your overcloud VIPs, for example, spine-leaf-vips-provisioned.yaml . For more information, see Provisioning networks and VIPs for the overcloud . If you are not using pre-provisioned nodes, the output file from provisioning bare-metal nodes, for example, spine-leaf-baremetal-nodes-provisioned.yaml . For more information, see Provisioning bare metal nodes for the overcloud . Any other custom environment files. Enter the overcloud deploy command by carefully ordering the custom environment files and custom templates that are inputs to the command. The general rule is to specify any unedited heat template files first, followed by your custom environment files and custom templates that contain custom configurations, such as overrides to the default properties. In particular, follow this order for listing the inputs to the overcloud deploy command: Include your custom environment file that contains your custom NIC templates mapped to each role, for example, spine-leaf-nic-roles-map.yaml , after network-environment.yaml . 
The network-environment.yaml file provides the default network configuration for composable network parameters, which your mapping file overrides. Note that the director renders this file from the network-environment.j2.yaml Jinja2 template. If you created any other spine-leaf network environment files, include these environment files after the roles-NIC templates mapping file. Add any additional environment files. For example, an environment file with your container image locations or Ceph cluster configuration. Example The following command snippet demonstrates the ordering: Tip For more information, see Creating your overcloud in the Director Installation and Usage guide. Run the overcloud deploy command. When the overcloud creation completes, director provides details to access your overcloud. Verification Perform the steps in Validating your overcloud deployment in the Director Installation and Usage guide. Additional resources Creating your overcloud in the Director Installation and Usage guide overcloud deploy in the Command Line Interface Reference 4.11. Adding a new leaf to a spine-leaf deployment When increasing network capacity or adding a new physical site, you might need to add a new leaf to your Red Hat OpenStack Platform (RHOSP) spine-leaf network. Prerequisites Your RHOSP deployment uses a spine-leaf network topology. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credential file: Open your network definition template, for example, /home/stack/templates/spine-leaf-networks-data.yaml . Under the appropriate base network, add a leaf subnet as a composable network item for the new leaf that you are adding. Example In this example, a subnet entry for the new leaf ( leaf3 ) has been added: Create a roles data file for the new leaf that you are adding. Copy a leaf Compute and a leaf Ceph Storage file for the new leaf that you are adding. Example In this example, Compute1.yaml and CephStorage1.yaml are copied to Compute3.yaml and CephStorage3.yaml , respectively, for the new leaf: Edit the name and HostnameFormatDefault parameters in the new leaf files so that they align with the respective leaf parameters. Example For example, the parameters in the Leaf 1 Compute file have the following values: Example The Leaf 1 Ceph Storage parameters have the following values: Edit the network parameter in the new leaf files so that they align with the respective Leaf network parameters. Example For example, the parameters in the Leaf 1 Compute file have the following values: Example The Leaf 1 Ceph Storage parameters have the following values: When your role configuration is complete, run the following command to generate the full roles data file. Include all of the leafs in your network and the new leaf that you are adding. Example In this example, leaf3 is added to leaf0, leaf1, and leaf2: This creates a full roles_data_spine_leaf.yaml file that includes all of the custom roles for each respective leaf network. Create a custom NIC configuration for the leaf that you are adding. Copy a leaf Compute and a leaf Ceph Storage NIC configuration file for the new leaf that you are adding. Example In this example, computeleaf1.yaml and ceph-storageleaf1.yaml are copied to computeleaf3.yaml and ceph-storageleaf3.yaml , respectively, for the new leaf: Open your custom environment file that contains the roles and the custom NIC template mappings for each role, for example, spine-leaf-nic-roles-map.yaml. 
Insert an entry for each role for the new leaf that you are adding. Example In this example, the entries ComputeLeaf3NetworkConfigTemplate and CephStorage3NetworkConfigTemplate have been added: Open your custom network environment file that contains the separate network mappings and sets access to the control plane networks for the overcloud, for example, spine-leaf-ctlplane.yaml , and update the control plane parameters. Under the parameter_defaults section, add the control plane subnet mapping for the new leaf network. Also, include the external network mapping for the new leaf network. For flat network mappings, list the new leaf ( leaf3 ) in the NeutronFlatNetworks parameter and set the NeutronBridgeMappings parameter for the new leaf: For VLAN network mappings, additionally set the NeutronNetworkVLANRanges parameter to map VLANs for the new leaf ( leaf3 ) network: Example In this example, flat network mappings are used, and the new leaf ( leaf3 ) entries are added: Redeploy your spine-leaf enabled overcloud by following the steps in Deploying a spine-leaf enabled overcloud .
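After you regenerate the roles data, re-run the network provisioning step with the updated network definition, and redeploy, you can confirm from the undercloud that the new leaf3 resources exist. This is a minimal sketch that reuses the verification commands shown earlier in this chapter; the internal_api and internal_api_leaf3 names follow the naming used in the examples and might differ in your environment.

source ~/stackrc
# List the provisioned networks and confirm the new leaf subnet was created
openstack network list
openstack subnet list --network internal_api
# Inspect the new leaf3 subnet in detail
openstack subnet show internal_api_leaf3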
[ "source ~/stackrc", "mkdir /home/stack/templates", "cp /usr/share/openstack-tripleo-heat-templates/network-data-samples/ routed-networks.yaml /home/stack/templates/spine-leaf-networks-data.yaml", "- name: InternalApi name_lower: internal_api vip: true mtu: 1500 subnets: internal_api_subnet: ip_subnet: 172.16.32.0/24 gateway_ip: 172.16.32.1 allocation_pools: - start: 172.16.32.4 end: 172.16.32.250 vlan: 20 internal_api_leaf1_subnet: ip_subnet: 172.16.33.0/24 gateway_ip: 172.16.33.1 allocation_pools: - start: 172.16.33.4 end: 172.16.33.250 vlan: 30 internal_api_leaf2_subnet: ip_subnet: 172.16.34.0/24 gateway_ip: 172.16.34.1 allocation_pools: - start: 172.16.34.4 end: 172.16.34.250 vlan: 40", "source ~/stackrc", "cp /usr/share/openstack-tripleo-heat-templates/roles/Controller.yaml ~/roles/Controller0.yaml cp /usr/share/openstack-tripleo-heat-templates/roles/Compute.yaml ~/roles/Compute0.yaml cp /usr/share/openstack-tripleo-heat-templates/roles/CephStorage.yaml ~/roles/CephStorage0.yaml", "cp ~/roles/Compute0.yaml ~/roles/Compute1.yaml cp ~/roles/Compute0.yaml ~/roles/Compute2.yaml cp ~/roles/CephStorage0.yaml ~/roles/CephStorage1.yaml cp ~/roles/CephStorage0.yaml ~/roles/CephStorage2.yaml", "- name: ComputeLeaf0 HostnameFormatDefault: '%stackname%-compute-leaf0-%index%'", "- name: CephStorageLeaf0 HostnameFormatDefault: '%stackname%-cephstorage-leaf0-%index%'", "- name: ComputeLeaf1 networks: InternalApi: subnet: internal_api_leaf1 Tenant: subnet: tenant_leaf1 Storage: subnet: storage_leaf1", "- name: CephStorageLeaf1 networks: Storage: subnet: storage_leaf1 StorageMgmt: subnet: storage_mgmt_leaf1", "openstack overcloud roles generate --roles-path ~/roles -o spine-leaf-roles-data.yaml Controller Compute Compute1 Compute2 CephStorage CephStorage1 CephStorage2", "source ~/stackrc", "cp -r /usr/share/ansible/roles/tripleo_network_config/ templates/single-nic-vlans/* /home/stack/templates/spine-leaf-nics/.", "{% set mtu_list = [ctlplane_mtu] %} {% for network in role_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in role_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %}", "parameter_defaults: %%ROLE%%NetworkConfigTemplate: <path_to_ansible_jinja2_nic_config_file>", "parameter_defaults: Controller0NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' Controller1NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' Controller2NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' ComputeLeaf0NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' ComputeLeaf1NetworkConfigTemplate: 
'/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' ComputeLeaf2NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' CephStorage0NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' CephStorage1NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' CephStorage2NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2'", "source ~/stackrc", "parameter_defaults: NeutronFlatNetworks: leaf0,leaf1,leaf2 Controller0Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Controller1Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Controller2Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Compute0Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Compute1Parameters: NeutronBridgeMappings: \"leaf1:br-ex\" Compute2Parameters: NeutronBridgeMappings: \"leaf2:br-ex\"", "parameter_defaults: NeutronNetworkType: 'geneve,vlan' NeutronNetworkVLANRanges: 'leaf0:1:1000,leaf1:1:1000,leaf2:1:1000' Controller0Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Controller1Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Controller2Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Compute0Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Compute1Parameters: NeutronBridgeMappings: \"leaf1:br-ex\" Compute2Parameters: NeutronBridgeMappings: \"leaf2:br-ex\"", "parameter_defaults: NeutronFlatNetworks: leaf0,leaf1,leaf2 Controller0Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" ControllerControlPlaneSubnet: leaf0 Controller1Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Controller1ControlPlaneSubnet: leaf0 Controller2Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Controller2ControlPlaneSubnet: leaf0 Compute0Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Compute0ControlPlaneSubnet: leaf0 CephStorage0Parameters: CephStorage0ControlPlaneSubnet: leaf0 Compute1Parameters: NeutronBridgeMappings: \"leaf1:br-ex\" Compute1ControlPlaneSubnet: leaf1 CephStorage1Parameters: CephStorage1ControlPlaneSubnet: leaf1 Compute2Parameters: NeutronBridgeMappings: \"leaf2:br-ex\" Compute2ControlPlaneSubnet: leaf2 CephStorage2Parameters: CephStorage2ControlPlaneSubnet: leaf2", "source ~/stackrc", "parameter_defaults: VipSubnetMap: ctlplane: leaf0", "parameter_defaults: VipSubnetMap: ctlplane: leaf1 redis: internal_api_leaf1 InternalApi: internal_api_leaf1 Storage: storage_leaf1 StorageMgmt: storage_mgmt_leaf1", "source ~/stackrc", "openstack overcloud network provision --output spine-leaf-networks-provisioned.yaml /home/stack/templates/spine_leaf_networks_data.yaml", "openstack overcloud network vip provision --stack spine_leaf_overcloud --output spine-leaf-vips_provisioned.yaml /home/stack/templates/spine_leaf_vip_data.yaml", "openstack network list openstack subnet list openstack network show <network> openstack subnet show <subnet> openstack port list openstack port show <port>", "source ~/stackrc", "nodes: - name: \"node01\" ports: - address: \"aa:aa:aa:aa:aa:aa\" physical_network: ctlplane local_link_connection: switch_id: 52:54:00:00:00:00 port_id: p0 cpu: 4 memory: 6144 disk: 40 arch: \"x86_64\" pm_type: \"ipmi\" pm_user: \"admin\" pm_password: \"p@55w0rd!\" pm_addr: \"192.168.24.205\" - name: \"node02\" ports: - address: \"bb:bb:bb:bb:bb:bb\" physical_network: ctlplane local_link_connection: switch_id: 52:54:00:00:00:00 port_id: p0 cpu: 4 memory: 6144 disk: 40 arch: \"x86_64\" pm_type: \"ipmi\" pm_user: \"admin\" pm_password: \"p@55w0rd!\" pm_addr: \"192.168.24.206\"", "openstack overcloud node 
import --validate-only ~/templates/ baremetal-nodes.yaml", "openstack overcloud node import ~/baremetal-nodes.yaml", "openstack baremetal node list", "source ~/stackrc", "validation run --group pre-introspection", "validation history get --full <UUID>", "openstack overcloud node introspect --all-manageable --provide", "sudo tail -f /var/log/containers/ironic-inspector/ironic-inspector.log", "source ~/stackrc", "- name: Controller count: 3 defaults: networks: - network: ctlplane vif: true - network: external subnet: external_subnet - network: internal_api subnet: internal_api_subnet01 - network: storage subnet: storage_subnet01 - network: storage_mgmt subnet: storage_mgmt_subnet01 - network: tenant subnet: tenant_subnet01 network_config: template: /home/stack/templates/spine-leaf-nics/single-nic-vlans.j2 default_route_network: - external - name: Compute0 count: 1 defaults: networks: - network: ctlplane vif: true - network: internal_api subnet: internal_api_subnet02 - network: tenant subnet: tenant_subnet02 - network: storage subnet: storage_subnet02 network_config: template: /home/stack/templates/spine-leaf-nics/single-nic-vlans.j2 - name: Compute1", "openstack overcloud node provision --stack spine_leaf_overcloud --network-config --output spine-leaf-baremetal-nodes-provisioned.yaml /home/stack/templates/spine-leaf-baremetal-nodes.yaml", "watch openstack baremetal node list", "metalsmith list", "openstack baremetal allocation list", "source ~/stackrc", "openstack overcloud deploy --templates -n /home/stack/templates/spine-leaf-networks-data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /home/stack/templates/spine-leaf-nic-roles-map.yaml -e /home/stack/templates/spine-leaf-ctlplane.yaml -e /home/stack/templates/spine-leaf-vip-data.yaml -e /home/stack/templates/spine-leaf-baremetal-provisioned.yaml -e /home/stack/templates/spine-leaf-networks-provisioned.yaml -e /home/stack/templates/spine-leaf-vips-provisioned.yaml -e /home/stack/containers-prepare-parameter.yaml -e /home/stack/inject-trust-anchor-hiera.yaml -r /home/stack/templates/spine-leaf-roles-data.yaml", "source ~/stackrc", "- name: InternalApi name_lower: internal_api vip: true vlan: 10 ip_subnet: '172.18.0.0/24' allocation_pools: [{'start': '172.18.0.4', 'end': '172.18.0.250'}] gateway_ip: '172.18.0.1' subnets: internal_api_leaf1: vlan: 11 ip_subnet: '172.18.1.0/24' allocation_pools: [{'start': '172.18.1.4', 'end': '172.18.1.250'}] gateway_ip: '172.18.1.1' internal_api_leaf2: vlan: 12 ip_subnet: '172.18.2.0/24' allocation_pools: [{'start': '172.18.2.4', 'end': '172.18.2.250'}] gateway_ip: '172.18.2.1' internal_api_leaf3: vlan: 13 ip_subnet: '172.18.3.0/24' allocation_pools: [{'start': '172.18.3.4', 'end': '172.18.3.250'}] gateway_ip: '172.18.3.1'", "cp ~/roles/Compute1.yaml ~/roles/Compute3.yaml cp ~/roles/CephStorage1.yaml ~/roles/CephStorage3.yaml", "- name: ComputeLeaf1 HostnameFormatDefault: '%stackname%-compute-leaf1-%index%'", "- name: CephStorageLeaf1 HostnameFormatDefault: '%stackname%-cephstorage-leaf1-%index%'", "- name: ComputeLeaf1 networks: InternalApi: subnet: internal_api_leaf1 Tenant: subnet: tenant_leaf1 Storage: subnet: storage_leaf1", "- name: CephStorageLeaf1 networks: Storage: subnet: storage_leaf1 StorageMgmt: subnet: storage_mgmt_leaf1", "openstack overcloud roles generate --roles-path ~/roles -o roles_data_spine_leaf.yaml Controller Controller1 Controller2 Compute Compute1 Compute2 Compute3 CephStorage CephStorage1 CephStorage2 CephStorage3", "cp 
~/templates/spine-leaf-nics/computeleaf1.yaml ~/templates/spine-leaf-nics/computeleaf3.yaml cp ~/templates/spine-leaf-nics/ceph-storageleaf1.yaml ~/templates/spine-leaf-nics/ceph-storageleaf3.yaml", "parameter_defaults: %%ROLE%%NetworkConfigTemplate: <path_to_ansible_jinja2_nic_config_file>", "parameter_defaults: Controller0NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' Controller1NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' Controller2NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' ComputeLeaf0NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' ComputeLeaf1NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' ComputeLeaf2NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' ComputeLeaf3NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' CephStorage0NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' CephStorage1NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' CephStorage2NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2' CephStorage3NetworkConfigTemplate: '/home/stack/templates/spine-leaf-nics/single-nic-vlans.j2'", "parameter_defaults: NeutronFlatNetworks: leaf0,leaf1,leaf2,leaf3 Controller0Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Compute0Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Compute1Parameters: NeutronBridgeMappings: \"leaf1:br-ex\" Compute2Parameters: NeutronBridgeMappings: \"leaf2:br-ex\" Compute3Parameters: NeutronBridgeMappings: \"leaf3:br-ex\"", "NeutronNetworkType: 'geneve,vlan' NeutronNetworkVLANRanges: 'leaf0:1:1000,leaf1:1:1000,leaf2:1:1000,leaf3:1:1000'", "parameter_defaults: NeutronFlatNetworks: leaf0,leaf1,leaf2,leaf3 Controller0Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" ControllerControlPlaneSubnet: leaf0 Controller1Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Controller1ControlPlaneSubnet: leaf0 Controller2Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Controller2ControlPlaneSubnet: leaf0 Compute0Parameters: NeutronBridgeMappings: \"leaf0:br-ex\" Compute0ControlPlaneSubnet: leaf0 Compute1Parameters: NeutronBridgeMappings: \"leaf1:br-ex\" Compute1ControlPlaneSubnet: leaf1 Compute2Parameters: NeutronBridgeMappings: \"leaf2:br-ex\" Compute2ControlPlaneSubnet: leaf2 Compute3Parameters: NeutronBridgeMappings: \"leaf3:br-ex\" Compute3ControlPlaneSubnet: leaf3" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/spine_leaf_networking/assembly_configuring-the-overcloud
4.4.2. Persistent Device Numbers
4.4.2. Persistent Device Numbers Major and minor device numbers are allocated dynamically at module load. Some applications work best if the block device is always activated with the same device (major and minor) number. You can specify these with the lvcreate and the lvchange commands by using the following arguments: Use a large minor number to be sure that it has not already been allocated to another device dynamically. If you are exporting a file system using NFS, specifying the fsid parameter in the exports file may avoid the need to set a persistent device number within LVM.
[ "--persistent y --major major --minor minor" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/persistent_numbers
Chapter 44. Installing and running IBM WebSphere Application Server
Chapter 44. Installing and running IBM WebSphere Application Server IBM WebSphere Application Server must be installed and running for you to apply many of the configurations that accommodate KIE Server. This section describes how to install and start IBM WebSphere. For the most up-to-date and detailed installation instructions, see the IBM Knowledge Center . Procedure Download IBM Installation Manager version 1.8.5 or later from the IBM Installation Manager and Packaging Utility download links page. IBM Installation Manager is required for installing IBM WebSphere. Extract the downloaded archive and run the following command as the root user in the new directory: The IBM Installation Manager opens. Go to File Preferences and click Add Repository . In the Add Repository window, enter the repository URL for IBM WebSphere 9.0. You can find all the repository URLs in the Online product repositories for IBM WebSphere Application Server offerings page of the IBM Knowledge Center. In your command terminal, navigate to the IBM WebSphere Application Server folder location that you specified during the installation. Change to the /bin directory and run a command similar to the following example to create an IBM WebSphere profile, user name, and password. A profile defines the run time environment. The profile includes all the files that the server processes in the runtime environment and that you can change. The user is required for login. In your command terminal, navigate to the bin directory within the profile that you created (for example, /profiles/testprofile/bin ) and run the following command to start the IBM WebSphere Application Server instance: Replace <SERVER_NAME> with the IBM WebSphere Application Server name defined in Servers Server Types IBM WebSphere Application Servers of the WebSphere Integrated Solutions Console. Open the following URL in a web browser: <HOST> is the system name or IP address of the target server. For example, to start the WebSphere Integrated Solutions Console for a local instance of IBM WebSphere running on your system, enter the following URL in a web browser: When the login page of the WebSphere Integrated Solutions Console appears, enter your administrative credentials.
[ "sudo ./install", "sudo ./manageprofiles.sh -create -profileName testprofile -profilePath /profiles/testprofile -adminUserName websphere -adminPassword password123", "sudo ./startServer.sh <SERVER_NAME>", "http://<HOST>:9060/ibm/console", "http://localhost:9060/ibm/console" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/was-install-start-proc
Chapter 17. Storage
Chapter 17. Storage Support added in LVM for RAID level takeover LVM now provides full support for RAID takeover, previously available as a Technology Preview, which allows users to convert a RAID logical volume from one RAID level to another. This release expands the number of RAID takeover combinations. Support for some transitions may require intermediate steps. New RAID types that are added by means of RAID takeover are not supported in older released kernel versions; these RAID types are raid0, raid0_meta, raid5_n, and raid6_{ls,rs,la,ra,n}_6. Users creating those RAID types or converting to those RAID types on Red Hat Enterprise Linux 7.4 cannot activate the logical volumes on systems running previous releases. RAID takeover is available only on top-level logical volumes in single machine mode (that is, takeover is not available for cluster volume groups or while the RAID is under a snapshot or part of a thin pool). (BZ# 1366296 ) LVM now supports RAID reshaping LVM now provides support for RAID reshaping. While takeover allows users to change from one RAID type to another, reshaping allows users to change properties such as the RAID algorithm, stripe size, region size, or number of images. For example, a user can change a 3-way stripe to a 5-way stripe by adding two additional devices. Reshaping is available only on top-level logical volumes in single machine mode, and only while the logical volume is not in-use (for example, when it is mounted by a file system). Sample lvconvert invocations that illustrate takeover and reshaping appear at the end of this chapter. (BZ# 1191935 , BZ#834579, BZ# 1191978 , BZ# 1392947 ) Device Mapper linear devices now support DAX Direct Access (DAX) support has been added to the dm-linear and dm-stripe targets. Multiple Non-Volatile Dual In-line Memory Module (NVDIMM) devices can now be combined to provide larger persistent memory (PMEM) block devices. (BZ#1384648) libstoragemgmt rebased to version 1.4.0 The libstoragemgmt packages have been upgraded to upstream version 1.4.0, which provides a number of bug fixes and enhancements over the previous version. Notably, the following libraries have been added: Query serial number of local disk: lsm_local_disk_serial_num_get()/lsm.LocalDisk.serial_num_get() Query LED status of local disk: lsm_local_disk_led_status_get()/lsm.LocalDisk.led_status_get() Query link speed of local disk: lsm_local_disk_link_speed_get()/lsm.LocalDisk.link_speed_get() Notable bug fixes include: The megaraid plug-in for the Dell PowerEdge RAID Controller (PERC) has been fixed. The local disk rotation speed query on the NVM Express (NVMe) disk has been fixed. lsmcli incorrect error handling on a local disk query has been fixed. All gcc compile warnings have been fixed. The obsolete usage of the autoconf AC_OUTPUT macro has been fixed. (BZ# 1403142 ) mpt3sas updated to version 15.100.00.00 The mpt3sas storage driver has been updated to version 15.100.00.00, which adds support for new devices. Contact your vendor for more details. (BZ#1306453) The lpfc_no_hba_reset module parameter for the lpfc driver is now available With this update, the lpfc driver for certain models of Emulex Fibre Channel Host Bus Adapters (HBAs) has been enhanced by adding the lpfc_no_hba_reset module parameter. This parameter accepts a list of one or more hexadecimal world-wide port numbers (WWPNs) of HBAs that are not reset during SCSI error handling. Now, lpfc allows you to control which ports on the HBA may be reset during SCSI error handling time. Also, lpfc now allows you to set the eh_deadline parameter, which represents an upper limit of the SCSI error handling time. 
(BZ#1366564) LVM now detects Veritas Dynamic Multi-Pathing systems and no longer accesses the underlying device paths directly For LVM to work correctly with Veritas Dynamic Multi-Pathing, you must set obtain_device_list_from_udev to 0 in the devices section of the configuration file /etc/lvm/lvm.conf . These multi-pathed devices are not exposed through the standard udev interfaces and so without this setting LVM will be unaware of their existence. (BZ#1346280) The libnvdimm kernel subsystem now supports PMEM subdivision Intel's Non-Volatile Dual In-line Memory Module (NVDIMM) label specification has been extended to allow more than one Persistent Memory (PMEM) namespace to be configured per region (interleave set). The kernel shipped with Red Hat Enterprise Linux 7.4 has been modified to support these new configurations. Without subdivision support, a single region could previously be used in only one mode: pmem , device dax , or sector . With this update, a single region can be subdivided, and each subdivision can be configured independently of the others. (BZ#1383827) Warning messages when multipathd is not running Users now get warning messages if they run a multipath command that creates or lists multipath devices while multipathd is not running. If multipathd is not running, then the devices are not able to restore paths that have failed or react to changes in the device setup. The multipath command now prints a warning message if there are multipath devices and multipathd is not running. (BZ# 1359510 ) c library interface added to multipathd to give structured output Users can now use the libdmmp library to get structured information from multipathd. Other programs that want to get information from multipathd can now get this information without running a command and parsing the results. (BZ# 1430097 ) New remove retries multipath configuration value If a multipath device is temporarily in use when multipath tries to remove it, the remove will fail. It is now possible to control the number of times that the multipath command will retry removing a multipath device that is busy by setting the remove_retries configuration value. The default value is 0, in which case multipath will not retry failed removes. (BZ# 1368211 ) New multipathd reset multipaths stats commands Multipath now supports two new multipathd commands: multipathd reset multipaths stats and multipathd reset multipath dev stats . These commands reset the device stats that multipathd tracks for all the devices, or the specified device, respectively. This allows users to reset their device stats after they make changes to them. (BZ# 1416569 ) New disable_changed_wwids multipath configuration parameter Multipath now supports a new multipath.conf defaults section parameter, disable_changed_wwids . Setting this will make multipathd notice when a path device changes its wwid while in use, and will disable access to the path device until its wwid returns to its previous value. When the wwid of a scsi device changes, it is often a sign that the device has been remapped to a different LUN. If this happens while the scsi device is in use, it can lead to data corruption. Setting the disable_changed_wwids parameter will warn users when the scsi device changes its wwid. In many cases multipathd will disable access to the path device as soon as it gets unmapped from its original LUN, removing the possibility of corruption. 
However, multipathd is not always able to catch the change before the scsi device has been remapped, meaning there may still be a window for corruption. Remapping in-use scsi devices is not currently supported. (BZ#1169168) Updated built-in configuration for HPE 3PAR array The built-in configuration for the 3PAR array now sets no_path_retry to 12. (BZ#1279355) Added built-in configuration for NFINIDAT InfiniBox.* devices Multipath now autoconfigures NFINIDAT InfiniBox.* devices. (BZ#1362409) device-mapper-multipath now supports the max_sectors_kb configuration parameter With this update, device-mapper-multipath provides a new max_sectors_kb parameter in the defaults, devices, and multipaths sections of the multipath.conf file. The max_sectors_kb parameter allows you to set the max_sectors_kb device queue parameter to the specified value on all underlying paths of a multipath device before the multipath device is first activated. When a multipath device is created, the device inherits the max_sectors_kb value from the path devices. Manually raising this value for the multipath device or lowering this value for the path devices can cause multipath to create I/O operations larger than the path devices allow. Using the max_sectors_kb multipath.conf parameter is an easy way to set these values before a multipath device is created on top of the path devices, and prevent invalid-sized I/O operations from being passed down. A sample multipath.conf defaults section that sets this and the other new multipath parameters described in this chapter appears at the end of the chapter. (BZ#1394059) New detect_checker multipath configuration parameter Some devices, such as the VNX2, can be optionally configured in ALUA mode. In this mode, they need to use a different path_checker and prioritizer than in their non-ALUA mode. Multipath now supports the detect_checker parameter in the multipath.conf defaults and devices sections. If this is set, multipath will detect if a device supports ALUA, and if so, it will override the configured path_checker and use the TUR checker instead. The detect_checker option allows devices with an optional ALUA mode to be correctly autoconfigured, regardless of what mode they are in. (BZ#1372032) Multipath now has a built-in default configuration for Nimble Storage devices The multipath default hardware table now includes an entry for Nimble Storage arrays. (BZ# 1406226 ) LVM supports reducing the size of a RAID logical volume As of Red Hat Enterprise Linux 7.4, you can use the lvreduce or lvresize command to reduce the size of a RAID logical volume. (BZ# 1394048 ) iprutils rebased to version 2.4.14 The iprutils packages have been upgraded to upstream version 2.4.14, which provides a number of bug fixes and enhancements over the previous version. Notably: Endian swapped device_id is now compatible with earlier versions. VSET write cache in bare metal mode is now allowed. Creating RAIDS on dual adapter setups has been fixed. Verifying rebuilds for single adapter configurations is now disabled by default. (BZ#1384382) mdadm rebased to version 4.0 The mdadm packages have been upgraded to upstream version 4.0, which provides a number of bug fixes and enhancements over the previous version. Notably, this update adds bad block management support for Intel Matrix Storage Manager (IMSM) metadata. The features included in this update are supported on external metadata formats, and Red Hat continues supporting the Intel Rapid Storage Technology enterprise (Intel RSTe) software stack. 
(BZ#1380017) LVM extends the size of a thin pool logical volume when a thin pool fills over 50 percent When a thin pool logical volume fills by more than 50 percent, by default the dmeventd thin plugin now calls the dmeventd thin_command command with every 5 percent increase. This resizes the thin pool when it has been filled above the configured thin_pool_autoextend_threshold in the activation section of the configuration file. A user may override this default by configuring an external command and specifying this command as the value of thin_command in the dmeventd section of the lvm.conf file. For information on the thin plugin and on configuring external commands to maintain a thin pool, see the dmeventd(8) man page. In previous releases, when a thin pool resize failed, the dmeventd plugin would try to unmount unconditionally all thin volumes associated with the thin pool when a compile-time defined threshold of more than 95 percent was reached. The dmeventd plugin, by default, no longer unmounts any volumes. Reproducing the previous logic requires configuring an external script. (BZ# 1442992 ) LVM now supports dm-cache metadata version 2 LVM/DM cache has been significantly improved. It provides support for larger cache sizes, better adaptation to changing workloads, greatly improved startup and shutdown times, and higher performance overall. Version 2 of the dm-cache metadata format is now the default when creating cache logical volumes with LVM. Version 1 will continue to be supported for previously created LVM cache logical volumes. Upgrading to version 2 will require the removal of the old cache layer and the creation of a new cache layer. (BZ# 1436748 ) Support for DIF/DIX (T10 PI) on specified hardware SCSI T10 DIF/DIX is fully supported in Red Hat Enterprise Linux 7.4, provided that the hardware vendor has qualified it and provides full support for the particular HBA and storage array configuration. DIF/DIX is not supported on other configurations, it is not supported for use on the boot device, and it is not supported on virtualized guests. At the current time, the following vendors are known to provide this support. FUJITSU supports DIF and DIX on: EMULEX 16G FC HBA: EMULEX LPe16000/LPe16002, 10.2.254.0 BIOS, 10.4.255.23 FW, with: FUJITSU ETERNUS DX100 S3, DX200 S3, DX500 S3, DX600 S3, DX8100 S3, DX8700 S3, DX8900 S3, DX200F, DX60 S3, AF250, AF650 QLOGIC 16G FC HBA: QLOGIC QLE2670/QLE2672, 3.28 BIOS, 8.00.00 FW, with: FUJITSU ETERNUS DX100 S3, DX200 S3, DX500 S3, DX600 S3, DX8100 S3, DX8700 S3, DX8900 S3, DX200F, DX60 S3 Note that T10 DIX requires a database or some other software that provides generation and verification of checksums on disk blocks. No currently supported Linux file systems have this capability. EMC supports DIF on: EMULEX 8G FC HBA: LPe12000-E and LPe12002-E with firmware 2.01a10 or later, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later EMULEX 16G FC HBA: LPe16000B-E and LPe16002B-E with firmware 10.0.803.25 or later, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later QLOGIC 16G FC HBA: QLE2670-E-SP and QLE2672-E-SP, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later Please refer to the hardware vendor's support information for the latest status. Support for DIF/DIX remains in Technology Preview for other HBAs and storage arrays. 
(BZ#1457907) The dmstats facility can now track the statistics for files that change Previously, the dmstats facility was able to report statistics for files that did not change in size. It now has the ability to watch files for changes and update its mappings to track file I/O even as the file changes in size (or fills holes that may be in the file). (BZ# 1378956 ) Support for thin snapshots of cached logical volumes LVM in Red Hat Enterprise Linux 7.4 allows you to create thin snapshots of cached logical volumes. This feature was not available in earlier releases. These external origin cached logical volumes are converted to a read-only state and thus can be used by different thin pools. (BZ# 1189108 ) New package: nvmetcli The nvmetcli utility enables you to configure Red Hat Enterprise Linux as an NVMEoF target, using the NVME-over-RDMA fabric type. With nvmetcli , you can configure nvmet interactively, or use a JSON file to save and restore the configuration. (BZ#1383837) Device DAX is now available for NVDIMM devices Device DAX enables users like hypervisors and databases to have raw access to persistent memory without an intervening file system. In particular, Device DAX allows applications to have predictable fault granularities and the ability to flush data to the persistence domain from user space. Starting with Red Hat Enterprise Linux 7.4, Device Dax is available for Non-Volatile Dual In-line Memory Module (NVDIMM) devices. (BZ#1383489)
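To make the LVM notes above more concrete, the following sketch shows the general shape of the relevant commands. The volume group vg00, the logical volume lv_data, and the sizes are hypothetical, and the conversions that are actually possible depend on the starting RAID level and may require intermediate steps, as described above.

    # RAID takeover: convert an existing RAID logical volume to another RAID level
    lvconvert --type raid6 vg00/lv_data

    # RAID reshaping: change the number of stripes on a RAID logical volume
    lvconvert --stripes 5 vg00/lv_data

    # Reduce a RAID logical volume to 50G (shrink any file system on it first)
    lvreduce --size 50G vg00/lv_data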
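Similarly, the following multipath.conf fragment only illustrates where the new multipath parameters described in this chapter are placed; the values shown are arbitrary examples, not recommendations.

    defaults {
        # Cap the I/O size set on path devices before the multipath device is created
        max_sectors_kb 1024
        # Retry removing a busy multipath device up to three times
        remove_retries 3
        # Disable access to a path device whose WWID changes while it is in use
        disable_changed_wwids yes
    }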
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/new_features_storage
Chapter 15. Renewing the custom SSL certificate
Chapter 15. Renewing the custom SSL certificate This chapter provides information on how to renew the custom SSL certificate on Satellite Server as well as on Capsule Server. 15.1. Renewing a custom SSL certificate on Satellite Server Use this procedure to update your custom SSL certificate for Satellite Server. Prerequisites You must create a new Certificate Signing Request (CSR) and send it to the Certificate Authority to sign the certificate. Refer to the Configuring Satellite Server with a Custom SSL Certificate guide before creating a new CSR because the Server certificate must have X.509 v3 Key Usage and Extended Key Usage extensions with required values. In return, you will receive the Satellite Server certificate and CA bundle. Procedure Before deploying a renewed custom certificate on your Satellite Server, validate the custom SSL input files. Note that for the katello-certs-check command to work correctly, Common Name (CN) in the certificate must match the FQDN of Satellite Server: If the command is successful, it returns the following satellite-installer command. You can use this command to deploy the renewed CA certificates to Satellite Server: Important Do not delete the certificate files after you deploy the certificate. They are required when upgrading Satellite Server. Note If a new consumer package katello-ca-consumer-latest.noarch.rpm is generated due to a different Certificate Signing Authority, all the clients registered to Satellite Server must be updated. Verification Access the Satellite web UI from your local machine. For example, https://satellite.example.com . In your browser, view the certificate details to verify the deployed certificate. 15.2. Renewing a custom SSL certificate on Capsule Server Use this procedure to update your custom SSL certificate for Capsule Server. The satellite-installer command, which the capsule-certs-generate command returns, is unique to each Capsule Server. You cannot use the same command on more than one Capsule Server. Prerequisites You must create a new Certificate Signing Request and send it to the Certificate Authority to sign the certificate. Refer to the Configuring Satellite Server with a Custom SSL Certificate guide before creating a new CSR because the Satellite Server certificate must have X.509 v3 Key Usage and Extended Key Usage extensions with required values. In return, you will receive the Capsule Server certificate and CA bundle. Procedure On your Satellite Server, validate the custom SSL certificate input files: On your Satellite Server, generate the certificate archive file for your Capsule Server: On your Satellite Server, copy the certificate archive file to your Capsule Server: You can move the copied file to the applicable path if required. Retain a copy of the satellite-installer command that the capsule-certs-generate command returns for deploying the certificate to your Capsule Server. Deploy the certificate on your Capsule Server using the satellite-installer command returned by the capsule-certs-generate command: Important Do not delete the certificate archive file on the Capsule Server after you deploy the certificate. They are required when upgrading Capsule Server. Note If a new consumer package katello-ca-consumer-latest.noarch.rpm is generated due to a different Certificate Signing Authority, all the clients registered to Capsule Server must be updated.
[ "katello-certs-check -t satellite -b /root/ satellite_cert/ca_cert_bundle.pem -c /root/ satellite_cert/satellite_cert.pem -k /root/ satellite_cert/satellite_cert_key.pem", "satellite-installer --scenario satellite --certs-server-cert \"/root/ satellite_cert/satellite_cert.pem \" --certs-server-key \"/root/ satellite_cert/satellite_cert_key.pem \" --certs-server-ca-cert \"/root/ satellite_cert/ca_cert_bundle.pem \" --certs-update-server --certs-update-server-ca", "katello-certs-check -t capsule -b /root/ capsule_cert/ca_cert_bundle.pem -c /root/ capsule_cert/capsule_cert.pem -k /root/ capsule_cert/capsule_cert_key.pem", "capsule-certs-generate --certs-tar \" /root/My_Certificates/capsule.example.com-certs.tar \" --certs-update-server --foreman-proxy-fqdn \" capsule.example.com \" --server-ca-cert \" /root/My_Certificates/ca_cert_bundle.pem \" --server-cert \" /root/My_Certificates/capsule_cert.pem \" --server-key \" /root/My_Certificates/capsule_cert_key.pem \"", "scp /root/My_Certificates/capsule.example.com-certs.tar [email protected] :", "satellite-installer --scenario capsule --certs-tar-file \" /root/My_Certificates/capsule.example.com-certs.tar \" --certs-update-server --foreman-proxy-foreman-base-url \"https:// satellite.example.com \" --foreman-proxy-register-in-foreman \"true\"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/renewing-the-custom-ssl-certificate_admin
9.3. Welcome to Red Hat Enterprise Linux
9.3. Welcome to Red Hat Enterprise Linux The Welcome screen does not prompt you for any input. Figure 9.1. The Welcome screen Click the Next button to continue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-welcome-x86
Chapter 7. Assigning permissions using roles and groups
Chapter 7. Assigning permissions using roles and groups Roles and groups have a similar purpose, which is to give users access and permissions to use applications. Groups are a collection of users to which you apply roles and attributes. Roles define specific application permissions and access control. A role typically applies to one type of user. For example, an organization may include admin , user , manager , and employee roles. An application can assign access and permissions to a role and then assign multiple users to that role so the users have the same access and permissions. For example, the Admin Console has roles that give permission to users to access different parts of the Admin Console. There is a global namespace for roles and each client also has its own dedicated namespace where roles can be defined. 7.1. Creating a realm role Realm-level roles are a namespace for defining your roles. To see the list of roles, click Realm Roles in the menu. Procedure Click Create Role . Enter a Role Name . Enter a Description . Click Save . Add role The description field can be localized by specifying a substitution variable with ${var-name} strings. The localized value is configured within your theme's property files. See the Server Developer Guide for more details. 7.2. Client roles Client roles are namespaces dedicated to clients. Each client gets its own namespace. Client roles are managed under the Roles tab for each client. You interact with this UI the same way you do for realm-level roles. 7.3. Converting a role to a composite role Any realm or client level role can become a composite role . A composite role is a role that has one or more additional roles associated with it. When a composite role is mapped to a user, the user gains the roles associated with the composite role. This inheritance is recursive so users also inherit any composite of composites. However, we recommend that composite roles are not overused. Procedure Click Realm Roles in the menu. Click the role that you want to convert. From the Action list, select Add associated roles . Composite role The role selection UI is displayed on the page and you can associate realm level and client level roles to the composite role you are creating. In this example, the employee realm-level role is associated with the developer composite role. Any user with the developer role also inherits the employee role. Note When creating tokens and SAML assertions, any composite also has its associated roles added to the claims and assertions of the authentication response sent back to the client. 7.4. Assigning role mappings You can assign role mappings to a user through the Role Mappings tab for that user. Procedure Click Users in the menu. Click the user that you want to perform a role mapping on. Click the Role mappings tab. Click Assign role . Select the role you want to assign to the user from the dialog. Click Assign . Role mappings In the preceding example, we are assigning the composite role developer to a user. That role was created in the Composite Roles topic. Effective role mappings When the developer role is assigned, the employee role associated with the developer composite is displayed with Inherited "True". Inherited roles are the roles explicitly assigned to users and roles that are inherited from composites. 7.5. Using default roles Use default roles to automatically assign user role mappings when a user is created or imported through Identity Brokering . Procedure Click Realm settings in the menu. 
Click the User registration tab. Default roles This screenshot shows that some default roles already exist. 7.6. Role scope mappings On creation of an OIDC access token or SAML assertion, the user role mappings become claims within the token or assertion. Applications use these claims to make access decisions on the resources controlled by the application. Red Hat build of Keycloak digitally signs access tokens and applications re-use them to invoke remotely secured REST services. However, these tokens have an associated risk. An attacker can obtain these tokens and use their permissions to compromise your networks. To prevent this situation, use Role Scope Mappings . Role Scope Mappings limit the roles declared inside an access token. When a client requests a user authentication, the access token they receive contains only the role mappings that are explicitly specified for the client's scope. The result is that you limit the permissions of each individual access token instead of giving the client access to all of the user's permissions. By default, each client gets all the role mappings of the user. You can view the role mappings for a client. Procedure Click Clients in the menu. Click the client to go to the details. Click the Client scopes tab. Click the link in the row with Dedicated scope and mappers for this client . Click the Scope tab. Full scope By default, the effective roles of scopes are every declared role in the realm. To change this default behavior, toggle Full Scope Allowed to OFF and declare the specific roles you want in each client. You can also use client scopes to define the same role scope mappings for a set of clients. Partial scope 7.7. Groups Groups in Red Hat build of Keycloak manage a common set of attributes and role mappings for each user. Users can be members of any number of groups and inherit the attributes and role mappings assigned to each group. To manage groups, click Groups in the menu. Groups Groups are hierarchical. A group can have multiple subgroups but a group can have only one parent. Subgroups inherit the attributes and role mappings from their parent. Users inherit the attributes and role mappings from their parent as well. If you have a parent group and a child group, and a user that belongs only to the child group, the user in the child group inherits the attributes and role mappings of both the parent group and the child group. The following example includes a top-level Sales group and a child North America subgroup. To add a group: Click the group. Click Create group . Enter a group name. Click Create . Click the group name. The group management page is displayed. Group Attributes and role mappings you define are inherited by the groups and users that are members of the group. To add a user to a group: Click Users in the menu. Click the user that you want to perform a role mapping on. If the user is not displayed, click View all users . Click Groups . User groups Click Join Group . Select a group from the dialog. Select a group from the Available Groups tree. Click Join . To remove a group from a user: Click Users in the menu. Click the user to be removed from the group. Click Leave on the group table row. In this example, the user jimlincoln is in the North America group. You can see jimlincoln displayed under the Members tab for the group. Group membership 7.7.1. Groups compared to roles Groups and roles have some similarities and differences. 
In Red Hat build of Keycloak, groups are a collection of users to which you apply roles and attributes. Roles define types of users and applications assign permissions and access control to roles. Composite Roles are similar to Groups as they provide the same functionality. The difference between them is conceptual. Composite roles apply the permission model to a set of services and applications. Use composite roles to manage applications and services. Groups focus on collections of users and their roles in an organization. Use groups to manage users. 7.7.2. Using default groups To automatically assign group membership to any user who is created or who is imported through Identity Brokering , you use default groups. Click Realm settings in the menu. Click the User registration tab. Click the Default Groups tab. Default groups This screenshot shows that some default groups already exist.
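The sections above use the Admin Console; the same operations can also be scripted with the Admin CLI. The following sketch assumes a hypothetical realm named myrealm, the role and user names used in the examples above (developer, employee, jimlincoln), and a kcadm.sh session that has already been authenticated with kcadm.sh config credentials; adjust the names for your environment.

    # Create a realm role
    ./kcadm.sh create roles -r myrealm -s name=developer -s 'description=Developer role'

    # Turn it into a composite role by associating the employee role with it
    ./kcadm.sh add-roles -r myrealm --rname developer --rolename employee

    # Assign the composite role to a user
    ./kcadm.sh add-roles -r myrealm --uusername jimlincoln --rolename developer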
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_administration_guide/assigning_permissions_using_roles_and_groups
Chapter 5. Using secure communications between two systems with OpenSSH
Chapter 5. Using secure communications between two systems with OpenSSH SSH (Secure Shell) is a protocol which provides secure communications between two systems using a client-server architecture and allows users to log in to server host systems remotely. Unlike other remote communication protocols, such as FTP or Telnet, SSH encrypts the login session, which prevents intruders from collecting unencrypted passwords from the connection. 5.1. Generating SSH key pairs You can log in to an OpenSSH server without entering a password by generating an SSH key pair on a local system and copying the generated public key to the OpenSSH server. Each user who wants to create a key must run this procedure. To preserve previously generated key pairs after you reinstall the system, back up the ~/.ssh/ directory before you create new keys. After reinstalling, copy it back to your home directory. You can do this for all users on your system, including root . Prerequisites You are logged in as a user who wants to connect to the OpenSSH server by using keys. The OpenSSH server is configured to allow key-based authentication. Procedure Generate an ECDSA key pair: You can also generate an RSA key pair by using the ssh-keygen command without any parameter or an Ed25519 key pair by entering the ssh-keygen -t ed25519 command. Note that the Ed25519 algorithm is not FIPS-140-compliant, and OpenSSH does not work with Ed25519 keys in FIPS mode. Copy the public key to a remote machine: Replace <username> @ <ssh-server-example.com> with your credentials. If you do not use the ssh-agent program in your session, the command copies the most recently modified ~/.ssh/id*.pub public key if it is not yet installed. To specify another public-key file or to prioritize keys in files over keys cached in memory by ssh-agent , use the ssh-copy-id command with the -i option. Verification Log in to the OpenSSH server by using the key file: Additional resources ssh-keygen(1) and ssh-copy-id(1) man pages on your system 5.2. Setting key-based authentication as the only method on an OpenSSH server To improve system security, enforce key-based authentication by disabling password authentication on your OpenSSH server. Prerequisites The openssh-server package is installed. The sshd daemon is running on the server. You can already connect to the OpenSSH server by using a key. See the Generating SSH key pairs section for details. Procedure Open the /etc/ssh/sshd_config configuration in a text editor, for example: Change the PasswordAuthentication option to no : On a system other than a new default installation, check that the PubkeyAuthentication parameter is either not set or set to yes . Set the ChallengeResponseAuthentication directive to no . Note that the corresponding entry is commented out in the configuration file and the default value is yes . To use key-based authentication with NFS-mounted home directories, enable the use_nfs_home_dirs SELinux boolean: If you are connected remotely, not using console or out-of-band access, test the key-based login process before disabling password authentication. Reload the sshd daemon to apply the changes: Additional resources sshd_config(5) and setsebool(8) man pages on your system 5.3. Caching your SSH credentials by using ssh-agent To avoid entering a passphrase each time you initiate an SSH connection, you can use the ssh-agent utility to cache the private SSH key for a login session. 
If the agent is running and your keys are unlocked, you can log in to SSH servers by using these keys but without having to enter the key's password again. The private key and the passphrase remain secure. Prerequisites You have a remote host with the SSH daemon running and reachable through the network. You know the IP address or hostname and credentials to log in to the remote host. You have generated an SSH key pair with a passphrase and transferred the public key to the remote machine. See the Generating SSH key pairs section for details. Procedure Add the command for automatically starting ssh-agent in your session to the ~/.bashrc file: Open ~/.bashrc in a text editor of your choice, for example: Add the following line to the file: Save the changes, and quit the editor. Add the following line to the ~/.ssh/config file: With this option and ssh-agent started in your session, the agent prompts for a password only for the first time when you connect to a host. Verification Log in to a host which uses the corresponding public key of the cached private key in the agent, for example: Note that you did not have to enter the passphrase. 5.4. Authenticating by SSH keys stored on a smart card You can create and store ECDSA and RSA keys on a smart card and authenticate by the smart card on an OpenSSH client. Smart-card authentication replaces the default password authentication. Prerequisites On the client side, the opensc package is installed and the pcscd service is running. Procedure List all keys provided by the OpenSC PKCS #11 module including their PKCS #11 URIs and save the output to the keys.pub file: Transfer the public key to the remote server. Use the ssh-copy-id command with the keys.pub file created in the previous step: Connect to <ssh-server-example.com> by using the ECDSA key. You can use just a subset of the URI, which uniquely references your key, for example: Because OpenSSH uses the p11-kit-proxy wrapper and the OpenSC PKCS #11 module is registered to the p11-kit tool, you can simplify the command: If you skip the id= part of a PKCS #11 URI, OpenSSH loads all keys that are available in the proxy module. This can reduce the amount of typing required: Optional: You can use the same URI string in the ~/.ssh/config file to make the configuration permanent: The ssh client utility now automatically uses this URI and the key from the smart card. Additional resources p11-kit(8) , opensc.conf(5) , pcscd(8) , ssh(1) , and ssh-keygen(1) man pages on your system 5.5. Additional resources sshd(8) , ssh(1) , scp(1) , sftp(1) , ssh-keygen(1) , ssh-copy-id(1) , ssh_config(5) , sshd_config(5) , update-crypto-policies(8) , and crypto-policies(7) man pages on your system Configuring SELinux for applications and services with non-standard configurations Controlling network traffic using firewalld
[ "ssh-keygen -t ecdsa Generating public/private ecdsa key pair. Enter file in which to save the key (/home/ <username> /.ssh/id_ecdsa): Enter passphrase (empty for no passphrase): <password> Enter same passphrase again: <password> Your identification has been saved in /home/ <username> /.ssh/id_ecdsa. Your public key has been saved in /home/ <username> /.ssh/id_ecdsa.pub. The key fingerprint is: SHA256:Q/x+qms4j7PCQ0qFd09iZEFHA+SqwBKRNaU72oZfaCI <username> @ <localhost.example.com> The key's randomart image is: +---[ECDSA 256]---+ |.oo..o=++ | |.. o .oo . | |. .. o. o | |....o.+... | |o.oo.o +S . | |.=.+. .o | |E.*+. . . . | |.=..+ +.. o | | . oo*+o. | +----[SHA256]-----+", "ssh-copy-id <username> @ <ssh-server-example.com> /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed <username> @ <ssh-server-example.com> 's password: ... Number of key(s) added: 1 Now try logging into the machine, with: \"ssh ' <username> @ <ssh-server-example.com> '\" and check to make sure that only the key(s) you wanted were added.", "ssh -o PreferredAuthentications=publickey <username> @ <ssh-server-example.com>", "vi /etc/ssh/sshd_config", "PasswordAuthentication no", "setsebool -P use_nfs_home_dirs 1", "systemctl reload sshd", "vi ~/.bashrc", "eval USD(ssh-agent)", "AddKeysToAgent yes", "ssh <example.user> @ <[email protected]>", "ssh-keygen -D pkcs11: > keys.pub", "ssh-copy-id -f -i keys.pub <[email protected]>", "ssh -i \"pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so\" <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD", "ssh -i \"pkcs11:id=%01\" <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD", "ssh -i pkcs11: <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD", "cat ~/.ssh/config IdentityFile \"pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so\" ssh <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/assembly_using-secure-communications-between-two-systems-with-openssh_configuring-basic-system-settings
16.7. Configuring Uni-Directional Synchronization
16.7. Configuring Uni-Directional Synchronization As Figure 16.1, "Active Directory - Directory Server Synchronization Process" illustrates, synchronization is bi-directional by default. That means that changes in Active Directory are sent to Directory Server and changes on Directory Server are sent to Active Directory. It is possible to create uni-directional synchronization, where changes are only sent one-way. This is similar to a supplier-consumer relationship [1] as opposed to multi-supplier. An additional attribute for the sync agreement, oneWaySync , enables uni-directional synchronization and specifies the direction to send changes. The possible values are fromWindows (for Active Directory to Directory Server sync) and toWindows (for Directory Server to Active Directory sync). If this attribute is absent, then synchronization is bi-directional. Figure 16.3. Uni-Directional Synchronization The synchronization process itself is mostly the same for bi-directional and uni-directional synchronization. It uses the same sync interval and configuration. The only difference is in how sync information is requested. For Windows Active Directory to Directory Server synchronization, during the regular synchronization update interval, the Directory Server contacts the Active Directory server and sends the DirSync control to request updates. However, the Directory Server does not send any changes or entries from its side. So, the sync update consists of the Active Directory changes being sent to and updating the Directory Server entries. For Directory Server to Active Directory synchronization, the Directory Server sends entry modifications to the Active Directory server in a normal update, but it does not include the DirSync control so that it does not request any updates from the Active Directory side. Use the --one-way-sync=" direction " option to enable uni-directional synchronization in one of the following situations: If you create a new synchronization agreement in Section 16.4.9, "Step 9: Configuring the Database for Synchronization and Creating the Synchronization Agreement" , pass the option to the dsconf repl-winsync-agmt create command. If the synchronization agreement already exists, update the agreement. For example, to set synchronization from AD to Directory Server: Note Enabling uni-directional sync does not automatically prevent changes on the un-synchronized server, and this can lead to inconsistencies between the sync peers between sync updates. For example, uni-directional sync is configured to go from Active Directory to Directory Server, so Active Directory is (in essence) the data supplier. If an entry is modified or even deleted on the Directory Server, then the Directory Server information is different from the Active Directory information, and those changes are never carried over to Active Directory. During the sync update, the edits are overwritten on the Directory Server and the deleted entry is re-added. To prevent data inconsistency, use access control rules to prevent editing or deleting entries within the synchronized subtree on the un-synchronized server. Access controls for Directory Server are covered in Chapter 18, Managing Access Control . An illustrative ACI example follows the command listing at the end of this section. For Active Directory, see the appropriate Windows documentation. Uni-directional sync does not affect password synchronization. Even when the synchronization direction is set to toWindows , after updating a password on the Active Directory server, the password is sent to the Directory Server. 
[1] Unlike a consumer, changes can still be made on the un-synchronized server. Use ACLs to prevent editing or deleting entries on the un-synchronized server to maintain data integrity.
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-winsync-agmt set --one-way-sync=\"fromWindows\" --suffix=\" dc=example,dc=com \" example-agreement" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/unidirectional-sync
Chapter 9. Managing activation keys
Chapter 9. Managing activation keys Activation keys provide a method to automate system registration and subscription attachment. You can create multiple keys and associate them with different environments and content views. For example, you might create a basic activation key with a subscription for Red Hat Enterprise Linux workstations and associate it with content views from a particular environment. Important If you have Simple Content Access (SCA) enabled on Satellite, you cannot attach subscriptions to your activation key. With SCA enabled, you do not need to have subscriptions attached to your hosts. Note that SCA is enabled by default for newly created organizations. To learn more about SCA, see Simple Content Access . You can use activation keys during content host registration to improve the speed, simplicity, and consistency of the process. Note that activation keys are used only when hosts are registered. If changes are made to an activation key, it is applicable only to hosts that are registered with the amended activation key in the future. The changes are not made to existing hosts. Activation keys can define the following properties for content hosts: Associated subscriptions and subscription attachment behavior Available products and repositories A lifecycle environment and a content view Host collection membership System purpose Content view conflicts between host creation and registration When you provision a host, Satellite uses provisioning templates and other content from the content view that you set in the host group or host settings. When the host is registered, the content view from the activation key overwrites the original content view from the host group or host settings. Then Satellite uses the content view from the activation key for every future task, for example, rebuilding a host. When you rebuild a host, ensure that you set the content view that you want to use in the activation key and not in the host group or host settings. Using the same activation key with multiple content hosts You can apply the same activation key to multiple content hosts if it contains enough subscriptions. However, activation keys set only the initial configuration for a content host. When the content host is registered to an organization, the organization's content can be attached to the content host manually. Using multiple activation keys with a content host A content host can be associated with multiple activation keys that are combined to define the host settings. In case of conflicting settings, the last specified activation key takes precedence. You can specify the order of precedence by setting a host group parameter named kt_activation_keys whose value lists the activation keys in order of precedence. 9.1. Best practices for activation keys Create an activation key for each use case. This structures, modularizes, and simplifies content management on hosts. Use a naming convention for activation keys to indicate the content and lifecycle environment, for example, red-hat-enterprise-linux-webserver . Automate activation key management by using a Hammer script or an Ansible playbook . 9.2. Creating an activation key You can use activation keys to define a specific set of subscriptions to attach to hosts during registration. The subscriptions that you add to an activation key must be available within the associated content view. Important If you have Simple Content Access (SCA) enabled on Satellite, you cannot attach subscriptions to your activation key. With SCA enabled, you do not need to have subscriptions attached to your hosts. 
Note that SCA is enabled by default for newly created organizations. To learn more about SCA, see Simple Content Access . Subscription Manager attaches subscriptions differently depending on the following factors: Are there any subscriptions associated with the activation key? Is the auto-attach option enabled? For Red Hat Enterprise Linux 8 hosts: Is there system purpose set on the activation key? Note that Satellite automatically attaches subscriptions only for the products installed on a host. For subscriptions that do not list products installed on Red Hat Enterprise Linux by default, such as the Extended Update Support (EUS) subscription, use an activation key specifying the required subscriptions and with the auto-attach disabled. Based on the factors, there are three possible scenarios for subscribing with activation keys: Activation key that attaches subscriptions automatically. With no subscriptions specified and auto-attach enabled, hosts using the activation key search for the best fitting subscription from the ones provided by the content view associated with the activation key. This is similar to entering the subscription-manager --auto-attach command. For Red Hat Enterprise Linux 8 hosts, you can configure the activation key to set system purpose on hosts during registration to enhance the automatic subscriptions attachment. Activation key providing a custom set of subscription for auto-attach. If there are subscriptions specified and auto-attach is enabled, hosts using the activation key select the best fitting subscription from the list specified in the activation key. Setting system purpose on the activation key does not affect this scenario. Activation key with the exact set of subscriptions. If there are subscriptions specified and auto-attach is disabled, hosts using the activation key are associated with all subscriptions specified in the activation key. Setting system purpose on the activation key does not affect this scenario. Custom products If a custom product, typically containing content not provided by Red Hat, is assigned to an activation key, this product is always enabled for the registered content host regardless of the auto-attach setting. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Lifecycle > Activation Keys and click Create Activation Key . In the Name field, enter the name of the activation key. If you want to set a limit, clear the Unlimited hosts checkbox, and in the Limit field, enter the maximum number of systems you can register with the activation key. If you want unlimited hosts to register with the activation key, ensure the Unlimited Hosts checkbox is selected. Optional: In the Description field, enter a description for the activation key. From the Environment list, select the environment to use. From the Content View list, select a content view to use. If Simple Content Access (SCA) is enabled: In the Repository Sets tab, ensure only your named repository is enabled. If SCA is not enabled: Click the Subscriptions tab, then click the Add submenu. Click the checkbox under the subscription you created before. Click Add Selected . Click Save . Optional: For Red Hat Enterprise Linux 8 hosts, in the System Purpose section, you can configure the activation key with system purpose to set on hosts during registration to enhance subscriptions auto attachment. 
CLI procedure Create the activation key: Optional: For Red Hat Enterprise Linux 8 hosts, enter the following command to configure the activation key with system purpose to set on hosts during registration to enhance subscriptions auto attachment. Obtain a list of your subscription IDs: Attach the Red Hat Enterprise Linux subscription UUID to the activation key: List the product content associated with the activation key: If Simple Content Access (SCA) is enabled: If SCA is not enabled: Override the default auto-enable status for the Red Hat Satellite Client 6 repository. The default status is set to disabled. To enable, enter the following command: 9.3. Updating subscriptions associated with an activation key Important This procedure is only valid if you have Simple Content Access (SCA) disabled on your Satellite. With SCA enabled, you do not need to have subscriptions attached to your hosts. Note that SCA is enabled by default for newly created organizations. To learn more about SCA, see Simple Content Access . Use this procedure to change the subscriptions associated with an activation key. To use the CLI instead of the Satellite web UI, see the CLI procedure . Note that changes to an activation key apply only to machines provisioned after the change. To update subscriptions on existing content hosts, see Section 2.7, "Updating Red Hat subscriptions on multiple hosts" . Procedure In the Satellite web UI, navigate to Content > Lifecycle > Activation Keys and click the name of the activation key. Click the Subscriptions tab. To remove subscriptions, select List/Remove , and then select the checkboxes to the left of the subscriptions to be removed and then click Remove Selected . To add subscriptions, select Add , and then select the checkboxes to the left of the subscriptions to be added and then click Add Selected . Click the Repository Sets tab and review the repositories' status settings. To enable or disable a repository, select the checkbox for a repository and then change the status using the Select Action list. Click the Details tab, select a content view for this activation key, and then click Save . CLI procedure List the subscriptions that the activation key currently contains: Remove the required subscription from the activation key: For the --subscription-id option, you can use either the UUID or the ID of the subscription. Attach new subscription to the activation key: For the --subscription-id option, you can use either the UUID or the ID of the subscription. List the product content associated with the activation key: Override the default auto-enable status for the required repository: For the --value option, enter 1 for enable, 0 for disable. 9.4. Using activation keys for host registration You can use activation keys to complete the following tasks: Registering new hosts during provisioning through Red Hat Satellite. The kickstart provisioning templates in Red Hat Satellite contain commands to register the host using an activation key that is defined when creating a host. Registering existing Red Hat Enterprise Linux hosts. Configure Subscription Manager to use Satellite Server for registration and specify the activation key when running the subscription-manager register command. You can register hosts with Satellite using the host registration feature in the Satellite web UI, Hammer CLI, or the Satellite API. For more information, see Registering Hosts in Managing hosts . Procedure In the Satellite web UI, navigate to Hosts > Register Host . 
From the Activation Keys list, select the activation keys to assign to your host. Click Generate to create the registration command. Click on the files icon to copy the command to your clipboard. Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. CLI procedure Generate the host registration command using the Hammer CLI: If your hosts do not trust the SSL certificate of Satellite Server, you can disable SSL validation by adding the --insecure flag to the registration command. Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. API procedure Generate the host registration command using the Satellite API: If your hosts do not trust the SSL certificate of Satellite Server, you can disable SSL validation by adding the --insecure flag to the registration command. Use an activation key to simplify specifying the environments. For more information, see Managing Activation Keys in Managing content . To enter a password as a command line argument, use username:password syntax. Keep in mind this can save the password in the shell history. Alternatively, you can use a temporary personal access token instead of a password. To generate a token in the Satellite web UI, navigate to My Account > Personal Access Tokens . Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. Multiple activation keys You can use multiple activation keys when registering a content host. You can then create activation keys for specific subscription sets and combine them according to content host requirements. For example, the following command registers a content host to your organization with both VDC and OpenShift subscriptions: Settings conflicts If there are conflicting settings in activation keys, the rightmost key takes precedence. Settings that conflict: Service Level , Release Version , Environment , Content View , and Product Content . Settings that do not conflict and the host gets the union of them: Subscriptions and Host Collections . Settings that influence the behavior of the key itself and not the host configuration: Content Host Limit and Auto-Attach . 9.5. Enabling auto-attach When auto-attach is enabled on an activation key and there are subscriptions associated with the key, the subscription management service selects and attaches the best-matched associated subscriptions based on a set of criteria like currently installed products, architecture, and preferences like service level. Important This procedure is only valid if you have Simple Content Access (SCA) disabled on your Satellite. With SCA enabled, you do not need to have subscriptions attached to your hosts. Note that SCA is enabled by default for newly created organizations. To learn more about SCA, see Simple Content Access . You can enable auto-attach and have no subscriptions associated with the key. This type of key is commonly used to register virtual machines when you do not want the virtual machine to consume a physical subscription, but to inherit a host-based subscription from the hypervisor. For more information, see Configuring virtual machine subscriptions . Auto-attach is enabled by default. 
Disable the option if you want to force attach all subscriptions associated with the activation key. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Activation Keys . Click the activation key name that you want to edit. Click the Subscriptions tab. Click the edit icon to Auto-Attach . Select or clear the checkbox to enable or disable auto-attach. Click Save . CLI procedure Enter the following command to enable auto-attach on the activation key: 9.6. Setting the service level You can configure an activation key to define a default service level for the new host created with the activation key. Setting a default service level selects only the matching subscriptions to be attached to the host. For example, if the default service level on an activation key is set to Premium, only subscriptions with premium service levels are attached to the host upon registration. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Activation Keys . Click the activation key name you want to edit. Click the edit icon to Service Level . Select the required service level from the list. The list only contains service levels available to the activation key. Click Save . CLI procedure Set the service level to Premium on your activation key: 9.7. Enabling and disabling repositories on activation key As a Simple Content Access (SCA) user, you can enable or disable repositories on an activation key in the Satellite web UI. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Activation Keys . Select an activation key. Select the Repository Sets tab. From the dropdown, you can filter the Repository type column to Custom or Red Hat , if desired. Select the desired repositories or click the Select All checkbox to select all repositories. From the Select Action list, select Override to Enabled , Override to Disabled , or Reset to Default .
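After registering a host with one or more activation keys as described in Section 9.4, you can verify the result from the host itself. A minimal sketch using standard subscription-manager commands; the exact output depends on your keys and on whether SCA is enabled:
# Confirm the host is registered to the expected organization and environment
subscription-manager identity
# List the repositories enabled by the key's repository sets and content overrides
subscription-manager repos --list-enabled
# With SCA disabled, show the subscriptions that were attached during registration
subscription-manager list --consumed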
[ "hammer hostgroup set-parameter --hostgroup \" My_Host_Group \" --name \" My_Activation_Key \" --value \" name_of_first_key \", \" name_of_second_key \",", "hammer activation-key create --name \" My_Activation_Key \" --unlimited-hosts --description \" Example Stack in the Development Environment \" --lifecycle-environment \" Development \" --content-view \" Stack \" --organization \" My_Organization \"", "hammer activation-key update --organization \" My_Organization \" --name \" My_Activation_Key \" --service-level \" Standard \" --purpose-usage \" Development/Test \" --purpose-role \" Red Hat Enterprise Linux Server \" --purpose-addons \" addons \"", "hammer subscription list --organization \" My_Organization \"", "hammer activation-key add-subscription --name \" My_Activation_Key \" --subscription-id My_Subscription_ID --organization \" My_Organization \"", "hammer activation-key product-content --content-access-mode-all true --name \" My_Activation_Key \" --organization \" My_Organization \"", "hammer activation-key product-content --name \" My_Activation_Key \" --organization \" My_Organization \"", "hammer activation-key content-override --name \" My_Activation_Key \" --content-label rhel-7-server-satellite-client-6-rpms --value 1 --organization \" My_Organization \"", "hammer activation-key subscriptions --name My_Activation_Key --organization \" My_Organization \"", "hammer activation-key remove-subscription --name \" My_Activation_Key \" --subscription-id ff808181533518d50152354246e901aa --organization \" My_Organization \"", "hammer activation-key add-subscription --name \" My_Activation_Key \" --subscription-id ff808181533518d50152354246e901aa --organization \" My_Organization \"", "hammer activation-key product-content --name \" My_Activation_Key \" --organization \" My_Organization \"", "hammer activation-key content-override --name \" My_Activation_Key \" --content-label content_label --value 1 --organization \" My_Organization \"", "hammer host-registration generate-command --activation-keys \" My_Activation_Key \"", "hammer host-registration generate-command --activation-keys \" My_Activation_Key \" --insecure true", "curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"] }}'", "curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"], \"insecure\": true }}'", "subscription-manager register --activationkey=\"ak-VDC,ak-OpenShift\" --org=\" My_Organization \"", "hammer activation-key update --name \" My_Activation_Key \" --organization \" My_Organization \" --auto-attach true", "hammer activation-key update --name \" My_Activation_Key \" --organization \" My_Organization \" --service-level premium" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_content/Managing_Activation_Keys_content-management
Chapter 3. Enabling Linux control group version 2 (cgroup v2)
Chapter 3. Enabling Linux control group version 2 (cgroup v2) By default, OpenShift Container Platform uses Linux control group version 1 (cgroup v1) in your cluster. You can enable Linux control group version 2 (cgroup v2) upon installation. Enabling cgroup v2 in OpenShift Container Platform disables all cgroup version 1 controllers and hierarchies in your cluster. cgroup v2 is the version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information , and enhanced resource management and isolation. You can switch between cgroup v1 and cgroup v2, as needed, by editing the node.config object. For more information, see "Configuring the Linux cgroup on your nodes" in the "Additional resources" of this section. 3.1. Enabling Linux cgroup v2 during installation You can enable Linux control group version 2 (cgroup v2) when you install a cluster by creating installation manifests. Procedure Create or edit the node.config object to specify the v2 cgroup: apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: "v2" Proceed with the installation as usual. Additional resources OpenShift Container Platform installation overview Configuring the Linux cgroup on your nodes
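After the cluster is installed, you can confirm which cgroup version a node is actually running. A hedged check, where <node_name> is a placeholder for one of your nodes; cgroup2fs in the output indicates cgroup v2, while tmpfs indicates cgroup v1:
# Inspect the cgroup filesystem type on a node
oc debug node/<node_name> -- chroot /host stat -fc %T /sys/fs/cgroup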
[ "apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: \"v2\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installation_configuration/enabling-cgroup-v2
12.2. Creating a Virtual Machine Pool
12.2. Creating a Virtual Machine Pool You can create a virtual machine pool containing multiple virtual machines based on a common template. See Templates in the Virtual Machine Management Guide for information about sealing a virtual machine and creating a template. Sysprep File Configuration Options for Windows Virtual Machines Several sysprep file configuration options are available, depending on your requirements. If your pool does not need to join a domain, you can use the default sysprep file, located in /usr/share/ovirt-engine/conf/sysprep/ . If your pool needs to join a domain, you can create a custom sysprep for each Windows operating system: Copy the relevant sections for each operating system from /usr/share/ovirt-engine/conf/osinfo-defaults.properties to a new file and save as 99-defaults.properties . In 99-defaults.properties , specify the Windows product activation key and the path of your new custom sysprep file: Create a new sysprep file, specifying the domain, domain password, and domain administrator: If you need to configure different sysprep settings for different pools of Windows virtual machines, you can create a custom sysprep file in the Administration Portal (see Creating a Virtual Machine Pool below). See Using Sysprep to Automate the Configuration of Virtual Machines in the Virtual Machine Guide for more information. Creating a Virtual Machine Pool Click Compute Pools . Click New . Select a Cluster from the drop-down list. Select a Template and version from the drop-down menu. A template provides standard settings for all the virtual machines in the pool. Select an Operating System from the drop-down list. Use the Optimized for drop-down list to optimize virtual machines for Desktop or Server . Note High Performance optimization is not recommended for pools because a high performance virtual machine is pinned to a single host and concrete resources. A pool containing multiple virtual machines with such a configuration would not run well. Enter a Name and, optionally, a Description and Comment . The Name of the pool is applied to each virtual machine in the pool, with a numeric suffix. You can customize the numbering of the virtual machines with ? as a placeholder. Example 12.1. Pool Name and Virtual Machine Numbering Examples Pool: MyPool Virtual machines: MyPool-1 , MyPool-2 , ... MyPool-10 Pool: MyPool-??? Virtual machines: MyPool-001 , MyPool-002 , ... MyPool-010 Enter the Number of VMs for the pool. Enter the number of virtual machines to be prestarted in the Prestarted field. Select the Maximum number of VMs per user that a single user is allowed to run in a session. The minimum is 1 . Select the Delete Protection check box to enable delete protection. If you are creating a pool of non-Windows virtual machines or if you are using the default sysprep , skip this step. If you are creating a custom sysprep file for a pool of Windows virtual machines: Click the Show Advanced Options button. Click the Initial Run tab and select the Use Cloud-Init/Sysprep check box. Click the Authentication arrow and enter the User Name and Password or select Use already configured password . Note This User Name is the name of the local administrator. You can change its value from its default value ( user ) here in the Authentication section or in a custom sysprep file. Click the Custom Script arrow and paste the contents of the default sysprep file, located in /usr/share/ovirt-engine/conf/sysprep/ , into the text box. You can modify the following values of the sysprep file: Key . 
If you do not want to use the pre-defined Windows activation product key, replace <![CDATA[$ProductKey$]]> with a valid product key: Example 12.2. Windows Product Key Example Domain that the Windows virtual machines will join, the domain's Password , and the domain administrator's Username : <Credentials> <Domain> AD_Domain </Domain> <Password> Domain_Password </Password> <Username> Domain_Administrator </Username> </Credentials> Example 12.3. Domain Credentials Example Note The Domain , Password , and Username are required to join the domain. The Key is for activation. You do not necessarily need both. The domain and credentials cannot be modified in the Initial Run tab. FullName of the local administrator: <UserData> ... <FullName> Local_Administrator </FullName> ... </UserData> DisplayName and Name of the local administrator: <LocalAccounts> <LocalAccount wcm:action="add"> <Password> <Value><![CDATA[$AdminPassword$]]></Value> <PlainText>true</PlainText> </Password> <DisplayName> Local_Administrator </DisplayName> <Group>administrators</Group> <Name> Local_Administrator </Name> </LocalAccount> </LocalAccounts> The remaining variables in the sysprep file can be filled in on the Initial Run tab. Optional. Set a Pool Type : Click the Type tab and select a Pool Type : Manual - The administrator is responsible for explicitly returning the virtual machine to the pool. Automatic - The virtual machine is automatically returned to the virtual machine pool. Select the Stateful Pool check box to ensure that virtual machines are started in a stateful mode. This ensures that changes made by a user will persist on a virtual machine. Click OK . Optional. Override the SPICE proxy: In the Console tab, select the Override SPICE Proxy check box. In the Overridden SPICE proxy address text field, specify the address of a SPICE proxy to override the global SPICE proxy. Click OK . For a pool of Windows virtual machines, click Compute Virtual Machines , select each virtual machine from the pool, and click Run Run Once . Note If the virtual machine does not start and Info [windeploy.exe] Found no unattend file appears in %WINDIR%\panther\UnattendGC\setupact.log , add the UnattendFile key to the registry of the Windows virtual machine that was used to create the template for the pool: Check that the Windows virtual machine has an attached floppy device with the unattend file, for example, A:\Unattend.xml . Click Start , click Run , type regedit in the Open text box, and click OK . In the left pane, go to HKEY_LOCAL_MACHINE SYSTEM Setup . Right-click the right pane and select New String Value . Enter UnattendFile as the key name. Double-click the new key and enter the unattend file name and path, for example, A:\Unattend.xml , as the key's value. Save the registry, seal the Windows virtual machine, and create a new template. See Templates in the Virtual Machine Management Guide for details. You have created and configured a virtual machine pool with the specified number of identical virtual machines. You can view these virtual machines in Compute Virtual Machines , or by clicking the name of a pool to open its details view; a virtual machine in a pool is distinguished from independent virtual machines by its icon.
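The custom sysprep and product key settings described above only take effect after the Manager reloads its osinfo properties. A minimal sketch of that final step, assuming the override file is placed in the conventional /etc/ovirt-engine/osinfo.conf.d/ directory on the Manager machine:
# After saving /etc/ovirt-engine/osinfo.conf.d/99-defaults.properties, restart the engine so it picks up the overrides
systemctl restart ovirt-engine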
[ "os. operating_system .productKey.value= Windows_product_activation_key os. operating_system .sysprepPath.value = USD{ENGINE_USR}/conf/sysprep/sysprep. operating_system", "<Credentials> <Domain> AD_Domain </Domain> <Password> Domain_Password </Password> <Username> Domain_Administrator </Username> </Credentials>", "<ProductKey> <Key><![CDATA[USDProductKeyUSD]]></Key> </ProductKey>", "<ProductKey> <Key>0000-000-000-000</Key> </ProductKey>", "<Credentials> <Domain> AD_Domain </Domain> <Password> Domain_Password </Password> <Username> Domain_Administrator </Username> </Credentials>", "<Credentials> <Domain>addomain.local</Domain> <Password>12345678</Password> <Username>Sarah_Smith</Username> </Credentials>", "<UserData> ... <FullName> Local_Administrator </FullName> ... </UserData>", "<LocalAccounts> <LocalAccount wcm:action=\"add\"> <Password> <Value><![CDATA[USDAdminPasswordUSD]]></Value> <PlainText>true</PlainText> </Password> <DisplayName> Local_Administrator </DisplayName> <Group>administrators</Group> <Name> Local_Administrator </Name> </LocalAccount> </LocalAccounts>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/Creating_a_VM_Pool
Chapter 9. Cloning virtual machines
Chapter 9. Cloning virtual machines To quickly create a new virtual machine (VM) with a specific set of properties, you can clone an existing VM. Cloning creates a new VM that uses its own disk image for storage, but most of the clone's configuration and stored data is identical to the source VM. This makes it possible to prepare multiple VMs optimized for a certain task without the need to optimize each VM individually. 9.1. How cloning virtual machines works Cloning a virtual machine (VM) copies the XML configuration of the source VM and its disk images, and makes adjustments to the configurations to ensure the uniqueness of the new VM. This includes changing the name of the VM and ensuring it uses the disk image clones. Nevertheless, the data stored on the clone's virtual disks is identical to the source VM. This process is faster than creating a new VM and installing it with a guest operating system, and can be used to rapidly generate VMs with a specific configuration and content. If you are planning to create multiple clones of a VM, first create a VM template that does not contain: Unique settings, such as persistent network MAC configuration, which can prevent the clones from working correctly. Sensitive data, such as SSH keys and password files. For instructions, see Creating virtual machines templates . Additional resources Cloning a virtual machine by using the command line Cloning a virtual machine by using the web console 9.2. Creating virtual machine templates To create multiple virtual machine (VM) clones that work correctly, you can remove information and configurations that are unique to a source VM, such as SSH keys or persistent network MAC configuration. This creates a VM template , which you can use to easily and safely create VM clones. You can create VM templates by using the virt-sysprep utility or you can create them manually based on your requirements. 9.2.1. Creating a virtual machine template by using virt-sysprep To create a cloning template from an existing virtual machine (VM), you can use the virt-sysprep utility. This removes certain configurations that might cause the clone to work incorrectly, such as specific network settings or system registration metadata. As a result, virt-sysprep makes creating clones of the VM more efficient, and ensures that the clones work more reliably. Prerequisites The libguestfs-tools-c package, which contains the virt-sysprep utility, is installed on your host: The source VM intended as a template is shut down. You know where the disk image for the source VM is located, and you are the owner of the VM's disk image file. Note that disk images for VMs created in the system connection of libvirt are located in the /var/lib/libvirt/images directory and owned by the root user by default: Optional: Any important data on the source VM's disk has been backed up. If you want to preserve the source VM intact, clone it first and turn the clone into a template. Procedure Ensure you are logged in as the owner of the VM's disk image: Optional: Copy the disk image of the VM. This is used later to verify that the VM was successfully turned into a template. Use the following command, and replace /var/lib/libvirt/images/a-really-important-vm.qcow2 with the path to the disk image of the source VM. Verification To confirm that the process was successful, compare the modified disk image to the original one. 
The following example shows a successful creation of a template: Additional resources The OPERATIONS section in the virt-sysprep man page on your system Cloning a virtual machine by using the command line 9.2.2. Creating a virtual machine template manually To create a template from an existing virtual machine (VM), you can manually reset or unconfigure a guest VM to prepare it for cloning. Prerequisites Ensure that you know the location of the disk image for the source VM and are the owner of the VM's disk image file. Note that disk images for VMs created in the system connection of libvirt are by default located in the /var/lib/libvirt/images directory and owned by the root user: Ensure that the VM is shut down. Optional: Any important data on the VM's disk has been backed up. If you want to preserve the source VM intact, clone it first and edit the clone to create a template. Procedure Configure the VM for cloning: Install any software needed on the clone. Configure any non-unique settings for the operating system. Configure any non-unique application settings. Remove the network configuration: Remove any persistent udev rules by using the following command: Note If udev rules are not removed, the name of the first NIC might be eth1 instead of eth0 . Remove unique network details from ifcfg scripts by editing /etc/sysconfig/network-scripts/ifcfg-eth[x] as follows: Remove the HWADDR and Static lines: Note If the HWADDR does not match the new guest's MAC address, the ifcfg will be ignored. Configure DHCP but do not include HWADDR or any other unique information: Ensure the following files also contain the same content, if they exist on your system: /etc/sysconfig/networking/devices/ifcfg-eth[x] /etc/sysconfig/networking/profiles/default/ifcfg-eth[x] Note If you had used NetworkManager or any special settings with the VM, ensure that any additional unique information is removed from the ifcfg scripts. Remove registration details: For VMs registered on the Red Hat Network (RHN): For VMs registered with Red Hat Subscription Manager (RHSM): If you do not plan to use the original VM: If you plan to use the original VM: Note The original RHSM profile remains in the Portal along with your ID code. Use the following command to reactivate your RHSM registration on the VM after it is cloned: Remove other unique details: Remove SSH public and private key pairs: Remove the configuration of LVM devices: Remove any other application-specific identifiers or configurations that might cause conflicts if running on multiple machines. Remove the gnome-initial-setup-done file to configure the VM to run the configuration wizard on the boot: Note The wizard that runs on the boot depends on the configurations that have been removed from the VM. In addition, on the first boot of the clone, it is recommended that you change the hostname. 9.3. Cloning a virtual machine by using the command line For testing, to create a new virtual machine (VM) with a specific set of properties, you can clone an existing VM by using CLI. Prerequisites The source VM is shut down. Ensure that there is sufficient disk space to store the cloned disk images. Optional: When creating multiple VM clones, remove unique data and settings from the source VM to ensure the cloned VMs work properly. For instructions, see Creating virtual machine templates . Procedure Use the virt-clone utility with options that are appropriate for your environment and use case. 
Sample use cases The following command clones a local VM named example-VM-1 and creates the example-VM-1-clone VM. It also creates and allocates the example-VM-1-clone.qcow2 disk image in the same location as the disk image of the original VM, and with the same data: The following command clones a VM named example-VM-2 , and creates a local VM named example-VM-3 , which uses only two out of multiple disks of example-VM-2 : To clone your VM to a different host, migrate the VM without undefining it on the local host. For example, the following commands clone the previously created example-VM-3 VM to the 192.0.2.1 remote system, including its local disks. Note that you require root privileges to run these commands for 192.0.2.1 : Verification To verify the VM has been successfully cloned and is working correctly: Confirm the clone has been added to the list of VMs on your host: Start the clone and observe if it boots up: Additional resources virt-clone (1) man page on your system Migrating virtual machines 9.4. Cloning a virtual machine by using the web console To create new virtual machines (VMs) with a specific set of properties, you can clone a VM that you had previously configured by using the web console. Note Cloning a VM also clones the disks associated with that VM. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . Ensure that the VM you want to clone is shut down. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the Virtual Machines interface of the web console, click the Menu button ... of the VM that you want to clone. A drop down menu appears with controls for various VM operations. Click Clone . The Create a clone VM dialog appears. Optional: Enter a new name for the VM clone. Click Clone . A new VM is created based on the source VM. Verification Confirm whether the cloned VM appears in the list of VMs available on your host.
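Putting the CLI steps together, the following is a compact sketch of the clone workflow using the example-VM-1 name from the samples above; apart from the VM names taken from this chapter, these are standard virsh and virt-clone invocations:
# Ensure the source VM is shut down before cloning
virsh shutdown example-VM-1
# Clone the VM and its disk image with an auto-generated name
virt-clone --original example-VM-1 --auto-clone
# Start the clone and check that it boots and obtains an IP address
virsh start example-VM-1-clone
virsh domifaddr example-VM-1-clone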
[ "yum install libguestfs-tools-c", "ls -la /var/lib/libvirt/images -rw-------. 1 root root 9665380352 Jul 23 14:50 a-really-important-vm.qcow2 -rw-------. 1 root root 8591507456 Jul 26 2017 an-actual-vm-that-i-use.qcow2 -rw-------. 1 root root 8591507456 Jul 26 2017 totally-not-a-fake-vm.qcow2 -rw-------. 1 root root 10739318784 Sep 20 17:57 another-vm-example.qcow2", "whoami root", "cp /var/lib/libvirt/images/a-really-important-vm.qcow2 /var/lib/libvirt/images/a-really-important-vm-original.qcow2", "virt-sysprep -a /var/lib/libvirt/images/a-really-important-vm.qcow2 [ 0.0] Examining the guest [ 7.3] Performing \"abrt-data\" [ 7.3] Performing \"backup-files\" [ 9.6] Performing \"bash-history\" [ 9.6] Performing \"blkid-tab\" [...]", "virt-diff -a /var/lib/libvirt/images/a-really-important-vm-orig.qcow2 -A /var/lib/libvirt/images/a-really-important-vm.qcow2 - - 0644 1001 /etc/group- - - 0000 797 /etc/gshadow- = - 0444 33 /etc/machine-id [...] - - 0600 409 /home/username/.bash_history - d 0700 6 /home/username/.ssh - - 0600 868 /root/.bash_history [...]", "ls -la /var/lib/libvirt/images -rw-------. 1 root root 9665380352 Jul 23 14:50 a-really-important-vm.qcow2 -rw-------. 1 root root 8591507456 Jul 26 2017 an-actual-vm-that-i-use.qcow2 -rw-------. 1 root root 8591507456 Jul 26 2017 totally-not-a-fake-vm.qcow2 -rw-------. 1 root root 10739318784 Sep 20 17:57 another-vm-example.qcow2", "rm -f /etc/udev/rules.d/70-persistent-net.rules", "DEVICE=eth[x] BOOTPROTO=none ONBOOT=yes #NETWORK=192.0.2.0 <- REMOVE #NETMASK=255.255.255.0 <- REMOVE #IPADDR=192.0.2.1 <- REMOVE #HWADDR=xx:xx:xx:xx:xx <- REMOVE #USERCTL=no <- REMOVE # Remove any other *unique or non-desired settings, such as UUID.*", "DEVICE=eth[x] BOOTPROTO=dhcp ONBOOT=yes", "rm /etc/sysconfig/rhn/systemid", "subscription-manager unsubscribe --all # subscription-manager unregister # subscription-manager clean", "subscription-manager clean", "subscription-manager register --consumerid=71rd64fx-6216-4409-bf3a-e4b7c7bd8ac9", "rm -rf /etc/ssh/ssh_host_example", "rm /etc/lvm/devices/system.devices", "rm ~/.config/gnome-initial-setup-done", "virt-clone --original example-VM-1 --auto-clone Allocating 'example-VM-1-clone.qcow2' | 50.0 GB 00:05:37 Clone 'example-VM-1-clone' created successfully.", "virt-clone --original example-VM-2 --name example-VM-3 --file /var/lib/libvirt/images/ disk-1-example-VM-2 .qcow2 --file /var/lib/libvirt/images/ disk-2-example-VM-2 .qcow2 Allocating 'disk-1-example-VM-2-clone.qcow2' | 78.0 GB 00:05:37 Allocating 'disk-2-example-VM-2-clone.qcow2' | 80.0 GB 00:05:37 Clone 'example-VM-3' created successfully.", "virsh migrate --offline --persistent example-VM-3 qemu+ssh://[email protected]/system [email protected]'s password: scp /var/lib/libvirt/images/ <disk-1-example-VM-2-clone> .qcow2 [email protected]/ <user@remote_host.com> ://var/lib/libvirt/images/ scp /var/lib/libvirt/images/ <disk-2-example-VM-2-clone> .qcow2 [email protected]/ <user@remote_host.com> ://var/lib/libvirt/images/", "virsh list --all Id Name State --------------------------------------- - example-VM-1 shut off - example-VM-1-clone shut off", "virsh start example-VM-1-clone Domain 'example-VM-1-clone' started" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_virtualization/cloning-virtual-machines_configuring-and-managing-virtualization
Chapter 2. Release notes
Chapter 2. Release notes 2.1. Red Hat OpenShift support for Windows Containers release notes 2.1.1. About Red Hat OpenShift support for Windows Containers Windows Container Support for Red Hat OpenShift enables running Windows compute nodes in an OpenShift Container Platform cluster. Running Windows workloads is possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes. With Windows nodes available, you can run Windows container workloads in OpenShift Container Platform. These release notes track the development of the WMCO, which provides all Windows container workload capabilities in OpenShift Container Platform. 2.1.2. Release notes for Red Hat Windows Machine Config Operator 8.1.3 This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of WMCO 8.1.3 were released in RHSA-2024:6461 . 2.2. Release notes for past releases of the Windows Machine Config Operator The following release notes are for versions of the Windows Machine Config Operator (WMCO). For the current version, see Red Hat OpenShift support for Windows Containers release notes . 2.2.1. Release notes for Red Hat Windows Machine Config Operator 8.1.2 This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of WMCO 8.1.2 were released in RHSA-2024:1477 . 2.2.1.1. Bug fixes Previously, because of bad logic in the networking configuration script, the WICD was incorrectly reading carriage returns in the CNI configuration file as changes, and identified the file as modified. This caused the CNI configuration to be unnecessarily reloaded, potentially resulting in container restarts and brief network outages. With this fix, the WICD now reloads the CNI configuration only when the CNI configuration is actually modified. ( OCPBUGS-27046 ) Previously, because of a lack of synchronization between Windows machine set nodes and BYOH instances, during an update the machine set nodes and the BYOH instances could update simultaneously. This could impact running workloads. This fix introduces a locking mechanism so that machine set nodes and BYOH instances update individually. ( OCPBUGS-23016 ) 2.2.2. Release notes for Red Hat Windows Machine Config Operator 8.1.1 This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of WMCO 8.1.1 were released in RHBA-2023:7709 . 2.2.2.1. Bug fixes Previously, the WMCO did not properly wait for Windows virtual machines (VMs) to finish rebooting. This led to occasional timing issues where the WMCO would attempt to interact with a node that was in the middle of a reboot, causing WMCO to log an error and restart node configuration. Now, the WMCO waits for the instance to completely reboot. ( OCPBUGS-20259 ) Previously, the WMCO configuration was missing the DeleteEmptyDirData: true field, which is required for draining nodes that have emptyDir volumes attached. As a consequence, customers that had nodes with emptyDir volumes would see the following error in the logs: cannot delete Pods with local storage. With this fix, the DeleteEmptyDirData: true field was added to the node drain helper struct in the WMCO. As a result, customers are able to drain nodes with emptyDir volumes attached. ( OCPBUGS-22748 ) 2.2.3. 
Release notes for Red Hat Windows Machine Config Operator 8.0.1 This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of WMCO 8.0.1 were released in RHBA-2023:3738 . 2.2.3.1. New features and improvements 2.2.3.1.1. Windows Server 2022 support With this release, Windows Server 2022 now supports Amazon Web Services (AWS). 2.2.3.2. Bug fixes Previously, on an Azure Windows Server 2019 platform that does not have Azure container services installed, WMCO would fail to deploy Windows instances and would display the Install-WindowsFeature : Win32 internal error "Access is denied" 0x5 occurred while reading the console output buffer error message. The failure occurred because the Microsoft Install-WindowsFeature cmdlet displays a progress bar that cannot be sent over an SSH connection. This fix hides the progress bar. As a result, Windows instances can be deployed as nodes. ( OCPBUGS-14181 ) 2.2.4. Release notes for Red Hat Windows Machine Config Operator 8.0.0 This release of the WMCO provides new features and bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 8.0.0 were released in RHBA-2023:3738 . Important Due to a known issue , WMCO 8.0.0 is not available to download and use. The issue will be addressed in WMCO 8.0.1, which is planned for release. If you upgrade your cluster from OpenShift Container Platform 4.12 to OpenShift Container Platform 4.13, you can continue to use WMCO 7.0.x. However, you will not be able to use the new WMCO 8.0.0 functionality, as described in this section. 2.2.4.1. New features and improvements 2.2.4.1.1. Support for the pod os parameter You can now use the spec.os.name.windows parameter in your workload pods to authoritatively identify the pod operating system for validation and to enforce Windows-specific pod security context constraints (SCCs). It is recommended that you configure this parameter in your workload pods. For more information, see Sample Windows container workload deployment . 2.2.4.1.2. WICD logs are added to must-gather The must-gather tool now collects the service logs generated by the Windows Instance Config Daemon (WICD) from Windows nodes. 2.2.4.2. Bug fixes Previously, the test to determine if the Windows Defender antivirus service is running was incorrectly checking for any process whose name started with Windows Defender , regardless of state. This resulted in an error when the WMCO created firewall exclusions for containerd on instances without Windows Defender installed. This fix now checks for the presence of the specific running process associated with the Windows Defender antivirus service. As a result, the WMCO can properly configure Windows instances as nodes regardless of whether Windows Defender is installed. ( OCPBUGS-1513 ) Previously, in-tree storage was not working for Windows nodes on VMware vSphere. With this fix, Red Hat OpenShift support for Windows Containers properly supports in-tree storage for all cloud providers. ( WINC-1014 ) 2.3. Windows Machine Config Operator prerequisites The following information details the supported platform versions, Windows Server versions, and networking configurations for the Windows Machine Config Operator. See the vSphere documentation for any information that is relevant to only that platform. The following table lists the Windows Server versions that are supported by WMCO 8.1.1, based on the applicable platform. 
Windows Server versions not listed are not supported, and attempting to use them will cause errors. To prevent these errors, use only an appropriate version for your platform. 2.3.1. WMCO supported installation method The WMCO fully supports installing Windows nodes into installer-provisioned infrastructure (IPI) clusters. This is the preferred OpenShift Container Platform installation method. For user-provisioned infrastructure (UPI) clusters, the WMCO supports installing Windows nodes only into a UPI cluster installed with the platform: none field set in the install-config.yaml file (bare-metal or provider-agnostic) and only for the BYOH (Bring Your Own Host) use case. UPI is not supported for any other platform. 2.3.2. WMCO 8.1.x supported platforms and Windows Server versions The following table lists the Windows Server versions that are supported by WMCO 8.1.x, based on the applicable platform. Windows Server versions not listed are not supported and attempting to use them will cause errors. To prevent these errors, use only an appropriate version for your platform. Platform Supported Windows Server version Amazon Web Services (AWS) Windows Server 2022, OS Build 20348.681 or later [1] Windows Server 2019, version 1809 Microsoft Azure Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 VMware vSphere Windows Server 2022, OS Build 20348.681 or later Google Cloud Platform (GCP) Windows Server 2022, OS Build 20348.681 or later Bare metal or provider agnostic Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 For disconnected clusters, the Windows AMI must have the EC2LaunchV2 agent version 2.0.1643 or later installed. For more information, see the Install the latest version of EC2Launch v2 in the AWS documentation. 2.3.3. WMCO 8.0.1 supported platforms and Windows Server versions The following table lists the Windows Server versions that are supported by WMCO 8.0.1, based on the applicable platform. Windows Server versions not listed are not supported, and attempting to use them will cause errors. To prevent these errors, use only an appropriate version for your platform. Platform Supported Windows Server version Amazon Web Services (AWS) Windows Server 2022, OS Build 20348.681 or later [1] Windows Server 2019, version 1809 Microsoft Azure Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 VMware vSphere Windows Server 2022, OS Build 20348.681 or later Google Cloud Platform (GCP) Windows Server 2022, OS Build 20348.681 or later Bare metal or provider agnostic Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 For disconnected clusters, the Windows AMI must have the EC2LaunchV2 agent version 2.0.1643 or later installed. For more information, see the Install the latest version of EC2Launch v2 in the AWS documentation. 2.3.4. WMCO 8.0.0 supported platforms and Windows Server versions The following table lists the Windows Server versions that are supported by WMCO 8.0.0, based on the applicable platform. Windows Server versions not listed are not supported, and attempting to use them will cause errors. To prevent these errors, use only an appropriate version for your platform. 
Platform Supported Windows Server version Amazon Web Services (AWS) Windows Server 2022, OS Build 20348.681 or later [1] Microsoft Azure Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 VMware vSphere Windows Server 2022, OS Build 20348.681 or later Google Cloud Platform (GCP) Windows Server 2022, OS Build 20348.681 or later Bare metal or provider agnostic Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 For disconnected clusters, the Windows AMI must have the EC2LaunchV2 agent version 2.0.1643 or later installed. For more information, see the Install the latest version of EC2Launch v2 in the AWS documentation. 2.3.5. Supported networking Hybrid networking with OVN-Kubernetes is the only supported networking configuration. See the additional resources below for more information on this functionality. The following tables outline the type of networking configuration and Windows Server versions to use based on your platform. You must specify the network configuration when you install the cluster. Note The WMCO does not support OVN-Kubernetes without hybrid networking or OpenShift SDN. Dual NIC is not supported on WMCO-managed Windows instances. Table 2.1. Platform networking support Platform Supported networking Amazon Web Services (AWS) Hybrid networking with OVN-Kubernetes Microsoft Azure Hybrid networking with OVN-Kubernetes VMware vSphere Hybrid networking with OVN-Kubernetes with a custom VXLAN port Google Cloud Platform (GCP) Hybrid networking with OVN-Kubernetes Bare metal or provider agnostic Hybrid networking with OVN-Kubernetes Table 2.2. Hybrid OVN-Kubernetes Windows Server support Hybrid networking with OVN-Kubernetes Supported Windows Server version Default VXLAN port Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 Custom VXLAN port Windows Server 2022, OS Build 20348.681 or later Additional resources Hybrid networking 2.4. Windows Machine Config Operator known limitations Note the following limitations when working with Windows nodes managed by the WMCO (Windows nodes): The following OpenShift Container Platform features are not supported on Windows nodes: Image builds OpenShift Pipelines OpenShift Service Mesh OpenShift monitoring of user-defined projects OpenShift Serverless Horizontal Pod Autoscaling Vertical Pod Autoscaling The following Red Hat features are not supported on Windows nodes: Red Hat Insights cost management Red Hat OpenShift Local Dual NIC is not supported on WMCO-managed Windows instances. Windows nodes do not support workloads created by using deployment configs. You can use a deployment or other method to deploy workloads. Windows nodes are not supported in clusters that use a cluster-wide proxy. This is because the WMCO is not able to route traffic through the proxy connection for the workloads. Windows nodes are not supported in clusters that are in a disconnected environment. Red Hat OpenShift support for Windows Containers does not support adding Windows nodes to a cluster through a trunk port. The only supported networking configuration for adding Windows nodes is through an access port that carries traffic for the VLAN. Red Hat OpenShift support for Windows Containers supports only in-tree storage drivers for all cloud providers. Red Hat OpenShift support for Windows Containers does not support any Windows operating system language other than English (United States). 
Due to a limitation within the Windows operating system, clusterNetwork CIDR addresses of class E, such as 240.0.0.0 , are not compatible with Windows nodes. Kubernetes has identified the following node feature limitations : Huge pages are not supported for Windows containers. Privileged containers are not supported for Windows containers. Kubernetes has identified several API compatibility issues .
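Because hybrid OVN-Kubernetes networking is a hard requirement for Windows nodes, it is worth confirming it on the cluster before troubleshooting anything else. A hedged sketch; the field path assumes the standard Network operator configuration for hybrid networking:
# Print the hybrid overlay configuration (empty output means hybrid networking is not enabled)
oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig}'
# List the Windows nodes that the WMCO has joined to the cluster
oc get nodes -l kubernetes.io/os=windows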
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/windows_container_support_for_openshift/release-notes
Chapter 3. OpenShift Data Foundation operators
Chapter 3. OpenShift Data Foundation operators Red Hat OpenShift Data Foundation is comprised of the following three Operator Lifecycle Manager (OLM) operator bundles, deploying four operators which codify administrative tasks and custom resources so that task and resource characteristics can be easily automated: OpenShift Data Foundation odf-operator OpenShift Container Storage ocs-operator rook-ceph-operator Multicloud Object Gateway mcg-operator Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state or approaching that state, with minimal administrator intervention. 3.1. OpenShift Data Foundation operator The odf-operator can be described as a "meta" operator for OpenShift Data Foundation, that is, an operator meant to influence other operators. The odf-operator has the following primary functions: Enforces the configuration and versioning of the other operators that comprise OpenShift Data Foundation. It does this by using two primary mechanisms: operator dependencies and Subscription management. The odf-operator bundle specifies dependencies on other OLM operators to make sure they are always installed at specific versions. The operator itself manages the Subscriptions for all other operators to make sure the desired versions of those operators are available for installation by the OLM. Provides the OpenShift Data Foundation external plugin for the OpenShift Console. Provides an API to integrate storage solutions with the OpenShift Console. 3.1.1. Components The odf-operator has a dependency on the ocs-operator package. It also manages the Subscription of the mcg-operator . In addition, the odf-operator bundle defines a second Deployment for the OpenShift Data Foundation external plugin for the OpenShift Console. This defines an nginx -based Pod that serves the necessary files to register and integrate OpenShift Data Foundation dashboards directly into the OpenShift Container Platform Console. 3.1.2. Design diagram This diagram illustrates how odf-operator is integrated with the OpenShift Container Platform. Figure 3.1. OpenShift Data Foundation Operator 3.1.3. Responsibilities The odf-operator defines the following CRD: StorageSystem The StorageSystem CRD represents an underlying storage system that provides data storage and services for OpenShift Container Platform. It triggers the operator to ensure the existence of a Subscription for a given Kind of storage system. 3.1.4. Resources The odf-operator creates the following CRs in response to the spec of a given StorageSystem. Operator Lifecycle Manager Resources Creates a Subscription for the operator which defines and reconciles the given StorageSystem's Kind. 3.1.5. Limitation The odf-operator does not provide any data storage or services itself. It exists as an integration and management layer for other storage systems. 3.1.6. High availability High availability is not a primary requirement for the odf-operator Pod, similar to most of the other operators. In general, there are no operations that require or benefit from process distribution. OpenShift Container Platform quickly spins up a replacement Pod whenever the current Pod becomes unavailable or is deleted. 3.1.7. Relevant config files The odf-operator comes with a ConfigMap of variables that can be used to modify the behavior of the operator. 3.1.8.
Relevant log files To get an understanding of the OpenShift Data Foundation and troubleshoot issues, you can look at the following: Operator Pod logs StorageSystem status Underlying storage system CRD statuses Operator Pod logs Each operator provides standard Pod logs that include information about reconciliation and errors encountered. These logs often have information about successful reconciliation which can be filtered out and ignored. StorageSystem status and events The StorageSystem CR stores the reconciliation details in the status of the CR and has associated events. The spec of the StorageSystem contains the name, namespace, and Kind of the actual storage system's CRD, which the administrator can use to find further information on the status of the storage system. 3.1.9. Lifecycle The odf-operator is required to be present as long as the OpenShift Data Foundation bundle remains installed. This is managed as part of OLM's reconciliation of the OpenShift Data Foundation CSV. At least one instance of the pod should be in Ready state. The operator operands such as CRDs should not affect the lifecycle of the operator. The creation and deletion of StorageSystems is an operation outside the operator's control and must be initiated by the administrator or automated with the appropriate application programming interface (API) calls. 3.2. OpenShift Container Storage operator The ocs-operator can be described as a "meta" operator for OpenShift Data Foundation, that is, an operator meant to influence other operators and serves as a configuration gateway for the features provided by the other operators. It does not directly manage the other operators. The ocs-operator has the following primary functions: Creates Custom Resources (CRs) that trigger the other operators to reconcile against them. Abstracts the Ceph and Multicloud Object Gateway configurations and limits them to known best practices that are validated and supported by Red Hat. Creates and reconciles the resources required to deploy containerized Ceph and NooBaa according to the support policies. 3.2.1. Components The ocs-operator does not have any dependent components. However, the operator has a dependency on the existence of all the custom resource definitions (CRDs) from other operators, which are defined in the ClusterServiceVersion (CSV). 3.2.2. Design diagram This diagram illustrates how OpenShift Container Storage is integrated with the OpenShift Container Platform. Figure 3.2. OpenShift Container Storage Operator 3.2.3. Responsibilities The two ocs-operator CRDs are: OCSInitialization StorageCluster OCSInitialization is a singleton CRD used for encapsulating operations that apply at the operator level. The operator takes care of ensuring that one instance always exists. The CR triggers the following: Performs initialization tasks required for OpenShift Container Storage. If needed, these tasks can be triggered to run again by deleting the OCSInitialization CRD. Ensures that the required Security Context Constraints (SCCs) for OpenShift Container Storage are present. Manages the deployment of the Ceph toolbox Pod, used for performing advanced troubleshooting and recovery operations. The StorageCluster CRD represents the system that provides the full functionality of OpenShift Container Storage. It triggers the operator to ensure the generation and reconciliation of Rook-Ceph and NooBaa CRDs. The ocs-operator algorithmically generates the CephCluster and NooBaa CRDs based on the configuration in the StorageCluster spec. 
The operator also creates additional CRs, such as CephBlockPools , Routes , and so on. These resources are required for enabling different features of OpenShift Container Storage. Currently, only one StorageCluster CR per OpenShift Container Platform cluster is supported. 3.2.4. Resources The ocs-operator creates the following CRs in response to the spec of the CRDs it defines. The configuration of some of these resources can be overridden, allowing for changes to the generated spec or not creating them altogether. General resources Events Creates various events when required in response to reconciliation. Persistent Volumes (PVs) PVs are not created directly by the operator. However, the operator keeps track of all the PVs created by the Ceph CSI drivers and ensures that the PVs have appropriate annotations for the supported features. Quickstarts Deploys various Quickstart CRs for the OpenShift Container Platform Console. Rook-Ceph resources CephBlockPool Define the default Ceph block pools. CephFilesystem Define the default Ceph filesystem. CephObjectStore Define the default Ceph object store. Route Define the default route for the Ceph object store. StorageClass Define the default Storage classes, for example, for CephBlockPool and CephFilesystem . VolumeSnapshotClass Define the default volume snapshot classes for the corresponding storage classes. Multicloud Object Gateway resources NooBaa Define the default Multicloud Object Gateway system. Monitoring resources Metrics Exporter Service Metrics Exporter Service Monitor PrometheusRules 3.2.5. Limitation The ocs-operator neither deploys nor reconciles the other Pods of OpenShift Data Foundation. The ocs-operator CSV defines the top-level components such as operator Deployments and the Operator Lifecycle Manager (OLM) reconciles the specified component. 3.2.6. High availability High availability is not a primary requirement for the ocs-operator Pod, similar to most of the other operators. In general, there are no operations that require or benefit from process distribution. OpenShift Container Platform quickly spins up a replacement Pod whenever the current Pod becomes unavailable or is deleted. 3.2.7. Relevant config files The ocs-operator configuration is entirely specified by the CSV and is not modifiable without a custom build of the CSV. 3.2.8. Relevant log files To get an understanding of OpenShift Container Storage and troubleshoot issues, you can look at the following: Operator Pod logs StorageCluster status and events OCSInitialization status Operator Pod logs Each operator provides standard Pod logs that include information about reconciliation and errors encountered. These logs often have information about successful reconciliation which can be filtered out and ignored. StorageCluster status and events The StorageCluster CR stores the reconciliation details in the status of the CR and has associated events. Status contains a section of the expected container images. It shows the container images that it expects to be present in the pods from other operators and the images that it currently detects. This helps to determine whether the OpenShift Container Storage upgrade is complete. OCSInitialization status This status shows whether the initialization tasks are completed successfully. 3.2.9. Lifecycle The ocs-operator is required to be present as long as the OpenShift Container Storage bundle remains installed. This is managed as part of OLM's reconciliation of the OpenShift Container Storage CSV. At least one instance of the pod should be in Ready state.
The operator operands such as CRDs should not affect the lifecycle of the operator. An OCSInitialization CR should always exist. The operator creates one if it does not exist. The creation and deletion of StorageClusters is an operation outside the operator's control and must be initiated by the administrator or automated with the appropriate API calls. 3.3. Rook-Ceph operator Rook-Ceph operator is the Rook operator for Ceph in the OpenShift Data Foundation. Rook enables Ceph storage systems to run on the OpenShift Container Platform. The Rook-Ceph operator is a simple container that automatically bootstraps the storage clusters and monitors the storage daemons to ensure the storage clusters are healthy. 3.3.1. Components The Rook-Ceph operator manages a number of components as part of the OpenShift Data Foundation deployment. Ceph-CSI Driver The operator creates and updates the CSI driver, including a provisioner for each of the two drivers, RADOS block device (RBD) and Ceph filesystem (CephFS) and a volume plugin daemonset for each of the two drivers. Ceph daemons Mons The monitors (mons) provide the core metadata store for Ceph. OSDs The object storage daemons (OSDs) store the data on underlying devices. Mgr The manager (mgr) collects metrics and provides other internal functions for Ceph. RGW The RADOS Gateway (RGW) provides the S3 endpoint to the object store. MDS The metadata server (MDS) provides CephFS shared volumes. 3.3.2. Design diagram The following image illustrates how Ceph Rook integrates with OpenShift Container Platform. Figure 3.3. Rook-Ceph Operator With Ceph running in the OpenShift Container Platform cluster, OpenShift Container Platform applications can mount block devices and filesystems managed by Rook-Ceph, or can use the S3/Swift API for object storage. 3.3.3. Responsibilities The Rook-Ceph operator is a container that bootstraps and monitors the storage cluster. It performs the following functions: Automates the configuration of storage components Starts, monitors, and manages the Ceph monitor pods and Ceph OSD daemons to provide the RADOS storage cluster Initializes the pods and other artifacts to run the services to manage: CRDs for pools Object stores (S3/Swift) Filesystems Monitors the Ceph mons and OSDs to ensure that the storage remains available and healthy Deploys and manages Ceph mons placement while adjusting the mon configuration based on cluster size Watches the desired state changes requested by the API service and applies the changes Initializes the Ceph-CSI drivers that are needed for consuming the storage Automatically configures the Ceph-CSI driver to mount the storage to pods Rook-Ceph Operator architecture The Rook-Ceph operator image includes all required tools to manage the cluster. There is no change to the data path. However, the operator does not expose all Ceph configurations. Many of the Ceph features like placement groups and crush maps are hidden from the users and are provided with a better user experience in terms of physical resources, pools, volumes, filesystems, and buckets. 3.3.4. Resources Rook-Ceph operator adds owner references to all the resources it creates in the openshift-storage namespace. When the cluster is uninstalled, the owner references ensure that the resources are all cleaned up. This includes OpenShift Container Platform resources such as configmaps , secrets , services , deployments , daemonsets , and so on. 
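To see the operator and the resources it owns on a running cluster, standard oc queries against the openshift-storage namespace are usually sufficient. A hedged sketch; the label and deployment name follow the usual Rook-Ceph naming and might differ in customized deployments:
# The CephCluster CR that the Rook-Ceph operator reconciles
oc get cephcluster -n openshift-storage
# The operator pod and its reconcile logs
oc get pods -n openshift-storage -l app=rook-ceph-operator
oc logs -n openshift-storage deploy/rook-ceph-operator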
The Rook-Ceph operator watches CRs to configure the settings determined by OpenShift Data Foundation, which includes CephCluster , CephObjectStore , CephFilesystem , and CephBlockPool . 3.3.5. Lifecycle Rook-Ceph operator manages the lifecycle of the following pods in the Ceph cluster: Rook operator A single pod that owns the reconcile of the cluster. RBD CSI Driver Two provisioner pods, managed by a single deployment. One plugin pod per node, managed by a daemonset . CephFS CSI Driver Two provisioner pods, managed by a single deployment. One plugin pod per node, managed by a daemonset . Monitors (mons) Three mon pods, each with its own deployment. Stretch clusters Contain five mon pods, one in the arbiter zone and two in each of the other two data zones. Manager (mgr) There is a single mgr pod for the cluster. Stretch clusters There are two mgr pods (starting with OpenShift Data Foundation 4.8), one in each of the two non-arbiter zones. Object storage daemons (OSDs) At least three OSDs are created initially in the cluster. More OSDs are added when the cluster is expanded. Metadata server (MDS) The CephFS metadata server has a single pod. RADOS gateway (RGW) The Ceph RGW daemon has a single pod. 3.4. MCG operator The Multicloud Object Gateway (MCG) operator is an operator for OpenShift Data Foundation along with the OpenShift Data Foundation operator and the Rook-Ceph operator. The MCG operator is available upstream as a standalone operator. The MCG operator performs the following primary functions: Controls and reconciles the Multicloud Object Gateway (MCG) component within OpenShift Data Foundation. Manages new user resources such as object bucket claims, bucket classes, and backing stores. Creates the default out-of-the-box resources. A few configurations and information are passed to the MCG operator through the OpenShift Data Foundation operator. 3.4.1. Components The MCG operator does not have sub-components. However, it consists of a reconcile loop for the different resources that are controlled by it. The MCG operator has a command-line interface (CLI) and is available as a part of OpenShift Data Foundation. It enables the creation, deletion, and querying of various resources. This CLI adds a layer of input sanitation and status validation before the configurations are applied unlike applying a YAML file directly. 3.4.2. Responsibilities and resources The MCG operator reconciles and is responsible for the custom resource definitions (CRDs) and OpenShift Container Platform entities. Backing store Namespace store Bucket class Object bucket claims (OBCs) NooBaa, pod stateful sets CRD Prometheus Rules and Service Monitoring Horizontal pod autoscaler (HPA) Backing store A resource that the customer has connected to the MCG component. This resource provides MCG the ability to save the data of the provisioned buckets on top of it. A default backing store is created as part of the deployment depending on the platform that the OpenShift Container Platform is running on. For example, when OpenShift Container Platform or OpenShift Data Foundation is deployed on Amazon Web Services (AWS), it results in a default backing store which is an AWS::S3 bucket. Similarly, for Microsoft Azure, the default backing store is a blob container and so on. The default backing stores are created using CRDs for the cloud credential operator, which comes with OpenShift Container Platform. There is no limit on the amount of the backing stores that can be added to MCG. 
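For illustration only, a backing store of type aws-s3 might be declared with a custom resource similar to the following sketch. The resource name, target bucket, and Secret shown here are placeholders rather than objects created by the deployment, and the exact fields depend on the provider type.
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  name: example-backing-store          # placeholder name
  namespace: openshift-storage
spec:
  type: aws-s3
  awsS3:
    targetBucket: example-target-bucket     # placeholder bucket in the backing cloud
    region: us-east-1
    secret:
      name: example-backing-store-secret    # placeholder Secret holding the cloud credentials
      namespace: openshift-storage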
The backing stores are used in the bucket class CRD to define the different policies of the bucket. Refer to the documentation of the specific OpenShift Data Foundation version to identify the types of services or resources supported as backing stores. Namespace store Resources that are used in namespace buckets. No default is created during deployment. Bucketclass A default or initial policy for a newly provisioned bucket. The following policies are set in a bucketclass: Placement policy Indicates the backing stores to be attached to the bucket and used to write the data of the bucket. This policy is used for data buckets and for cache policies to indicate the local cache placement. There are two modes of placement policy: Spread. Stripes the data across the defined backing stores Mirror. Creates a full replica on each backing store Namespace policy A policy for the namespace buckets that defines the resources that are being used for aggregation and the resource used for the write target. Cache Policy This is a policy for the bucket and sets the hub (the source of truth) and the time to live (TTL) for the cache items. A default bucket class is created during deployment and it is set with a placement policy that uses the default backing store. There is no limit to the number of bucket classes that can be added. Refer to the documentation of the specific OpenShift Data Foundation version to identify the types of policies that are supported. Object bucket claims (OBCs) CRDs that enable provisioning of S3 buckets. With MCG, OBCs receive an optional bucket class to note the initial configuration of the bucket. If a bucket class is not provided, the default bucket class is used. NooBaa, pod stateful sets CRD An internal CRD that controls the different pods of the NooBaa deployment such as the DB pod, the core pod, and the endpoints. This CRD must not be changed as it is internal. This operator reconciles the following entities: DB pod SCC Role Binding and Service Account to allow single sign-on (SSO) between OpenShift Container Platform and NooBaa user interfaces Route for S3 access Certificates that are taken and signed by the OpenShift Container Platform and are set on the S3 route Prometheus rules and service monitoring These CRDs set up scraping points for Prometheus and alert rules that are supported by MCG. Horizontal pod autoscaler (HPA) It is integrated with the MCG endpoints. The endpoint pods scale up and down according to CPU pressure (amount of S3 traffic). 3.4.3. High availability As an operator, the only high availability provided is that OpenShift Container Platform reschedules a failed pod. 3.4.4. Relevant log files To troubleshoot issues with the NooBaa operator, you can look at the following: Operator pod logs, which are also available through the must-gather. Different CRDs or entities and their statuses that are available through the must-gather. 3.4.5. Lifecycle The MCG operator runs and reconciles after OpenShift Data Foundation is deployed and until it is uninstalled.
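To make the object bucket claim flow described in this chapter concrete, a minimal, illustrative OBC is shown below. The claim name and namespace are placeholders, and the storage class name assumes the default object bucket storage class that OpenShift Data Foundation creates for MCG; because no bucket class is specified, the default bucket class is used.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: example-obc              # placeholder claim name
  namespace: example-app         # placeholder application namespace
spec:
  generateBucketName: example-bucket   # prefix for the generated bucket name
  storageClassName: openshift-storage.noobaa.io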
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/red_hat_openshift_data_foundation_architecture/openshift_data_foundation_operators
Chapter 10. Improving Import Performance
Chapter 10. Improving Import Performance Very large entry sizes or a large number of entries can negatively impact server performance during import operations. This section describes how to tune both Directory Server settings and operating system settings to improve the import performance. 10.1. Tuning Directory Server for Large Database Imports and Imports with Large Attributes Update the entry cache in the following scenarios: You want to import a very large database. You want to import a database with large attributes, such as binary attributes that store certificate chains or images. For details about setting the size of the entry cache, see Section 6.1, "The Database and Entry Cache Auto-Sizing Feature" and Section 6.3, "Manually Setting the Entry Cache Size" .
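As a sketch of the manual approach, the entry cache size of a backend can be raised by setting the nsslapd-cachememsize attribute on the backend configuration entry. The backend name userRoot and the 2 GiB value below are examples only and must be sized for your own data set.
# entry-cache.ldif -- example only; "userRoot" is the default backend name
dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cachememsize
nsslapd-cachememsize: 2147483648
Apply the change with ldapmodify, for example: ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -f entry-cache.ldif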
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/import
Chapter 14. NetworkAttachmentDefinition [k8s.cni.cncf.io/v1]
Chapter 14. NetworkAttachmentDefinition [k8s.cni.cncf.io/v1] Description NetworkAttachmentDefinition is a CRD schema specified by the Network Plumbing Working Group to express the intent for attaching pods to one or more logical or physical networks. More information available at: https://github.com/k8snetworkplumbingwg/multi-net-spec Type object 14.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NetworkAttachmentDefinition spec defines the desired state of a network attachment 14.1.1. .spec Description NetworkAttachmentDefinition spec defines the desired state of a network attachment Type object Property Type Description config string NetworkAttachmentDefinition config is a JSON-formatted CNI configuration 14.2. API endpoints The following API endpoints are available: /apis/k8s.cni.cncf.io/v1/network-attachment-definitions GET : list objects of kind NetworkAttachmentDefinition /apis/k8s.cni.cncf.io/v1/namespaces/{namespace}/network-attachment-definitions DELETE : delete collection of NetworkAttachmentDefinition GET : list objects of kind NetworkAttachmentDefinition POST : create a NetworkAttachmentDefinition /apis/k8s.cni.cncf.io/v1/namespaces/{namespace}/network-attachment-definitions/{name} DELETE : delete a NetworkAttachmentDefinition GET : read the specified NetworkAttachmentDefinition PATCH : partially update the specified NetworkAttachmentDefinition PUT : replace the specified NetworkAttachmentDefinition 14.2.1. /apis/k8s.cni.cncf.io/v1/network-attachment-definitions HTTP method GET Description list objects of kind NetworkAttachmentDefinition Table 14.1. HTTP responses HTTP code Reponse body 200 - OK NetworkAttachmentDefinitionList schema 401 - Unauthorized Empty 14.2.2. /apis/k8s.cni.cncf.io/v1/namespaces/{namespace}/network-attachment-definitions HTTP method DELETE Description delete collection of NetworkAttachmentDefinition Table 14.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind NetworkAttachmentDefinition Table 14.3. HTTP responses HTTP code Reponse body 200 - OK NetworkAttachmentDefinitionList schema 401 - Unauthorized Empty HTTP method POST Description create a NetworkAttachmentDefinition Table 14.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.5. Body parameters Parameter Type Description body NetworkAttachmentDefinition schema Table 14.6. HTTP responses HTTP code Reponse body 200 - OK NetworkAttachmentDefinition schema 201 - Created NetworkAttachmentDefinition schema 202 - Accepted NetworkAttachmentDefinition schema 401 - Unauthorized Empty 14.2.3. /apis/k8s.cni.cncf.io/v1/namespaces/{namespace}/network-attachment-definitions/{name} Table 14.7. Global path parameters Parameter Type Description name string name of the NetworkAttachmentDefinition HTTP method DELETE Description delete a NetworkAttachmentDefinition Table 14.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 14.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified NetworkAttachmentDefinition Table 14.10. HTTP responses HTTP code Reponse body 200 - OK NetworkAttachmentDefinition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified NetworkAttachmentDefinition Table 14.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.12. 
HTTP responses HTTP code Reponse body 200 - OK NetworkAttachmentDefinition schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified NetworkAttachmentDefinition Table 14.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.14. Body parameters Parameter Type Description body NetworkAttachmentDefinition schema Table 14.15. HTTP responses HTTP code Reponse body 200 - OK NetworkAttachmentDefinition schema 201 - Created NetworkAttachmentDefinition schema 401 - Unauthorized Empty
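As an illustrative example (not part of the API reference above), a NetworkAttachmentDefinition that attaches pods to a host interface through the macvlan CNI plugin might look like the following; the metadata names, master interface, and IPAM settings are placeholders that you would replace with your own values.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-example          # placeholder name
  namespace: default             # placeholder namespace
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "ipam": { "type": "dhcp" }
    }'
Such an object can be created with oc create -f <file> and listed through the endpoints described above, for example: oc get network-attachment-definitions -n default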
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_apis/networkattachmentdefinition-k8s-cni-cncf-io-v1
Chapter 27. Configuring the cluster-wide proxy
Chapter 27. Configuring the cluster-wide proxy Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure OpenShift Container Platform to use a proxy by modifying the Proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. 27.1. Prerequisites Review the sites that your cluster requires access to and determine whether any of them must bypass the proxy. By default, all cluster system egress traffic is proxied, including calls to the cloud provider API for the cloud that hosts your cluster. System-wide proxy affects system components only, not user workloads. Add sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration with most installation types. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Important If your installation type does not include setting the networking.machineNetwork[].cidr field, you must include the machine IP addresses manually in the .status.noProxy field to make sure that the traffic between nodes can bypass the proxy. 27.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a config map that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The config map name that will be referenced from the Proxy object. 4 The config map must be in the openshift-config namespace. 
Create the config map from this file: USD oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: USD oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https . Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http . This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses. 3 A comma-separated list of destination domain names, domains, IP addresses or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 27.3. Removing the cluster-wide proxy The cluster Proxy object cannot be deleted. To remove the proxy from a cluster, remove all spec fields from the Proxy object. Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Use the oc edit command to modify the proxy: USD oc edit proxy/cluster Remove all spec fields from the Proxy object. For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {} Save the file to apply the changes. Additional resources Replacing the CA Bundle certificate Proxy certificate customization
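After you save the changes, you can inspect the cluster Proxy object to confirm that the spec was applied and that the readiness checks populated the status; this is a generic verification sketch, not an additional required step. The status block should show the effective httpProxy , httpsProxy , and noProxy values.
oc get proxy/cluster -o yaml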
[ "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/enable-cluster-wide-proxy
Chapter 5. The Redfish modules in RHEL
Chapter 5. The Redfish modules in RHEL The Redfish modules for remote management of devices are now part of the redhat.rhel_mgmt Ansible collection. With the Redfish modules, you can easily use management automation on bare-metal servers and platform hardware by getting information about the servers or control them through an Out-Of-Band (OOB) controller, using the standard HTTPS transport and JSON format. 5.1. The Redfish modules The redhat.rhel_mgmt Ansible collection provides the Redfish modules to support hardware management in Ansible over Redfish. The redhat.rhel_mgmt collection is available in the ansible-collection-redhat-rhel_mgmt package. To install it, see Installing the redhat.rhel_mgmt Collection using the CLI . The following Redfish modules are available in the redhat.rhel_mgmt collection: redfish_info : The redfish_info module retrieves information about the remote Out-Of-Band (OOB) controller such as systems inventory. redfish_command : The redfish_command module performs Out-Of-Band (OOB) controller operations like log management and user management, and power operations such as system restart, power on and off. redfish_config : The redfish_config module performs OOB controller operations such as changing OOB configuration, or setting the BIOS configuration. 5.2. Redfish modules parameters The parameters used for the Redfish modules are: redfish_info parameters: Description baseuri (Mandatory) - Base URI of OOB controller. category (Mandatory) - List of categories to execute on OOB controller. The default value is ["Systems"]. command (Mandatory) - List of commands to execute on OOB controller. username Username for authentication to OOB controller. password Password for authentication to OOB controller. redfish_command parameters: Description baseuri (Mandatory) - Base URI of OOB controller. category (Mandatory) - List of categories to execute on OOB controller. The default value is ["Systems"]. command (Mandatory) - List of commands to execute on OOB controller. username Username for authentication to OOB controller. password Password for authentication to OOB controller. redfish_config parameters: Description baseuri (Mandatory) - Base URI of OOB controller. category (Mandatory) - List of categories to execute on OOB controller. The default value is ["Systems"]. command (Mandatory) - List of commands to execute on OOB controller. username Username for authentication to OOB controller. password Password for authentication to OOB controller. bios_attributes BIOS attributes to update. 5.3. Using the redfish_info module The following example shows how to use the redfish_info module in a playbook to get information about the CPU inventory. For simplicity, the example uses the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed. Prerequisites The redhat.rhel_mgmt collection is installed. The pyghmi library in the python3-pyghmi package is installed on the managed host. If you use localhost as the managed host, install the python3-pyghmi package on the host where you execute the playbook. OOB controller access details. 
Procedure Create a new playbook.yml file with the following content: --- - name: Get CPU inventory hosts: localhost tasks: - redhat.rhel_mgmt.redfish_info: baseuri: "{{ baseuri }}" username: "{{ username }}" password: "{{ password }}" category: Systems command: GetCpuInventory register: result Execute the playbook against localhost: As a result, the output returns the CPU inventory details. 5.4. Using the redfish_command module The following example shows how to use the redfish_command module in a playbook to turn on a system. For simplicity, the example uses the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed. Prerequisites The redhat.rhel_mgmt collection is installed. The pyghmi library in the python3-pyghmi package is installed on the managed host. If you use localhost as the managed host, install the python3-pyghmi package on the host where you execute the playbook. OOB controller access details. Procedure Create a new playbook.yml file with the following content: --- - name: Power on system hosts: localhost tasks: - redhat.rhel_mgmt.redfish_command: baseuri: "{{ baseuri }}" username: "{{ username }}" password: "{{ password }}" category: Systems command: PowerOn Execute the playbook against localhost: As a result, the system powers on. 5.5. Using the redfish_config module The following example shows how to use the redfish_config module in a playbook to configure a system to boot with UEFI. For simplicity, the example uses the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed. Prerequisites The redhat.rhel_mgmt collection is installed. The pyghmi library in the python3-pyghmi package is installed on the managed host. If you use localhost as the managed host, install the python3-pyghmi package on the host where you execute the playbook. OOB controller access details. Procedure Create a new playbook.yml file with the following content: --- - name: "Set BootMode to UEFI" hosts: localhost tasks: - redhat.rhel_mgmt.redfish_config: baseuri: "{{ baseuri }}" username: "{{ username }}" password: "{{ password }}" category: Systems command: SetBiosAttributes bios_attributes: BootMode: Uefi Execute the playbook against localhost: As a result, the system boot mode is set to UEFI.
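The playbooks above reference baseuri , username , and password as variables. One way to supply them, shown here only as an illustration with a documentation-range IP address, is to pass them as extra variables when you run the playbook:
ansible-playbook playbook.yml -e baseuri=192.0.2.10 -e username=admin -e password='<password>'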
[ "--- - name: Get CPU inventory hosts: localhost tasks: - redhat.rhel_mgmt.redfish_info: baseuri: \"{{ baseuri }}\" username: \"{{ username }}\" password: \"{{ password }}\" category: Systems command: GetCpuInventory register: result", "ansible-playbook playbook.yml", "--- - name: Power on system hosts: localhost tasks: - redhat.rhel_mgmt.redfish_command: baseuri: \"{{ baseuri }}\" username: \"{{ username }}\" password: \"{{ password }}\" category: Systems command: PowerOn", "ansible-playbook playbook.yml", "--- - name: \"Set BootMode to UEFI\" hosts: localhost tasks: - redhat.rhel_mgmt.redfish_config: baseuri: \"{{ baseuri }}\" username: \"{{ username }}\" password: \"{{ password }}\" category: Systems command: SetBiosAttributes bios_attributes: BootMode: Uefi", "ansible-playbook playbook.yml" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/assembly_the-redfish-modules-in-rhel_automating-system-administration-by-using-rhel-system-roles
Chapter 11. Scaling Multicloud Object Gateway performance
Chapter 11. Scaling Multicloud Object Gateway performance The Multicloud Object Gateway (MCG) performance may vary from one environment to another. In some cases, specific applications require faster performance which can be easily addressed by scaling S3 endpoints. The MCG resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service S3 endpoint service The S3 endpoint is a service that every Multicloud Object Gateway (MCG) provides by default that handles the heavy lifting data digestion in the MCG. The endpoint service handles the inline data chunking, deduplication, compression, and encryption, and it accepts data placement instructions from the MCG. 11.1. Automatic scaling of MultiCloud Object Gateway endpoints The number of MultiCloud Object Gateway (MCG) endpoints scale automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG. 11.2. Scaling the Multicloud Object Gateway with storage nodes Prerequisites A running OpenShift Data Foundation cluster on OpenShift Container Platform with access to the Multicloud Object Gateway (MCG). A storage node in the MCG is a NooBaa daemon container attached to one or more Persistent Volumes (PVs) and used for local object service data storage. NooBaa daemons can be deployed on Kubernetes nodes. This can be done by creating a Kubernetes pool consisting of StatefulSet pods. Procedure Log in to OpenShift Web Console . From the MCG user interface, click Overview Add Storage Resources . In the window, click Deploy Kubernetes Pool . In the Create Pool step create the target pool for the future installed nodes. In the Configure step, configure the number of requested pods and the size of each PV. For each new pod, one PV is to be created. In the Review step, you can find the details of the new pool and select the deployment method you wish to use: local or external deployment. If local deployment is selected, the Kubernetes nodes will deploy within the cluster. If external deployment is selected, you will be provided with a YAML file to run externally. All nodes will be assigned to the pool you chose in the first step, and can be found under Resources Storage resources Resource name . 11.3. Increasing CPU and memory for PV pool resources MCG default configuration supports low resource consumption. However, when you need to increase CPU and memory to accommodate specific workloads and to increase MCG performance for the workloads, it is possible to configure the required values for CPU and memory in the OpenShift Web Console. Procedure In the OpenShift Web Console, click Installed operators ODF Operator . Click on the Backingstore tab. Select the new backingstore . Scroll down and click Edit PV pool resources . In the edit window that appears, edit the value of Mem , CPU , and Vol size based on the requirement. Click Save . 
Verification steps To verify, you can check the resource values of the PV pool pods.
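As a sketch using generic oc commands (the pod name below is a placeholder), you can list the NooBaa endpoint and PV pool pods and read the resource requests and limits that are currently in effect:
oc get pods -n openshift-storage | grep noobaa
oc get pod <noobaa-pv-pool-pod-name> -n openshift-storage -o jsonpath='{.spec.containers[*].resources}'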
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_hybrid_and_multicloud_resources/scaling-multicloud-object-gateway-performance-by-adding-endpoints__rhodf
Chapter 209. Kubernetes Service Account Component
Chapter 209. Kubernetes Service Account Component Available as of Camel version 2.17 The Kubernetes Service Account component is one of Kubernetes Components which provides a producer to execute kubernetes Service Account operations. 209.1. Component Options The Kubernetes Service Account component has no options. 209.2. Endpoint Options The Kubernetes Service Account endpoint is configured using URI syntax: with the following path and query parameters: 209.2.1. Path Parameters (1 parameters): Name Description Default Type masterUrl Required Kubernetes API server URL String 209.2.2. Query Parameters (20 parameters): Name Description Default Type apiVersion (producer) The Kubernetes API Version to use String dnsDomain (producer) The dns domain, used for ServiceCall EIP String kubernetesClient (producer) Default KubernetesClient to use if provided KubernetesClient operation (producer) Producer operation to do on Kubernetes String portName (producer) The port name, used for ServiceCall EIP String portProtocol (producer) The port protocol, used for ServiceCall EIP tcp String connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean caCertData (security) The CA Cert Data String caCertFile (security) The CA Cert File String clientCertData (security) The Client Cert Data String clientCertFile (security) The Client Cert File String clientKeyAlgo (security) The Key Algorithm used by the client String clientKeyData (security) The Client Key data String clientKeyFile (security) The Client Key file String clientKeyPassphrase (security) The Client Key Passphrase String oauthToken (security) The Auth Token String password (security) Password to connect to Kubernetes String trustCerts (security) Define if the certs we used are trusted anyway or not Boolean username (security) Username to connect to Kubernetes String 209.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean
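When the component is used from Spring Boot, the auto-configuration options listed above map to configuration properties through relaxed binding. The following application.yml fragment is only an illustration of the property names, not a required configuration:
# application.yml -- illustrative only
camel:
  component:
    kubernetes-service-accounts:
      enabled: true
      resolve-property-placeholders: true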
[ "kubernetes-service-accounts:masterUrl" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/kubernetes-service-accounts-component
Chapter 6. Configuring smart card authentication with local certificates
Chapter 6. Configuring smart card authentication with local certificates To configure smart card authentication with local certificates: The host is not connected to a domain. You want to authenticate with a smart card on this host. You want to configure SSH access using smart card authentication. You want to configure the smart card with authselect . Use the following configuration to accomplish this scenario: Obtain a user certificate for the user who wants to authenticate with a smart card. The certificate should be generated by a trustworthy Certification Authority used in the domain. If you cannot get the certificate, you can generate a user certificate signed by a local certificate authority for testing purposes, Store the certificate and private key in a smart card. Configure the smart card authentication for SSH access. Important If a host can be part of the domain, add the host to the domain and use certificates generated by Active Directory or Identity Management Certification Authority. For details about how to create IdM certificates for a smart card, see Configuring Identity Management for smart card authentication . Prerequisites Authselect installed The authselect tool configures user authentication on Linux hosts and you can use it to configure smart card authentication parameters. For details about authselect, see Explaining authselect . Smart Card or USB devices supported by RHEL 8 For details, see Smart Card support in RHEL8 . 6.1. Creating local certificates Follow this procedure to perform the following tasks: Generate the OpenSSL certificate authority Create a certificate signing request Warning The following steps are intended for testing purposes only. Certificates generated by a local self-signed Certificate Authority are not as secure as using AD, IdM, or RHCS Certification Authority. You should use a certificate generated by your enterprise Certification Authority even if the host is not part of the domain. Procedure Create a directory where you can generate the certificate, for example: Set up the certificate (copy this text to your command line in the ca directory): Create the following directories: Create the following files: Write the number 01 in the serial file: This command writes a number 01 in the serial file. It is a serial number of the certificate. With each new certificate released by this CA the number increases by one. Create an OpenSSL root CA key: Create a self-signed root Certification Authority certificate: Create the key for your username: This key is generated in the local system which is not secure, therefore, remove the key from the system when the key is stored in the card. You can create a key directly in the smart card as well. For doing this, follow instructions created by the manufacturer of your smart card. Create the certificate signing request configuration file (copy this text to your command line in the ca directory): Create a certificate signing request for your example.user certificate: Configure the new certificate. Expiration period is set to 1 year: At this point, the certification authority and certificates are successfully generated and prepared for import into a smart card. 6.2. Copying certificates to the SSSD directory GNOME Desktop Manager (GDM) requires SSSD. If you use GDM, you need to copy the PEM certificate to the /etc/sssd/pki directory. Prerequisites The local CA authority and certificates have been generated Procedure Ensure that you have SSSD installed on the system. 
Create a /etc/sssd/pki directory: Copy the rootCA.crt as a PEM file to the /etc/sssd/pki/ directory: Now you have successfully generated the certificate authority and certificates, and you have saved them in the /etc/sssd/pki directory. Note If you want to share the Certificate Authority certificates with another application, you can change the location in sssd.conf: SSSD PAM responder: pam_cert_db_path in the [pam] section SSSD ssh responder: ca_db in the [ssh] section For details, see the man page for sssd.conf . Red Hat recommends keeping the default path and using a dedicated Certificate Authority certificate file for SSSD to make sure that only Certificate Authorities trusted for authentication are listed here. 6.3. Installing tools for managing and using smart cards Prerequisites The gnutls-utils package is installed. The opensc package is installed. The pcscd service is running. Before you can configure your smart card, you must install the corresponding tools, which can generate certificates, and start the pcscd service. Procedure Install the opensc and gnutls-utils packages: Start the pcscd service. Verification Verify that the pcscd service is up and running 6.4. Preparing your smart card and uploading your certificates and keys to your smart card Follow this procedure to configure your smart card with the pkcs15-init tool, which helps you to configure: Erasing your smart card Setting new PINs and optional PIN Unblocking Keys (PUKs) Creating a new slot on the smart card Storing the certificate, private key, and public key in the slot If required, locking the smart card settings as certain smart cards require this type of finalization Note The pkcs15-init tool may not work with all smart cards. You must use the tools that work with the smart card you are using. Prerequisites The opensc package, which includes the pkcs15-init tool, is installed. For more details, see Installing tools for managing and using smart cards . The card is inserted in the reader and connected to the computer. You have a private key, a public key, and a certificate to store on the smart card. In this procedure, testuser.key , testuserpublic.key , and testuser.crt are the names used for the private key, public key, and the certificate. You have your current smart card user PIN and Security Officer PIN (SO-PIN). Procedure Erase your smart card and authenticate yourself with your PIN: The card has been erased. Initialize your smart card, set your user PIN and PUK, and your Security Officer PIN and PUK: The pkcs15-init tool creates a new slot on the smart card. Set a label and the authentication ID for the slot: The label is set to a human-readable value, in this case, testuser . The auth-id must be two hexadecimal values, in this case it is set to 01 . Store and label the private key in the new slot on the smart card: Note The value you specify for --id must be the same when storing your private key and storing your certificate in the next step. Specifying your own value for --id is recommended as otherwise a more complicated value is calculated by the tool. Store and label the certificate in the new slot on the smart card: Optional: Store and label the public key in the new slot on the smart card: Note If the public key corresponds to a private key or certificate, specify the same ID as the ID of the private key or certificate.
Optional: Certain smart cards require you to finalize the card by locking the settings: At this stage, your smart card includes the certificate, private key, and public key in the newly created slot. You have also created your user PIN and PUK and the Security Officer PIN and PUK. 6.5. Configuring SSH access using smart card authentication SSH connections require authentication. You can use a password or a certificate. Follow this procedure to enable authentication using a certificate stored on a smart card. For details about configuring smart cards with authselect , see Configuring smart cards using authselect . Prerequisites The smart card contains your certificate and private key. The card is inserted in the reader and connected to the computer. The pcscd service is running on your local machine. For details, see Installing tools for managing and using smart cards . Procedure Create a new directory for SSH keys in the home directory of the user who uses smart card authentication: Run the ssh-keygen -D command with the opensc library to retrieve the existing public key paired with the private key on the smart card, and add it to the authorized_keys list of the user's SSH keys directory to enable SSH access with smart card authentication. SSH requires access right configuration for the ~/.ssh directory and the authorized_keys file. To set or change the access rights, enter: Verification Display the keys: The terminal displays the keys. You can verify the SSH access with the following command: If the configuration is successful, you are prompted to enter the smart card PIN. The configuration now works locally. Now you can copy the public key and distribute it to authorized_keys files located on all servers on which you want to use SSH. 6.6. Creating certificate mapping rules when using smart cards You need to create certificate mapping rules in order to log in using the certificate stored on a smart card. Prerequisites The smart card contains your certificate and private key. The card is inserted in the reader and connected to the computer. The pcscd service is running on your local machine. Procedure Create a certificate mapping configuration file, such as /etc/sssd/conf.d/sssd_certmap.conf . Add certificate mapping rules to the sssd_certmap.conf file: Note that you must define each certificate mapping rule in a separate section. Define each section as follows: If SSSD is configured to use the proxy provider to allow smart card authentication for local users instead of AD, IPA, or LDAP, the <RULE_NAME> can simply be the username of the user with the card matching the data provided in the matchrule . Verification Note that to verify SSH access with a smart card, SSH access must be configured. For more information, see Configuring SSH access using smart card authentication . You can verify the SSH access with the following command: If the configuration is successful, you are prompted to enter the smart card PIN.
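The scenario in this chapter assumes that smart card support is enabled in the authentication stack with authselect. A minimal sketch is to select the sssd profile with the smart-card feature; check the active profile first, and see the authselect documentation for PIN-locking and card-removal options:
authselect current
authselect select sssd with-smartcard --force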
[ "mkdir /tmp/ca cd /tmp/ca", "cat > ca.cnf <<EOF [ ca ] default_ca = CA_default [ CA_default ] dir = . database = \\USDdir/index.txt new_certs_dir = \\USDdir/newcerts certificate = \\USDdir/rootCA.crt serial = \\USDdir/serial private_key = \\USDdir/rootCA.key RANDFILE = \\USDdir/rand default_days = 365 default_crl_days = 30 default_md = sha256 policy = policy_any email_in_dn = no name_opt = ca_default cert_opt = ca_default copy_extensions = copy [ usr_cert ] authorityKeyIdentifier = keyid, issuer [ v3_ca ] subjectKeyIdentifier = hash authorityKeyIdentifier = keyid:always,issuer:always basicConstraints = CA:true keyUsage = critical, digitalSignature, cRLSign, keyCertSign [ policy_any ] organizationName = supplied organizationalUnitName = supplied commonName = supplied emailAddress = optional [ req ] distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] O = Example OU = Example Test CN = Example Test CA EOF", "mkdir certs crl newcerts", "touch index.txt crlnumber index.txt.attr", "echo 01 > serial", "openssl genrsa -out rootCA.key 2048", "openssl req -batch -config ca.cnf -x509 -new -nodes -key rootCA.key -sha256 -days 10000 -set_serial 0 -extensions v3_ca -out rootCA.crt", "openssl genrsa -out example.user.key 2048", "cat > req.cnf <<EOF [ req ] distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] O = Example OU = Example Test CN = testuser [ req_exts ] basicConstraints = CA:FALSE nsCertType = client, email nsComment = \"testuser\" subjectKeyIdentifier = hash keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment extendedKeyUsage = clientAuth, emailProtection, msSmartcardLogin subjectAltName = otherName:msUPN;UTF8:[email protected], email:[email protected] EOF", "openssl req -new -nodes -key example.user.key -reqexts req_exts -config req.cnf -out example.user.csr", "openssl ca -config ca.cnf -batch -notext -keyfile rootCA.key -in example.user.csr -days 365 -extensions usr_cert -out example.user.crt", "rpm -q sssd sssd-2.0.0.43.el8_0.3.x86_64", "file /etc/sssd/pki /etc/sssd/pki/: directory", "cp /tmp/ca/rootCA.crt /etc/sssd/pki/sssd_auth_ca_db.pem", "yum -y install opensc gnutls-utils", "systemctl start pcscd", "systemctl status pcscd", "pkcs15-init --erase-card --use-default-transport-keys Using reader with a card: Reader name PIN [Security Officer PIN] required. 
Please enter PIN [Security Officer PIN]:", "pkcs15-init --create-pkcs15 --use-default-transport-keys --pin 963214 --puk 321478 --so-pin 65498714 --so-puk 784123 Using reader with a card: Reader name", "pkcs15-init --store-pin --label testuser --auth-id 01 --so-pin 65498714 --pin 963214 --puk 321478 Using reader with a card: Reader name", "pkcs15-init --store-private-key testuser.key --label testuser_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name", "pkcs15-init --store-certificate testuser.crt --label testuser_crt --auth-id 01 --id 01 --format pem --pin 963214 Using reader with a card: Reader name", "pkcs15-init --store-public-key testuserpublic.key --label testuserpublic_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name", "pkcs15-init -F", "mkdir /home/example.user/.ssh", "ssh-keygen -D /usr/lib64/pkcs11/opensc-pkcs11.so >> ~example.user/.ssh/authorized_keys", "chown -R example.user:example.user ~example.user/.ssh/ chmod 700 ~example.user/.ssh/ chmod 600 ~example.user/.ssh/authorized_keys", "cat ~example.user/.ssh/authorized_keys", "ssh -I /usr/lib64/opensc-pkcs11.so -l example.user localhost hostname", "[certmap/shadowutils/otheruser] matchrule = <SUBJECT>.*CN=certificate_user.*<ISSUER>^CN=Example Test CA,OU=Example Test,O=EXAMPLEUSD", "[certmap/<DOMAIN_NAME>/<RULE_NAME>]", "ssh -I /usr/lib64/opensc-pkcs11.so -l otheruser localhost hostname" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_smart_card_authentication/configuring-and-importing-local-certificates-to-a-smart-card_managing-smart-card-authentication
Chapter 7. Connecting an instance to the physical network
Chapter 7. Connecting an instance to the physical network This chapter contains information about using provider networks to connect instances directly to an external network. 7.1. Overview of the OpenStack Networking topology OpenStack Networking (neutron) has two categories of services distributed across a number of node types. Neutron server - This service runs the OpenStack Networking API server, which provides the API for end-users and services to interact with OpenStack Networking. This server also integrates with the underlying database to store and retrieve project network, router, and loadbalancer details, among others. Neutron agents - These are the services that perform the network functions for OpenStack Networking: neutron-dhcp-agent - manages DHCP IP addressing for project private networks. neutron-l3-agent - performs layer 3 routing between project private networks, the external network, and others. Compute node - This node hosts the hypervisor that runs the virtual machines, also known as instances. A Compute node must be wired directly to the network in order to provide external connectivity for instances. This node is typically where the l2 agents run, such as neutron-openvswitch-agent . 7.1.1. Service placement The OpenStack Networking services can either run together on the same physical server, or on separate dedicated servers, which are named according to their roles: Controller node - The server that runs API service. Network node - The server that runs the OpenStack Networking agents. Compute node - The hypervisor server that hosts the instances. The steps in this chapter apply to an environment that contains these three node types. If your deployment has both the Controller and Network node roles on the same physical node, then you must perform the steps from both sections on that server. This also applies for a High Availability (HA) environment, where all three nodes might be running the Controller node and Network node services with HA. As a result, you must complete the steps in sections applicable to Controller and Network nodes on all three nodes. 7.2. Using flat provider networks The procedures in this section create flat provider networks that can connect instances directly to external networks. You would do this if you have multiple physical networks (for example, physnet1 , physnet2 ) and separate physical interfaces ( eth0 -> physnet1 , and eth1 -> physnet2 ), and you need to connect each Compute node and Network node to those external networks. Note If you want to connect multiple VLAN-tagged interfaces (on a single NIC) to multiple provider networks, see Section 7.3, "Using VLAN provider networks" . 7.2.1. Configuring the Controller nodes 1. Edit /etc/neutron/plugin.ini (which is symlinked to /etc/neutron/plugins/ml2/ml2_conf.ini ), add flat to the existing list of values, and set flat_networks to * : 2 . Create a flat external network and associate it with the configured physical_network. Create this network as a shared network so that other users can connect their instances directly to it: 3. Create a subnet within this external network using the openstack subnet create command, or the OpenStack Dashboard: 4. Restart the neutron-server service to apply this change: 7.2.2. Configuring the Network node and Compute nodes Complete the following steps on the Network node and the Compute nodes so that these nodes can connect to the external network, and can allow instances to communicate directly with the external network 1. 
Create the Open vSwitch bridge and port. Run the following command to create the external network bridge (br-ex) and add a corresponding port (eth1) i. Edit /etc/sysconfig/network-scripts/ifcfg-eth1 : ii. Edit /etc/sysconfig/network-scripts/ifcfg-br-ex : 2. Restart the network service to apply these changes: 3. Configure the physical networks in /etc/neutron/plugins/ml2/openvswitch_agent.ini and map the bridge to the physical network: Note For more information on configuring bridge_mappings , see Chapter 11, Configuring bridge mappings . 4. Restart the neutron-openvswitch-agent service on the Network and Compute nodes to apply these changes: 7.2.3. Configuring the Network node 1. Set the external_network_bridge = parameter to an empty value in /etc/neutron/l3_agent.ini to enable the use of external provider networks. 2. Restart neutron-l3-agent to apply these changes: Note If you have multiple flat provider networks, ensure that each of them has a separate physical interface and bridge to connect them to the external network. Configure the ifcfg-* scripts appropriately and use a comma-separated list for each network when specifying them in bridge_mappings . For more information on configuring bridge_mappings , see Chapter 11, Configuring bridge mappings . 7.2.4. Connecting an instance to the external network After you create the external network, you can connect an instance to it and test connectivity: 1. Create a new instance. 2. Use the Networking tab in the dashboard to add the new instance directly to the newly-created external network. 7.2.5. How does the flat provider network packet flow work? This section describes in detail how traffic flows to and from an instance with flat provider network configuration. The flow of outgoing traffic in a flat provider network The following diagram describes the packet flow for traffic leaving an instance and arriving directly at an external network. After you configure the br-ex external bridge, add the physical interface to the bridge, and spawn an instance to a Compute node, the resulting configuration of interfaces and bridges resembles the configuration in the following diagram (if using the iptables_hybrid firewall driver): 1. Packets leave the eth0 interface of the instance and arrive at the linux bridge qbr-xx . 2. Bridge qbr-xx is connected to br-int using veth pair qvb-xx <-> qvo-xxx . This is because the bridge is used to apply the inbound/outbound firewall rules defined by the security group. 3. Interface qvb-xx is connected to the qbr-xx linux bridge, and qvoxx is connected to the br-int Open vSwitch (OVS) bridge. An example configuration of `qbr-xx`Linux bridge: The configuration of qvo-xx on br-int : Note Port qvo-xx is tagged with the internal VLAN tag associated with the flat provider network. In this example, the VLAN tag is 5 . When the packet reaches qvo-xx , the VLAN tag is appended to the packet header. The packet is then moved to the br-ex OVS bridge using the patch-peer int-br-ex <-> phy-br-ex . Example configuration of the patch-peer on br-int : Example configuration of the patch-peer on br-ex : When this packet reaches phy-br-ex on br-ex , an OVS flow inside br-ex strips the VLAN tag (5) and forwards it to the physical interface. In the following example, the output shows the port number of phy-br-ex as 2 . The following output shows any packet that arrives on phy-br-ex ( in_port=2 ) with a VLAN tag of 5 ( dl_vlan=5 ). In addition, an OVS flow in br-ex strips the VLAN tag and forwards the packet to the physical interface. 
If the physical interface is another VLAN-tagged interface, then the physical interface adds the tag to the packet. The flow of incoming traffic in a flat provider network This section contains information about the flow of incoming traffic from the external network until it arrives at the interface of the instance. 1. Incoming traffic arrives at eth1 on the physical node. 2. The packet passes to the br-ex bridge. 3. The packet moves to br-int via the patch-peer phy-br-ex <--> int-br-ex . In the following example, int-br-ex uses port number 15 . See the entry containing 15(int-br-ex) : Observing the traffic flow on br-int 1. When the packet arrives at int-br-ex , an OVS flow rule within the br-int bridge amends the packet to add the internal VLAN tag 5 . See the entry for actions=mod_vlan_vid:5 : 2. The second rule manages packets that arrive on int-br-ex (in_port=15) with no VLAN tag (vlan_tci=0x0000): This rule adds VLAN tag 5 to the packet ( actions=mod_vlan_vid:5,NORMAL ) and forwards it to qvoxxx . 3. qvoxxx accepts the packet and forwards it to qvbxx , after stripping away the VLAN tag. 4. The packet then reaches the instance. Note VLAN tag 5 is an example VLAN that was used on a test Compute node with a flat provider network; this value was assigned automatically by neutron-openvswitch-agent . This value may be different for your own flat provider network, and can differ for the same network on two separate Compute nodes. 7.2.6. Troubleshooting instance-physical network connections on flat provider networks The output provided in Section 7.2.5, "How does the flat provider network packet flow work?" - provides sufficient debugging information for troubleshooting a flat provider network, should anything go wrong. The following steps contain further information about the troubleshooting process. 1. Review the bridge_mappings : Verify that the physical network name you use (for example, physnet1 ) is consistent with the contents of the bridge_mapping configuration as shown in this example: 2. Review the network configuration: Confirm that the network is created as external , and uses the flat type: 3. Review the patch-peer: Run the ovs-vsctl show command, and verify that br-int and br-ex are connected using a patch-peer int-br-ex <--> phy-br-ex . Example configuration of the patch-peer on br-ex : This connection is created when you restart the neutron-openvswitch-agent service, if bridge_mapping is correctly configured in /etc/neutron/plugins/ml2/openvswitch_agent.ini . Re-check the bridge_mapping setting if the connection is not created after you restart the service. Note For more information on configuring bridge_mappings , see Chapter 11, Configuring bridge mappings . 4. Review the network flows: Run ovs-ofctl dump-flows br-ex and ovs-ofctl dump-flows br-int and review whether the flows strip the internal VLAN IDs for outgoing packets, and add VLAN IDs for incoming packets. This flow is first added when you spawn an instance to this network on a specific Compute node. If this flow is not created after spawning the instance, verify that the network is created as flat , is external , and that the physical_network name is correct. In addition, review the bridge_mapping settings. Finally, review the ifcfg-br-ex and ifcfg-ethx configuration. Ensure that ethX is added as a port within br-ex , and that ifcfg-br-ex and ifcfg-ethx have an UP flag in the output of ip a . 
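As a quick check before comparing the full command output, you can list the ports on br-ex and the link state of the relevant interfaces. This is a minimal sketch that assumes eth1 is the physical interface:
ovs-vsctl list-ports br-ex                # eth1 and phy-br-ex should both appear
ip -br link show | grep -E 'eth1|br-ex'   # both entries should report state UP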
The following output shows eth1 is a port in br-ex : The following example demonstrates that eth1 is configured as an OVS port, and that the kernel knows to transfer all packets from the interface, and send them to the OVS bridge br-ex . This can be observed in the entry: master ovs-system . 7.3. Using VLAN provider networks To connect multiple VLAN-tagged interfaces on a single NIC to multiple provider networks, complete the steps in this section to create VLAN provider networks that can connect instances directly to external networks. This example uses a physical network called physnet1 , with a range of VLANs ( 171-172 ). The network nodes and compute nodes are connected to the physical network using physical interface eth1 . The switch ports that these interfaces connect to must be configured to trunk the required VLAN ranges. Complete the following procedures to configure the VLAN provider networks using the example VLAN IDs and names. 7.3.1. Configuring the Controller nodes 1. Enable the vlan mechanism driver by editing /etc/neutron/plugin.ini (symlinked to /etc/neutron/plugins/ml2/ml2_conf.ini ), and add vlan to the existing list of values: 2. Configure the network_vlan_ranges setting to reflect the physical network and VLAN ranges in use: 3. Restart the neutron-server service to apply the changes: 4. Create the external networks as type vlan , and associate them with the configured physical_network . Use the --share option when you create the external networks so that other users can connect instances directly. Run the following example command to create two networks: one for VLAN 171, and another for VLAN 172: 5. Create a number of subnets and configure them to use the external network. You can use either openstack subnet create or the dashboard to create these subnets. Ensure that the external subnet details you have received from your network administrator are correctly associated with each VLAN. In this example, VLAN 171 uses subnet 10.65.217.0/24 and VLAN 172 uses 10.65.218.0/24 : 7.3.2. Configuring the Network and Compute nodes Complete the following steps on the Network node and Compute nodes to connect the nodes to the external network, and permit instances to communicate directly with the external network. 1. Create an external network bridge ( br-ex ), and associate a port ( eth1 ) with it: This example configures eth1 to use br-ex : This example configures the br-ex bridge: 2. Reboot the node, or restart the network service to apply the networking changes: 3. Configure the physical networks in /etc/neutron/plugins/ml2/openvswitch_agent.ini and map bridges according to the physical network: Note For more information on configuring bridge_mappings , see Chapter 11, Configuring bridge mappings . 4. Restart the neutron-openvswitch-agent service on the Network nodes and Compute nodes to apply the changes: 7.3.3. Configuring the Network node 1. Set the external_network_bridge = parameter to an empty value in /etc/neutron/l3_agent.ini so that you can use provider external networks (as opposed to bridge-based external networks, where you would set external_network_bridge = br-ex ): 2. Restart neutron-l3-agent to apply the changes. 3. Create a new instance and use the Networking tab in the dashboard to add the new instance directly to the new external network. 7.3.4. How does the VLAN provider network packet flow work? This section describes in detail how traffic flows to and from an instance with VLAN provider network configuration.
The flow of outgoing traffic in a VLAN provider network The following diagram describes the packet flow for traffic leaving an instance and arriving directly at a VLAN provider external network. This example uses two instances attached to the two VLAN networks (171 and 172). After you configure br-ex , add a physical interface to it, and spawn an instance to a Compute node, the resulting configuration of interfaces and bridges resembles the configuration in the following diagram: 1. Packets leaving the eth0 interface of the instance arrive at the linux bridge qbr-xx connected to the instance. 2. qbr-xx is connected to br-int using veth pair qvbxx <-> qvoxx . 3. qvbxx is connected to the linux bridge qbr-xx and qvoxx is connected to the Open vSwitch bridge br-int . Example configuration of qbr-xx on the Linux bridge. This example features two instances and two corresponding linux bridges: The configuration of qvoxx on br-int : qvoxx is tagged with the internal VLAN tag associated with the VLAN provider network. In this example, the internal VLAN tag 2 is associated with the VLAN provider network provider-171 and VLAN tag 3 is associated with VLAN provider network provider-172 . When the packet reaches qvoxx , this VLAN tag is added to the packet header. The packet is then moved to the br-ex OVS bridge using patch-peer int-br-ex <-> phy-br-ex . Example patch-peer on br-int : Example configuration of the patch-peer on br-ex : When this packet reaches phy-br-ex on br-ex , an OVS flow inside br-ex replaces the internal VLAN tag with the actual VLAN tag associated with the VLAN provider network. The output of the following command shows that the port number of phy-br-ex is 4 : The following command shows any packet that arrives on phy-br-ex ( in_port=4 ) which has VLAN tag 2 ( dl_vlan=2 ). Open vSwitch replaces the VLAN tag with 171 ( actions=mod_vlan_vid:171,NORMAL ) and forwards the packet to the physical interface. The command also shows any packet that arrives on phy-br-ex ( in_port=4 ) which has VLAN tag 3 ( dl_vlan=3 ). Open vSwitch replaces the VLAN tag with 172 ( actions=mod_vlan_vid:172,NORMAL ) and forwards the packet to the physical interface. The neutron-openvswitch-agent adds these rules. This packet is then forwarded to physical interface eth1 . The flow of incoming traffic in a VLAN provider network The following example flow was tested on a Compute node using VLAN tag 2 for provider network provider-171 and VLAN tag 3 for provider network provider-172. The flow uses port 18 on the integration bridge br-int. Your VLAN provider network may require a different configuration. Also, the configuration requirement for a network may differ between two different Compute nodes. The output of the following command shows int-br-ex with port number 18: The output of the following command shows the flow rules on br-int. Incoming flow example This example demonstrates the following br-int OVS flow: A packet with VLAN tag 172 from the external network reaches the br-ex bridge via eth1 on the physical node. The packet moves to br-int via the patch-peer phy-br-ex <-> int-br-ex . The packet matches the flow's criteria ( in_port=18,dl_vlan=172 ). The flow actions ( actions=mod_vlan_vid:3,NORMAL ) replace the VLAN tag 172 with internal VLAN tag 3 and forward the packet to the instance with normal Layer 2 processing.
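To confirm this tag mapping on your own nodes, you can filter the flow tables for the VLAN rewrite rules. This is a minimal sketch; the bridge names match the example above, and the tag values on your deployment will differ:
# Outgoing direction: internal tags rewritten to the provider VLAN IDs on br-ex
ovs-ofctl dump-flows br-ex | grep mod_vlan_vid
# Incoming direction: provider VLAN IDs rewritten to internal tags on br-int
ovs-ofctl dump-flows br-int | grep mod_vlan_vid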
7.3.5. Troubleshooting instance-physical network connections on VLAN provider networks Refer to the packet flow described in Section 7.3.4, "How does the VLAN provider network packet flow work?" when troubleshooting connectivity in a VLAN provider network. In addition, review the following configuration options: 1. Verify that the physical network name is used consistently. In this example, physnet1 is used consistently while creating the network, and within the bridge_mapping configuration: 2. Confirm that the network was created as external , is type vlan , and uses the correct segmentation_id value: 3. Run ovs-vsctl show and verify that br-int and br-ex are connected using the patch-peer int-br-ex <-> phy-br-ex . This connection is created while restarting neutron-openvswitch-agent , provided that the bridge_mapping is correctly configured in /etc/neutron/plugins/ml2/openvswitch_agent.ini . Recheck the bridge_mapping setting if this is not created even after restarting the service. 4. To review the flow of outgoing packets, run ovs-ofctl dump-flows br-ex and ovs-ofctl dump-flows br-int , and verify that the flows map the internal VLAN IDs to the external VLAN ID ( segmentation_id ). For incoming packets, verify that the flows map the external VLAN ID to the internal VLAN ID. This flow is added by the neutron OVS agent when you spawn an instance to this network for the first time. If this flow is not created after spawning the instance, ensure that the network is created as vlan , is external , and that the physical_network name is correct. In addition, re-check the bridge_mapping settings. 5. Finally, re-check the ifcfg-br-ex and ifcfg-ethx configuration. Ensure that br-ex includes port ethX , and that both ifcfg-br-ex and ifcfg-ethx have an UP flag in the output of the ip a command. For example, the following output shows that eth1 is a port in br-ex : The following command shows that eth1 has been added as a port, and that the kernel is configured to move all packets from the interface to the OVS bridge br-ex . This is demonstrated by the entry: master ovs-system . 7.4. Enabling Compute metadata access Instances connected as described in this chapter are directly attached to the provider external networks, and have external routers configured as their default gateway. No OpenStack Networking (neutron) routers are used. This means that neutron routers cannot be used to proxy metadata requests from instances to the nova-metadata server, which may result in failures while running cloud-init . However, this issue can be resolved by configuring the dhcp agent to proxy metadata requests. You can enable this functionality in /etc/neutron/dhcp_agent.ini . For example: 7.5. Floating IP addresses You can use the same network to allocate floating IP addresses to instances, even if the floating IPs are already associated with private networks. The addresses that you allocate as floating IPs from this network are bound to the qrouter-xxx namespace on the Network node, and perform DNAT/SNAT to the associated private IP address. In contrast, the IP addresses that you allocate for direct external network access are bound directly inside the instance, and allow the instance to communicate directly with the external network.
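For example, to allocate a floating IP address from a provider network and attach it to an instance, you might run commands similar to the following. The network name public01 matches the earlier flat provider example, while the server name test-instance and the address returned by the first command are assumptions for illustration:
# Allocate a floating IP from the provider network
openstack floating ip create public01
# Associate the allocated address with an instance
openstack server add floating ip test-instance 192.168.100.25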
[ "type_drivers = vxlan,flat flat_networks =*", "openstack network create --provider-network-type flat --provider-physical-network physnet1 --external public01", "openstack subnet create --dhcp --allocation-pool start=192.168.100.20,end=192.168.100.100 --gateway 192.168.100.1 --network public01 public_subnet", "systemctl restart neutron-server.service", "DEVICE=eth1 TYPE=OVSPort DEVICETYPE=ovs OVS_BRIDGE=br-ex ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=none", "DEVICE=br-ex TYPE=OVSBridge DEVICETYPE=ovs ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=none", "systemctl restart network.service", "bridge_mappings = physnet1:br-ex", "systemctl restart neutron-openvswitch-agent", "Name of bridge used for external network traffic. This should be set to empty value for the linux bridge external_network_bridge =", "systemctl restart neutron-l3-agent.service", "brctl show qbr269d4d73-e7 8000.061943266ebb no qvb269d4d73-e7 tap269d4d73-e7", "ovs-vsctl show Bridge br-int fail_mode: secure Interface \"qvof63599ba-8f\" Port \"qvo269d4d73-e7\" tag: 5 Interface \"qvo269d4d73-e7\"", "ovs-vsctl show Bridge br-int fail_mode: secure Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex}", "Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port br-ex Interface br-ex type: internal", "ovs-ofctl show br-ex OFPT_FEATURES_REPLY (xid=0x2): dpid:00003440b5c90dc6 n_tables:254, n_buffers:256 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE 2(phy-br-ex): addr:ba:b5:7b:ae:5c:a2 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max", "ovs-ofctl dump-flows br-ex NXST_FLOW reply (xid=0x4): cookie=0x0, duration=4703.491s, table=0, n_packets=3620, n_bytes=333744, idle_age=0, priority=1 actions=NORMAL cookie=0x0, duration=3890.038s, table=0, n_packets=13, n_bytes=1714, idle_age=3764, priority=4,in_port=2,dl_vlan=5 actions=strip_vlan,NORMAL cookie=0x0, duration=4702.644s, table=0, n_packets=10650, n_bytes=447632, idle_age=0, priority=2,in_port=2 actions=drop", "ovs-ofctl show br-int OFPT_FEATURES_REPLY (xid=0x2): dpid:00004e67212f644d n_tables:254, n_buffers:256 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE 15(int-br-ex): addr:12:4e:44:a9:50:f4 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max", "ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=5351.536s, table=0, n_packets=12118, n_bytes=510456, idle_age=0, priority=1 actions=NORMAL cookie=0x0, duration=4537.553s, table=0, n_packets=3489, n_bytes=321696, idle_age=0, priority=3,in_port=15,vlan_tci=0x0000 actions=mod_vlan_vid:5,NORMAL cookie=0x0, duration=5350.365s, table=0, n_packets=628, n_bytes=57892, idle_age=4538, priority=2,in_port=15 actions=drop cookie=0x0, duration=5351.432s, table=23, n_packets=0, n_bytes=0, idle_age=5351, priority=0 actions=drop", "grep bridge_mapping /etc/neutron/plugins/ml2/openvswitch_agent.ini bridge_mappings = physnet1:br-ex # openstack network show provider-flat | provider:physical_network | physnet1", "openstack network show provider-flat | provider:network_type | flat | | router:external | True |", "ovs-vsctl show Bridge br-int fail_mode: secure Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex}", "Bridge br-ex Port phy-br-ex Interface 
phy-br-ex type: patch options: {peer=int-br-ex} Port br-ex Interface br-ex type: internal", "Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port \"eth1\" Interface \"eth1\"", "ip a 5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000", "[ml2] type_drivers = vxlan,flat,vlan", "[ml2_type_vlan] network_vlan_ranges=physnet1:171:172", "systemctl restart neutron-server", "openstack network create --provider-network-type vlan --external --provider-physical-network physnet1 --segment 171 --share openstack network create --provider-network-type vlan --external --provider-physical-network physnet1 --segment 172 --share", "openstack subnet create --network provider-171 --subnet-range 10.65.217.0/24 --dhcp --gateway 10.65.217.254 --subnet-provider-171 openstack subnet create --network provider-172 --subnet-range 10.65.218.0/24 --dhcp --gateway 10.65.218.254 --subnet-provider-172", "/etc/sysconfig/network-scripts/ifcfg-eth1 DEVICE=eth1 TYPE=OVSPort DEVICETYPE=ovs OVS_BRIDGE=br-ex ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=none", "/etc/sysconfig/network-scripts/ifcfg-br-ex: DEVICE=br-ex TYPE=OVSBridge DEVICETYPE=ovs ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=none", "systemctl restart network", "bridge_mappings = physnet1:br-ex", "systemctl restart neutron-openvswitch-agent", "Name of bridge used for external network traffic. This should be set to empty value for the linux bridge external_network_bridge =", "systemctl restart neutron-l3-agent", "brctl show bridge name bridge id STP enabled interfaces qbr84878b78-63 8000.e6b3df9451e0 no qvb84878b78-63 tap84878b78-63 qbr86257b61-5d 8000.3a3c888eeae6 no qvb86257b61-5d tap86257b61-5d", "options: {peer=phy-br-ex} Port \"qvo86257b61-5d\" tag: 3 Interface \"qvo86257b61-5d\" Port \"qvo84878b78-63\" tag: 2 Interface \"qvo84878b78-63\"", "Bridge br-int fail_mode: secure Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex}", "Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port br-ex Interface br-ex type: internal", "ovs-ofctl show br-ex 4(phy-br-ex): addr:32:e7:a1:6b:90:3e config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max", "ovs-ofctl dump-flows br-ex NXST_FLOW reply (xid=0x4): NXST_FLOW reply (xid=0x4): cookie=0x0, duration=6527.527s, table=0, n_packets=29211, n_bytes=2725576, idle_age=0, priority=1 actions=NORMAL cookie=0x0, duration=2939.172s, table=0, n_packets=117, n_bytes=8296, idle_age=58, priority=4,in_port=4,dl_vlan=3 actions=mod_vlan_vid:172,NORMAL cookie=0x0, duration=6111.389s, table=0, n_packets=145, n_bytes=9368, idle_age=98, priority=4,in_port=4,dl_vlan=2 actions=mod_vlan_vid:171,NORMAL cookie=0x0, duration=6526.675s, table=0, n_packets=82, n_bytes=6700, idle_age=2462, priority=2,in_port=4 actions=drop", "ovs-ofctl show br-int 18(int-br-ex): addr:fe:b7:cb:03:c5:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max", "ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=6770.572s, table=0, n_packets=1239, n_bytes=127795, idle_age=106, priority=1 actions=NORMAL cookie=0x0, duration=3181.679s, table=0, n_packets=2605, n_bytes=246456, idle_age=0, priority=3,in_port=18,dl_vlan=172 actions=mod_vlan_vid:3,NORMAL cookie=0x0, duration=6353.898s, table=0, n_packets=5077, n_bytes=482582, idle_age=0, priority=3,in_port=18,dl_vlan=171 actions=mod_vlan_vid:2,NORMAL cookie=0x0, duration=6769.391s, table=0, n_packets=22301, n_bytes=2013101, idle_age=0, priority=2,in_port=18 actions=drop cookie=0x0, 
duration=6770.463s, table=23, n_packets=0, n_bytes=0, idle_age=6770, priority=0 actions=drop", "cookie=0x0, duration=3181.679s, table=0, n_packets=2605, n_bytes=246456, idle_age=0, priority=3,in_port=18,dl_vlan=172 actions=mod_vlan_vid:3,NORMAL", "grep bridge_mapping /etc/neutron/plugins/ml2/openvswitch_agent.ini bridge_mappings = physnet1:br-ex openstack network show provider-vlan171 | provider:physical_network | physnet1", "openstack network show provider-vlan171 | provider:network_type | vlan | | provider:physical_network | physnet1 | | provider:segmentation_id | 171 |", "Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port \"eth1\" Interface \"eth1\"", "ip a 5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000", "enable_isolated_metadata = True" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/networking_guide/sec-connect-instance
Chapter 60. Executing rules
Chapter 60. Executing rules After you identify example rules or create your own rules in Business Central, you can build and deploy the associated project and execute rules locally or on KIE Server to test the rules. Prerequisites Business Central and KIE Server are installed and running. For installation options, see Planning a Red Hat Process Automation Manager installation . Procedure In Business Central, go to Menu Design Projects and click the project name. In the upper-right corner of the project Assets page, click Deploy to build the project and deploy it to KIE Server. If the build fails, address any problems described in the Alerts panel at the bottom of the screen. For more information about project deployment options, see Packaging and deploying an Red Hat Process Automation Manager project . Note If the rule assets in your project are not built from an executable rule model by default, verify that the following dependency is in the pom.xml file of your project and rebuild the project: <dependency> <groupId>org.drools</groupId> <artifactId>drools-model-compiler</artifactId> <version>USD{rhpam.version}</version> </dependency> This dependency is required for rule assets in Red Hat Process Automation Manager to be built from executable rule models by default. This dependency is included as part of the Red Hat Process Automation Manager core packaging, but depending on your Red Hat Process Automation Manager upgrade history, you may need to manually add this dependency to enable the executable rule model behavior. For more information about executable rule models, see Packaging and deploying an Red Hat Process Automation Manager project . Create a Maven or Java project outside of Business Central, if not created already, that you can use for executing rules locally or that you can use as a client application for executing rules on KIE Server. The project must contain a pom.xml file and any other required components for executing the project resources. For example test projects, see "Other methods for creating and executing DRL rules" . Open the pom.xml file of your test project or client application and add the following dependencies, if not added already: kie-ci : Enables your client application to load Business Central project data locally using ReleaseId kie-server-client : Enables your client application to interact remotely with assets on KIE Server slf4j : (Optional) Enables your client application to use Simple Logging Facade for Java (SLF4J) to return debug logging information after you interact with KIE Server Example dependencies for Red Hat Process Automation Manager 7.13 in a client application pom.xml file: <!-- For local execution --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>7.67.0.Final-redhat-00024</version> </dependency> <!-- For remote execution on KIE Server --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>7.67.0.Final-redhat-00024</version> </dependency> <!-- For debug logging (optional) --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-simple</artifactId> <version>1.7.25</version> </dependency> For available versions of these artifacts, search the group ID and artifact ID in the Nexus Repository Manager online. Note Instead of specifying a Red Hat Process Automation Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. 
The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between Red Hat Process Automation Manager and the Maven library version? . Ensure that the dependencies for artifacts containing model classes are defined in the client application pom.xml file exactly as they appear in the pom.xml file of the deployed project. If dependencies for model classes differ between the client application and your projects, execution errors can occur. To access the project pom.xml file in Business Central, select any existing asset in the project and then in the Project Explorer menu on the left side of the screen, click the Customize View gear icon and select Repository View pom.xml . For example, the following Person class dependency appears in both the client and deployed project pom.xml files: <dependency> <groupId>com.sample</groupId> <artifactId>Person</artifactId> <version>1.0.0</version> </dependency> If you added the slf4j dependency to the client application pom.xml file for debug logging, create a simplelogger.properties file on the relevant classpath (for example, in src/main/resources/META-INF in Maven) with the following content: org.slf4j.simpleLogger.defaultLogLevel=debug In your client application, create a .java main class containing the necessary imports and a main() method to load the KIE base, insert facts, and execute the rules. For example, a Person object in a project contains getter and setter methods to set and retrieve the first name, last name, hourly rate, and the wage of a person. 
The following Wage rule in a project calculates the wage and hourly rate values and displays a message based on the result: package com.sample; import com.sample.Person; dialect "java" rule "Wage" when Person(hourlyRate * wage > 100) Person(name : firstName, surname : lastName) then System.out.println("Hello" + " " + name + " " + surname + "!"); System.out.println("You are rich!"); end To test this rule locally outside of KIE Server (if needed), configure the .java class to import KIE services, a KIE container, and a KIE session, and then use the main() method to fire all rules against a defined fact model: Executing rules locally import org.kie.api.KieServices; import org.kie.api.builder.ReleaseId; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; import org.drools.compiler.kproject.ReleaseIdImpl; public class RulesTest { public static final void main(String[] args) { try { // Identify the project in the local repository: ReleaseId rid = new ReleaseIdImpl("com.myspace", "MyProject", "1.0.0"); // Load the KIE base: KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.newKieContainer(rid); KieSession kSession = kContainer.newKieSession(); // Set up the fact model: Person p = new Person(); p.setWage(12); p.setFirstName("Tom"); p.setLastName("Summers"); p.setHourlyRate(10); // Insert the person into the session: kSession.insert(p); // Fire all rules: kSession.fireAllRules(); kSession.dispose(); } catch (Throwable t) { t.printStackTrace(); } } } To test this rule on KIE Server, configure the .java class with the imports and rule execution information similarly to the local example, and additionally specify KIE services configuration and KIE services client details: Executing rules on KIE Server package com.sample; import java.util.ArrayList; import java.util.HashSet; import java.util.List; import java.util.Set; import org.kie.api.command.BatchExecutionCommand; import org.kie.api.command.Command; import org.kie.api.KieServices; import org.kie.api.runtime.ExecutionResults; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.api.model.ServiceResponse; import org.kie.server.client.KieServicesClient; import org.kie.server.client.KieServicesConfiguration; import org.kie.server.client.KieServicesFactory; import org.kie.server.client.RuleServicesClient; import com.sample.Person; public class RulesTest { private static final String containerName = "testProject"; private static final String sessionName = "myStatelessSession"; public static final void main(String[] args) { try { // Define KIE services configuration and client: Set<Class<?>> allClasses = new HashSet<Class<?>>(); allClasses.add(Person.class); String serverUrl = "http://USDHOST:USDPORT/kie-server/services/rest/server"; String username = "USDUSERNAME"; String password = "USDPASSWORD"; KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(serverUrl, username, password); config.setMarshallingFormat(MarshallingFormat.JAXB); config.addExtraClasses(allClasses); KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(config); // Set up the fact model: Person p = new Person(); p.setWage(12); p.setFirstName("Tom"); p.setLastName("Summers"); p.setHourlyRate(10); // Insert Person into the session: KieCommands kieCommands = KieServices.Factory.get().getCommands(); List<Command> commandList = new ArrayList<Command>(); commandList.add(kieCommands.newInsert(p, 
"personReturnId")); // Fire all rules: commandList.add(kieCommands.newFireAllRules("numberOfFiredRules")); BatchExecutionCommand batch = kieCommands.newBatchExecution(commandList, sessionName); // Use rule services client to send request: RuleServicesClient ruleClient = kieServicesClient.getServicesClient(RuleServicesClient.class); ServiceResponse<ExecutionResults> executeResponse = ruleClient.executeCommandsWithResults(containerName, batch); System.out.println("number of fired rules:" + executeResponse.getResult().getValue("numberOfFiredRules")); } catch (Throwable t) { t.printStackTrace(); } } } Run the configured .java class from your project directory. You can run the file in your development platform (such as Red Hat CodeReady Studio) or in the command line. Example Maven execution (within project directory): Example Java execution (within project directory) Review the rule execution status in the command line and in the server log. If any rules do not execute as expected, review the configured rules in the project and the main class configuration to validate the data provided.
[ "<dependency> <groupId>org.drools</groupId> <artifactId>drools-model-compiler</artifactId> <version>USD{rhpam.version}</version> </dependency>", "<!-- For local execution --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>7.67.0.Final-redhat-00024</version> </dependency> <!-- For remote execution on KIE Server --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>7.67.0.Final-redhat-00024</version> </dependency> <!-- For debug logging (optional) --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-simple</artifactId> <version>1.7.25</version> </dependency>", "<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>", "<dependency> <groupId>com.sample</groupId> <artifactId>Person</artifactId> <version>1.0.0</version> </dependency>", "org.slf4j.simpleLogger.defaultLogLevel=debug", "package com.sample; import com.sample.Person; dialect \"java\" rule \"Wage\" when Person(hourlyRate * wage > 100) Person(name : firstName, surname : lastName) then System.out.println(\"Hello\" + \" \" + name + \" \" + surname + \"!\"); System.out.println(\"You are rich!\"); end", "import org.kie.api.KieServices; import org.kie.api.builder.ReleaseId; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; import org.drools.compiler.kproject.ReleaseIdImpl; public class RulesTest { public static final void main(String[] args) { try { // Identify the project in the local repository: ReleaseId rid = new ReleaseIdImpl(\"com.myspace\", \"MyProject\", \"1.0.0\"); // Load the KIE base: KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.newKieContainer(rid); KieSession kSession = kContainer.newKieSession(); // Set up the fact model: Person p = new Person(); p.setWage(12); p.setFirstName(\"Tom\"); p.setLastName(\"Summers\"); p.setHourlyRate(10); // Insert the person into the session: kSession.insert(p); // Fire all rules: kSession.fireAllRules(); kSession.dispose(); } catch (Throwable t) { t.printStackTrace(); } } }", "package com.sample; import java.util.ArrayList; import java.util.HashSet; import java.util.List; import java.util.Set; import org.kie.api.command.BatchExecutionCommand; import org.kie.api.command.Command; import org.kie.api.KieServices; import org.kie.api.runtime.ExecutionResults; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.api.model.ServiceResponse; import org.kie.server.client.KieServicesClient; import org.kie.server.client.KieServicesConfiguration; import org.kie.server.client.KieServicesFactory; import org.kie.server.client.RuleServicesClient; import com.sample.Person; public class RulesTest { private static final String containerName = \"testProject\"; private static final String sessionName = \"myStatelessSession\"; public static final void main(String[] args) { try { // Define KIE services configuration and client: Set<Class<?>> allClasses = new HashSet<Class<?>>(); allClasses.add(Person.class); String serverUrl = \"http://USDHOST:USDPORT/kie-server/services/rest/server\"; String username = \"USDUSERNAME\"; String password = \"USDPASSWORD\"; KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(serverUrl, username, password); config.setMarshallingFormat(MarshallingFormat.JAXB); 
config.addExtraClasses(allClasses); KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(config); // Set up the fact model: Person p = new Person(); p.setWage(12); p.setFirstName(\"Tom\"); p.setLastName(\"Summers\"); p.setHourlyRate(10); // Insert Person into the session: KieCommands kieCommands = KieServices.Factory.get().getCommands(); List<Command> commandList = new ArrayList<Command>(); commandList.add(kieCommands.newInsert(p, \"personReturnId\")); // Fire all rules: commandList.add(kieCommands.newFireAllRules(\"numberOfFiredRules\")); BatchExecutionCommand batch = kieCommands.newBatchExecution(commandList, sessionName); // Use rule services client to send request: RuleServicesClient ruleClient = kieServicesClient.getServicesClient(RuleServicesClient.class); ServiceResponse<ExecutionResults> executeResponse = ruleClient.executeCommandsWithResults(containerName, batch); System.out.println(\"number of fired rules:\" + executeResponse.getResult().getValue(\"numberOfFiredRules\")); } catch (Throwable t) { t.printStackTrace(); } } }", "mvn clean install exec:java -Dexec.mainClass=\"com.sample.app.RulesTest\"", "javac -classpath \"./USDDEPENDENCIES/*:.\" RulesTest.java java -classpath \"./USDDEPENDENCIES/*:.\" RulesTest" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/assets-executing-proc_guided-rule-templates
Updating clusters
Updating clusters OpenShift Container Platform 4.7 Updating OpenShift Container Platform clusters Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/updating_clusters/index
function::gettimeofday_s
function::gettimeofday_s Name function::gettimeofday_s - Number of seconds since UNIX epoch Synopsis Arguments None Description This function returns the number of seconds since the UNIX epoch.
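For example, a one-line script that prints the current value and then exits can be run from the shell as follows. This is a minimal sketch and assumes that the systemtap package and the kernel development packages required to build the probe module are installed:
# Print the number of seconds since the UNIX epoch and exit
stap -e 'probe begin { printf("%d\n", gettimeofday_s()); exit() }'
The printed value should roughly match the output of date +%s .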
[ "gettimeofday_s:long()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-gettimeofday-s
Chapter 4. Dynamically provisioned OpenShift Data Foundation deployed on Microsoft Azure
Chapter 4. Dynamically provisioned OpenShift Data Foundation deployed on Microsoft Azure 4.1. Replacing operational or failed storage devices on Azure installer-provisioned infrastructure When you need to replace a device in a dynamically created storage cluster on an Azure installer-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: Replacing operational nodes on Azure installer-provisioned infrastructure . Replacing failed nodes on Azure installer-provisioned infrastructures .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/replacing_devices/dynamically_provisioned_openshift_data_foundation_deployed_on_microsoft_azure
Chapter 25. OpenShift SDN network plugin
Chapter 25. OpenShift SDN network plugin 25.1. About the OpenShift SDN network plugin Part of Red Hat OpenShift Networking, OpenShift SDN is a network plugin that uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. This pod network is established and maintained by OpenShift SDN, which configures an overlay network using Open vSwitch (OVS). 25.1.1. OpenShift SDN network isolation modes OpenShift SDN provides three SDN modes for configuring the pod network: Network policy mode allows project administrators to configure their own isolation policies using NetworkPolicy objects. Network policy is the default mode in OpenShift Container Platform 4.14. Multitenant mode provides project-level isolation for pods and services. Pods from different projects cannot send packets to or receive packets from pods and services of a different project. You can disable isolation for a project, allowing it to send network traffic to all pods and services in the entire cluster and receive network traffic from those pods and services. Subnet mode provides a flat pod network where every pod can communicate with every other pod and service. The network policy mode provides the same functionality as subnet mode. 25.1.2. Supported network plugin feature matrix Red Hat OpenShift Networking offers two options for the network plugin: OpenShift SDN and OVN-Kubernetes. The following table summarizes the current feature support for both network plugins: Table 25.1. Default CNI network plugin feature comparison Feature OpenShift SDN OVN-Kubernetes Egress IPs Supported Supported Egress firewall Supported Supported [1] Egress router Supported Supported [2] Hybrid networking Not supported Supported IPsec encryption for intra-cluster communication Not supported Supported IPv4 single-stack Supported Supported IPv6 single-stack Not supported Supported [3] IPv4/IPv6 dual-stack Not supported Supported [4] IPv6/IPv4 dual-stack Not supported Supported [5] Kubernetes network policy Supported Supported Kubernetes network policy logs Not supported Supported Hardware offloading Not supported Supported Multicast Supported Supported Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress. Egress router for OVN-Kubernetes supports only redirect mode. IPv6 single-stack networking on a bare-metal platform. IPv4/IPv6 dual-stack networking on bare-metal, VMware vSphere (installer-provisioned infrastructure installations only), IBM Power(R), IBM Z(R), and RHOSP platforms. Dual-stack networking on RHOSP is a Technology Preview feature. IPv6/IPv4 dual-stack networking on bare-metal, VMware vSphere (installer-provisioned infrastructure installations only), and IBM Power(R) platforms. 25.2. Migrating to the OpenShift SDN network plugin As a cluster administrator, you can migrate to the OpenShift SDN network plugin from the OVN-Kubernetes network plugin. To learn more about OpenShift SDN, read About the OpenShift SDN network plugin . 25.2.1. How the migration process works The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response. Table 25.2.
Migrating to OpenShift SDN from OVN-Kubernetes User-initiated steps Migration activity Set the migration field of the Network.operator.openshift.io custom resource (CR) named cluster to OpenShiftSDN . Make sure the migration field is null before setting it to a value. Cluster Network Operator (CNO) Updates the status of the Network.config.openshift.io CR named cluster accordingly. Machine Config Operator (MCO) Rolls out an update to the systemd configuration necessary for OpenShift SDN; the MCO updates a single machine per pool at a time by default, causing the total time the migration takes to increase with the size of the cluster. Update the networkType field of the Network.config.openshift.io CR. CNO Performs the following actions: Destroys the OVN-Kubernetes control plane pods. Deploys the OpenShift SDN control plane pods. Updates the Multus objects to reflect the new network plugin. Reboot each node in the cluster. Cluster As nodes reboot, the cluster assigns IP addresses to pods on the OpenShift SDN cluster network. 25.2.2. Migrating to the OpenShift SDN network plugin Cluster administrators can roll back to the OpenShift SDN Container Network Interface (CNI) network plugin by using the offline migration method. During the migration you must manually reboot every node in your cluster. With the offline migration method, there is some downtime, during which your cluster is unreachable. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin. A recent backup of the etcd database is available. A reboot can be triggered manually for each node. The cluster is in a known good state, without any errors. Procedure Stop all of the machine configuration pools managed by the Machine Config Operator (MCO): Stop the master configuration pool by entering the following command in your CLI: USD oc patch MachineConfigPool master --type='merge' --patch \ '{ "spec": { "paused": true } }' Stop the worker machine configuration pool by entering the following command in your CLI: USD oc patch MachineConfigPool worker --type='merge' --patch \ '{ "spec":{ "paused": true } }' To prepare for the migration, set the migration field to null by entering the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' Check that the migration status is empty for the Network.config.openshift.io object by entering the following command in your CLI. Empty command output indicates that the object is not in a migration operation. USD oc get Network.config cluster -o jsonpath='{.status.migration}' Apply the patch to the Network.operator.openshift.io object to set the network plugin back to OpenShift SDN by entering the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN" } } }' Important If you applied the patch to the Network.config.openshift.io object before the patch operation finalizes on the Network.operator.openshift.io object, the Cluster Network Operator (CNO) enters into a degradation state and this causes a slight delay until the CNO recovers from the degraded state. 
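While the CNO recovers, you can watch the network cluster Operator directly instead of polling manually. A minimal sketch, assuming the timeout value suits your cluster:
# Watch the status of the network cluster Operator
oc get co network -w
# Or block until the Operator is no longer degraded
oc wait co network --for='condition=DEGRADED=False' --timeout=600s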
Confirm that the migration status of the network plugin for the Network.config.openshift.io cluster object is OpenShiftSDN by entering the following command in your CLI: USD oc get Network.config cluster -o jsonpath='{.status.migration.networkType}' Apply the patch to the Network.config.openshift.io object to set the network plugin back to OpenShift SDN by entering the following command in your CLI: USD oc patch Network.config.openshift.io cluster --type='merge' \ --patch '{ "spec": { "networkType": "OpenShiftSDN" } }' Optional: Disable automatic migration of several OVN-Kubernetes capabilities to the OpenShift SDN equivalents: Egress IPs Egress firewall Multicast To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OpenShiftSDN", "features": { "egressIP": <bool>, "egressFirewall": <bool>, "multicast": <bool> } } } }' where: bool : Specifies whether to enable migration of the feature. The default is true . Optional: You can customize the following settings for OpenShift SDN to meet your network infrastructure requirements: Maximum transmission unit (MTU) VXLAN port To customize either or both of the previously noted settings, customize and enter the following command in your CLI. If you do not need to change the default value, omit the key from the patch. USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "openshiftSDNConfig":{ "mtu":<mtu>, "vxlanPort":<port> }}}}' mtu The MTU for the VXLAN overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 50 less than the smallest node MTU value. port The UDP port for the VXLAN overlay network. If a value is not specified, the default is 4789 . The port cannot be the same as the Geneve port that is used by OVN-Kubernetes. The default value for the Geneve port is 6081 . Example patch command USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "openshiftSDNConfig":{ "mtu":1200 }}}}' Reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches: With the oc rsh command, you can use a bash script similar to the following: #!/bin/bash readarray -t POD_NODES <<< "USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1" "USD7}')" for i in "USD{POD_NODES[@]}" do read -r POD NODE <<< "USDi" until oc rsh -n openshift-machine-config-operator "USDPOD" chroot /rootfs shutdown -r +1 do echo "cannot reboot node USDNODE, retry" && sleep 3 done done With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password. #!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}') do echo "reboot node USDip" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done Wait until the Multus daemon set rollout completes. Run the following command to see your rollout status: USD oc -n openshift-multus rollout status daemonset/multus The name of the Multus pods is in the form of multus-<xxxxx> where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart. 
Example output Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated... ... Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available... daemon set "multus" successfully rolled out After the nodes in your cluster have rebooted and the multus pods are rolled out, start all of the machine configuration pools by running the following commands:: Start the master configuration pool: USD oc patch MachineConfigPool master --type='merge' --patch \ '{ "spec": { "paused": false } }' Start the worker configuration pool: USD oc patch MachineConfigPool worker --type='merge' --patch \ '{ "spec": { "paused": false } }' As the MCO updates machines in each config pool, it reboots each node. By default the MCO updates a single machine per pool at a time, so the time that the migration requires to complete grows with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command in your CLI: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command in your CLI: USD oc get machineconfig <config_name> -o yaml where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. Confirm that the migration succeeded: To confirm that the network plugin is OpenShift SDN, enter the following command in your CLI. The value of status.networkType must be OpenShiftSDN . USD oc get Network.config/cluster -o jsonpath='{.status.networkType}{"\n"}' To confirm that the cluster nodes are in the Ready state, enter the following command in your CLI: USD oc get nodes If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors. To list the pods, enter the following command in your CLI: USD oc get pod -n openshift-machine-config-operator Example output NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h The names for the config daemon pods are in the following format: machine-config-daemon-<seq> . The <seq> value is a random five character alphanumeric sequence. 
To display the pod log for each machine config daemon pod shown in the output, enter the following command in your CLI: USD oc logs <pod> -n openshift-machine-config-operator where pod is the name of a machine config daemon pod. Resolve any errors in the logs shown by the output from the command. To confirm that your pods are not in an error state, enter the following command in your CLI: USD oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' If pods on a node are in an error state, reboot that node. Complete the following steps only if the migration succeeds and your cluster is in a good state: To remove the migration configuration from the Cluster Network Operator configuration object, enter the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' To remove the OVN-Kubernetes configuration, enter the following command in your CLI: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "defaultNetwork": { "ovnKubernetesConfig":null } } }' To remove the OVN-Kubernetes network provider namespace, enter the following command in your CLI: USD oc delete namespace openshift-ovn-kubernetes 25.2.3. Additional resources Configuration parameters for the OpenShift SDN network plugin Backing up etcd About network policy OpenShift SDN capabilities Configuring egress IPs for a project Configuring an egress firewall for a project Enabling multicast for a project Network [operator.openshift.io/v1 ] 25.3. Rolling back to the OVN-Kubernetes network plugin As a cluster administrator, you can rollback to the OVN-Kubernetes network plugin from the OpenShift SDN network plugin if the migration to OpenShift SDN is unsuccessful. To learn more about OVN-Kubernetes, read About the OVN-Kubernetes network plugin . 25.3.1. Migrating to the OVN-Kubernetes network plugin As a cluster administrator, you can change the network plugin for your cluster to OVN-Kubernetes. During the migration, you must reboot every node in your cluster. Important While performing the migration, your cluster is unavailable and workloads might be interrupted. Perform the migration only when an interruption in service is acceptable. Prerequisites You have a cluster configured with the OpenShift SDN CNI network plugin in the network policy isolation mode. You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have a recent backup of the etcd database. You can manually reboot each node. You checked that your cluster is in a known good state without any errors. You created a security group rule that allows User Datagram Protocol (UDP) packets on port 6081 for all nodes on all cloud platforms. You set all timeouts for webhooks to 3 seconds or removed the webhooks. Procedure To backup the configuration for the cluster network, enter the following command: USD oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml Verify that the OVN_SDN_MIGRATION_TIMEOUT environment variable is set and is equal to 0s by running the following command: #!/bin/bash if [ -n "USDOVN_SDN_MIGRATION_TIMEOUT" ] && [ "USDOVN_SDN_MIGRATION_TIMEOUT" = "0s" ]; then unset OVN_SDN_MIGRATION_TIMEOUT fi #loops the timeout command of the script to repeatedly check the cluster Operators until all are available. 
co_timeout=USD{OVN_SDN_MIGRATION_TIMEOUT:-1200s} timeout "USDco_timeout" bash <<EOT until oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && \ oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && \ oc wait co --all --for='condition=DEGRADED=False' --timeout=10s; do sleep 10 echo "Some ClusterOperators Degraded=False,Progressing=True,or Available=False"; done EOT Remove the configuration from the Cluster Network Operator (CNO) configuration object by running the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{"spec":{"migration":null}}' . Delete the NodeNetworkConfigurationPolicy (NNCP) custom resource (CR) that defines the primary network interface for the OpenShift SDN network plugin by completing the following steps: Check that the existing NNCP CR bonded the primary interface to your cluster by entering the following command: USD oc get nncp Example output NAME STATUS REASON bondmaster0 Available SuccessfullyConfigured Network Manager stores the connection profile for the bonded primary interface in the /etc/NetworkManager/system-connections system path. Remove the NNCP from your cluster: USD oc delete nncp <nncp_manifest_filename> To prepare all the nodes for the migration, set the migration field on the CNO configuration object by running the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes" } } }' Note This step does not deploy OVN-Kubernetes immediately. Instead, specifying the migration field triggers the Machine Config Operator (MCO) to apply new machine configs to all the nodes in the cluster in preparation for the OVN-Kubernetes deployment. Check that the reboot is finished by running the following command: USD oc get mcp Check that all cluster Operators are available by running the following command: USD oc get co Alternatively: You can disable automatic migration of several OpenShift SDN capabilities to the OVN-Kubernetes equivalents: Egress IPs Egress firewall Multicast To disable automatic migration of the configuration for any of the previously noted OpenShift SDN features, specify the following keys: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": { "networkType": "OVNKubernetes", "features": { "egressIP": <bool>, "egressFirewall": <bool>, "multicast": <bool> } } } }' where: bool : Specifies whether to enable migration of the feature. The default is true . Optional: You can customize the following settings for OVN-Kubernetes to meet your network infrastructure requirements: Maximum transmission unit (MTU). Consider the following before customizing the MTU for this optional step: If you use the default MTU, and you want to keep the default MTU during migration, this step can be ignored. If you used a custom MTU, and you want to keep the custom MTU during migration, you must declare the custom MTU value in this step. This step does not work if you want to change the MTU value during migration. Instead, you must first follow the instructions for "Changing the cluster MTU". You can then keep the custom MTU value by performing this procedure and declaring the custom MTU value in this step. Note OpenShift-SDN and OVN-Kubernetes have different overlay overhead. MTU values should be selected by following the guidelines found on the "MTU value selection" page. 
Geneve (Generic Network Virtualization Encapsulation) overlay network port OVN-Kubernetes IPv4 internal subnet To customize either of the previously noted settings, enter and customize the following command. If you do not need to change the default value, omit the key from the patch. USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":<mtu>, "genevePort":<port>, "v4InternalSubnet":"<ipv4_subnet>" }}}}' where: mtu The MTU for the Geneve overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 100 less than the smallest node MTU value. port The UDP port for the Geneve overlay network. If a value is not specified, the default is 6081 . The port cannot be the same as the VXLAN port that is used by OpenShift SDN. The default value for the VXLAN port is 4789 . ipv4_subnet An IPv4 address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is 100.64.0.0/16 . Example patch command to update mtu field USD oc patch Network.operator.openshift.io cluster --type=merge \ --patch '{ "spec":{ "defaultNetwork":{ "ovnKubernetesConfig":{ "mtu":1200 }}}}' As the MCO updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get mcp A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the MCO updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes If a node is stuck in the NotReady state, investigate the machine config daemon pod logs and resolve any errors. 
To list the pods, enter the following command: USD oc get pod -n openshift-machine-config-operator Example output NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h The names for the config daemon pods are in the following format: machine-config-daemon-<seq> . The <seq> value is a random five-character alphanumeric sequence. Display the pod log for the first machine config daemon pod shown in the output by entering the following command: USD oc logs <pod> -n openshift-machine-config-operator where pod is the name of a machine config daemon pod. Resolve any errors in the logs shown by the output from the command. To start the migration, configure the OVN-Kubernetes network plugin by using one of the following commands: To specify the network provider without changing the cluster network IP address block, enter the following command: USD oc patch Network.config.openshift.io cluster \ --type='merge' --patch '{ "spec": { "networkType": "OVNKubernetes" } }' To specify a different cluster network IP address block, enter the following command: USD oc patch Network.config.openshift.io cluster \ --type='merge' --patch '{ "spec": { "clusterNetwork": [ { "cidr": "<cidr>", "hostPrefix": <prefix> } ], "networkType": "OVNKubernetes" } }' where cidr is a CIDR block and prefix is the slice of the CIDR block apportioned to each node in your cluster. You cannot use any CIDR block that overlaps with the 100.64.0.0/16 CIDR block because the OVN-Kubernetes network provider uses this block internally. Important You cannot change the service network address block during the migration. Verify that the Multus daemon set rollout is complete before continuing with subsequent steps: USD oc -n openshift-multus rollout status daemonset/multus The names of the Multus pods are in the form of multus-<xxxxx> where <xxxxx> is a random sequence of letters. It might take several moments for the pods to restart. Example output Waiting for daemon set "multus" rollout to finish: 1 out of 6 new pods have been updated... ... Waiting for daemon set "multus" rollout to finish: 5 of 6 updated pods are available... daemon set "multus" successfully rolled out To complete changing the network plugin, reboot each node in your cluster. You can reboot the nodes in your cluster with either of the following approaches: Important The following scripts reboot all of the nodes in the cluster at the same time. This can cause your cluster to be unstable. Another option is to reboot your nodes manually one at a time. Rebooting nodes one-by-one causes considerable downtime in a cluster with many nodes. Cluster Operators will not work correctly before you reboot the nodes.
With the oc rsh command, you can use a bash script similar to the following: #!/bin/bash readarray -t POD_NODES <<< "USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1" "USD7}')" for i in "USD{POD_NODES[@]}" do read -r POD NODE <<< "USDi" until oc rsh -n openshift-machine-config-operator "USDPOD" chroot /rootfs shutdown -r +1 do echo "cannot reboot node USDNODE, retry" && sleep 3 done done With the ssh command, you can use a bash script similar to the following. The script assumes that you have configured sudo to not prompt for a password. #!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}') do echo "reboot node USDip" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done Confirm that the migration succeeded: To confirm that the network plugin is OVN-Kubernetes, enter the following command. The value of status.networkType must be OVNKubernetes . USD oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}' To confirm that the cluster nodes are in the Ready state, enter the following command: USD oc get nodes To confirm that your pods are not in an error state, enter the following command: USD oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' If pods on a node are in an error state, reboot that node. To confirm that all of the cluster Operators are not in an abnormal state, enter the following command: USD oc get co The status of every cluster Operator must be the following: AVAILABLE="True" , PROGRESSING="False" , DEGRADED="False" . If a cluster Operator is not available or degraded, check the logs for the cluster Operator for more information. Complete the following steps only if the migration succeeds and your cluster is in a good state: To remove the migration configuration from the CNO configuration object, enter the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "migration": null } }' To remove custom configuration for the OpenShift SDN network provider, enter the following command: USD oc patch Network.operator.openshift.io cluster --type='merge' \ --patch '{ "spec": { "defaultNetwork": { "openshiftSDNConfig": null } } }' To remove the OpenShift SDN network provider namespace, enter the following command: USD oc delete namespace openshift-sdn steps Optional: After cluster migration, you can convert your IPv4 single-stack cluster to a dual-network cluster network that supports IPv4 and IPv6 address families. For more information, see "Converting to IPv4/IPv6 dual-stack networking". 25.4. Configuring egress IPs for a project As a cluster administrator, you can configure the OpenShift SDN Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a project. 25.4.1. Egress IP address architectural design and implementation The OpenShift Container Platform egress IP address functionality allows you to ensure that the traffic from one or more pods in one or more namespaces has a consistent source IP address for services outside the cluster network. For example, you might have a pod that periodically queries a database that is hosted on a server outside of your cluster. To enforce access requirements for the server, a packet filtering device is configured to allow traffic only from specific IP addresses. 
To ensure that you can reliably allow access to the server from only that specific pod, you can configure a specific egress IP address for the pod that makes the requests to the server. An egress IP address assigned to a namespace is different from an egress router, which is used to send traffic to specific destinations. In some cluster configurations, application pods and ingress router pods run on the same node. If you configure an egress IP address for an application project in this scenario, the IP address is not used when you send a request to a route from the application project. An egress IP address is implemented as an additional IP address on the primary network interface of a node and must be in the same subnet as the primary IP address of the node. The additional IP address must not be assigned to any other node in the cluster. Important Egress IP addresses must not be configured in any Linux network configuration files, such as ifcfg-eth0 . 25.4.1.1. Platform support Support for the egress IP address functionality on various platforms is summarized in the following table: Platform Supported Bare metal Yes VMware vSphere Yes Red Hat OpenStack Platform (RHOSP) Yes Amazon Web Services (AWS) Yes Google Cloud Platform (GCP) Yes Microsoft Azure Yes IBM Z(R) and IBM(R) LinuxONE Yes IBM Z(R) and IBM(R) LinuxONE for Red Hat Enterprise Linux (RHEL) KVM Yes IBM Power(R) Yes Nutanix Yes Important The assignment of egress IP addresses to control plane nodes with the EgressIP feature is not supported on a cluster provisioned on Amazon Web Services (AWS). ( BZ#2039656 ). 25.4.1.2. Public cloud platform considerations For clusters provisioned on public cloud infrastructure, there is a constraint on the absolute number of assignable IP addresses per node. The maximum number of assignable IP addresses per node, or the IP capacity , can be described in the following formula: IP capacity = public cloud default capacity - sum(current IP assignments) While the Egress IPs capability manages the IP address capacity per node, it is important to plan for this constraint in your deployments. For example, for a cluster installed on bare-metal infrastructure with 8 nodes you can configure 150 egress IP addresses. However, if a public cloud provider limits IP address capacity to 10 IP addresses per node, the total number of assignable IP addresses is only 80. To achieve the same IP address capacity in this example cloud provider, you would need to allocate 7 additional nodes. To confirm the IP capacity and subnets for any node in your public cloud environment, you can enter the oc get node <node_name> -o yaml command. The cloud.network.openshift.io/egress-ipconfig annotation includes capacity and subnet information for the node. The annotation value is an array with a single object with fields that provide the following information for the primary network interface: interface : Specifies the interface ID on AWS and Azure and the interface name on GCP. ifaddr : Specifies the subnet mask for one or both IP address families. capacity : Specifies the IP address capacity for the node. On AWS, the IP address capacity is provided per IP address family. On Azure and GCP, the IP address capacity includes both IPv4 and IPv6 addresses. Automatic attachment and detachment of egress IP addresses for traffic between nodes are available. This allows for traffic from many pods in namespaces to have a consistent source IP address to locations outside of the cluster. 
This also supports OpenShift SDN and OVN-Kubernetes, which is the default networking plugin in Red Hat OpenShift Networking in OpenShift Container Platform 4.14. Note The RHOSP egress IP address feature creates a Neutron reservation port called egressip-<IP address> . Using the same RHOSP user as the one used for the OpenShift Container Platform cluster installation, you can assign a floating IP address to this reservation port to have a predictable SNAT address for egress traffic. When an egress IP address on an RHOSP network is moved from one node to another, because of a node failover, for example, the Neutron reservation port is removed and recreated. This means that the floating IP association is lost and you need to manually reassign the floating IP address to the new reservation port. Note When an RHOSP cluster administrator assigns a floating IP to the reservation port, OpenShift Container Platform cannot delete the reservation port. The CloudPrivateIPConfig object cannot perform delete and move operations until an RHOSP cluster administrator unassigns the floating IP from the reservation port. The following examples illustrate the annotation from nodes on several public cloud providers. The annotations are indented for readability. Example cloud.network.openshift.io/egress-ipconfig annotation on AWS cloud.network.openshift.io/egress-ipconfig: [ { "interface":"eni-078d267045138e436", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ipv4":14,"ipv6":15} } ] Example cloud.network.openshift.io/egress-ipconfig annotation on GCP cloud.network.openshift.io/egress-ipconfig: [ { "interface":"nic0", "ifaddr":{"ipv4":"10.0.128.0/18"}, "capacity":{"ip":14} } ] The following sections describe the IP address capacity for supported public cloud environments for use in your capacity calculation. 25.4.1.2.1. Amazon Web Services (AWS) IP address capacity limits On AWS, constraints on IP address assignments depend on the instance type configured. For more information, see IP addresses per network interface per instance type 25.4.1.2.2. Google Cloud Platform (GCP) IP address capacity limits On GCP, the networking model implements additional node IP addresses through IP address aliasing, rather than IP address assignments. However, IP address capacity maps directly to IP aliasing capacity. The following capacity limits exist for IP aliasing assignment: Per node, the maximum number of IP aliases, both IPv4 and IPv6, is 100. Per VPC, the maximum number of IP aliases is unspecified, but OpenShift Container Platform scalability testing reveals the maximum to be approximately 15,000. For more information, see Per instance quotas and Alias IP ranges overview . 25.4.1.2.3. Microsoft Azure IP address capacity limits On Azure, the following capacity limits exist for IP address assignment: Per NIC, the maximum number of assignable IP addresses, for both IPv4 and IPv6, is 256. Per virtual network, the maximum number of assigned IP addresses cannot exceed 65,536. For more information, see Networking limits . 25.4.1.3. Limitations The following limitations apply when using egress IP addresses with the OpenShift SDN network plugin: You cannot use manually assigned and automatically assigned egress IP addresses on the same nodes. If you manually assign egress IP addresses from an IP address range, you must not make that range available for automatic IP assignment. You cannot share egress IP addresses across multiple namespaces using the OpenShift SDN egress IP address implementation. 
If you need to share IP addresses across namespaces, the OVN-Kubernetes network plugin egress IP address implementation allows you to span IP addresses across multiple namespaces. Note If you use OpenShift SDN in multitenant mode, you cannot use egress IP addresses with any namespace that is joined to another namespace by the projects that are associated with them. For example, if project1 and project2 are joined by running the oc adm pod-network join-projects --to=project1 project2 command, neither project can use an egress IP address. For more information, see BZ#1645577 . 25.4.1.4. IP address assignment approaches You can assign egress IP addresses to namespaces by setting the egressIPs parameter of the NetNamespace object. After an egress IP address is associated with a project, OpenShift SDN allows you to assign egress IP addresses to hosts in two ways: In the automatically assigned approach, an egress IP address range is assigned to a node. In the manually assigned approach, a list of one or more egress IP address is assigned to a node. Namespaces that request an egress IP address are matched with nodes that can host those egress IP addresses, and then the egress IP addresses are assigned to those nodes. If the egressIPs parameter is set on a NetNamespace object, but no node hosts that egress IP address, then egress traffic from the namespace will be dropped. High availability of nodes is automatic. If a node that hosts an egress IP address is unreachable and there are nodes that are able to host that egress IP address, then the egress IP address will move to a new node. When the unreachable node comes back online, the egress IP address automatically moves to balance egress IP addresses across nodes. 25.4.1.4.1. Considerations when using automatically assigned egress IP addresses When using the automatic assignment approach for egress IP addresses the following considerations apply: You set the egressCIDRs parameter of each node's HostSubnet resource to indicate the range of egress IP addresses that can be hosted by a node. OpenShift Container Platform sets the egressIPs parameter of the HostSubnet resource based on the IP address range you specify. If the node hosting the namespace's egress IP address is unreachable, OpenShift Container Platform will reassign the egress IP address to another node with a compatible egress IP address range. The automatic assignment approach works best for clusters installed in environments with flexibility in associating additional IP addresses with nodes. 25.4.1.4.2. Considerations when using manually assigned egress IP addresses This approach allows you to control which nodes can host an egress IP address. Note If your cluster is installed on public cloud infrastructure, you must ensure that each node that you assign egress IP addresses to has sufficient spare capacity to host the IP addresses. For more information, see "Platform considerations" in a section. When using the manual assignment approach for egress IP addresses the following considerations apply: You set the egressIPs parameter of each node's HostSubnet resource to indicate the IP addresses that can be hosted by a node. Multiple egress IP addresses per namespace are supported. If a namespace has multiple egress IP addresses and those addresses are hosted on multiple nodes, the following additional considerations apply: If a pod is on a node that is hosting an egress IP address, that pod always uses the egress IP address on the node. 
If a pod is not on a node that is hosting an egress IP address, that pod uses an egress IP address at random. 25.4.2. Configuring automatically assigned egress IP addresses for a namespace In OpenShift Container Platform you can enable automatic assignment of an egress IP address for a specific namespace across one or more nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Update the NetNamespace object with the egress IP address using the following JSON: USD oc patch netnamespace <project_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>" ] }' where: <project_name> Specifies the name of the project. <ip_address> Specifies one or more egress IP addresses for the egressIPs array. For example, to assign project1 to an IP address of 192.168.1.100 and project2 to an IP address of 192.168.1.101: USD oc patch netnamespace project1 --type=merge -p \ '{"egressIPs": ["192.168.1.100"]}' USD oc patch netnamespace project2 --type=merge -p \ '{"egressIPs": ["192.168.1.101"]}' Note Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object. Indicate which nodes can host egress IP addresses by setting the egressCIDRs parameter for each host using the following JSON: USD oc patch hostsubnet <node_name> --type=merge -p \ '{ "egressCIDRs": [ "<ip_address_range>", "<ip_address_range>" ] }' where: <node_name> Specifies a node name. <ip_address_range> Specifies an IP address range in CIDR format. You can specify more than one address range for the egressCIDRs array. For example, to set node1 and node2 to host egress IP addresses in the range 192.168.1.0 to 192.168.1.255: USD oc patch hostsubnet node1 --type=merge -p \ '{"egressCIDRs": ["192.168.1.0/24"]}' USD oc patch hostsubnet node2 --type=merge -p \ '{"egressCIDRs": ["192.168.1.0/24"]}' OpenShift Container Platform automatically assigns specific egress IP addresses to available nodes in a balanced way. In this case, it assigns the egress IP address 192.168.1.100 to node1 and the egress IP address 192.168.1.101 to node2 or vice versa. 25.4.3. Configuring manually assigned egress IP addresses for a namespace In OpenShift Container Platform you can associate one or more egress IP addresses with a namespace. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Update the NetNamespace object by specifying the following JSON object with the desired IP addresses: USD oc patch netnamespace <project_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>" ] }' where: <project_name> Specifies the name of the project. <ip_address> Specifies one or more egress IP addresses for the egressIPs array. For example, to assign the project1 project to the IP addresses 192.168.1.100 and 192.168.1.101 : USD oc patch netnamespace project1 --type=merge \ -p '{"egressIPs": ["192.168.1.100","192.168.1.101"]}' To provide high availability, set the egressIPs value to two or more IP addresses on different nodes. If multiple egress IP addresses are set, then pods use all egress IP addresses roughly equally. Note Because OpenShift SDN manages the NetNamespace object, you can make changes only by modifying the existing NetNamespace object. Do not create a new NetNamespace object. Manually assign the egress IP address to the node hosts. 
If your cluster is installed on public cloud infrastructure, you must confirm that the node has available IP address capacity. Set the egressIPs parameter on the HostSubnet object on the node host. Using the following JSON, include as many IP addresses as you want to assign to that node host: USD oc patch hostsubnet <node_name> --type=merge -p \ '{ "egressIPs": [ "<ip_address>", "<ip_address>" ] }' where: <node_name> Specifies a node name. <ip_address> Specifies an IP address. You can specify more than one IP address for the egressIPs array. For example, to specify that node1 should have the egress IPs 192.168.1.100 , 192.168.1.101 , and 192.168.1.102 : USD oc patch hostsubnet node1 --type=merge -p \ '{"egressIPs": ["192.168.1.100", "192.168.1.101", "192.168.1.102"]}' In the example, all egress traffic for project1 will be routed to the node hosting the specified egress IP, and then connected through Network Address Translation (NAT) to that IP address. 25.4.4. Additional resources If you are configuring manual egress IP address assignment, see Platform considerations for information about IP capacity planning. 25.5. Configuring an egress firewall for a project As a cluster administrator, you can create an egress firewall for a project that restricts egress traffic leaving your OpenShift Container Platform cluster. 25.5.1. How an egress firewall works in a project As a cluster administrator, you can use an egress firewall to limit the external hosts that some or all pods can access from within the cluster. An egress firewall supports the following scenarios: A pod can only connect to internal hosts and cannot initiate connections to the public internet. A pod can only connect to the public internet and cannot initiate connections to internal hosts that are outside the OpenShift Container Platform cluster. A pod cannot reach specified internal subnets or hosts outside the OpenShift Container Platform cluster. A pod can connect to only specific external hosts. For example, you can allow one project access to a specified IP range but deny the same access to a different project. Or you can restrict application developers from updating from Python pip mirrors, and force updates to come only from approved sources. Note Egress firewall does not apply to the host network namespace. Pods with host networking enabled are unaffected by egress firewall rules. You configure an egress firewall policy by creating an EgressNetworkPolicy custom resource (CR) object. The egress firewall matches network traffic that meets any of the following criteria: An IP address range in CIDR format A DNS name that resolves to an IP address Important If your egress firewall includes a deny rule for 0.0.0.0/0 , access to your OpenShift Container Platform API servers is blocked. You must either add allow rules for each IP address or use the nodeSelector type allow rule in your egress policy rules to connect to API servers. The following example illustrates the order of the egress firewall rules necessary to ensure API server access: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow # ... - to: cidrSelector: 0.0.0.0/0 3 type: Deny 1 The namespace for the egress firewall. 2 The IP address range that includes your OpenShift Container Platform API servers. 3 A global deny rule prevents access to the OpenShift Container Platform API servers. 
To find the IP address for your API servers, run oc get ep kubernetes -n default . For more information, see BZ#1988324 . Important You must have OpenShift SDN configured to use either the network policy or multitenant mode to configure an egress firewall. If you use network policy mode, an egress firewall is compatible with only one policy per namespace and will not work with projects that share a network, such as global projects. Warning Egress firewall rules do not apply to traffic that goes through routers. Any user with permission to create a Route CR object can bypass egress firewall policy rules by creating a route that points to a forbidden destination. 25.5.1.1. Limitations of an egress firewall An egress firewall has the following limitations: No project can have more than one EgressNetworkPolicy object. Important The creation of more than one EgressNetworkPolicy object is allowed, however it should not be done. When you create more than one EgressNetworkPolicy object, the following message is returned: dropping all rules . In actuality, all external traffic is dropped, which can cause security risks for your organization. A maximum of one EgressNetworkPolicy object with a maximum of 1,000 rules can be defined per project. The default project cannot use an egress firewall. When using the OpenShift SDN network plugin in multitenant mode, the following limitations apply: Global projects cannot use an egress firewall. You can make a project global by using the oc adm pod-network make-projects-global command. Projects merged by using the oc adm pod-network join-projects command cannot use an egress firewall in any of the joined projects. If you create a selectorless service and manually define endpoints or EndpointSlices that point to external IPs, traffic to the service IP might still be allowed, even if your EgressNetworkPolicy is configured to deny all egress traffic. This occurs because OpenShift SDN does not fully enforce egress network policies for these external endpoints. Consequently, this might result in unexpected access to external services. Violating any of these restrictions results in a broken egress firewall for the project. Consequently, all external network traffic is dropped, which can cause security risks for your organization. An Egress Firewall resource can be created in the kube-node-lease , kube-public , kube-system , openshift and openshift- projects. 25.5.1.2. Matching order for egress firewall policy rules The egress firewall policy rules are evaluated in the order that they are defined, from first to last. The first rule that matches an egress connection from a pod applies. Any subsequent rules are ignored for that connection. 25.5.1.3. How Domain Name Server (DNS) resolution works If you use DNS names in any of your egress firewall policy rules, proper resolution of the domain names is subject to the following restrictions: Domain name updates are polled based on a time-to-live (TTL) duration. By default, the duration is 30 seconds. When the egress firewall controller queries the local name servers for a domain name, if the response includes a TTL that is less than 30 seconds, the controller sets the duration to the returned value. If the TTL in the response is greater than 30 minutes, the controller sets the duration to 30 minutes. If the TTL is between 30 seconds and 30 minutes, the controller ignores the value and sets the duration to 30 seconds. The pod must resolve the domain from the same local name servers when necessary. 
Otherwise the IP addresses for the domain known by the egress firewall controller and the pod can be different. If the IP addresses for a hostname differ, the egress firewall might not be enforced consistently. Because the egress firewall controller and pods asynchronously poll the same local name server, the pod might obtain the updated IP address before the egress controller does, which causes a race condition. Due to this current limitation, domain name usage in EgressNetworkPolicy objects is only recommended for domains with infrequent IP address changes. Note Using DNS names in your egress firewall policy does not affect local DNS resolution through CoreDNS. However, if your egress firewall policy uses domain names, and an external DNS server handles DNS resolution for an affected pod, you must include egress firewall rules that permit access to the IP addresses of your DNS server. 25.5.2. EgressNetworkPolicy custom resource (CR) object You can define one or more rules for an egress firewall. A rule is either an Allow rule or a Deny rule, with a specification for the traffic that the rule applies to. The following YAML describes an EgressNetworkPolicy CR object: EgressNetworkPolicy object apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2 ... 1 A name for your egress firewall policy. 2 A collection of one or more egress network policy rules as described in the following section. 25.5.2.1. EgressNetworkPolicy rules The following YAML describes an egress firewall rule object. The user can select either an IP address range in CIDR format, a domain name, or use the nodeSelector to allow or deny egress traffic. The egress stanza expects an array of one or more objects. Egress policy rule stanza egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 1 The type of rule. The value must be either Allow or Deny . 2 A stanza describing an egress traffic match rule. A value for either the cidrSelector field or the dnsName field for the rule. You cannot use both fields in the same rule. 3 An IP address range in CIDR format. 4 A domain name. 25.5.2.2. Example EgressNetworkPolicy CR objects The following example defines several egress firewall policy rules: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0 1 A collection of egress firewall policy rule objects. 25.5.3. Creating an egress firewall policy object As a cluster administrator, you can create an egress firewall policy object for a project. Important If the project already has an EgressNetworkPolicy object defined, you must edit the existing policy to make changes to the egress firewall rules. Prerequisites A cluster that uses the OpenShift SDN network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Create a policy rule: Create a <policy_name>.yaml file where <policy_name> describes the egress policy rules. In the file you created, define an egress policy object. Enter the following command to create the policy object. Replace <policy_name> with the name of the policy and <project> with the project that the rule applies to. 
USD oc create -f <policy_name>.yaml -n <project> In the following example, a new EgressNetworkPolicy object is created in a project named project1 : USD oc create -f default.yaml -n project1 Example output egressnetworkpolicy.network.openshift.io/v1 created Optional: Save the <policy_name>.yaml file so that you can make changes later. 25.6. Viewing an egress firewall for a project As a cluster administrator, you can view the details of any existing egress firewall rules. 25.6.1. Viewing an EgressNetworkPolicy object You can view an EgressNetworkPolicy object in your cluster. Prerequisites A cluster using the OpenShift SDN network plugin. Install the OpenShift Command-line Interface (CLI), commonly known as oc . You must log in to the cluster. Procedure Optional: To view the names of the EgressNetworkPolicy objects defined in your cluster, enter the following command: USD oc get egressnetworkpolicy --all-namespaces To inspect a policy, enter the following command. Replace <policy_name> with the name of the policy to inspect. USD oc describe egressnetworkpolicy <policy_name> Example output Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0 25.7. Editing an egress firewall for a project As a cluster administrator, you can modify network traffic rules for an existing egress firewall. 25.7.1. Editing an EgressNetworkPolicy object As a cluster administrator, you can update the egress firewall for a project. Prerequisites A cluster using the OpenShift SDN network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressnetworkpolicy Optional: If you did not save a copy of the EgressNetworkPolicy object when you created the egress network firewall, enter the following command to create a copy. USD oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml Replace <project> with the name of the project. Replace <name> with the name of the object. Replace <filename> with the name of the file to save the YAML to. After making changes to the policy rules, enter the following command to replace the EgressNetworkPolicy object. Replace <filename> with the name of the file containing the updated EgressNetworkPolicy object. USD oc replace -f <filename>.yaml 25.8. Removing an egress firewall from a project As a cluster administrator, you can remove an egress firewall from a project to remove all restrictions on network traffic from the project that leaves the OpenShift Container Platform cluster. 25.8.1. Removing an EgressNetworkPolicy object As a cluster administrator, you can remove an egress firewall from a project. Prerequisites A cluster using the OpenShift SDN network plugin. Install the OpenShift CLI ( oc ). You must log in to the cluster as a cluster administrator. Procedure Find the name of the EgressNetworkPolicy object for the project. Replace <project> with the name of the project. USD oc get -n <project> egressnetworkpolicy Enter the following command to delete the EgressNetworkPolicy object. Replace <project> with the name of the project and <name> with the name of the object. USD oc delete -n <project> egressnetworkpolicy <name> 25.9. Considerations for the use of an egress router pod 25.9.1.
About an egress router pod The OpenShift Container Platform egress router pod redirects traffic to a specified remote server from a private source IP address that is not used for any other purpose. An egress router pod can send network traffic to servers that are set up to allow access only from specific IP addresses. Note The egress router pod is not intended for every outgoing connection. Creating large numbers of egress router pods can exceed the limits of your network hardware. For example, creating an egress router pod for every project or application could exceed the number of local MAC addresses that the network interface can handle before reverting to filtering MAC addresses in software. Important The egress router image is not compatible with Amazon AWS, Azure Cloud, or any other cloud platform that does not support layer 2 manipulations due to their incompatibility with macvlan traffic. 25.9.1.1. Egress router modes In redirect mode , an egress router pod configures iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example: USD curl <router_service_IP> <port> In HTTP proxy mode , an egress router pod runs as an HTTP proxy on port 8080 . This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. In DNS proxy mode , an egress router pod runs as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. To make use of the reserved source IP address, client pods must be modified to connect to the egress router pod rather than connecting directly to the destination IP address. This modification ensures that external destinations treat traffic as though it were coming from a known source. Redirect mode works for all services except for HTTP and HTTPS. For HTTP and HTTPS services, use HTTP proxy mode. For TCP-based services with IP addresses or domain names, use DNS proxy mode. 25.9.1.2. Egress router pod implementation The egress router pod setup is performed by an initialization container. That container runs in a privileged context so that it can configure the macvlan interface and set up iptables rules. After the initialization container finishes setting up the iptables rules, it exits. Next, the egress router pod executes the container to handle the egress router traffic. The image used varies depending on the egress router mode. The environment variables determine which addresses the egress-router image uses. The image configures the macvlan interface to use EGRESS_SOURCE as its IP address, with EGRESS_GATEWAY as the IP address for the gateway. Network Address Translation (NAT) rules are set up so that connections to the cluster IP address of the pod on any TCP or UDP port are redirected to the same port on the IP address specified by the EGRESS_DESTINATION variable. If only some of the nodes in your cluster are capable of claiming the specified source IP address and using the specified gateway, you can specify a nodeName or nodeSelector to identify which nodes are acceptable. 25.9.1.3.
Deployment considerations An egress router pod adds an additional IP address and MAC address to the primary network interface of the node. As a result, you might need to configure your hypervisor or cloud provider to allow the additional address. Red Hat OpenStack Platform (RHOSP) If you deploy OpenShift Container Platform on RHOSP, you must allow traffic from the IP and MAC addresses of the egress router pod on your OpenStack environment. If you do not allow the traffic, then communication will fail : USD openstack port set --allowed-address \ ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid> VMware vSphere If you are using VMware vSphere, see the VMware documentation for securing vSphere standard switches . View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client. Specifically, ensure that the following are enabled: MAC Address Changes Forged Transits Promiscuous Mode Operation 25.9.1.4. Failover configuration To avoid downtime, you can deploy an egress router pod with a Deployment resource, as in the following example. To create a new Service object for the example deployment, use the oc expose deployment/egress-demo-controller command. apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: "true" spec: 2 initContainers: ... containers: ... 1 Ensure that replicas is set to 1 , because only one pod can use a given egress source IP address at any time. This means that only a single copy of the router runs on a node. 2 Specify the Pod object template for the egress router pod. 25.9.2. Additional resources Deploying an egress router in redirection mode Deploying an egress router in HTTP proxy mode Deploying an egress router in DNS proxy mode 25.10. Deploying an egress router pod in redirect mode As a cluster administrator, you can deploy an egress router pod that is configured to redirect traffic to specified destination IP addresses. 25.10.1. Egress router pod specification for redirect mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in redirect mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. 
Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 External server to direct traffic to. Using this example, connections to the pod are redirected to 203.0.113.25 , with a source IP address of 192.168.12.99 . Example egress router pod specification apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: "true" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod 25.10.2. Egress destination configuration format When an egress router pod is deployed in redirect mode, you can specify redirection rules by using one or more of the following formats: <port> <protocol> <ip_address> - Incoming connections to the given <port> should be redirected to the same port on the given <ip_address> . <protocol> is either tcp or udp . <port> <protocol> <ip_address> <remote_port> - As above, except that the connection is redirected to a different <remote_port> on <ip_address> . <ip_address> - If the last line is a single IP address, then any connections on any other port will be redirected to the corresponding port on that IP address. If there is no fallback IP address then connections on other ports are rejected. In the example that follows several rules are defined: The first line redirects traffic from local port 80 to port 80 on 203.0.113.25 . The second and third lines redirect local ports 8080 and 8443 to remote ports 80 and 443 on 203.0.113.26 . The last line matches traffic for any ports not specified in the rules. Example configuration 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 25.10.3. Deploying an egress router pod in redirect mode In redirect mode , an egress router pod sets up iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the curl command. For example: USD curl <router_service_IP> <port> Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1 Your pods can now connect to this service. Their connections are redirected to the corresponding ports on the external server, using the reserved egress IP address. 25.10.4. 
Additional resources Configuring an egress router destination mappings with a ConfigMap 25.11. Deploying an egress router pod in HTTP proxy mode As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified HTTP and HTTPS-based services. 25.11.1. Egress router pod specification for HTTP mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in HTTP mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |- ... ... 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 A string or YAML multi-line string specifying how to configure the proxy. Note that this is specified as an environment variable in the HTTP proxy container, not with the other environment variables in the init container. 25.11.2. Egress destination configuration format When an egress router pod is deployed in HTTP proxy mode, you can specify redirection rules by using one or more of the following formats. Each line in the configuration specifies one group of connections to allow or deny: An IP address allows connections to that IP address, such as 192.168.1.1 . A CIDR range allows connections to that CIDR range, such as 192.168.1.0/24 . A hostname allows proxying to that host, such as www.example.com . A domain name preceded by *. allows proxying to that domain and all of its subdomains, such as *.example.com . A ! followed by any of the match expressions denies the connection instead. If the last line is * , then anything that is not explicitly denied is allowed. Otherwise, anything that is not allowed is denied. You can also use * to allow connections to all remote destinations. Example configuration !*.example.com !192.168.1.0/24 192.168.2.1 * 25.11.3. Deploying an egress router pod in HTTP proxy mode In HTTP proxy mode , an egress router pod runs as an HTTP proxy on port 8080 . This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. 
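For example, most command-line HTTP clients honor these environment variables without extra configuration. The following one-line check is a sketch only: it assumes a client pod that has curl installed, an egress router service named egress-1 as in the example later in this procedure, and that www.example.com is a destination that the proxy allows:
$ oc exec <client_pod> -- env http_proxy=http://egress-1:8080/ curl -sI http://www.example.com/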
Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1 1 Ensure the http port is set to 8080 . To configure the client pod (not the egress proxy pod) to use the HTTP proxy, set the http_proxy or https_proxy variables: apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/ ... 1 The service created in the step. Note Using the http_proxy and https_proxy environment variables is not necessary for all setups. If the above does not create a working setup, then consult the documentation for the tool or software you are running in the pod. 25.11.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 25.12. Deploying an egress router pod in DNS proxy mode As a cluster administrator, you can deploy an egress router pod configured to proxy traffic to specified DNS names and IP addresses. 25.12.1. Egress router pod specification for DNS mode Define the configuration for an egress router pod in the Pod object. The following YAML describes the fields for the configuration of an egress router pod in DNS mode: apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: "true" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- ... - name: EGRESS_DNS_PROXY_DEBUG 5 value: "1" ... 1 The annotation tells OpenShift Container Platform to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the "true" value. To have OpenShift Container Platform create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, eth1 . 2 IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the /24 suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the EGRESS_GATEWAY variable and no other hosts on the subnet. 3 Same value as the default gateway used by the node. 4 Specify a list of one or more proxy destinations. 5 Optional: Specify to output the DNS proxy log output to stdout . 25.12.2. Egress destination configuration format When the router is deployed in DNS proxy mode, you specify a list of port and destination mappings. A destination may be either an IP address or a DNS name. 
An egress router pod supports the following formats for specifying port and destination mappings: Port and remote address You can specify a source port and a destination host by using the two field format: <port> <remote_address> . The host can be an IP address or a DNS name. If a DNS name is provided, DNS resolution occurs at runtime. For a given host, the proxy connects to the specified source port on the destination host when connecting to the destination host IP address. Port and remote address pair example 80 172.16.12.11 100 example.com Port, remote address, and remote port You can specify a source port, a destination host, and a destination port by using the three field format: <port> <remote_address> <remote_port> . The three field format behaves identically to the two field version, with the exception that the destination port can be different than the source port. Port, remote address, and remote port example 8080 192.168.60.252 80 8443 web.example.com 443 25.12.3. Deploying an egress router pod in DNS proxy mode In DNS proxy mode , an egress router pod acts as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an egress router pod. Create a service for the egress router pod: Create a file named egress-router-service.yaml that contains the following YAML. Set spec.ports to the list of ports that you defined previously for the EGRESS_DNS_PROXY_DESTINATION environment variable. apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: ... type: ClusterIP selector: name: egress-dns-proxy For example: apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy To create the service, enter the following command: USD oc create -f egress-router-service.yaml Pods can now connect to this service. The connections are proxied to the corresponding ports on the external server, using the reserved egress IP address. 25.12.4. Additional resources Configuring an egress router destination mappings with a ConfigMap 25.13. Configuring an egress router pod destination list from a config map As a cluster administrator, you can define a ConfigMap object that specifies destination mappings for an egress router pod. The specific format of the configuration depends on the type of egress router pod. For details on the format, refer to the documentation for the specific egress router pod. 25.13.1. Configuring an egress router destination mappings with a config map For a large or frequently-changing set of destination mappings, you can use a config map to externally maintain the list. An advantage of this approach is that permission to edit the config map can be delegated to users without cluster-admin privileges. Because the egress router pod requires a privileged container, it is not possible for users without cluster-admin privileges to edit the pod definition directly. Note The egress router pod does not automatically update when the config map changes. You must restart the egress router pod to get updates. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file containing the mapping data for the egress router pod, as in the following example: You can put blank lines and comments into this file. 
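For reference, a destination file for an egress router in redirect mode might look like the following sketch. The file name my-egress-destination.txt matches the name used in the next step, and the port and IP mappings reuse the illustrative 203.0.113.x values from the redirect mode examples earlier in this section:
# Egress routes for Project "Test", version 3
80 tcp 203.0.113.25
8080 tcp 203.0.113.26 80
8443 tcp 203.0.113.26 443
# Fallback
203.0.113.27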
Create a ConfigMap object from the file: USD oc delete configmap egress-routes --ignore-not-found USD oc create configmap egress-routes \ --from-file=destination=my-egress-destination.txt In the command, the egress-routes value is the name of the ConfigMap object to create and my-egress-destination.txt is the name of the file that the data is read from. Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: egress-routes data: destination: | # Egress routes for Project "Test", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27 Create an egress router pod definition and specify the configMapKeyRef stanza for the EGRESS_DESTINATION field in the environment stanza: ... env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination ... 25.13.2. Additional resources Redirect mode HTTP proxy mode DNS proxy mode 25.14. Enabling multicast for a project 25.14.1. About multicast With IP multicast, data is broadcast to many IP addresses simultaneously. Important At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution. By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications to the exemption of multicast from network policies before enabling it. Multicast traffic between OpenShift Container Platform pods is disabled by default. If you are using the OpenShift SDN network plugin, you can enable multicast on a per-project basis. When using the OpenShift SDN network plugin in networkpolicy isolation mode: Multicast packets sent by a pod will be delivered to all other pods in the project, regardless of NetworkPolicy objects. Pods might be able to communicate over multicast even when they cannot communicate over unicast. Multicast packets sent by a pod in one project will never be delivered to pods in any other project, even if there are NetworkPolicy objects that allow communication between the projects. When using the OpenShift SDN network plugin in multitenant isolation mode: Multicast packets sent by a pod will be delivered to all other pods in the project. Multicast packets sent by a pod in one project will be delivered to pods in other projects only if each project is joined together and multicast is enabled in each joined project. 25.14.2. Enabling multicast between pods You can enable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for. USD oc annotate netnamespace <namespace> \ netnamespace.network.openshift.io/multicast-enabled=true Verification To verify that multicast is enabled for a project, complete the following procedure: Change your current project to the project that you enabled multicast for. Replace <project> with the project name. 
USD oc project <project> Create a pod to act as a multicast receiver: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: ["/bin/sh", "-c"] args: ["dnf -y install socat hostname && sleep inf"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF Create a pod to act as a multicast sender: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: ["/bin/sh", "-c"] args: ["dnf -y install socat && sleep inf"] EOF In a new terminal window or tab, start the multicast listener. Get the IP address for the Pod: USD POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}') Start the multicast listener by entering the following command: USD oc exec mlistener -i -t -- \ socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname Start the multicast transmitter. Get the pod network IP address range: USD CIDR=USD(oc get Network.config.openshift.io cluster \ -o jsonpath='{.status.clusterNetwork[0].cidr}') To send a multicast message, enter the following command: USD oc exec msender -i -t -- \ /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64" If multicast is working, the command returns the following output: mlistener 25.15. Disabling multicast for a project 25.15.1. Disabling multicast between pods You can disable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Disable multicast by running the following command: USD oc annotate netnamespace <namespace> \ 1 netnamespace.network.openshift.io/multicast-enabled- 1 The namespace for the project you want to disable multicast for. 25.16. Configuring network isolation using OpenShift SDN When your cluster is configured to use the multitenant isolation mode for the OpenShift SDN network plugin, each project is isolated by default. Network traffic is not allowed between pods or services in different projects in multitenant isolation mode. You can change the behavior of multitenant isolation for a project in two ways: You can join one or more projects, allowing network traffic between pods and services in different projects. You can disable network isolation for a project. It will be globally accessible, accepting network traffic from pods and services in all other projects. A globally accessible project can access pods and services in all other projects. 25.16.1. Prerequisites You must have a cluster configured to use the OpenShift SDN network plugin in multitenant isolation mode. 25.16.2. Joining projects You can join two or more projects to allow network traffic between pods and services in different projects. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Use the following command to join projects to an existing project network: USD oc adm pod-network join-projects --to=<project1> <project2> <project3> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 
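For example, to join every project that carries a hypothetical team=dev label to the network of project1 , a command along these lines can be used (the label is an assumption for illustration only):

$ oc adm pod-network join-projects --to=project1 --selector='team=dev'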
Optional: Run the following command to view the pod networks that you have joined together: USD oc get netnamespaces Projects in the same pod-network have the same network ID in the NETID column. 25.16.3. Isolating a project You can isolate a project so that pods and services in other projects cannot access its pods and services. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure To isolate the projects in the cluster, run the following command: USD oc adm pod-network isolate-projects <project1> <project2> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 25.16.4. Disabling network isolation for a project You can disable network isolation for a project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin role. Procedure Run the following command for the project: USD oc adm pod-network make-projects-global <project1> <project2> Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option to specify projects based upon an associated label. 25.17. Configuring kube-proxy The Kubernetes network proxy (kube-proxy) runs on each node and is managed by the Cluster Network Operator (CNO). kube-proxy maintains network rules for forwarding connections for endpoints associated with services. 25.17.1. About iptables rules synchronization The synchronization period determines how frequently the Kubernetes network proxy (kube-proxy) syncs the iptables rules on a node. A sync begins when either of the following events occurs: An event occurs, such as service or endpoint is added to or removed from the cluster. The time since the last sync exceeds the sync period defined for kube-proxy. 25.17.2. kube-proxy configuration parameters You can modify the following kubeProxyConfig parameters. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. Table 25.3. Parameters Parameter Description Values Default iptablesSyncPeriod The refresh period for iptables rules. A time interval, such as 30s or 2m . Valid suffixes include s , m , and h and are described in the Go time package documentation. 30s proxyArguments.iptables-min-sync-period The minimum duration before refreshing iptables rules. This parameter ensures that the refresh does not happen too frequently. By default, a refresh starts as soon as a change that affects iptables rules occurs. A time interval, such as 30s or 2m . Valid suffixes include s , m , and h and are described in the Go time package 0s 25.17.3. Modifying the kube-proxy configuration You can modify the Kubernetes network proxy configuration for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to a running cluster with the cluster-admin role. Procedure Edit the Network.operator.openshift.io custom resource (CR) by running the following command: USD oc edit network.operator.openshift.io cluster Modify the kubeProxyConfig parameter in the CR with your changes to the kube-proxy configuration, such as in the following example CR: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: ["30s"] Save the file and exit the text editor. 
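If you prefer to apply the same change without opening an editor, an equivalent merge patch can be used instead. The following is a sketch that reuses the example values shown above:

$ oc patch network.operator.openshift.io cluster --type=merge \
    -p '{"spec":{"kubeProxyConfig":{"iptablesSyncPeriod":"30s","proxyArguments":{"iptables-min-sync-period":["30s"]}}}}'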
The syntax is validated by the oc command when you save the file and exit the editor. If your modifications contain a syntax error, the editor opens the file and displays an error message. Enter the following command to confirm the configuration update: USD oc get networks.operator.openshift.io -o yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List Optional: Enter the following command to confirm that the Cluster Network Operator accepted the configuration change: USD oc get clusteroperator network Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m The AVAILABLE field is True when the configuration update is applied successfully.
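To inspect only the kube-proxy settings rather than the full object, a jsonpath query such as the following can be used (a sketch; the output is empty if no kubeProxyConfig has been set):

$ oc get network.operator.openshift.io cluster -o jsonpath='{.spec.kubeProxyConfig}{"\n"}'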
[ "oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": true } }'", "oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\":{ \"paused\": true } }'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'", "oc get Network.config cluster -o jsonpath='{.status.migration}'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\" } } }'", "oc get Network.config cluster -o jsonpath='{.status.migration.networkType}'", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OpenShiftSDN\" } }'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":<mtu>, \"vxlanPort\":<port> }}}}'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":1200 }}}}'", "#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done", "#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done", "oc -n openshift-multus rollout status daemonset/multus", "Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out", "oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": false } }'", "oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\": { \"paused\": false } }'", "oc describe node | egrep \"hostname|machineconfig\"", "kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done", "oc get machineconfig <config_name> -o yaml", "oc get Network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'", "oc get nodes", "oc get pod -n openshift-machine-config-operator", "NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h", "oc logs <pod> -n openshift-machine-config-operator", "oc get 
pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"ovnKubernetesConfig\":null } } }'", "oc delete namespace openshift-ovn-kubernetes", "oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml", "#!/bin/bash if [ -n \"USDOVN_SDN_MIGRATION_TIMEOUT\" ] && [ \"USDOVN_SDN_MIGRATION_TIMEOUT\" = \"0s\" ]; then unset OVN_SDN_MIGRATION_TIMEOUT fi #loops the timeout command of the script to repeatedly check the cluster Operators until all are available. co_timeout=USD{OVN_SDN_MIGRATION_TIMEOUT:-1200s} timeout \"USDco_timeout\" bash <<EOT until oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && oc wait co --all --for='condition=DEGRADED=False' --timeout=10s; do sleep 10 echo \"Some ClusterOperators Degraded=False,Progressing=True,or Available=False\"; done EOT", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{\"spec\":{\"migration\":null}}'", "oc get nncp", "NAME STATUS REASON bondmaster0 Available SuccessfullyConfigured", "oc delete nncp <nncp_manifest_filename>", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\" } } }'", "oc get mcp", "oc get co", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":<mtu>, \"genevePort\":<port>, \"v4InternalSubnet\":\"<ipv4_subnet>\" }}}}'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":1200 }}}}'", "oc get mcp", "oc describe node | egrep \"hostname|machineconfig\"", "kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done", "oc get machineconfig <config_name> -o yaml | grep ExecStart", "ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes", "oc get pod -n openshift-machine-config-operator", "NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h", "oc logs <pod> -n openshift-machine-config-operator", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OVNKubernetes\" } }'", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"clusterNetwork\": [ { \"cidr\": \"<cidr>\", 
\"hostPrefix\": <prefix> } ], \"networkType\": \"OVNKubernetes\" } }'", "oc -n openshift-multus rollout status daemonset/multus", "Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out", "#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done", "#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done", "oc get network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'", "oc get nodes", "oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'", "oc get co", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"openshiftSDNConfig\": null } } }'", "oc delete namespace openshift-sdn", "IP capacity = public cloud default capacity - sum(current IP assignments)", "cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"eni-078d267045138e436\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ipv4\":14,\"ipv6\":15} } ]", "cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"nic0\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ip\":14} } ]", "oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'", "oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\"]}' oc patch netnamespace project2 --type=merge -p '{\"egressIPs\": [\"192.168.1.101\"]}'", "oc patch hostsubnet <node_name> --type=merge -p '{ \"egressCIDRs\": [ \"<ip_address_range>\", \"<ip_address_range>\" ] }'", "oc patch hostsubnet node1 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}' oc patch hostsubnet node2 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}'", "oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'", "oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\",\"192.168.1.101\"]}'", "oc patch hostsubnet <node_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\", \"<ip_address>\" ] }'", "oc patch hostsubnet node1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\", \"192.168.1.101\", \"192.168.1.102\"]}'", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2", "egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0", "oc create -f <policy_name>.yaml -n <project>", "oc create -f default.yaml -n 
project1", "egressnetworkpolicy.network.openshift.io/v1 created", "oc get egressnetworkpolicy --all-namespaces", "oc describe egressnetworkpolicy <policy_name>", "Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0", "oc get -n <project> egressnetworkpolicy", "oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml", "oc replace -f <filename>.yaml", "oc get -n <project> egressnetworkpolicy", "oc delete -n <project> egressnetworkpolicy <name>", "curl <router_service_IP> <port>", "openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>", "apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: 2 initContainers: containers:", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod", "apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod", "80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27", "curl <router_service_IP> <port>", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |-", "!*.example.com !192.168.1.0/24 192.168.2.1 *", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1", "apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: 
pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- - name: EGRESS_DNS_PROXY_DEBUG 5 value: \"1\"", "80 172.16.12.11 100 example.com", "8080 192.168.60.252 80 8443 web.example.com 443", "apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: type: ClusterIP selector: name: egress-dns-proxy", "apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy", "oc create -f egress-router-service.yaml", "Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 Fallback 203.0.113.27", "oc delete configmap egress-routes --ignore-not-found", "oc create configmap egress-routes --from-file=destination=my-egress-destination.txt", "apiVersion: v1 kind: ConfigMap metadata: name: egress-routes data: destination: | # Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27", "env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination", "oc annotate netnamespace <namespace> netnamespace.network.openshift.io/multicast-enabled=true", "oc project <project>", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF", "POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')", "oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname", "CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')", "oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"", "mlistener", "oc annotate netnamespace <namespace> \\ 1 netnamespace.network.openshift.io/multicast-enabled-", "oc adm pod-network join-projects --to=<project1> <project2> <project3>", "oc get netnamespaces", "oc adm pod-network isolate-projects <project1> <project2>", "oc adm pod-network make-projects-global <project1> <project2>", "oc edit network.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: [\"30s\"]", "oc get networks.operator.openshift.io -o yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 
23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List", "oc get clusteroperator network", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/openshift-sdn-network-plugin
Chapter 6. Designing the Directory Topology
Chapter 6. Designing the Directory Topology Chapter 4, Designing the Directory Tree covers how the directory service stores entries. Because Red Hat Directory Server can store a large number of entries, it is possible to distribute directory entries across more than one server. The directory's topology describes how the directory tree is divided among multiple physical Directory Servers and how these servers link with one another. This chapter describes planning the topology of the directory service. 6.1. Topology Overview Directory Server can support a distributed directory , where the directory tree (designed in Chapter 4, Designing the Directory Tree ) is spread across multiple physical Directory Servers. The way the directory is divided across those servers helps accomplish the following: Achieve the best possible performance for directory-enabled applications. Increase the availability of the directory service. Improve the management of the directory service. The database is the basic unit for jobs such as replication, performing backups, and restoring data. A single directory can be divided into manageable pieces and assigned to separate databases. These databases can then be distributed between a number of servers, reducing the workload for each server. More than one database can be located on a single server. For example, one server might contain three different databases. When the directory tree is divided across several databases, each database contains a portion of the directory tree, called a suffix . For example, one database can be used to store only entries in the ou=people,dc=example,dc=com suffix, or branch, of the directory tree. When the directory is divided between several servers, each server is responsible for only a part of the directory tree. The distributed directory service works similarly to the Domain Name Service (DNS), which assigns each portion of the DNS namespace to a particular DNS server. Likewise, the directory namespace can be distributed across servers while maintaining a directory service that, from a client's point of view, appears to be a single directory tree. The Directory Server also provides knowledge references , mechanisms for linking directory data stored in different databases. Directory Server includes two types of knowledge references; referrals and chaining . The remainder of this chapter describes databases and knowledge references, explains the differences between the two types of knowledge references, and describes how to design indexes to improve the performance of the databases.
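As a concrete illustration of assigning a suffix to its own database, the following dsconf sketch creates a back end for the ou=people,dc=example,dc=com suffix mentioned above. The instance URL and the back-end name are assumptions for the example and should be adapted to your deployment:

$ dsconf -D "cn=Directory Manager" ldap://server.example.com backend create \
    --suffix "ou=people,dc=example,dc=com" \
    --be-name peopleRoot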
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_the_directory_topology
E.3.9. /proc/sys/
E.3.9. /proc/sys/ The /proc/sys/ directory is different from others in /proc/ because it not only provides information about the system but also allows the system administrator to immediately enable and disable kernel features. Warning Use caution when changing settings on a production system using the various files in the /proc/sys/ directory. Changing the wrong setting may render the kernel unstable, requiring a system reboot. For this reason, be sure the options are valid for that file before attempting to change any value in /proc/sys/ . A good way to determine if a particular file can be configured, or if it is only designed to provide information, is to list it with the -l option at the shell prompt. If the file is writable, it may be used to configure the kernel. For example, a partial listing of /proc/sys/fs looks like the following: In this listing, the files dir-notify-enable and file-max can be written to and, therefore, can be used to configure the kernel. The other files only provide feedback on current settings. Changing a value within a /proc/sys/ file is done by echoing the new value into the file. For example, to enable the System Request Key on a running kernel, type the command: This changes the value for sysrq from 0 (off) to 1 (on). A few /proc/sys/ configuration files contain more than one value. To correctly send new values to them, place a space character between each value passed with the echo command, such as is done in this example: Note Any configuration changes made using the echo command disappear when the system is restarted. To make configuration changes take effect after the system is rebooted, see Section E.4, "Using the sysctl Command" . The /proc/sys/ directory contains several subdirectories controlling different aspects of a running kernel. E.3.9.1. /proc/sys/dev/ This directory provides parameters for particular devices on the system. Most systems have at least two directories, cdrom/ and raid/ . Customized kernels can have other directories, such as parport/ , which provides the ability to share one parallel port between multiple device drivers. The cdrom/ directory contains a file called info , which reveals a number of important CD-ROM parameters: This file can be quickly scanned to discover the qualities of an unknown CD-ROM. If multiple CD-ROMs are available on a system, each device is given its own column of information. Various files in /proc/sys/dev/cdrom , such as autoclose and checkmedia , can be used to control the system's CD-ROM. Use the echo command to enable or disable these features. If RAID support is compiled into the kernel, a /proc/sys/dev/raid/ directory becomes available with at least two files in it: speed_limit_min and speed_limit_max . These settings determine the acceleration of RAID devices for I/O intensive tasks, such as resyncing the disks. E.3.9.2. /proc/sys/fs/ This directory contains an array of options and information concerning various aspects of the file system, including quota, file handle, inode, and dentry information. The binfmt_misc/ directory is used to provide kernel support for miscellaneous binary formats. The important files in /proc/sys/fs/ include: dentry-state - Provides the status of the directory cache. The file looks similar to the following: The first number reveals the total number of directory cache entries, while the second number displays the number of unused entries. 
The third number tells the number of seconds between when a directory has been freed and when it can be reclaimed, and the fourth measures the pages currently requested by the system. The last two numbers are not used and display only zeros. file-max - Lists the maximum number of file handles that the kernel allocates. Raising the value in this file can resolve errors caused by a lack of available file handles. file-nr - Lists the number of allocated file handles, used file handles, and the maximum number of file handles. overflowgid and overflowuid - Defines the fixed group ID and user ID, respectively, for use with file systems that only support 16-bit group and user IDs. E.3.9.3. /proc/sys/kernel/ This directory contains a variety of different configuration files that directly affect the operation of the kernel. Some of the most important files include: acct - Controls the suspension of process accounting based on the percentage of free space available on the file system containing the log. By default, the file looks like the following: The first value dictates the percentage of free space required for logging to resume, while the second value sets the threshold percentage of free space when logging is suspended. The third value sets the interval, in seconds, that the kernel polls the file system to see if logging should be suspended or resumed. ctrl-alt-del - Controls whether Ctrl + Alt + Delete gracefully restarts the computer using init ( 0 ) or forces an immediate reboot without syncing the dirty buffers to disk ( 1 ). domainname - Configures the system domain name, such as example.com . exec-shield - Configures the Exec Shield feature of the kernel. Exec Shield provides protection against certain types of buffer overflow attacks. There are two possible values for this virtual file: 0 - Disables Exec Shield. 1 - Enables Exec Shield. This is the default value. Important If a system is running security-sensitive applications that were started while Exec Shield was disabled, these applications must be restarted when Exec Shield is enabled in order for Exec Shield to take effect. hostname - Configures the system host name, such as www.example.com . hotplug - Configures the utility to be used when a configuration change is detected by the system. This is primarily used with USB and Cardbus PCI. The default value of /sbin/hotplug should not be changed unless testing a new program to fulfill this role. modprobe - Sets the location of the program used to load kernel modules. The default value is /sbin/modprobe which means kmod calls it to load the module when a kernel thread calls kmod . msgmax - Sets the maximum size of any message sent from one process to another and is set to 8192 bytes by default. Be careful when raising this value, as queued messages between processes are stored in non-swappable kernel memory. Any increase in msgmax would increase RAM requirements for the system. msgmnb - Sets the maximum number of bytes in a single message queue. The default is 16384 . msgmni - Sets the maximum number of message queue identifiers. The default is 4008 . osrelease - Lists the Linux kernel release number. This file can only be altered by changing the kernel source and recompiling. ostype - Displays the type of operating system. By default, this file is set to Linux , and this value can only be changed by changing the kernel source and recompiling. 
overflowgid and overflowuid - Defines the fixed group ID and user ID, respectively, for use with system calls on architectures that only support 16-bit group and user IDs. panic - Defines the number of seconds the kernel postpones rebooting when the system experiences a kernel panic. By default, the value is set to 0 , which disables automatic rebooting after a panic. printk - This file controls a variety of settings related to printing or logging error messages. Each error message reported by the kernel has a loglevel associated with it that defines the importance of the message. The loglevel values break down in this order: 0 - Kernel emergency. The system is unusable. 1 - Kernel alert. Action must be taken immediately. 2 - Condition of the kernel is considered critical. 3 - General kernel error condition. 4 - General kernel warning condition. 5 - Kernel notice of a normal but significant condition. 6 - Kernel informational message. 7 - Kernel debug-level messages. Four values are found in the printk file: Each of these values defines a different rule for dealing with error messages. The first value, called the console loglevel , defines the lowest priority of messages printed to the console. (Note that, the lower the priority, the higher the loglevel number.) The second value sets the default loglevel for messages without an explicit loglevel attached to them. The third value sets the lowest possible loglevel configuration for the console loglevel. The last value sets the default value for the console loglevel. random/ directory - Lists a number of values related to generating random numbers for the kernel. sem - Configures semaphore settings within the kernel. A semaphore is a System V IPC object that is used to control utilization of a particular process. shmall - Sets the total amount of shared memory that can be used at one time on the system, in bytes. By default, this value is 2097152 . shmmax - Sets the largest shared memory segment size allowed by the kernel. By default, this value is 33554432 . However, the kernel supports much larger values than this. shmmni - Sets the maximum number of shared memory segments for the whole system. By default, this value is 4096 . sysrq - Activates the System Request Key, if this value is set to anything other than zero ( 0 ), the default. The System Request Key allows immediate input to the kernel through simple key combinations. For example, the System Request Key can be used to immediately shut down or restart a system, sync all mounted file systems, or dump important information to the console. To initiate a System Request Key, type Alt + SysRq + system request code . Replace system request code with one of the following system request codes: r - Disables raw mode for the keyboard and sets it to XLATE (a limited keyboard mode which does not recognize modifiers such as Alt , Ctrl , or Shift for all keys). k - Kills all processes active in a virtual console. Also called Secure Access Key ( SAK ), it is often used to verify that the login prompt is spawned from init and not a trojan copy designed to capture user names and passwords. b - Reboots the kernel without first unmounting file systems or syncing disks attached to the system. c - Crashes the system without first unmounting file systems or syncing disks attached to the system. o - Shuts off the system. s - Attempts to sync disks attached to the system. u - Attempts to unmount and remount all file systems as read-only. p - Outputs all flags and registers to the console. 
t - Outputs a list of processes to the console. m - Outputs memory statistics to the console. 0 through 9 - Sets the log level for the console. e - Kills all processes except init using SIGTERM. i - Kills all processes except init using SIGKILL. l - Kills all processes using SIGKILL (including init ). The system is unusable after issuing this System Request Key code. h - Displays help text. This feature is most beneficial when using a development kernel or when experiencing system freezes. Warning The System Request Key feature is considered a security risk because an unattended console provides an attacker with access to the system. For this reason, it is turned off by default. See /usr/share/doc/kernel-doc- kernel_version /Documentation/sysrq.txt for more information about the System Request Key. tainted - Indicates whether a non-GPL module is loaded. 0 - No non-GPL modules are loaded. 1 - At least one module without a GPL license (including modules with no license) is loaded. 2 - At least one module was force-loaded with the command insmod -f . threads-max - Sets the maximum number of threads to be used by the kernel, with a default value of 2048 . version - Displays the date and time the kernel was last compiled. The first field in this file, such as #3 , relates to the number of times a kernel was built from the source base. E.3.9.4. /proc/sys/net/ This directory contains subdirectories concerning various networking topics. Various configurations at the time of kernel compilation make different directories available here, such as ethernet/ , ipv4/ , ipx/ , and ipv6/ . By altering the files within these directories, system administrators are able to adjust the network configuration on a running system. Given the wide variety of possible networking options available with Linux, only the most common /proc/sys/net/ directories are discussed. The /proc/sys/net/core/ directory contains a variety of settings that control the interaction between the kernel and networking layers. The most important of these files are: message_burst - Sets the maximum number of new warning messages to be written to the kernel log in the time interval defined by message_cost . The default value of this file is 10 . In combination with message_cost , this setting is used to enforce a rate limit on warning messages written to the kernel log from the networking code and mitigate Denial of Service ( DoS ) attacks. The idea of a DoS attack is to bombard the targeted system with requests that generate errors and either fill up disk partitions with log files or require all of the system's resources to handle the error logging. The settings in message_burst and message_cost are designed to be modified based on the system's acceptable risk versus the need for comprehensive logging. For example, by setting message_burst to 10 and message_cost to 5, you allow the system to write the maximum number of 10 messages every 5 seconds. message_cost - Sets a cost on every warning message by defining a time interval for message_burst . The higher the value is, the more likely the warning message is ignored. The default value of this file is 5 . netdev_max_backlog - Sets the maximum number of packets allowed to queue when a particular interface receives packets faster than the kernel can process them. The default value for this file is 1000 . optmem_max - Configures the maximum ancillary buffer size allowed per socket. rmem_default - Sets the receive socket buffer default size in bytes. 
rmem_max - Sets the receive socket buffer maximum size in bytes. wmem_default - Sets the send socket buffer default size in bytes. wmem_max - Sets the send socket buffer maximum size in bytes. The /proc/sys/net/ipv4/ directory contains additional networking settings. Many of these settings, used in conjunction with one another, are useful in preventing attacks on the system or when using the system to act as a router. Warning An erroneous change to these files may affect remote connectivity to the system. The following is a list of some of the more important files within the /proc/sys/net/ipv4/ directory: icmp_echo_ignore_all and icmp_echo_ignore_broadcasts - Allows the kernel to ignore ICMP ECHO packets from every host or only those originating from broadcast and multicast addresses, respectively. A value of 0 allows the kernel to respond, while a value of 1 ignores the packets. ip_default_ttl - Sets the default Time To Live (TTL) , which limits the number of hops a packet may make before reaching its destination. Increasing this value can diminish system performance. ip_forward - Permits interfaces on the system to forward packets. By default, this file is set to 0 . Setting this file to 1 enables network packet forwarding. ip_local_port_range - Specifies the range of ports to be used by TCP or UDP when a local port is needed. The first number is the lowest port to be used and the second number specifies the highest port. Any systems that expect to require more ports than the default 1024 to 4999 should use a range from 32768 to 61000. tcp_syn_retries - Provides a limit on the number of times the system re-transmits a SYN packet when attempting to make a connection. tcp_retries1 - Sets the number of permitted re-transmissions attempting to answer an incoming connection. Default of 3 . tcp_retries2 - Sets the number of permitted re-transmissions of TCP packets. Default of 15 . The /usr/share/doc/kernel-doc- kernel_version /Documentation/networking/ip-sysctl.txt file contains a list of files and options available in the /proc/sys/net/ipv4/ and /proc/sys/net/ipv6/ directories. Use the sysctl -a command to list the parameters in the sysctl key format. A number of other directories exist within the /proc/sys/net/ipv4/ directory and each covers a different aspect of the network stack. The /proc/sys/net/ipv4/conf/ directory allows each system interface to be configured in different ways, including the use of default settings for unconfigured devices (in the /proc/sys/net/ipv4/conf/default/ subdirectory) and settings that override all special configurations (in the /proc/sys/net/ipv4/conf/all/ subdirectory). Important Red Hat Enterprise Linux 6 defaults to strict reverse path forwarding . Before changing the setting in the rp_filter file, see the entry on Reverse Path Forwarding in the Red Hat Enterprise Linux 6 Security Guide and The Red Hat Knowledgebase article about rp_filter . The /proc/sys/net/ipv4/neigh/ directory contains settings for communicating with a host directly connected to the system (called a network neighbor) and also contains different settings for systems more than one hop away. Routing over IPV4 also has its own directory, /proc/sys/net/ipv4/route/ . Unlike conf/ and neigh/ , the /proc/sys/net/ipv4/route/ directory contains specifications that apply to routing with any interfaces on the system. Many of these settings, such as max_size , max_delay , and min_delay , relate to controlling the size of the routing cache. 
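For example, the routing-related tunables that are present on a given kernel can be inspected directly; the exact set of files varies by kernel version, so treat the following as a sketch:

$ ls /proc/sys/net/ipv4/route/
$ cat /proc/sys/net/ipv4/route/max_size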
To clear the routing cache, write any value to the flush file. Additional information about these directories and the possible values for their configuration files can be found in: E.3.9.5. /proc/sys/vm/ This directory facilitates the configuration of the Linux kernel's virtual memory (VM) subsystem. The kernel makes extensive and intelligent use of virtual memory, which is commonly referred to as swap space. The following files are commonly found in the /proc/sys/vm/ directory: block_dump - Configures block I/O debugging when enabled. All read/write and block dirtying operations done to files are logged accordingly. This can be useful if diagnosing disk spin up and spin downs for laptop battery conservation. All output when block_dump is enabled can be retrieved via dmesg . The default value is 0 . Note If block_dump is enabled at the same time as kernel debugging, it is prudent to stop the klogd daemon, as it generates erroneous disk activity caused by block_dump . dirty_background_ratio - Starts background writeback of dirty data at this percentage of total memory, via a pdflush daemon. The default value is 10 . dirty_expire_centisecs - Defines when dirty in-memory data is old enough to be eligible for writeout. Data which has been dirty in-memory for longer than this interval is written out time a pdflush daemon wakes up. The default value is 3000 , expressed in hundredths of a second. dirty_ratio - Starts active writeback of dirty data at this percentage of total memory for the generator of dirty data, via pdflush. The default value is 20 . dirty_writeback_centisecs - Defines the interval between pdflush daemon wakeups, which periodically writes dirty in-memory data out to disk. The default value is 500 , expressed in hundredths of a second. laptop_mode - Minimizes the number of times that a hard disk needs to spin up by keeping the disk spun down for as long as possible, therefore conserving battery power on laptops. This increases efficiency by combining all future I/O processes together, reducing the frequency of spin ups. The default value is 0 , but is automatically enabled in case a battery on a laptop is used. This value is controlled automatically by the acpid daemon once a user is notified battery power is enabled. No user modifications or interactions are necessary if the laptop supports the ACPI (Advanced Configuration and Power Interface) specification. For more information, see the following installed documentation: /usr/share/doc/kernel-doc- kernel_version /Documentation/laptop-mode.txt max_map_count - Configures the maximum number of memory map areas a process may have. In most cases, the default value of 65536 is appropriate. min_free_kbytes - Forces the Linux VM (virtual memory manager) to keep a minimum number of kilobytes free. The VM uses this number to compute a pages_min value for each lowmem zone in the system. The default value is in respect to the total memory on the machine. nr_hugepages - Indicates the current number of configured hugetlb pages in the kernel. For more information, see the following installed documentation: /usr/share/doc/kernel-doc- kernel_version /Documentation/vm/hugetlbpage.txt nr_pdflush_threads - Indicates the number of pdflush daemons that are currently running. This file is read-only, and should not be changed by the user. Under heavy I/O loads, the default value of two is increased by the kernel. overcommit_memory - Configures the conditions under which a large memory request is accepted or denied. 
The following three modes are available: 0 - The kernel performs heuristic memory over commit handling by estimating the amount of memory available and failing requests that are blatantly invalid. Unfortunately, since memory is allocated using a heuristic rather than a precise algorithm, this setting can sometimes allow available memory on the system to be overloaded. This is the default setting. 1 - The kernel performs no memory over commit handling. Under this setting, the potential for memory overload is increased, but so is performance for memory intensive tasks (such as those executed by some scientific software). 2 - The kernel fails any request for memory that would cause the total address space to exceed the sum of the allocated swap space and the percentage of physical RAM specified in /proc/sys/vm/overcommit_ratio . This setting is best for those who desire less risk of memory overcommitment. Note This setting is only recommended for systems with swap areas larger than physical memory. overcommit_ratio - Specifies the percentage of physical RAM considered when /proc/sys/vm/overcommit_memory is set to 2 . The default value is 50 . page-cluster - Sets the number of pages read in a single attempt. The default value of 3 , which actually relates to 16 pages, is appropriate for most systems. swappiness - Determines how much a machine should swap. The higher the value, the more swapping occurs. The default value, as a percentage, is set to 60 . All kernel-based documentation can be found in the following locally installed location: /usr/share/doc/kernel-doc- kernel_version /Documentation/ , which contains additional information.
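For example, to make a system less eager to swap, the swappiness value can be lowered either by echoing a value into the file or with the sysctl command. Both changes shown below are lost at reboot unless they are also added to /etc/sysctl.conf as described in Section E.4, "Using the sysctl Command"; the value 10 is only an illustration, and an appropriate setting depends on the workload:

$ echo 10 > /proc/sys/vm/swappiness
$ sysctl -w vm.swappiness=10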
[ "-r--r--r-- 1 root root 0 May 10 16:14 dentry-state -rw-r--r-- 1 root root 0 May 10 16:14 dir-notify-enable -rw-r--r-- 1 root root 0 May 10 16:14 file-max -r--r--r-- 1 root root 0 May 10 16:14 file-nr", "echo 1 > /proc/sys/kernel/sysrq", "echo 4 2 45 > /proc/sys/kernel/acct", "CD-ROM information, Id: cdrom.c 3.20 2003/12/17 drive name: hdc drive speed: 48 drive # of slots: 1 Can close tray: 1 Can open tray: 1 Can lock tray: 1 Can change speed: 1 Can select disk: 0 Can read multisession: 1 Can read MCN: 1 Reports media changed: 1 Can play audio: 1 Can write CD-R: 0 Can write CD-RW: 0 Can read DVD: 0 Can write DVD-R: 0 Can write DVD-RAM: 0 Can read MRW: 0 Can write MRW: 0 Can write RAM: 0", "57411 52939 45 0 0 0", "4 2 30", "6 4 1 7", "/usr/share/doc/kernel-doc- kernel_version /Documentation/filesystems/proc.txt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-dir-sys
5.5. Adding and Removing Tags to and from Objects
5.5. Adding and Removing Tags to and from Objects You can assign tags to and remove tags from hosts, virtual machines, and users. Adding and Removing Tags to and from Objects Select the object(s) you want to tag or untag. Click More Actions , then click Assign Tags . Select the check box to assign a tag to the object, or clear the check box to detach the tag from the object. Click OK . The specified tag is now added or removed as a custom property of the selected object(s).
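For scripted environments, the same tag assignment can typically be made through the REST API's tags sub-collection on the object. The following curl sketch assumes an engine FQDN, a VM ID, a tag named production , and admin credentials, all of which are placeholders; verify the request against the REST API Guide for your version:

$ curl -k -u admin@internal:password \
    -H "Content-Type: application/xml" \
    -d "<tag><name>production</name></tag>" \
    https://rhvm.example.com/ovirt-engine/api/vms/<vm_id>/tags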
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/adding_and_removing_tags
Chapter 56. Guided rule templates
Chapter 56. Guided rule templates Guided rule templates are business rule structures with placeholder values (template keys) that are interchanged with actual values defined in separate data tables. Each row of values defined in the corresponding data table for that template results in a rule. Guided rule templates are ideal when many rules have the same conditions, actions, and other attributes but differ in values of facts or constraints. In such cases, instead of creating many similar guided rules and defining values in each rule, you can create a guided rule template with the rule structure that applies to each rule and then define only the differing values in the data table. The guided rule templates designer provides fields and options for acceptable template input based on the data objects for the rule template being defined, and a corresponding data table where you add template key values. After you create your guided rule template and add values in the corresponding data table, the rules you defined are compiled into Drools Rule Language (DRL) rules as with all other rule assets. All data objects related to a guided rule template must be in the same project package as the guided rule template. Assets in the same package are imported by default. After you create the necessary data objects and the guided rule template, you can use the Data Objects tab of the guided rule templates designer to verify that all required data objects are listed or to import other existing data objects by adding a New item .
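As an illustration of how template keys expand, consider a hypothetical template whose condition uses a key named $minScore and whose data table contains a row with the value 700; the fact type and field names below are invented for the example and are not part of this guide. The row would compile into a DRL rule along these lines:

rule "Approve applicants_0"
when
    $a : Applicant( creditScore > 700 )
then
    $a.setApproved( true );
end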
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/guided-rule-templates-con
Chapter 11. Advanced migration options
Chapter 11. Advanced migration options You can automate your migrations and modify the MigPlan and MigrationController custom resources in order to perform large-scale migrations and to improve performance. 11.1. Terminology Table 11.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 11.2. Migrating an application from on-premises to a cloud-based cluster You can migrate from a source cluster that is behind a firewall to a cloud-based destination cluster by establishing a network tunnel between the two clusters. The crane tunnel-api command establishes such a tunnel by creating a VPN tunnel on the source cluster and then connecting to a VPN server running on the destination cluster. The VPN server is exposed to the client using a load balancer address on the destination cluster. A service created on the destination cluster exposes the source cluster's API to MTC, which is running on the destination cluster. Prerequisites The system that creates the VPN tunnel must have access and be logged in to both clusters. It must be possible to create a load balancer on the destination cluster. Refer to your cloud provider to ensure this is possible. Have names prepared to assign to namespaces, on both the source cluster and the destination cluster, in which to run the VPN tunnel. These namespaces should not be created in advance. For information about namespace rules, see https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names. When connecting multiple firewall-protected source clusters to the cloud cluster, each source cluster requires its own namespace. 
OpenVPN server is installed on the destination cluster. OpenVPN client is installed on the source cluster. When configuring the source cluster in MTC, the API URL takes the form of https://proxied-cluster.<namespace>.svc.cluster.local:8443 . If you use the API, see Create a MigCluster CR manifest for each remote cluster . If you use the MTC web console, see Migrating your applications using the MTC web console . The MTC web console and Migration Controller must be installed on the target cluster. Procedure Install the crane utility: USD podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.8):/crane ./ Log in remotely to a node on the source cluster and a node on the destination cluster. Obtain the cluster context for both clusters after logging in: USD oc config view Establish a tunnel by entering the following command on the system that creates the VPN tunnel: USD crane tunnel-api [--namespace <namespace>] \ --destination-context <destination-cluster> \ --source-context <source-cluster> If you do not specify a namespace, the command uses the default value openvpn . For example: USD crane tunnel-api --namespace my_tunnel \ --destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin \ --source-context default/192-168-122-171-nip-io:8443/admin Tip See all available parameters for the crane tunnel-api command by entering crane tunnel-api --help . The command generates TLS/SSL certificates. This process might take several minutes. A message appears when the process completes. The OpenVPN server starts on the destination cluster and the OpenVPN client starts on the source cluster. After a few minutes, the load balancer resolves on the source node. Tip You can view the log for the OpenVPN pods to check the status of this process by entering the following commands with root privileges: # oc get po -n <namespace> Example output NAME READY STATUS RESTARTS AGE <pod_name> 2/2 Running 0 44s # oc logs -f -n <namespace> <pod_name> -c openvpn When the address of the load balancer is resolved, the message Initialization Sequence Completed appears at the end of the log. On the OpenVPN server, which is on a destination control node, verify that the openvpn service and the proxied-cluster service are running: USD oc get service -n <namespace> On the source node, get the service account (SA) token for the migration controller: # oc sa get-token -n openshift-migration migration-controller Open the MTC web console and add the source cluster, using the following values: Cluster name : The source cluster name. URL : proxied-cluster.<namespace>.svc.cluster.local:8443 . If you did not define a value for <namespace> , use openvpn . Service account token : The token of the migration controller service account. Exposed route host to image registry : proxied-cluster.<namespace>.svc.cluster.local:5000 . If you did not define a value for <namespace> , use openvpn . After MTC has successfully validated the connection, you can proceed to create and run a migration plan. The namespace for the source cluster should appear in the list of namespaces. Additional resources For information about creating a MigCluster CR manifest for each remote cluster, see Migrating an application by using the MTC API . For information about adding a cluster using the web console, see Migrating your applications by using the MTC web console . 11.3.
Migrating applications by using the command line You can migrate applications with the MTC API by using the command line interface (CLI) in order to automate the migration. 11.3.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure a Stunnel TCP proxy. Internal images If your application uses internal images from the openshift namespace, you must ensure that the required versions of the images are present on the target cluster. You can manually update an image stream tag in order to use a deprecated OpenShift Container Platform 3 image on an OpenShift Container Platform 4.16 cluster. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters must have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 3 cluster: 8443 (API server) 443 (routes) 53 (DNS) You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. 11.3.2. Creating a registry route for direct image migration For direct image migration, you must create a route to the exposed OpenShift image registry on all remote clusters. Prerequisites The OpenShift image registry must be exposed to external traffic on all remote clusters. The OpenShift Container Platform 4 registry is exposed by default. The OpenShift Container Platform 3 registry must be exposed manually . Procedure To create a route to an OpenShift Container Platform 3 registry, run the following command: USD oc create route passthrough --service=docker-registry -n default To create a route to an OpenShift Container Platform 4 registry, run the following command: USD oc create route passthrough --service=image-registry -n openshift-image-registry 11.3.3. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.16, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 11.3.3.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. 
If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 11.3.3.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 11.3.3.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 11.3.3.1.3. Known issue Migration fails with error Upgrade request required The migration controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall must also pass the Upgrade HTTP header to the API server. The client uses this header to open a WebSocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 11.3.3.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 11.3.3.2.1. NetworkPolicy configuration 11.3.3.2.1.1. 
Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 11.3.3.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 11.3.3.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be set up between the two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 11.3.3.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 11.3.3.2.4. Configuring supplemental groups for Rsync pods When your PVCs use shared storage, you can configure access to that storage by adding supplemental groups to the Rsync pod definitions so that the pods can access it: Table 11.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 11.3.3.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... 
spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration 11.3.4. Migrating an application by using the MTC API You can migrate an application from the command line by using the Migration Toolkit for Containers (MTC) API. Procedure Create a MigCluster CR manifest for the host cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF Create a Secret object manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF 1 Specify the base64-encoded migration-controller service account (SA) token of the remote cluster. You can obtain the token by running the following command: USD oc sa get-token migration-controller -n openshift-migration | base64 -w 0 Create a MigCluster CR manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF 1 Specify the Cluster CR of the remote cluster. 2 Optional: For direct image migration, specify the exposed registry route. 3 SSL verification is enabled if false . CA certificates are not required or checked if true . 4 Specify the Secret object of the remote cluster. 5 Specify the URL of the remote cluster. Verify that all clusters are in a Ready state: USD oc describe MigCluster <cluster> Create a Secret object manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF 1 Specify the key ID in base64 format. 2 Specify the secret key in base64 format. AWS credentials are base64-encoded by default. For other storage providers, you must encode your credentials by running the following command with each key: USD echo -n "<key>" | base64 -w 0 1 1 Specify the key ID or the secret key. Both keys must be base64-encoded. 
Create a MigStorage CR manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF 1 Specify the bucket name. 2 Specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 3 Specify the storage provider. 4 Optional: If you are copying data by using snapshots, specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 5 Optional: If you are copying data by using snapshots, specify the storage provider. Verify that the MigStorage CR is in a Ready state: USD oc describe migstorage <migstorage> Create a MigPlan CR manifest: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF 1 Direct image migration is enabled if false . 2 Direct volume migration is enabled if false . 3 Specify the name of the MigStorage CR instance. 4 Specify one or more source namespaces. By default, the destination namespace has the same name. 5 Specify a destination namespace if it is different from the source namespace. 6 Specify the name of the source cluster MigCluster instance. Verify that the MigPlan instance is in a Ready state: USD oc describe migplan <migplan> -n openshift-migration Create a MigMigration CR manifest to start the migration defined in the MigPlan instance: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF 1 Specify the MigPlan CR name. 2 The pods on the source cluster are stopped before migration if true . 3 A stage migration, which copies most of the data without stopping the application, is performed if true . 4 A completed migration is rolled back if true . Verify the migration by watching the MigMigration CR progress: USD oc watch migmigration <migmigration> -n openshift-migration The output resembles the following: Example output Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration ... 
Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47 11.3.5. State migration You can perform repeatable, state-only migrations by using Migration Toolkit for Containers (MTC) to migrate persistent volume claims (PVCs) that constitute an application's state. You migrate specified PVCs by excluding other PVCs from the migration plan. You can map the PVCs to ensure that the source and the target PVCs are synchronized. Persistent volume (PV) data is copied to the target cluster. The PV references are not moved, and the application pods continue to run on the source cluster. State migration is specifically designed to be used in conjunction with external CD mechanisms, such as OpenShift Gitops. You can migrate application manifests using GitOps while migrating the state using MTC. If you have a CI/CD pipeline, you can migrate stateless components by deploying them on the target cluster. Then you can migrate stateful components by using MTC. You can perform a state migration between clusters or within the same cluster. Important State migration migrates only the components that constitute an application's state. If you want to migrate an entire namespace, use stage or cutover migration. Prerequisites The state of the application on the source cluster is persisted in PersistentVolumes provisioned through PersistentVolumeClaims . The manifests of the application are available in a central repository that is accessible from both the source and the target clusters. Procedure Migrate persistent volume data from the source to the target cluster. You can perform this step as many times as needed. The source application continues running. 
Quiesce the source application. You can do this by setting the replicas of workload resources to 0 , either directly on the source cluster or by updating the manifests in GitHub and re-syncing the Argo CD application. Clone application manifests to the target cluster. You can use Argo CD to clone the application manifests to the target cluster. Migrate the remaining volume data from the source to the target cluster. Migrate any new data created by the application during the state migration process by performing a final data migration. If the cloned application is in a quiesced state, unquiesce it. Switch the DNS record to the target cluster to re-direct user traffic to the migrated application. Note MTC 1.6 cannot quiesce applications automatically when performing state migration. It can only migrate PV data. Therefore, you must use your CD mechanisms for quiescing or unquiescing applications. MTC 1.7 introduces explicit Stage and Cutover flows. You can use staging to perform initial data transfers as many times as needed. Then you can perform a cutover, in which the source applications are quiesced automatically. Additional resources See Excluding PVCs from migration to select PVCs for state migration. See Mapping PVCs to migrate source PV data to provisioned PVCs on the destination cluster. See Migrating Kubernetes objects to migrate the Kubernetes objects that constitute an application's state. 11.4. Migration hooks You can add up to four migration hooks to a single migration plan, with each hook running at a different phase of the migration. Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration. A migration hook runs on a source or a target cluster at one of the following migration steps: PreBackup : Before resources are backed up on the source cluster. PostBackup : After resources are backed up on the source cluster. PreRestore : Before resources are restored on the target cluster. PostRestore : After resources are restored on the target cluster. You can create a hook by creating an Ansible playbook that runs with the default Ansible image or with a custom hook container. Ansible playbook The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan custom resource. The job continues to run until it reaches the default limit of 6 retries or a successful completion. This continues even if the initial pod is evicted or killed. The default Ansible runtime image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:1.8 . This image is based on the Ansible Runner image and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. Custom hook container You can use a custom hook container instead of the default Ansible image. 11.4.1. Writing an Ansible playbook for a migration hook You can write an Ansible playbook to use as a migration hook. The hook is added to a migration plan by using the MTC web console or by specifying values for the spec.hooks parameters in the MigPlan custom resource (CR) manifest. The Ansible playbook is mounted onto a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan CR. The hook container uses a specified service account token so that the tasks do not require authentication before they run in the cluster. 11.4.1.1. 
Ansible modules You can use the Ansible shell module to run oc commands. Example shell module - hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces You can use kubernetes.core modules, such as k8s_info , to interact with Kubernetes resources. Example k8s_facts module - hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: "{{ lookup( 'env', 'HOSTNAME') }}" register: pods - name: Print pod name debug: msg: "{{ pods.resources[0].metadata.name }}" You can use the fail module to produce a non-zero exit status in cases where a non-zero exit status would not normally be produced, ensuring that the success or failure of a hook is detected. Hooks run as jobs and the success or failure status of a hook is based on the exit status of the job container. Example fail module - hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: "fail" fail: msg: "Cause a failure" when: do_fail 11.4.1.2. Environment variables The MigPlan CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the lookup plugin. Example environment variables - hosts: localhost gather_facts: false tasks: - set_fact: namespaces: "{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}" - debug: msg: "{{ item }}" with_items: "{{ namespaces }}" - debug: msg: "{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}" 11.5. Migration plan options You can exclude, edit, and map components in the MigPlan custom resource (CR). 11.5.1. Excluding resources You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan to reduce the resource load for migration or to migrate images or PVs with a different tool. By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. These resources are parts of the service catalog API group and the OLM API group, neither of which is supported for migration at this time. Procedure Edit the MigrationController custom resource manifest: USD oc edit migrationcontroller <migration_controller> -n openshift-migration Update the spec section by adding parameters to exclude specific resources. For those resources that do not have their own exclusion parameters, add the additional_excluded_resources parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2 ... 1 Add disable_image_migration: true to exclude image streams from the migration. imagestreams is added to the excluded_resources list in main.yml when the MigrationController pod restarts. 2 Add disable_pv_migration: true to exclude PVs from the migration plan. persistentvolumes and persistentvolumeclaims are added to the excluded_resources list in main.yml when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan. 3 You can add OpenShift Container Platform resources that you want to exclude to the additional_excluded_resources list. Wait two minutes for the MigrationController pod to restart so that the changes are applied. 
Verify that the resource is excluded: USD oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1 The output contains the excluded resources: Example output name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims 11.5.2. Mapping namespaces If you map namespaces in the MigPlan custom resource (CR), you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges of the namespaces are copied during migration. Two source namespaces mapped to the same destination namespace spec: namespaces: - namespace_2 - namespace_1:namespace_2 If you want the source namespace to be mapped to a namespace of the same name, you do not need to create a mapping. By default, a source namespace and a target namespace have the same name. Incorrect namespace mapping spec: namespaces: - namespace_1:namespace_1 Correct namespace reference spec: namespaces: - namespace_1 11.5.3. Excluding persistent volume claims You select persistent volume claims (PVCs) for state migration by excluding the PVCs that you do not want to migrate. You exclude PVCs by setting the spec.persistentVolumes.pvc.selection.action parameter of the MigPlan custom resource (CR) after the persistent volumes (PVs) have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Add the spec.persistentVolumes.pvc.selection.action parameter to the MigPlan CR and set it to skip : apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: ... selection: action: skip 11.5.4. Mapping persistent volume claims You can migrate persistent volume (PV) data from the source cluster to persistent volume claims (PVCs) that are already provisioned in the destination cluster in the MigPlan CR by mapping the PVCs. This mapping ensures that the destination PVCs of migrated applications are synchronized with the source PVCs. You map PVCs by updating the spec.persistentVolumes.pvc.name parameter in the MigPlan custom resource (CR) after the PVs have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Update the spec.persistentVolumes.pvc.name parameter in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1 1 Specify the PVC on the source cluster and the PVC on the destination cluster. If the destination PVC does not exist, it will be created. You can use this mapping to change the PVC name during migration. 11.5.5. Editing persistent volume attributes After you create a MigPlan custom resource (CR), the MigrationController CR discovers the persistent volumes (PVs). The spec.persistentVolumes block and the status.destStorageClasses block are added to the MigPlan CR. You can edit the values in the spec.persistentVolumes.selection block. If you change values outside the spec.persistentVolumes.selection block, the values are overwritten when the MigPlan CR is reconciled by the MigrationController CR. 
Note The default value for the spec.persistentVolumes.selection.storageClass parameter is determined by the following logic: If the source cluster PV is Gluster or NFS, the default is either cephfs , for accessMode: ReadWriteMany , or cephrbd , for accessMode: ReadWriteOnce . If the PV is neither Gluster nor NFS or if cephfs or cephrbd are not available, the default is a storage class for the same provisioner. If a storage class for the same provisioner is not available, the default is the default storage class of the destination cluster. You can change the storageClass value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If the storageClass value is empty, the PV will have no storage class after migration. This option is appropriate if, for example, you want to move the PV to an NFS volume on the destination cluster. Prerequisites MigPlan CR is in a Ready state. Procedure Edit the spec.persistentVolumes.selection values in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs 1 Allowed values are move , copy , and skip . If only one action is supported, the default value is the supported action. If multiple actions are supported, the default value is copy . 2 Allowed values are snapshot and filesystem . Default value is filesystem . 3 The verify parameter is displayed if you select the verification option for file system copy in the MTC web console. You can set it to false . 4 You can change the default value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If no value is specified, the PV will have no storage class after migration. 5 Allowed values are ReadWriteOnce and ReadWriteMany . If this value is not specified, the default is the access mode of the source cluster PVC. You can only edit the access mode in the MigPlan CR. You cannot edit it by using the MTC web console. Additional resources For details about the move and copy actions, see MTC workflow . For details about the skip action, see Excluding PVCs from migration . For details about the file system and snapshot copy methods, see About data copy methods . 11.5.6. Performing a state migration of Kubernetes objects by using the MTC API After you migrate all the PV data, you can use the Migration Toolkit for Containers (MTC) API to perform a one-time state migration of Kubernetes objects that constitute an application. You do this by configuring MigPlan custom resource (CR) fields to provide a list of Kubernetes resources with an additional label selector to further filter those resources, and then performing a migration by creating a MigMigration CR. The MigPlan resource is closed after the migration. Note Selecting Kubernetes resources is an API-only feature. You must update the MigPlan CR and create a MigMigration CR for it by using the CLI. The MTC web console does not support migrating Kubernetes objects. Note After migration, the closed parameter of the MigPlan CR is set to true . You cannot create another MigMigration CR for this MigPlan CR. 
You add Kubernetes objects to the MigPlan CR by using one of the following options: Adding the Kubernetes objects to the includedResources section. When the includedResources field is specified in the MigPlan CR, the plan takes a list of group-kind as input. Only resources present in the list are included in the migration. Adding the optional labelSelector parameter to filter the includedResources in the MigPlan . When this field is specified, only resources matching the label selector are included in the migration. For example, you can filter a list of Secret and ConfigMap resources by using the label app: frontend as a filter. Procedure Update the MigPlan CR to include Kubernetes resources and, optionally, to filter the included resources by adding the labelSelector parameter: To update the MigPlan CR to include Kubernetes resources: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" 1 Specify the Kubernetes object, for example, Secret or ConfigMap . Optional: To filter the included resources by adding the labelSelector parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" ... labelSelector: matchLabels: <label> 2 1 Specify the Kubernetes object, for example, Secret or ConfigMap . 2 Specify the label of the resources to migrate, for example, app: frontend . Create a MigMigration CR to migrate the selected Kubernetes resources. Verify that the correct MigPlan is referenced in migPlanRef : apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false 11.6. Migration controller options You can edit migration plan limits, enable persistent volume resizing, or enable cached Kubernetes clients in the MigrationController custom resource (CR) for large migrations and improved performance. 11.6.1. Increasing limits for large migrations You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC). Important You must test these changes before you perform a migration in a production environment. Procedure Edit the MigrationController custom resource (CR) manifest: USD oc edit migrationcontroller -n openshift-migration Update the following parameters: ... mig_controller_limits_cpu: "1" 1 mig_controller_limits_memory: "10Gi" 2 ... mig_controller_requests_cpu: "100m" 3 mig_controller_requests_memory: "350Mi" 4 ... mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7 ... 1 Specifies the number of CPUs available to the MigrationController CR. 2 Specifies the amount of memory available to the MigrationController CR. 3 Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3). 4 Specifies the amount of memory available for MigrationController CR requests. 5 Specifies the number of persistent volumes that can be migrated. 6 Specifies the number of pods that can be migrated. 7 Specifies the number of namespaces that can be migrated. Create a migration plan that uses the updated parameters to verify the changes. 
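Before creating the plan, you can confirm that the updated values were applied. The following is a sketch; the grep pattern simply matches the parameter names shown above and assumes the CR is named migration-controller as in the other examples in this chapter.
# Verify the updated limits in the MigrationController CR
oc get migrationcontroller migration-controller -n openshift-migration -o yaml | grep -E 'mig_(pv|pod|namespace)_limit|mig_controller_(limits|requests)_(cpu|memory)'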
If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan. 11.6.2. Enabling persistent volume resizing for direct volume migration You can enable persistent volume (PV) resizing for direct volume migration to avoid running out of disk space on the destination cluster. When the disk usage of a PV reaches a configured level, the MigrationController custom resource (CR) compares the requested storage capacity of a persistent volume claim (PVC) to its actual provisioned capacity. Then, it calculates the space required on the destination cluster. A pv_resizing_threshold parameter determines when PV resizing is used. The default threshold is 3% . This means that PV resizing occurs when the disk usage of a PV is more than 97% . You can increase this threshold so that PV resizing occurs at a lower disk usage level. PVC capacity is calculated according to the following criteria: If the requested storage capacity ( spec.resources.requests.storage ) of the PVC is not equal to its actual provisioned capacity ( status.capacity.storage ), the greater value is used. If a PV is provisioned through a PVC and then subsequently changed so that its PV and PVC capacities no longer match, the greater value is used. Prerequisites The PVCs must be attached to one or more running pods so that the MigrationController CR can execute commands. Procedure Log in to the host cluster. Enable PV resizing by patching the MigrationController CR: USD oc patch migrationcontroller migration-controller -p '{"spec":{"enable_dvm_pv_resizing":true}}' \ 1 --type='merge' -n openshift-migration 1 Set the value to false to disable PV resizing. Optional: Update the pv_resizing_threshold parameter to increase the threshold: USD oc patch migrationcontroller migration-controller -p '{"spec":{"pv_resizing_threshold":41}}' \ 1 --type='merge' -n openshift-migration 1 The default value is 3 . When the threshold is exceeded, the following status message is displayed in the MigPlan CR status: status: conditions: ... - category: Warn durable: true lastTransitionTime: "2021-06-17T08:57:01Z" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: "False" type: PvCapacityAdjustmentRequired Note For AWS gp2 storage, this message does not appear unless the pv_resizing_threshold is 42% or greater because of the way gp2 calculates volume usage and size. ( BZ#1973148 ) 11.6.3. Enabling cached Kubernetes clients You can enable cached Kubernetes clients in the MigrationController custom resource (CR) for improved performance during migration. The greatest performance benefit is displayed when migrating between clusters in different regions or with significant network latency. Note Delegated tasks, for example, Rsync backup for direct volume migration or Velero backup and restore, however, do not show improved performance with cached clients. Cached clients require extra memory because the MigrationController CR caches all API resources that are required for interacting with MigCluster CRs. Requests that are normally sent to the API server are directed to the cache instead. The cache watches the API server for updates. You can increase the memory limits and requests of the MigrationController CR if OOMKilled errors occur after you enable cached clients. 
Procedure Enable cached clients by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_enable_cache", "value": true}]' Optional: Increase the MigrationController CR memory limits by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_limits_memory", "value": <10Gi>}]' Optional: Increase the MigrationController CR memory requests by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_requests_memory", "value": <350Mi>}]'
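If you suspect OOMKilled restarts after enabling the cache, you can check the last termination reason of the controller pod. This is a sketch; the pod name placeholder is hypothetical and must be replaced with the actual migration-controller pod name in your environment.
# List the pods in the MTC namespace and find the migration-controller pod
oc -n openshift-migration get pods
# Show the last termination reason of the pod's containers (prints OOMKilled if memory was exhausted)
oc -n openshift-migration get pod <migration_controller_pod> -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'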
[ "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.8):/crane ./", "oc config view", "crane tunnel-api [--namespace <namespace>] --destination-context <destination-cluster> --source-context <source-cluster>", "crane tunnel-api --namespace my_tunnel --destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin --source-context default/192-168-122-171-nip-io:8443/admin", "oc get po -n <namespace>", "NAME READY STATUS RESTARTS AGE <pod_name> 2/2 Running 0 44s", "oc logs -f -n <namespace> <pod_name> -c openvpn", "oc get service -n <namespace>", "oc sa get-token -n openshift-migration migration-controller", "oc create route passthrough --service=docker-registry -n default", "oc create route passthrough --service=image-registry -n openshift-image-registry", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF", "oc sa get-token migration-controller -n openshift-migration | base64 -w 0", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF", "oc describe MigCluster <cluster>", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF", "echo -n \"<key>\" | base64 -w 0 1", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF", "oc describe migstorage <migstorage>", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF", "oc describe migplan <migplan> -n openshift-migration", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF", "oc watch migmigration <migmigration> -n openshift-migration", "Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required 
Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47", "- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces", "- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"", "- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail", "- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"", "oc edit migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2", "oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1", "name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims", "spec: namespaces: - namespace_2 - namespace_1:namespace_2", "spec: namespaces: - namespace_1:namespace_1", "spec: namespaces: - namespace_1", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip", "apiVersion: 
migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false", "oc edit migrationcontroller -n openshift-migration", "mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7", "oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration", "oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/migrating_from_version_3_to_4/advanced-migration-options-3-4
Chapter 2. OpenShift Container Platform overview
Chapter 2. OpenShift Container Platform overview OpenShift Container Platform is a cloud-based Kubernetes container platform. The foundation of OpenShift Container Platform is based on Kubernetes and therefore shares the same technology. It is designed to allow applications and the data centers that support them to expand from just a few machines and applications to thousands of machines that serve millions of clients. OpenShift Container Platform enables you to do the following: Provide developers and IT organizations with cloud application platforms that can be used for deploying applications on secure and scalable resources. Require minimal configuration and management overhead. Bring the Kubernetes platform to customer data centers and cloud. Meet security, privacy, compliance, and governance requirements. With its foundation in Kubernetes, OpenShift Container Platform incorporates the same technology that serves as the engine for massive telecommunications, streaming video, gaming, banking, and other applications. Its implementation in open Red Hat technologies lets you extend your containerized applications beyond a single cloud to on-premise and multi-cloud environments. 2.1. Glossary of common terms for OpenShift Container Platform This glossary defines common Kubernetes and OpenShift Container Platform terms. Kubernetes Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. Containers Containers are application instances and components that run in OCI-compliant containers on the worker nodes. A container is the runtime of an Open Container Initiative (OCI)-compliant image. An image is a binary application. A worker node can run many containers. A node capacity is related to memory and CPU capabilities of the underlying resources whether they are cloud, hardware, or virtualized. Pod A pod is one or more containers deployed together on one host. It consists of a colocated group of containers with shared resources such as volumes and IP addresses. A pod is also the smallest compute unit defined, deployed, and managed. In OpenShift Container Platform, pods replace individual application containers as the smallest deployable unit. Pods are the orchestrated unit in OpenShift Container Platform. OpenShift Container Platform schedules and runs all containers in a pod on the same node. Complex applications are made up of many pods, each with their own containers. They interact externally and also with another inside the OpenShift Container Platform environment. Replica set and replication controller The Kubernetes replica set and the OpenShift Container Platform replication controller are both available. The job of this component is to ensure the specified number of pod replicas are running at all times. If pods exit or are deleted, the replica set or replication controller starts more. If more pods are running than needed, the replica set deletes as many as necessary to match the specified number of replicas. Deployment and DeploymentConfig OpenShift Container Platform implements both Kubernetes Deployment objects and OpenShift Container Platform DeploymentConfigs objects. Users may select either. Deployment objects control how an application is rolled out as pods. They identify the name of the container image to be taken from the registry and deployed as a pod on a node. They set the number of replicas of the pod to deploy, creating a replica set to manage the process. 
The labels indicated instruct the scheduler onto which nodes to deploy the pod. The set of labels is included in the pod definition that the replica set instantiates. Deployment objects are able to update the pods deployed onto the worker nodes based on the version of the Deployment objects and the various rollout strategies for managing acceptable application availability. OpenShift Container Platform DeploymentConfig objects add the additional features of change triggers, which are able to automatically create new versions of the Deployment objects as new versions of the container image are available, or other changes. Service A service defines a logical set of pods and access policies. It provides permanent internal IP addresses and hostnames for other applications to use as pods are created and destroyed. Service layers connect application components together. For example, a front-end web service connects to a database instance by communicating with its service. Services allow for simple internal load balancing across application components. OpenShift Container Platform automatically injects service information into running containers for ease of discovery. Route A route is a way to expose a service by giving it an externally reachable hostname, such as www.example.com. Each route consists of a route name, a service selector, and optionally a security configuration. A router can consume a defined route and the endpoints identified by its service to provide a name that lets external clients reach your applications. While it is easy to deploy a complete multi-tier application, traffic from anywhere outside the OpenShift Container Platform environment cannot reach the application without the routing layer. Build A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process. OpenShift Container Platform leverages Kubernetes by creating containers from build images and pushing them to the integrated registry. Project OpenShift Container Platform uses projects to allow groups of users or developers to work together, serving as the unit of isolation and collaboration. It defines the scope of resources, allows project administrators and collaborators to manage resources, and restricts and tracks the user's resources with quotas and limits. A project is a Kubernetes namespace with additional annotations. It is the central vehicle for managing access to resources for regular users. A project lets a community of users organize and manage their content in isolation from other communities. Users must receive access to projects from administrators. But cluster administrators can allow developers to create their own projects, in which case users automatically have access to their own projects. Each project has its own set of objects, policies, constraints, and service accounts. Projects are also known as namespaces. Operators An Operator is a Kubernetes-native application. The goal of an Operator is to put operational knowledge into software. Previously this knowledge only resided in the minds of administrators, various combinations or shell scripts or automation software such as Ansible. It was outside your Kubernetes cluster and hard to integrate. With Operators, all of this changes. Operators are purpose-built for your applications. 
They implement and automate common Day 1 activities such as installation and configuration, as well as Day 2 activities such as scaling up and down, reconfiguration, updates, backups, failovers, and restores, in a piece of software running inside your Kubernetes cluster by integrating natively with Kubernetes concepts and APIs. This is called a Kubernetes-native application. With Operators, an application no longer has to be treated as a collection of primitives, such as pods, deployments, services, or config maps. Instead, it can be treated as a single object that exposes the options that make sense for the application. 2.2. Understanding OpenShift Container Platform OpenShift Container Platform is a Kubernetes environment for managing the lifecycle of container-based applications and their dependencies on various computing platforms, such as bare metal, virtualized, on-premise, and in the cloud. OpenShift Container Platform deploys, configures, and manages containers. OpenShift Container Platform offers usability, stability, and customization of its components. OpenShift Container Platform uses a number of computing resources, known as nodes. A node has a lightweight, secure operating system based on Red Hat Enterprise Linux (RHEL), known as Red Hat Enterprise Linux CoreOS (RHCOS). After a node is booted and configured, it obtains a container runtime, such as CRI-O or Docker, for managing and running the images of container workloads scheduled to it. The Kubernetes agent, or kubelet, schedules container workloads on the node. The kubelet is responsible for registering the node with the cluster and receiving the details of container workloads. OpenShift Container Platform configures and manages the networking, load balancing, and routing of the cluster. OpenShift Container Platform adds cluster services for monitoring the cluster health and performance, logging, and for managing upgrades. The container image registry and OperatorHub provide Red Hat certified products and community-built software for providing various application services within the cluster. These applications and services manage the applications deployed in the cluster, databases, frontends and user interfaces, application runtimes and business automation, and developer services for development and testing of container applications. You can manage applications within the cluster either manually, by configuring deployments of containers running from pre-built images, or through resources known as Operators. You can build custom images from pre-built images and source code, and store these custom images locally in an internal, private, or public registry. The Multicluster Management layer can manage multiple clusters, including their deployment, configuration, compliance, and distribution of workloads, in a single console. 2.3. Installing OpenShift Container Platform The OpenShift Container Platform installation program offers you flexibility. You can use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains, or to deploy a cluster on infrastructure that you prepare and maintain. For more information about the installation process, the supported platforms, and choosing a method of installing and preparing your cluster, see the following: OpenShift Container Platform installation overview Installation process Supported platforms for OpenShift Container Platform clusters Selecting a cluster installation type 2.3.1. 
OpenShift Local overview OpenShift Local supports rapid application development to get started building OpenShift Container Platform clusters. OpenShift Local is designed to run on a local computer to simplify setup and testing, and to emulate the cloud development environment locally with all of the tools needed to develop container-based applications. Regardless of the programming language you use, OpenShift Local hosts your application and brings a minimal, preconfigured Red Hat OpenShift Container Platform cluster to your local PC without the need for a server-based infrastructure. On a hosted environment, OpenShift Local can create microservices, convert them into images, and run them in Kubernetes-hosted containers directly on your laptop or desktop running Linux, macOS, or Windows 10 or later. For more information about OpenShift Local, see Red Hat OpenShift Local Overview . 2.4. Steps 2.4.1. For developers Develop and deploy containerized applications with OpenShift Container Platform. OpenShift Container Platform is a platform for developing and deploying containerized applications. OpenShift Container Platform documentation helps you: Understand OpenShift Container Platform development : Learn the different types of containerized applications, from simple containers to advanced Kubernetes deployments and Operators. Work with projects : Create projects from the OpenShift Container Platform web console or OpenShift CLI ( oc ) to organize and share the software you develop. Work with applications : Use the Developer perspective in the OpenShift Container Platform web console to create and deploy applications . Use the Topology view to see your applications, monitor status, connect and group components, and modify your code base. Use the developer CLI tool ( odo ) : The odo CLI tool lets developers create single or multi-component applications and automates deployment, build, and service route configurations. It abstracts complex Kubernetes and OpenShift Container Platform concepts, allowing you to focus on developing your applications. Create CI/CD Pipelines : Pipelines are serverless, cloud-native, continuous integration, and continuous deployment systems that run in isolated containers. They use standard Tekton custom resources to automate deployments and are designed for decentralized teams working on microservices-based architecture. Deploy Helm charts : Helm 3 is a package manager that helps developers define, install, and update application packages on Kubernetes. A Helm chart is a packaging format that describes an application that can be deployed using the Helm CLI. Understand image builds : Choose from different build strategies (Docker, S2I, custom, and pipeline) that can include different kinds of source materials (Git repositories, local binary inputs, and external artifacts). Then, follow examples of build types from basic builds to advanced builds. Create container images : A container image is the most basic building block in OpenShift Container Platform (and Kubernetes) applications. Defining image streams lets you gather multiple versions of an image in one place as you continue its development. S2I containers let you insert your source code into a base container that is set up to run code of a particular type, such as Ruby, Node.js, or Python. Create deployments : Use Deployment and DeploymentConfig objects to exert fine-grained management over applications. Manage deployments using the Workloads page or OpenShift CLI ( oc ). 
Learn rolling, recreate, and custom deployment strategies. Create templates : Use existing templates or create your own templates that describe how an application is built or deployed. A template can combine images with descriptions, parameters, replicas, exposed ports and other content that defines how an application can be run or built. Understand Operators : Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.15. Learn about the Operator Framework and how to deploy applications using installed Operators into your projects. Develop Operators : Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.15. Learn the workflow for building, testing, and deploying Operators. Then, create your own Operators based on Ansible or Helm , or configure built-in Prometheus monitoring using the Operator SDK. REST API reference : Learn about OpenShift Container Platform application programming interface endpoints. 2.4.2. For administrators Understand OpenShift Container Platform management : Learn about components of the OpenShift Container Platform 4.15 control plane. See how OpenShift Container Platform control plane and worker nodes are managed and updated through the Machine API and Operators . Manage users and groups : Add users and groups with different levels of permissions to use or modify clusters. Manage authentication : Learn how user, group, and API authentication works in OpenShift Container Platform. OpenShift Container Platform supports multiple identity providers. Manage networking : The cluster network in OpenShift Container Platform is managed by the Cluster Network Operator (CNO). The CNO uses iptables rules in kube-proxy to direct traffic between nodes and pods running on those nodes. The Multus Container Network Interface adds the capability to attach multiple network interfaces to a pod. Using network policy features, you can isolate your pods or permit selected traffic. Manage storage : OpenShift Container Platform allows cluster administrators to configure persistent storage. Manage Operators : Lists of Red Hat, ISV, and community Operators can be reviewed by cluster administrators and installed on their clusters . After you install them, you can run , upgrade , back up, or otherwise manage the Operator on your cluster. Use custom resource definitions (CRDs) to modify the cluster : Cluster features implemented with Operators can be modified with CRDs. Learn to create a CRD and manage resources from CRDs . Set resource quotas : Choose from CPU, memory, and other system resources to set quotas . Prune and reclaim resources : Reclaim space by pruning unneeded Operators, groups, deployments, builds, images, registries, and cron jobs. Scale and tune clusters : Set cluster limits, tune nodes, scale cluster monitoring, and optimize networking, storage, and routes for your environment. Using the OpenShift Update Service in a disconnected environment : Learn about installing and managing a local OpenShift Update Service for recommending OpenShift Container Platform updates in disconnected environments. Monitor clusters : Learn to configure the monitoring stack . After configuring monitoring, use the web console to access monitoring dashboards . In addition to infrastructure metrics, you can also scrape and view metrics for your own services. Remote health monitoring : OpenShift Container Platform collects anonymized aggregated information about your cluster. 
Using Telemetry and the Insights Operator, this data is received by Red Hat and used to improve OpenShift Container Platform. You can view the data collected by remote health monitoring .
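The glossary concepts above (project, pod, replica set, deployment, service, and route) can be tied together with a short workflow sketch. The project name, application name, and image below are illustrative assumptions, not values taken from this document:

$ oc new-project demo                                              # create a project (a Kubernetes namespace)
$ oc create deployment hello --image=quay.io/example/hello:latest  # the Deployment creates a replica set and pods
$ oc scale deployment hello --replicas=2                           # adjust the number of pod replicas
$ oc expose deployment hello --port=8080                           # the Service gives the pods a stable internal address
$ oc expose service hello                                          # the Route exposes the service at an external hostname
$ oc get pods,services,routes                                      # inspect the objects created above

In a real cluster the image would typically come from a build, for example a BuildConfig pushing to the integrated registry, rather than an external example registry.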
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/getting_started/openshift-overview
Chapter 1. OpenShift sandboxed containers 1.3 release notes
Chapter 1. OpenShift sandboxed containers 1.3 release notes 1.1. About this release These release notes track the development of OpenShift sandboxed containers 1.3 alongside Red Hat OpenShift Container Platform 4.11. This product is fully supported and enabled by default as of OpenShift Container Platform 4.10. 1.2. New features and enhancements 1.2.1. Container ID in metrics list The sandbox_id with the ID of the relevant sandboxed container now appears in the metrics list on the Metrics page in the web console. In addition, the kata-monitor process now adds three new labels to kata-specific metrics: cri_uid , cri_name , and cri_namespace . These labels enable kata-specific metrics to relate to corresponding kubernetes workloads. For more information about kata-specific metrics, see About OpenShift sandboxed containers metrics . 1.2.2. OpenShift sandboxed containers availability on AWS bare metal Previously, OpenShift sandboxed containers availability on AWS bare metal was in Technology Preview. With this release, installing OpenShift sandboxed containers on AWS bare-metal clusters is fully supported. 1.2.3. Support for OpenShift sandboxed containers on single-node OpenShift OpenShift sandboxed containers now work on single-node OpenShift clusters when the OpenShift sandboxed containers Operator is installed by Red Hat Advanced Cluster Management (RHACM). 1.3. Bug fixes Previously, when creating the KataConfig CR and observing the pod status under the openshift-sandboxed-containers-operator namespace, a huge number of restarts for monitor pods was shown. The monitor pods use a specific SELinux policy that was installed as part of the sandboxed-containers extension installation. The monitor pod was created immediately. However, the SELinux policy was not yet available, which resulted in a pod creation error, followed by a pod restart. With this release, the SELinux policy is available when the monitor pod is created, and the monitor pod transitions to a Running state immediately. ( KATA-1338 ) Previously, OpenShift sandboxed containers deployed a security context constraint (SCC) on startup which enforced a custom SELinux policy that was not available on Machine Config Operator (MCO) pods. This caused the MCO pod to change to a CrashLoopBackOff state and cluster upgrades to fail. With this release, OpenShift sandboxed containers deploys the SCC when creating the KataConfig CR and no longer enforces using the custom SELinux policy. ( KATA-1373 ) Previously, when uninstalling the OpenShift sandboxed containers Operator, the sandboxed-containers-operator-scc custom resource was not deleted. With this release, the sandboxed-containers-operator-scc custom resource is deleted when uninstalling the OpenShift sandboxed containers Operator. ( KATA-1569 ) 1.4. Known issues If you are using OpenShift sandboxed containers, you might receive SELinux denials when accessing files or directories mounted from the hostPath volume in an OpenShift Container Platform cluster. These denials can occur even when running privileged sandboxed containers because privileged sandboxed containers do not disable SELinux checks. Following SELinux policy on the host guarantees full isolation of the host file system from the sandboxed workload by default. This also provides stronger protection against potential security flaws in the virtiofsd daemon or QEMU. If the mounted files or directories do not have specific SELinux requirements on the host, you can use local persistent volumes as an alternative. 
Files are automatically relabeled to container_file_t , following SELinux policy for container runtimes. See Persistent storage using local volumes for more information. Automatic relabeling is not an option when mounted files or directories are expected to have specific SELinux labels on the host. Instead, you can set custom SELinux rules on the host to allow the virtiofsd daemon to access these specific labels. ( BZ#1904609 ) Some OpenShift sandboxed containers Operator pods use container CPU resource limits to increase the number of available CPUs for the pod. These pods might receive fewer CPUs than requested. If the functionality is available inside the container, you can diagnose CPU resource issues by using oc rsh <pod> to access a pod and running the lscpu command: USD lscpu Example output CPU(s): 16 On-line CPU(s) list: 0-12,14,15 Off-line CPU(s) list: 13 The list of offline CPUs will likely change unpredictably from run to run. As a workaround, you can use a pod annotation to request additional CPUs rather than setting a CPU limit. CPU requests that use pod annotation are not affected by this issue, because the processor allocation method is different. Rather than setting a CPU limit, the following annotation must be added to the metadata of the pod: metadata: annotations: io.katacontainers.config.hypervisor.default_vcpus: "16" ( KATA-1376 ) The progress of the runtime installation is shown in the status section of the kataConfig custom resource (CR). However, the progress is not shown if all of the following conditions are true: There are no worker nodes defined. You can run oc get machineconfigpool to check the number of worker nodes in the machine config pool. No kataConfigPoolSelector is specified to select nodes for installation. In this case, the installation starts on the control plane nodes because the Operator assumes it is a converged cluster where nodes have both control plane and worker roles. The status section of the kataConfig CR is not updated during the installation. ( KATA-1017 ) When using older versions of the Buildah tool in OpenShift sandboxed containers, the build fails with the following error: process exited with error: fork/exec /bin/sh: no such file or directory subprocess exited with status 1 You must use the latest version of Buildah, available at quay.io . ( KATA-1278 ) In the KataConfig tab in the web console, if you click Create KataConfig while in the YAML view , the KataConfig YAML is missing the spec fields. Toggling to the Form view and then back to the YAML view fixes this issue and displays the full YAML. ( KATA-1372 ) In the KataConfig tab in the web console, a 404: Not found error message appears whether a KataConfig CR already exists or not. To access an existing KataConfig CR, go to Home > Search . From the Resources list, select KataConfig . ( KATA-1605 ) Upgrading OpenShift sandboxed containers does not automatically update the existing KataConfig CR. As a result, monitor pods from deployments are not restarted and continue to run with an outdated kataMonitor image. Upgrade the kataMonitor image with the following command: USD oc patch kataconfig example-kataconfig --type merge --patch '{"spec":{"kataMonitorImage":"registry.redhat.io/openshift-sandboxed-containers/osc-monitor-rhel8:1.3.0"}}' You can also upgrade the kataMonitor image by editing the KataConfig YAML in the web console. ( KATA-1650 ) 1.5. 
Asynchronous errata updates Security, bug fix, and enhancement updates for OpenShift sandboxed containers 4.11 are released as asynchronous errata through the Red Hat Network. All OpenShift Container Platform 4.11 errata are available on the Red Hat Customer Portal . For more information about asynchronous errata, see the OpenShift Container Platform Life Cycle . Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified by email whenever new errata relevant to their registered systems are released. Note Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Container Platform entitlements for OpenShift Container Platform errata notification emails to generate. This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of OpenShift sandboxed containers 1.3. 1.5.1. RHSA-2022:6072 - OpenShift sandboxed containers 1.3.0 image release, bug fix, and enhancement advisory. Issued: 2022-08-17 OpenShift sandboxed containers release 1.3.0 is now available. This advisory contains an update for OpenShift sandboxed containers with enhancements and bug fixes. The list of bug fixes included in the update is documented in the RHSA-2022:6072 advisory. 1.5.2. RHSA-2022:7058 - OpenShift sandboxed containers 1.3.1 security fix and bug fix advisory. Issued: 2022-10-19 OpenShift sandboxed containers release 1.3.1 is now available. This advisory contains an update for OpenShift sandboxed containers with security fixes and a bug fix. The list of bug fixes included in the update is documented in the RHSA-2022:7058 advisory.
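As an illustration of the CPU annotation workaround described in the known issues above, a pod that requests additional vCPUs without setting a CPU limit might look like the following sketch. The pod name, container image, and the assumption that the RuntimeClass created by the Operator is named kata are illustrative, not taken from this document:

apiVersion: v1
kind: Pod
metadata:
  name: example-sandboxed-pod                      # illustrative name
  annotations:
    io.katacontainers.config.hypervisor.default_vcpus: "16"    # request vCPUs through the annotation instead of a CPU limit
spec:
  runtimeClassName: kata                           # assumed RuntimeClass name for sandboxed containers
  containers:
  - name: app
    image: quay.io/example/app:latest              # illustrative image
    resources:
      requests:
        memory: "256Mi"                            # ordinary resource requests are not affected by the known issue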
[ "lscpu", "CPU(s): 16 On-line CPU(s) list: 0-12,14,15 Off-line CPU(s) list: 13", "metadata: annotations: io.katacontainers.config.hypervisor.default_vcpus: \"16\"", "process exited with error: fork/exec /bin/sh: no such file or directory subprocess exited with status 1", "oc patch kataconfig example-kataconfig --type merge --patch '{\"spec\":{\"kataMonitorImage\":\"registry.redhat.io/openshift-sandboxed-containers/osc-monitor-rhel8:1.3.0\"}}'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/sandboxed_containers_support_for_openshift/openshift-sandboxed-containers-release-notes
Working with data in an S3-compatible object store
Working with data in an S3-compatible object store Red Hat OpenShift AI Cloud Service 1 Work with data stored in an S3-compatible object store from your workbench
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_data_in_an_s3-compatible_object_store/index
Chapter 3. Configuring certificates
Chapter 3. Configuring certificates 3.1. Replacing the default ingress certificate 3.1.1. Understanding the default ingress certificate By default, OpenShift Container Platform uses the Ingress Operator to create an internal CA and issue a wildcard certificate that is valid for applications under the .apps sub-domain. Both the web console and CLI use this certificate as well. The internal infrastructure CA certificates are self-signed. While this process might be perceived as bad practice by some security or PKI teams, any risk here is minimal. The only clients that implicitly trust these certificates are other components within the cluster. Replacing the default wildcard certificate with one that is issued by a public CA already included in the CA bundle as provided by the container userspace allows external clients to connect securely to applications running under the .apps sub-domain. 3.1.2. Replacing the default ingress certificate You can replace the default ingress certificate for all applications under the .apps subdomain. After you replace the certificate, all applications, including the web console and CLI, will have encryption provided by specified certificate. Prerequisites You must have a wildcard certificate for the fully qualified .apps subdomain and its corresponding private key. Each should be in a separate PEM format file. The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform. The certificate must include the subjectAltName extension showing *.apps.<clustername>.<domain> . The certificate file can contain one or more certificates in a chain. The wildcard certificate must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate. Copy the root CA certificate into an additional PEM format file. Verify that all certificates which include -----END CERTIFICATE----- also end with one carriage return after that line. Procedure Create a config map that includes only the root CA certificate used to sign the wildcard certificate: USD oc create configmap custom-ca \ --from-file=ca-bundle.crt=</path/to/example-ca.crt> \ 1 -n openshift-config 1 </path/to/example-ca.crt> is the path to the root CA certificate file on your local file system. Update the cluster-wide proxy configuration with the newly created config map: USD oc patch proxy/cluster \ --type=merge \ --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}' Create a secret that contains the wildcard certificate chain and key: USD oc create secret tls <secret> \ 1 --cert=</path/to/cert.crt> \ 2 --key=</path/to/cert.key> \ 3 -n openshift-ingress 1 <secret> is the name of the secret that will contain the certificate chain and private key. 2 </path/to/cert.crt> is the path to the certificate chain on your local file system. 3 </path/to/cert.key> is the path to the private key associated with this certificate. Update the Ingress Controller configuration with the newly created secret: USD oc patch ingresscontroller.operator default \ --type=merge -p \ '{"spec":{"defaultCertificate": {"name": "<secret>"}}}' \ 1 -n openshift-ingress-operator 1 Replace <secret> with the name used for the secret in the step. Important To trigger the Ingress Operator to perform a rolling update, you must update the name of the secret. Because the kubelet automatically propagates changes to the secret in the volume mount, updating the secret contents does not trigger a rolling update. 
For more information, see this Red Hat Knowledgebase Solution . Additional resources Replacing the CA Bundle certificate Proxy certificate customization 3.2. Adding API server certificates The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. Clients outside of the cluster will not be able to verify the API server's certificate by default. This certificate can be replaced by one that is issued by a CA that clients trust. Note In hosted control plane clusters, you cannot replace self-signed certificates from the API. 3.2.1. Add an API server named certificate The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. You can add one or more alternative certificates that the API server will return based on the fully qualified domain name (FQDN) requested by the client, for example when a reverse proxy or load balancer is used. Prerequisites You must have a certificate for the FQDN and its corresponding private key. Each should be in a separate PEM format file. The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform. The certificate must include the subjectAltName extension showing the FQDN. The certificate file can contain one or more certificates in a chain. The certificate for the API server FQDN must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate. Warning Do not provide a named certificate for the internal load balancer (host name api-int.<cluster_name>.<base_domain> ). Doing so will leave your cluster in a degraded state. Procedure Login to the new API as the kubeadmin user. USD oc login -u kubeadmin -p <password> https://FQDN:6443 Get the kubeconfig file. USD oc config view --flatten > kubeconfig-newapi Create a secret that contains the certificate chain and private key in the openshift-config namespace. USD oc create secret tls <secret> \ 1 --cert=</path/to/cert.crt> \ 2 --key=</path/to/cert.key> \ 3 -n openshift-config 1 <secret> is the name of the secret that will contain the certificate chain and private key. 2 </path/to/cert.crt> is the path to the certificate chain on your local file system. 3 </path/to/cert.key> is the path to the private key associated with this certificate. Update the API server to reference the created secret. USD oc patch apiserver cluster \ --type=merge -p \ '{"spec":{"servingCerts": {"namedCertificates": [{"names": ["<FQDN>"], 1 "servingCertificate": {"name": "<secret>"}}]}}}' 2 1 Replace <FQDN> with the FQDN that the API server should provide the certificate for. Do not include the port number. 2 Replace <secret> with the name used for the secret in the step. Examine the apiserver/cluster object and confirm the secret is now referenced. USD oc get apiserver cluster -o yaml Example output ... spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret> ... Check the kube-apiserver operator, and verify that a new revision of the Kubernetes API server rolls out. It may take a minute for the operator to detect the configuration change and trigger a new deployment. While the new revision is rolling out, PROGRESSING will report True . 
USD oc get clusteroperators kube-apiserver Do not continue to the step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.16.0 True False False 145m If PROGRESSING is showing True , wait a few minutes and try again. Note A new revision of the Kubernetes API server only rolls out if the API server named certificate is added for the first time. When the API server named certificate is renewed, a new revision of the Kubernetes API server does not roll out because the kube-apiserver pods dynamically reload the updated certificate. 3.3. Securing service traffic using service serving certificate secrets 3.3.1. Understanding service serving certificates Service serving certificates are intended to support complex middleware applications that require encryption. These certificates are issued as TLS web server certificates. The service-ca controller uses the x509.SHA256WithRSA signature algorithm to generate service certificates. The generated certificate and key are in PEM format, stored in tls.crt and tls.key respectively, within a created secret. The certificate and key are automatically replaced when they get close to expiration. The service CA certificate, which issues the service certificates, is valid for 26 months and is automatically rotated when there is less than 13 months validity left. After rotation, the service CA configuration is still trusted until its expiration. This allows a grace period for all affected services to refresh their key material before the expiration. If you do not upgrade your cluster during this grace period, which restarts services and refreshes their key material, you might need to manually restart services to avoid failures after the service CA expires. Note You can use the following command to manually restart all pods in the cluster. Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted. USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done 3.3.2. Add a service certificate To secure communication to your service, generate a signed serving certificate and key pair into a secret in the same namespace as the service. The generated certificate is only valid for the internal service DNS name <service.name>.<service.namespace>.svc , and is only valid for internal communications. If your service is a headless service (no clusterIP value set), the generated certificate also contains a wildcard subject in the format of *.<service.name>.<service.namespace>.svc . Important Because the generated certificates contain wildcard subjects for headless services, you must not use the service CA if your client must differentiate between individual pods. In this case: Generate individual TLS certificates by using a different CA. Do not accept the service CA as a trusted CA for connections that are directed to individual pods and must not be impersonated by other pods. These connections must be configured to trust the CA that was used to generate the individual TLS certificates. Prerequisites You must have a service defined. 
Procedure Annotate the service with service.beta.openshift.io/serving-cert-secret-name : USD oc annotate service <service_name> \ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2 1 Replace <service_name> with the name of the service to secure. 2 <secret_name> will be the name of the generated secret containing the certificate and key pair. For convenience, it is recommended that this be the same as <service_name> . For example, use the following command to annotate the service test1 : USD oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1 Examine the service to confirm that the annotations are present: USD oc describe service <service_name> Example output ... Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837 ... After the cluster generates a secret for your service, your Pod spec can mount it, and the pod will run after it becomes available. Additional resources You can use a service certificate to configure a secure route using reencrypt TLS termination. For more information, see Creating a re-encrypt route with a custom certificate . 3.3.3. Add the service CA bundle to a config map A pod can access the service CA certificate by mounting a ConfigMap object that is annotated with service.beta.openshift.io/inject-cabundle=true . Once annotated, the cluster automatically injects the service CA certificate into the service-ca.crt key on the config map. Access to this CA certificate allows TLS clients to verify connections to services using service serving certificates. Important After adding this annotation to a config map all existing data in it is deleted. It is recommended to use a separate config map to contain the service-ca.crt , instead of using the same config map that stores your pod configuration. Procedure Annotate the config map with service.beta.openshift.io/inject-cabundle=true : USD oc annotate configmap <config_map_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <config_map_name> with the name of the config map to annotate. Note Explicitly referencing the service-ca.crt key in a volume mount will prevent a pod from starting until the config map has been injected with the CA bundle. This behavior can be overridden by setting the optional field to true for the volume's serving certificate configuration. For example, use the following command to annotate the config map test1 : USD oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true View the config map to ensure that the service CA bundle has been injected: USD oc get configmap <config_map_name> -o yaml The CA bundle is displayed as the value of the service-ca.crt key in the YAML output: apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE----- ... 3.3.4. Add the service CA bundle to an API service You can annotate an APIService object with service.beta.openshift.io/inject-cabundle=true to have its spec.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Procedure Annotate the API service with service.beta.openshift.io/inject-cabundle=true : USD oc annotate apiservice <api_service_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <api_service_name> with the name of the API service to annotate. 
For example, use the following command to annotate the API service test1 : USD oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true View the API service to ensure that the service CA bundle has been injected: USD oc get apiservice <api_service_name> -o yaml The CA bundle is displayed in the spec.caBundle field in the YAML output: apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... spec: caBundle: <CA_BUNDLE> ... 3.3.5. Add the service CA bundle to a custom resource definition You can annotate a CustomResourceDefinition (CRD) object with service.beta.openshift.io/inject-cabundle=true to have its spec.conversion.webhook.clientConfig.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note The service CA bundle will only be injected into the CRD if the CRD is configured to use a webhook for conversion. It is only useful to inject the service CA bundle if a CRD's webhook is secured with a service CA certificate. Procedure Annotate the CRD with service.beta.openshift.io/inject-cabundle=true : USD oc annotate crd <crd_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <crd_name> with the name of the CRD to annotate. For example, use the following command to annotate the CRD test1 : USD oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true View the CRD to ensure that the service CA bundle has been injected: USD oc get crd <crd_name> -o yaml The CA bundle is displayed in the spec.conversion.webhook.clientConfig.caBundle field in the YAML output: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE> ... 3.3.6. Add the service CA bundle to a mutating webhook configuration You can annotate a MutatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks. Procedure Annotate the mutating webhook configuration with service.beta.openshift.io/inject-cabundle=true : USD oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <mutating_webhook_name> with the name of the mutating webhook configuration to annotate. For example, use the following command to annotate the mutating webhook configuration test1 : USD oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true View the mutating webhook configuration to ensure that the service CA bundle has been injected: USD oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output: apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE> ... 
3.3.7. Add the service CA bundle to a validating webhook configuration You can annotate a ValidatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks. Procedure Annotate the validating webhook configuration with service.beta.openshift.io/inject-cabundle=true : USD oc annotate validatingwebhookconfigurations <validating_webhook_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <validating_webhook_name> with the name of the validating webhook configuration to annotate. For example, use the following command to annotate the validating webhook configuration test1 : USD oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true View the validating webhook configuration to ensure that the service CA bundle has been injected: USD oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output: apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE> ... 3.3.8. Manually rotate the generated service certificate You can rotate the service certificate by deleting the associated secret. Deleting the secret results in a new one being automatically created, resulting in a new certificate. Prerequisites A secret containing the certificate and key pair must have been generated for the service. Procedure Examine the service to determine the secret containing the certificate. This is found in the serving-cert-secret-name annotation, as seen below. USD oc describe service <service_name> Example output ... service.beta.openshift.io/serving-cert-secret-name: <secret> ... Delete the generated secret for the service. This process will automatically recreate the secret. USD oc delete secret <secret> 1 1 Replace <secret> with the name of the secret from the step. Confirm that the certificate has been recreated by obtaining the new secret and examining the AGE . USD oc get secret <service_name> Example output NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s 3.3.9. Manually rotate the service CA certificate The service CA is valid for 26 months and is automatically refreshed when there is less than 13 months validity left. If necessary, you can manually refresh the service CA by using the following procedure. Warning A manually-rotated service CA does not maintain trust with the service CA. You might experience a temporary service disruption until the pods in the cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA. Prerequisites You must be logged in as a cluster admin. Procedure View the expiration date of the current service CA certificate by using the following command. USD oc get secrets/signing-key -n openshift-service-ca \ -o template='{{index .data "tls.crt"}}' \ | base64 --decode \ | openssl x509 -noout -enddate Manually rotate the service CA. 
This process generates a new service CA which will be used to sign the new service certificates. USD oc delete secret/signing-key -n openshift-service-ca To apply the new certificates to all services, restart all the pods in your cluster. This command ensures that all services use the updated certificates. USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done Warning This command will cause a service interruption, as it goes through and deletes every running pod in every namespace. These pods will automatically restart after they are deleted. 3.4. Updating the CA bundle 3.4.1. Understanding the CA Bundle certificate Proxy certificates allow users to specify one or more custom certificate authority (CA) used by platform components when making egress connections. The trustedCA field of the Proxy object is a reference to a config map that contains a user-provided trusted certificate authority (CA) bundle. This bundle is merged with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle and injected into the trust store of platform components that make egress HTTPS calls. For example, image-registry-operator calls an external image registry to download images. If trustedCA is not specified, only the RHCOS trust bundle is used for proxied HTTPS connections. Provide custom CA certificates to the RHCOS trust bundle if you want to use your own certificate infrastructure. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from required key ca-bundle.crt and copying it to a config map named trusted-ca-bundle in the openshift-config-managed namespace. The namespace for the config map referenced by trustedCA is openshift-config : apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- 3.4.2. Replacing the CA Bundle certificate Procedure Create a config map that includes the root CA certificate used to sign the wildcard certificate: USD oc create configmap custom-ca \ --from-file=ca-bundle.crt=</path/to/example-ca.crt> \ 1 -n openshift-config 1 </path/to/example-ca.crt> is the path to the CA certificate bundle on your local file system. Update the cluster-wide proxy configuration with the newly created config map: USD oc patch proxy/cluster \ --type=merge \ --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}' Additional resources Replacing the default ingress certificate Enabling the cluster-wide proxy Proxy certificate customization
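The serving certificate secret and the injected service CA bundle described in sections 3.3.2 and 3.3.3 are usually consumed by mounting them into the pod that backs the service. The following sketch shows one possible layout; it reuses the test1 names from the examples above, and the container image and mount paths are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: test1
spec:
  containers:
  - name: server
    image: quay.io/example/server:latest           # illustrative image
    volumeMounts:
    - name: serving-cert
      mountPath: /etc/pki/tls/serving              # tls.crt and tls.key generated for the service
      readOnly: true
    - name: service-ca
      mountPath: /etc/pki/tls/service-ca           # service-ca.crt injected into the config map
      readOnly: true
  volumes:
  - name: serving-cert
    secret:
      secretName: test1                            # secret named by the serving-cert-secret-name annotation
  - name: service-ca
    configMap:
      name: test1                                  # config map annotated with service.beta.openshift.io/inject-cabundle=true
      items:
      - key: service-ca.crt
        path: service-ca.crt

As noted in section 3.3.3, explicitly referencing the service-ca.crt key keeps the pod from starting until the bundle has been injected, unless the volume is marked optional.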
[ "oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config", "oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'", "oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-ingress", "oc patch ingresscontroller.operator default --type=merge -p '{\"spec\":{\"defaultCertificate\": {\"name\": \"<secret>\"}}}' \\ 1 -n openshift-ingress-operator", "oc login -u kubeadmin -p <password> https://FQDN:6443", "oc config view --flatten > kubeconfig-newapi", "oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-config", "oc patch apiserver cluster --type=merge -p '{\"spec\":{\"servingCerts\": {\"namedCertificates\": [{\"names\": [\"<FQDN>\"], 1 \"servingCertificate\": {\"name\": \"<secret>\"}}]}}}' 2", "oc get apiserver cluster -o yaml", "spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret>", "oc get clusteroperators kube-apiserver", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.16.0 True False False 145m", "for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done", "oc annotate service <service_name> \\ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2", "oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1", "oc describe service <service_name>", "Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837", "oc annotate configmap <config_map_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true", "oc get configmap <config_map_name> -o yaml", "apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE-----", "oc annotate apiservice <api_service_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true", "oc get apiservice <api_service_name> -o yaml", "apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: caBundle: <CA_BUNDLE>", "oc annotate crd <crd_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true", "oc get crd <crd_name> -o yaml", "apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE>", "oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true", "oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml", "apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>", "oc annotate validatingwebhookconfigurations <validating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate validatingwebhookconfigurations test1 
service.beta.openshift.io/inject-cabundle=true", "oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml", "apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>", "oc describe service <service_name>", "service.beta.openshift.io/serving-cert-secret-name: <secret>", "oc delete secret <secret> 1", "oc get secret <service_name>", "NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s", "oc get secrets/signing-key -n openshift-service-ca -o template='{{index .data \"tls.crt\"}}' | base64 --decode | openssl x509 -noout -enddate", "oc delete secret/signing-key -n openshift-service-ca", "for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done", "apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----", "oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config", "oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_and_compliance/configuring-certificates
28.4.9. Configuring Automatic Reporting
28.4.9. Configuring Automatic Reporting ABRT can be configured to use μReports . This additional type of bug report has these advantages: Once enabled, μReports are sent automatically , without user interaction. In contrast, the normal reports are not sent until manually triggered by the user. μReports are anonymous and do not contain sensitive information . This eliminates the risk that unwanted data will be submitted automatically. A μReport represents the detected problem as a JSON object. Therefore, it is machine-readable and can be created and processed automatically. μReports are smaller than full bug reports. μReports do not require downloading large amounts of debugging information. μReports serve several goals. They help to prevent duplicate customer cases that might get created because of multiple occurrences of the same bug. Additionally, μReports enable gathering statistics of bug occurrences and finding known bugs across different systems. Finally, if authenticated μReports are enabled as described at the end of this section, ABRT can automatically present instant solutions to the customers. However, μReports do not necessarily provide engineers with enough information to fix the bug, for which a full bug report may be necessary. A μReport generally contains the following information: a call stack trace of a program without any variables, or, in case of multi-threaded C, C++, and Java programs, multiple stack traces which operating system is used versions of the RPM packages involved in the crash whether the program ran under the root user for kernel oops, possibly information about host hardware Warning Do not enable μReports if you do not want to share information about your hardware with Red Hat. For μReport examples, see the Examples of μReports article. With μReports enabled, the following happens by default when a crash is detected: ABRT submits a μReport with basic information about the problem to Red Hat's ABRT server. The server determines whether the problem is already in the bug database. If it is, the server returns a short description of the problem along with a URL of the reported case. If not, the server invites the user to submit a full problem report. To enable μReports for all users, run as root : or add the following line to the /etc/abrt/abrt.conf file: User-specific configuration is located in the USDUSER/.config/abrt/ directory. It overrides the system-wide configuration. To apply the new configuration, restart the ABRT services by running: The default autoreporting behavior - sending μReports - can be changed. To do that, assign a different ABRT event to the AutoreportingEvent directive in the /etc/abrt/abrt.conf configuration file. See Section 28.4.2, "Standard ABRT Installation Supported Events" for an overview of the standard events. In Red Hat Enterprise Linux 7.1 and later, customers can also send authenticated μReports, which contain more information: hostname, machine-id (taken from the /etc/machine-id file), and RHN account number. The advantage of authenticated μReports is that they go directly to the Red Hat Customer Portal, and not only to Red Hat's private crash-report server, as the regular μReports do. This enables Red Hat to provide customers with instant solutions to crashes. To turn the authenticated automatic reporting on, run the following command as root : Replace RHN_username with your Red Hat Network username. This command will ask for your password and save it in plain text into the /etc/libreport/plugins/rhtsupport.conf file.
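A minimal command-line sketch of enabling and verifying autoreporting follows; it only uses the commands and the configuration file shown above, and assumes a system with the abrt packages installed. # Enable automatic μReport submission for all users (run as root)
abrt-auto-reporting enabled
# Verify the setting written to the system-wide configuration
grep -i '^AutoreportingEnabled' /etc/abrt/abrt.conf
# Restart the ABRT daemon so the new configuration takes effect
service abrtd restart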
[ "~]# abrt-auto-reporting enabled", "AutoreportingEnabled = yes", "~]# service abrtd restart", "~]# abrt-auto-reporting enabled -u RHN_username" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-abrt-configuring_microreports
5.128. kabi-whitelists
5.128. kabi-whitelists 5.128.1. RHEA-2012:0918 - kabi-whitelists enhancement update An updated kabi-whitelists package that adds various enhancements is now available for Red Hat Enterprise Linux 6. The kabi-whitelists package contains reference files documenting interfaces provided by the Red Hat Enterprise Linux 6 kernel that are considered to be stable by Red Hat engineering, and safe for longer term use by third-party loadable device drivers, as well as for other purposes. Enhancements BZ# 722619 Multiple symbols have been added to the Red Hat Enterprise Linux 6.3 kernel application binary interface (ABI) whitelists. BZ# 737276 Multiple symbols for Hitachi loadable device drivers have been added to the kernel ABI whitelists. BZ# 753771 This update modifies the structure of the kabi-whitelists package: whitelists are now ordered according to various Red Hat Enterprise Linux releases, and a symbolic link that points to the latest release has been added. BZ# 803885 The "__dec_zone_page_state" and "dec_zone_page_state" symbols have been added to the kernel ABI whitelists. BZ# 810456 The "blk_queue_rq_timed_out", "fc_attach_transport", "fc_release_transport", "fc_remote_port_add", "fc_remote_port_delete", "fc_remote_port_rolechg", "fc_remove_host", and "touch_nmi_watchdog" symbols have been added to the kernel ABI whitelists. BZ# 812463 Multiple symbols for Oracle Cloud File System have been added to the kernel ABI whitelists. BZ# 816533 The "get_fs_type" and "vscnprintf" symbols have been added to the kernel ABI whitelists. All users of kabi-whitelists are advised to upgrade to this updated package, which adds these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/kabi-whitelists
Chapter 71. KafkaClientAuthenticationTls schema reference
Chapter 71. KafkaClientAuthenticationTls schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationTls schema properties To configure mTLS authentication, set the type property to the value tls . mTLS uses a TLS certificate to authenticate. 71.1. certificateAndKey The certificate is specified in the certificateAndKey property and is always loaded from an OpenShift secret. In the secret, the certificate must be stored in X509 format under two different keys: public and private. You can use the secrets created by the User Operator, or you can create your own TLS certificate file, with the keys used for authentication, then create a Secret from the file: oc create secret generic MY-SECRET \ --from-file= MY-PUBLIC-TLS-CERTIFICATE-FILE.crt \ --from-file= MY-PRIVATE.key Note mTLS authentication can only be used with TLS connections. Example mTLS configuration authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public-tls-certificate-file.crt key: private.key 71.2. KafkaClientAuthenticationTls schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationTls type from KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth . It must have the value tls for the type KafkaClientAuthenticationTls . Property Description certificateAndKey Reference to the Secret which holds the certificate and private key pair. CertAndKeySecretSource type Must be tls . string
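As a quick sanity check, you can confirm that the data keys stored in the secret match the certificate and key values referenced in certificateAndKey. The following is only a sketch; the secret name and key names are the ones from the example above and may differ in your environment. # List the data keys stored in the secret; they must match .certificateAndKey.certificate and .key
oc get secret my-secret -o template='{{range $k, $v := .data}}{{$k}}{{"\n"}}{{end}}'
# Decode the public certificate and check its subject and expiry
oc get secret my-secret -o template='{{index .data "my-public-tls-certificate-file.crt"}}' \
  | base64 --decode | openssl x509 -noout -subject -enddate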
[ "create secret generic MY-SECRET --from-file= MY-PUBLIC-TLS-CERTIFICATE-FILE.crt --from-file= MY-PRIVATE.key", "authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public-tls-certificate-file.crt key: private.key" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaClientAuthenticationTls-reference
probe::sunrpc.svc.process
probe::sunrpc.svc.process Name probe::sunrpc.svc.process - Process an RPC request Synopsis sunrpc.svc.process Values rq_prog the program number in the request rq_vers the program version in the request peer_ip the peer address where the request is from rq_proc the procedure number in the request sv_prog the number of the program rq_prot the IP protocol of the request sv_name the service name rq_xid the transmission id in the request sv_nrthreads the number of concurrent threads
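A minimal SystemTap one-liner that exercises this probe point is sketched below; it assumes the systemtap package and the matching kernel debuginfo are installed, and simply prints one line for each RPC request the server processes. # Print the service name, program, version, procedure and XID for each processed RPC request
stap -e 'probe sunrpc.svc.process {
  printf("%s: prog=%d vers=%d proc=%d xid=%d\n", sv_name, rq_prog, rq_vers, rq_proc, rq_xid)
}'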
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sunrpc-svc-process
Chapter 27. CXF-RS
Chapter 27. CXF-RS Both producer and consumer are supported The CXFRS component provides integration with Apache CXF for connecting to JAX-RS 1.1 and 2.0 services hosted in CXF. 27.1. Dependencies When using camel-cxf-rest with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cxf-rest-starter</artifactId> </dependency> 27.2. URI format Where the address represents the address of the CXF endpoint. Where the rsEndpoint represents the name of the spring bean, which presents the CXFRS client or server. For the formats above, you can append options to the URI as follows: 27.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 27.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 27.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 27.4. Component Options The CXF-RS component supports 5 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean 27.5. Endpoint Options The CXF-RS endpoint is configured using URI syntax: With the following path and query parameters: 27.5.1. Path Parameters (2 parameters) Name Description Default Type beanId (common) To lookup an existing configured CxfRsEndpoint. Must use bean: as prefix. String address (common) The service publish address. String 27.5.2. Query Parameters (31 parameters) Name Description Default Type features (common) Set the feature list to the CxfRs endpoint. List loggingFeatureEnabled (common) This option enables CXF Logging Feature which writes inbound and outbound REST messages to log. false boolean loggingSizeLimit (common) To limit the total size of number of bytes the logger will output when logging feature has been enabled. int modelRef (common) This option is used to specify the model file which is useful for the resource class without annotation. When using this option, then the service class can be omitted, to emulate document-only endpoints. String providers (common) Set custom JAX-RS provider(s) list to the CxfRs endpoint. You can specify a string with a list of providers to lookup in the registy separated by comma. String resourceClasses (common) The resource classes which you want to export as REST service. Multiple classes can be separated by comma. List schemaLocations (common) Sets the locations of the schema(s) which can be used to validate the incoming XML or JAXB-driven JSON. List skipFaultLogging (common) This option controls whether the PhaseInterceptorChain skips logging the Fault that it catches. false boolean bindingStyle (consumer) Sets how requests and responses will be mapped to/from Camel. Two values are possible: SimpleConsumer: This binding style processes request parameters, multiparts, etc. and maps them to IN headers, IN attachments and to the message body. It aims to eliminate low-level processing of org.apache.cxf.message.MessageContentsList. It also also adds more flexibility and simplicity to the response mapping. Only available for consumers. Default: The default style. For consumers this passes on a MessageContentsList to the route, requiring low-level processing in the route. This is the traditional binding style, which simply dumps the org.apache.cxf.message.MessageContentsList coming in from the CXF stack onto the IN message body. The user is then responsible for processing it according to the contract defined by the JAX-RS method signature. Custom: allows you to specify a custom binding through the binding option. 
Enum values: SimpleConsumer Default Custom Default BindingStyle publishedEndpointUrl (consumer) This option can override the endpointUrl that published from the WADL which can be accessed with resource address url plus _wadl. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern serviceBeans (consumer (advanced)) The service beans (the bean ids to lookup in the registry) which you want to export as REST service. Multiple beans can be separated by comma. String cookieHandler (producer) Configure a cookie handler to maintain a HTTP session. CookieHandler hostnameVerifier (producer) The hostname verifier to be used. Use the # notation to reference a HostnameVerifier from the registry. HostnameVerifier sslContextParameters (producer) The Camel SSL setting reference. Use the # notation to reference the SSL Context. SSLContextParameters throwExceptionOnFailure (producer) This option tells the CxfRsProducer to inspect return codes and will generate an Exception if the return code is larger than 207. true boolean httpClientAPI (producer (advanced)) If it is true, the CxfRsProducer will use the HttpClientAPI to invoke the service. If it is false, the CxfRsProducer will use the ProxyClientAPI to invoke the service. true boolean ignoreDeleteMethodMessageBody (producer (advanced)) This option is used to tell CxfRsProducer to ignore the message body of the DELETE method when using HTTP API. false boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean maxClientCacheSize (producer (advanced)) This option allows you to configure the maximum size of the cache. The implementation caches CXF clients or ClientFactoryBean in CxfProvider and CxfRsProvider. 
10 int synchronous (producer (advanced)) Sets whether synchronous processing should be strictly used. false boolean binding (advanced) To use a custom CxfBinding to control the binding between Camel Message and CXF Message. CxfRsBinding bus (advanced) To use a custom configured CXF Bus. Bus continuationTimeout (advanced) This option is used to set the CXF continuation timeout which could be used in CxfConsumer by default when the CXF server is using Jetty or Servlet transport. 30000 long cxfRsConfigurer (advanced) This option could apply the implementation of org.apache.camel.component.cxf.jaxrs.CxfRsEndpointConfigurer which supports to configure the CXF endpoint in programmatic way. User can configure the CXF server and client by implementing configure\\{Server/Client} method of CxfEndpointConfigurer. CxfRsConfigurer defaultBus (advanced) Will set the default bus when CXF endpoint create a bus by itself. false boolean headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy performInvocation (advanced) When the option is true, Camel will perform the invocation of the resource class instance and put the response object into the exchange for further processing. false boolean propagateContexts (advanced) When the option is true, JAXRS UriInfo, HttpHeaders, Request and SecurityContext contexts will be available to custom CXFRS processors as typed Camel exchange properties. These contexts can be used to analyze the current requests using JAX-RS API. false boolean 27.6. Message Headers The CXF-RS component supports 16 message header(s), which is/are listed below: Name Description Default Type operationName (common) Constant: OPERATION_NAME The name of the operation. String CamelAuthentication (common) Constant: AUTHENTICATION The authentication. Subject CamelHttpMethod (common) Constant: HTTP_METHOD The http method to use. String CamelHttpPath (common) Constant: HTTP_PATH The http path. String Content-Type (common) Constant: CONTENT_TYPE The content type. String CamelHttpQuery (common) Constant: HTTP_QUERY The http query. String CamelHttpResponseCode (common) Constant: HTTP_RESPONSE_CODE The http response code. Integer Content-Encoding (common) Constant: CONTENT_ENCODING The content encoding. String org.apache.cxf.message.Message.PROTOCOL_HEADERS (common) Constant: PROTOCOL_HEADERS The protocol headers. Map CamelCxfMessage (common) Constant: CAMEL_CXF_MESSAGE The CXF message. Message CamelCxfRsUsingHttpAPI (common) Constant: CAMEL_CXF_RS_USING_HTTP_API If it is true, the CxfRsProducer will use the HttpClientAPI to invoke the service. If it is false, the CxfRsProducer will use the ProxyClientAPI to invoke the service. Boolean CamelCxfRsVarValues (common) Constant: CAMEL_CXF_RS_VAR_VALUES The path values. Object[] CamelCxfRsResponseClass (common) Constant: CAMEL_CXF_RS_RESPONSE_CLASS The response class. Class CamelCxfRsResponseGenericType (common) Constant: CAMEL_CXF_RS_RESPONSE_GENERIC_TYPE The response generic type. Type CamelCxfRsQueryMap (common) Constant: CAMEL_CXF_RS_QUERY_MAP The query map. Map CamelCxfRsOperationResourceInfoStack (common) Constant: CAMEL_CXF_RS_OPERATION_RESOURCE_INFO_STACK The stack of MethodInvocationInfo representing resources path when JAX-RS invocation looks for target. OperationResourceInfoStack You can also configure the CXF REST endpoint through the spring configuration. 
Note Avoid creating your own HTTP server instances in a Camel Spring Boot application by using either of the cxf-rt-transports-jetty , cxf-rt-transports-netty-server , cxf-rt-transports-undertow libraries. It is recommended to use the Spring Boot embedded HTTP server stack which is created when you use the spring-boot-starter-web , cxf-spring-boot-starter-jaxrs , or cxf-spring-boot-starter-jaxws dependencies. Note Since there are lots of differences between the CXF REST client and CXF REST Server, we provide different configuration for them. + Please check the following files for more details: the schema file . CXF JAX-RS documentation . 27.7. How to configure the REST endpoint in Camel In the camel-cxf schema file , there are two elements for the REST endpoint definition: cxf:rsServer for REST consumer cxf:rsClient for REST producer. You can find a Camel REST service route configuration example there. 27.8. How to override the CXF producer address from message header The camel-cxfrs producer supports overriding the service address by setting the message with the key of CamelDestinationOverrideUrl . // set up the service address from the message header to override the setting of CXF endpoint exchange.getIn().setHeader(Exchange.DESTINATION_OVERRIDE_URL, constant(getServiceAddress())); 27.9. Consuming a REST Request - Simple Binding Style Since Camel 2.11 The Default binding style is rather low-level, requiring the user to manually process the MessageContentsList object coming into the route. Thus, it tightly couples the route logic with the method signature and parameter indices of the JAX-RS operation which is somewhat inelegant, difficult and error-prone. In contrast, the SimpleConsumer binding style performs the following mappings, to make the request data more accessible to you within the Camel Message: JAX-RS Parameters ( @HeaderParam , @QueryParam , etc.) are injected as IN message headers. The header name matches the value of the annotation. The request entity (POJO or another type) becomes the IN message body. If a single entity cannot be identified in the JAX-RS method signature, it falls back to the original MessageContentsList . Binary @Multipart body parts become IN message attachments, supporting DataHandler , InputStream , DataSource and CXF's Attachment class. Non-binary @Multipart body parts are mapped as IN message headers. The header name matches the Body Part name. Additionally, the following rules apply to the Response mapping : If the message body type is different to javax.ws.rs.core.Response (user-built response), a new Response is created and the message body is set as the entity (so long it's not null). The response status code is taken from the Exchange.HTTP_RESPONSE_CODE header, or defaults to 200 OK if not present. If the message body type is equal to javax.ws.rs.core.Response , it means that the user has built a custom response, and therefore it is respected, and it becomes the final response. In all cases, Camel headers permitted by custom or default HeaderFilterStrategy are added to the HTTP response. 27.9.1. Enabling the Simple Binding Style This binding style can be activated by setting the bindingStyle parameter in the consumer endpoint to value SimpleConsumer : from("cxfrs:bean:rsServer?bindingStyle=SimpleConsumer") .to("log:TEST?showAll=true"); 27.9.2. 
Examples of request binding with different method signatures Below is a list of method signatures along with the expected result from the simple binding: public Response doAction(BusinessObject request); : the request payload is placed in the IN message body, replacing the original MessageContentsList. public Response doAction(BusinessObject request, @HeaderParam("abcd") String abcd, @QueryParam("defg") String defg); : the request payload is placed in the IN message body, replacing the original MessageContentsList . Both request parameters are mapped as IN message headers with names "abcd" and "defg" . public Response doAction(@HeaderParam("abcd") String abcd, @QueryParam("defg") String defg); : both request parameters are mapped as the IN message headers with names "abcd" and "defg" . The original MessageContentsList is preserved, even though it only contains the two parameters. public Response doAction(@Multipart(value="body1") BusinessObject request, @Multipart(value="body2") BusinessObject request2); : the first parameter is transferred as a header with name "body1" , and the second one is mapped as header "body2" . The original MessageContentsList is preserved as the IN message body. public Response doAction(InputStream abcd); : the InputStream is unwrapped from the MessageContentsList and preserved as the IN message body. public Response doAction(DataHandler abcd); : the DataHandler is unwrapped from the MessageContentsList and preserved as the IN message body. 27.9.3. Examples of the Simple Binding Style Given a JAX-RS resource class with this method: @POST @Path("/customers/{type}") public Response newCustomer(Customer customer, @PathParam("type") String type, @QueryParam("active") @DefaultValue("true") boolean active) { return null; } Serviced by the following route: from("cxfrs:bean:rsServer?bindingStyle=SimpleConsumer") .recipientList(simple("direct:USD{header.operationName}")); from("direct:newCustomer") .log("Request: type=USD{header.type}, active=USD{header.active}, customerData=USD{body}"); The following HTTP request with XML payload (given that the Customer DTO is JAXB-annotated): Will print the message: NOTE More examples on how to process requests and write responses can be found here . 27.10. Consuming a REST Request - Default Binding Style The CXF JAXRS front end implements the JAX-RS (JSR-311) API , so we can export the resource classes as a REST service. We leverage the CXF Invoker API to turn a REST request into a normal Java object method invocation. There is no need to specify the URI template within your endpoint. The CXF takes care of the REST request URI to resource class method mapping according to the JSR-311 specification. All you need to do in Camel is delegate this method request to the right processor or endpoint. CXFRS route example private static final String CXF_RS_ENDPOINT_URI = "cxfrs://http://localhost:" + CXT + "/rest?resourceClasses=org.apache.camel.component.cxf.jaxrs.testbean.CustomerServiceResource"; private static final String CXF_RS_ENDPOINT_URI2 = "cxfrs://http://localhost:" + CXT + "/rest2?resourceClasses=org.apache.camel.component.cxf.jaxrs.testbean.CustomerService"; private static final String CXF_RS_ENDPOINT_URI3 = "cxfrs://http://localhost:" + CXT + "/rest3?" + "resourceClasses=org.apache.camel.component.cxf.jaxrs.testbean.CustomerServiceNoAnnotations&" + "modelRef=classpath:/org/apache/camel/component/cxf/jaxrs/CustomerServiceModel.xml"; private static final String CXF_RS_ENDPOINT_URI4 = "cxfrs://http://localhost:" + CXT + "/rest4?"
+ "modelRef=classpath:/org/apache/camel/component/cxf/jaxrs/CustomerServiceDefaultHandlerModel.xml"; private static final String CXF_RS_ENDPOINT_URI5 = "cxfrs://http://localhost:" + CXT + "/rest5?" + "propagateContexts=true&" + "modelRef=classpath:/org/apache/camel/component/cxf/jaxrs/CustomerServiceDefaultHandlerModel.xml"; protected RouteBuilder createRouteBuilder() throws Exception { final Processor testProcessor = new TestProcessor(); final Processor testProcessor2 = new TestProcessor2(); final Processor testProcessor3 = new TestProcessor3(); return new RouteBuilder() { public void configure() { errorHandler(new NoErrorHandlerBuilder()); from(CXF_RS_ENDPOINT_URI).process(testProcessor); from(CXF_RS_ENDPOINT_URI2).process(testProcessor); from(CXF_RS_ENDPOINT_URI3).process(testProcessor); from(CXF_RS_ENDPOINT_URI4).process(testProcessor2); from(CXF_RS_ENDPOINT_URI5).process(testProcessor3); } }; } And the corresponding resource class is used to configure the endpoint. NOTE By default, JAX-RS resource classes are only used to configure JAX-RS properties. Methods will not be executed during routing of messages to the endpoint. Instead, it is the responsibility of the route to do all processing. It is sufficient to provide an interface only as opposed to a no-op service implementation class for the default mode. If a performInvocation option is enabled, the service implementation will be invoked first, the response will be set on the Camel exchange, and the route execution will continue as usual. This can be useful for integrating the existing JAX-RS implementations into Camel routes and for post-processing JAX-RS Responses in custom processors. @Path("/customerservice/") public interface CustomerServiceResource { @GET @Path("/customers/{id}/") Customer getCustomer(@PathParam("id") String id); @PUT @Path("/customers/") Response updateCustomer(Customer customer); @Path("/{id}") @PUT() @Consumes({ "application/xml", "text/plain", "application/json" }) @Produces({ "application/xml", "text/plain", "application/json" }) Object invoke(@PathParam("id") String id, String payload); } 27.11. How to invoke the REST service through camel-cxfrs producer The CXF JAXRS front end implements a proxy-based client API . With this API you can invoke the remote REST service through a proxy. The camel-cxfrs producer is based on this proxy API . You can specify the operation name in the message header and prepare the parameter in the message body, the camel-cxfrs producer will generate the right REST request for you. 
Example Exchange exchange = template.send("direct://proxy", new Processor() { public void process(Exchange exchange) throws Exception { exchange.setPattern(ExchangePattern.InOut); Message inMessage = exchange.getIn(); // set the operation name inMessage.setHeader(CxfConstants.OPERATION_NAME, "getCustomer"); // using the proxy client API inMessage.setHeader(CxfConstants.CAMEL_CXF_RS_USING_HTTP_API, Boolean.FALSE); // set a customer header inMessage.setHeader("key", "value"); // set up the accepted content type inMessage.setHeader(Exchange.ACCEPT_CONTENT_TYPE, "application/json"); // set the parameters, if you just have one parameter, // camel will put this object into an Object[] itself inMessage.setBody("123"); } }); // get the response message Customer response = (Customer) exchange.getMessage().getBody(); assertNotNull(response, "The response should not be null"); assertEquals(123, response.getId(), "Get a wrong customer id"); assertEquals("John", response.getName(), "Get a wrong customer name"); assertEquals(200, exchange.getMessage().getHeader(Exchange.HTTP_RESPONSE_CODE), "Get a wrong response code"); assertEquals("value", exchange.getMessage().getHeader("key"), "Get a wrong header value"); The CXF JAXRS front end also provides an HTTP centric client API . You can also invoke this API from camel-cxfrs producer. You need to specify the HTTP_PATH and the HTTP_METHOD and let the producer use the http centric client API by using the URI option httpClientAPI or by setting the message header CxfConstants.CAMEL_CXF_RS_USING_HTTP_API . You can turn the response object to the type class specified with the message header CxfConstants.CAMEL_CXF_RS_RESPONSE_CLASS . Exchange exchange = template.send("direct://http", new Processor() { public void process(Exchange exchange) throws Exception { exchange.setPattern(ExchangePattern.InOut) Message inMessage = exchange.getIn(); // using the http central client API inMessage.setHeader(CxfConstants.CAMEL_CXF_RS_USING_HTTP_API, Boolean.TRUE); // set the Http method inMessage.setHeader(Exchange.HTTP_METHOD, "GET"); // set the relative path inMessage.setHeader(Exchange.HTTP_PATH, "/customerservice/customers/123"); // Specify the response class, cxfrs will use InputStream as the response object type inMessage.setHeader(CxfConstants.CAMEL_CXF_RS_RESPONSE_CLASS, Customer.class); // set a customer header inMessage.setHeader("key", "value"); // since we use the Get method, so we don't need to set the message body inMessage.setBody(null); } }); You can also specify the query parameters from cxfrs URI for the CXFRS http centric client. Exchange exchange = template.send("cxfrs://http://localhost:9003/testQuery?httpClientAPI=true&q1=12&q2=13" To support the Dynamical routing, you can override the URI's query parameters by using the CxfConstants.CAMEL_CXF_RS_QUERY_MAP header to set the parameter map for it. Map<String, String> queryMap = new LinkedHashMap<>(); queryMap.put("q1", "new"); queryMap.put("q2", "world"); inMessage.setHeader(CxfConstants.CAMEL_CXF_RS_QUERY_MAP, queryMap); 27.12. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.cxfrs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.cxfrs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions (if possible) occurred while the Camel consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. Important: This is only possible if the 3rd party component allows Camel to be alerted if an exception was thrown. Some components handle this internally only, and therefore bridgeErrorHandler is not possible. In other situations we may improve the Camel component to hook into the 3rd party component and make this possible for future releases. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.cxfrs.enabled Whether to enable auto configuration of the cxfrs component. This is enabled by default. Boolean camel.component.cxfrs.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.cxfrs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.cxfrs.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean
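Beyond configuration, it can help to see a consumer endpoint exercised end to end. The curl sketch below targets the CustomerServiceResource paths shown earlier; the host, port, the /rest context path, and the XML element names are assumptions drawn from the examples above and will differ in a real deployment. # Retrieve customer 123 through the CXFRS consumer (maps to getCustomer in the resource class)
curl -s -H "Accept: application/json" http://localhost:9000/rest/customerservice/customers/123
# Update a customer with an XML payload (maps to updateCustomer); element names are illustrative
curl -s -X PUT -H "Content-Type: application/xml" \
  -d '<Customer><id>123</id><name>John</name></Customer>' \
  http://localhost:9000/rest/customerservice/customers/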
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cxf-rest-starter</artifactId> </dependency>", "cxfrs://address?options", "cxfrs:bean:rsEndpoint", "cxfrs:bean:cxfEndpoint?resourceClasses=org.apache.camel.rs.Example", "cxfrs:beanId:address", "// set up the service address from the message header to override the setting of CXF endpoint exchange.getIn().setHeader(Exchange.DESTINATION_OVERRIDE_URL, constant(getServiceAddress()));", "from(\"cxfrs:bean:rsServer?bindingStyle=SimpleConsumer\") .to(\"log:TEST?showAll=true\");", "@POST @Path(\"/customers/{type}\") public Response newCustomer(Customer customer, @PathParam(\"type\") String type, @QueryParam(\"active\") @DefaultValue(\"true\") boolean active) { return null; }", "from(\"cxfrs:bean:rsServer?bindingStyle=SimpleConsumer\") .recipientList(simple(\"direct:USD{header.operationName}\")); from(\"direct:newCustomer\") .log(\"Request: type=USD{header.type}, active=USD{header.active}, customerData=USD{body}\");", "POST /customers/gold?active=true Payload: <Customer> <fullName>Raul Kripalani</fullName> <country>Spain</country> <project>Apache Camel</project> </Customer>", "Request: type=gold, active=true, customerData=<Customer.toString() representation>", "private static final String CXF_RS_ENDPOINT_URI = \"cxfrs://http://localhost:\" + CXT + \"/rest?resourceClasses=org.apache.camel.component.cxf.jaxrs.testbean.CustomerServiceResource\"; private static final String CXF_RS_ENDPOINT_URI2 = \"cxfrs://http://localhost:\" + CXT + \"/rest2?resourceClasses=org.apache.camel.component.cxf.jaxrs.testbean.CustomerService\"; private static final String CXF_RS_ENDPOINT_URI3 = \"cxfrs://http://localhost:\" + CXT + \"/rest3?\" + \"resourceClasses=org.apache.camel.component.cxf.jaxrs.testbean.CustomerServiceNoAnnotations&\" + \"modelRef=classpath:/org/apache/camel/component/cxf/jaxrs/CustomerServiceModel.xml\"; private static final String CXF_RS_ENDPOINT_URI4 = \"cxfrs://http://localhost:\" + CXT + \"/rest4?\" + \"modelRef=classpath:/org/apache/camel/component/cxf/jaxrs/CustomerServiceDefaultHandlerModel.xml\"; private static final String CXF_RS_ENDPOINT_URI5 = \"cxfrs://http://localhost:\" + CXT + \"/rest5?\" + \"propagateContexts=true&\" + \"modelRef=classpath:/org/apache/camel/component/cxf/jaxrs/CustomerServiceDefaultHandlerModel.xml\"; protected RouteBuilder createRouteBuilder() throws Exception { final Processor testProcessor = new TestProcessor(); final Processor testProcessor2 = new TestProcessor2(); final Processor testProcessor3 = new TestProcessor3(); return new RouteBuilder() { public void configure() { errorHandler(new NoErrorHandlerBuilder()); from(CXF_RS_ENDPOINT_URI).process(testProcessor); from(CXF_RS_ENDPOINT_URI2).process(testProcessor); from(CXF_RS_ENDPOINT_URI3).process(testProcessor); from(CXF_RS_ENDPOINT_URI4).process(testProcessor2); from(CXF_RS_ENDPOINT_URI5).process(testProcessor3); } }; }", "@Path(\"/customerservice/\") public interface CustomerServiceResource { @GET @Path(\"/customers/{id}/\") Customer getCustomer(@PathParam(\"id\") String id); @PUT @Path(\"/customers/\") Response updateCustomer(Customer customer); @Path(\"/{id}\") @PUT() @Consumes({ \"application/xml\", \"text/plain\", \"application/json\" }) @Produces({ \"application/xml\", \"text/plain\", \"application/json\" }) Object invoke(@PathParam(\"id\") String id, String payload); }", "Exchange exchange = template.send(\"direct://proxy\", new Processor() { public void process(Exchange exchange) throws Exception { 
exchange.setPattern(ExchangePattern.InOut); Message inMessage = exchange.getIn(); // set the operation name inMessage.setHeader(CxfConstants.OPERATION_NAME, \"getCustomer\"); // using the proxy client API inMessage.setHeader(CxfConstants.CAMEL_CXF_RS_USING_HTTP_API, Boolean.FALSE); // set a customer header inMessage.setHeader(\"key\", \"value\"); // set up the accepted content type inMessage.setHeader(Exchange.ACCEPT_CONTENT_TYPE, \"application/json\"); // set the parameters, if you just have one parameter, // camel will put this object into an Object[] itself inMessage.setBody(\"123\"); } }); // get the response message Customer response = (Customer) exchange.getMessage().getBody(); assertNotNull(response, \"The response should not be null\"); assertEquals(123, response.getId(), \"Get a wrong customer id\"); assertEquals(\"John\", response.getName(), \"Get a wrong customer name\"); assertEquals(200, exchange.getMessage().getHeader(Exchange.HTTP_RESPONSE_CODE), \"Get a wrong response code\"); assertEquals(\"value\", exchange.getMessage().getHeader(\"key\"), \"Get a wrong header value\");", "Exchange exchange = template.send(\"direct://http\", new Processor() { public void process(Exchange exchange) throws Exception { exchange.setPattern(ExchangePattern.InOut) Message inMessage = exchange.getIn(); // using the http central client API inMessage.setHeader(CxfConstants.CAMEL_CXF_RS_USING_HTTP_API, Boolean.TRUE); // set the Http method inMessage.setHeader(Exchange.HTTP_METHOD, \"GET\"); // set the relative path inMessage.setHeader(Exchange.HTTP_PATH, \"/customerservice/customers/123\"); // Specify the response class, cxfrs will use InputStream as the response object type inMessage.setHeader(CxfConstants.CAMEL_CXF_RS_RESPONSE_CLASS, Customer.class); // set a customer header inMessage.setHeader(\"key\", \"value\"); // since we use the Get method, so we don't need to set the message body inMessage.setBody(null); } });", "Exchange exchange = template.send(\"cxfrs://http://localhost:9003/testQuery?httpClientAPI=true&q1=12&q2=13\"", "Map<String, String> queryMap = new LinkedHashMap<>(); queryMap.put(\"q1\", \"new\"); queryMap.put(\"q2\", \"world\"); inMessage.setHeader(CxfConstants.CAMEL_CXF_RS_QUERY_MAP, queryMap);" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-cxf-rs-component-starter
7.86. infinipath-psm
7.86. infinipath-psm 7.86.1. RHBA-2013:0536 - infinipath-psm bug fix update Updated infinipath-psm packages that fix one bug are now available for Red Hat Enterprise Linux 6. The PSM Messaging API, or PSM API, is Intel's (formerly QLogic's) low-level, user-level communication interface for the Truescale family of products. PSM users can use mechanisms necessary to implement higher-level communication interfaces in parallel environments. Bug Fix BZ# 907361 Due to a packaging error, not all object files required for the infinipath-psm library were built into the library, rendering it non-functional. This update fixes the infinipath-psm Makefile, which now properly includes all required object files, and the library works as expected. All users of infinipath-psm are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/infinipath-psm
2.6. Red Hat Cluster Suite and SELinux
2.6. Red Hat Cluster Suite and SELinux Red Hat Cluster Suite for Red Hat Enterprise Linux 4 requires that SELinux be disabled. Before configuring a Red Hat cluster, make sure to disable SELinux. For example, you can disable SELinux upon installation of Red Hat Enterprise Linux 4 or you can specify SELINUX=disabled in the /etc/selinux/config file.
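A minimal sketch of disabling SELinux on an already installed system follows; a reboot is required for the change to fully take effect. # Set SELinux to disabled in the system configuration file
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Reboot, then confirm the new state
reboot
getenforce   # should print "Disabled" after the reboot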
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-selinux-CA
Chapter 1. Overview of migrating Fuse 7 applications to Red Hat build of Apache Camel for Quarkus
Chapter 1. Overview of migrating Fuse 7 applications to Red Hat build of Apache Camel for Quarkus Fuse Red Hat Fuse is an agile integration solution based on open source communities like Apache Camel and Apache Karaf. Red Hat Fuse is a lightweight, flexible integration platform that enables rapid on-premise cloud integration. You can run Red Hat Fuse using three different runtimes: Karaf which supports OSGi applications Spring Boot JBoss EAP (Enterprise Application Platform) Red Hat build of Apache Camel for Quarkus Red Hat build of Apache Camel for Quarkus brings the integration capabilities of Apache Camel and its vast component library to the Quarkus runtime. Red Hat build of Camel Quarkus provides Quarkus extensions for many of the Camel components. Camel Quarkus takes advantage of the many performance improvements made in Camel 3, which results in a lower memory footprint, less reliance on reflection, and faster startup times. In a Red Hat build of Apache Camel for Quarkus application, you define Camel routes using Java DSL, so you can migrate the Camel routes that you use in your Fuse application to CEQ. Camel on EAP Karaf, which follows the OSGI dependency management concept, and EAP, which follows the JEE specification, are application servers impacted by the adoption of containerized applications. Containers have emerged as the predominant method for packaging applications. Consequently, the responsibility for managing applications, which encompasses deployment, scaling, clustering, and load balancing, has shifted from the application server to the container orchestration using Kubernetes. Although EAP continues to be supported on Red Hat Openshift, Camel 3 is no longer supported on an EAP server. So if you have a Fuse 7 application running on an EAP server, you should consider migrating your application to the Red Hat Build of Apache Camel for Spring Boot or the Red Hat build of Apache Camel for Quarkus and take the benefit of the migration process to consider a redesign, or partial redesign of your application, from a monolith to a microservices architecture. If you do not use Openshift, RHEL virtual machines remain a valid approach when you deploy your application for Spring Boot and Quarkus, and Quarkus also benefits from its native compilation capabilities. It is important to evaluate the tooling to support the management of a microservices architecture on such a platform. Red Hat provides this capability through Ansible, using the Red Hat Ansible for Middleware collections . 1.1. Standard migration paths 1.1.1. XML path Fuse applications written in Spring XML or Blueprint XML should be migrated towards an XML-based flavor, and can target either the Spring Boot or the Quarkus runtime with no difference in the migration steps. 1.1.2. Java path Fuse applications written in Java DSL should be migrated towards a Java-based flavor, and can target either the Spring Boot or the Quarkus runtime with no difference in the migration steps. 1.2. Architectural changes Openshift has replaced Fabric8 as the runtime platform for Fuse 6 users and is the recommended target for your Fuse application migration. You should consider the following architectural changes when you are migrating your application: If your Fuse 6 application relied on the Fabric8 service discovery, you should use Kubernetes Service Discovery when running Camel 3 on OpenShift. If your Fuse 6 application relies on OSGi bundle configuration, you should use Kubernetes ConfigMaps and Secrets when running Camel 3 on OpenShift. 
If your application uses a file-based route definition, consider using AWS S3 technology when running Camel 3 on OpenShift. If your application uses a standard filesystem, the resulting Spring Boot or Quarkus applications should be deployed on standard RHEL virtual machines rather than the OpenShift platform. Delegation of inbound HTTPS connections to the OpenShift Router, which handles SSL requirements. Delegation of Hystrix features to Service Mesh.
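For example, the OSGi bundle configuration mentioned above can typically be carried over as a Kubernetes ConfigMap that the Camel 3 application consumes as application properties. The following is only a sketch; the config map name, file name, mount path, and deployment name are hypothetical. # Create a ConfigMap from an existing properties file
oc create configmap my-app-config --from-file=application.properties
# Mount it into the application deployment so Spring Boot or Quarkus can read it at startup
oc set volume deployment/my-app --add --name=app-config \
  --type=configmap --configmap-name=my-app-config --mount-path=/deployments/config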
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/migrating_fuse_7_applications_to_red_hat_build_of_apache_camel_for_quarkus/overview-of-apache-camel-endpoints
Chapter 1. Logging
Chapter 1. Logging 1.1. Viewing Argo CD logs You can view the Argo CD logs with the logging subsystem for Red Hat OpenShift. The logging subsystem visualizes the logs on a Kibana dashboard. The OpenShift Logging Operator enables logging with Argo CD by default. 1.1.1. Storing and retrieving Argo CD logs You can use the Kibana dashboard to store and retrieve Argo CD logs. Prerequisites The Red Hat OpenShift GitOps Operator is installed in your cluster. The logging subsystem for Red Hat OpenShift is installed with default configuration in your cluster. Procedure In the OpenShift Container Platform web console, go to the menu Observability Logging to view the Kibana dashboard. Create an index pattern. To display all the indices, define the index pattern as * , and click Next step . Select @timestamp for Time Filter field name . Click Create index pattern . In the navigation panel of the Kibana dashboard, click the Discover tab. Create a filter to retrieve logs for Argo CD. The following steps create a filter that retrieves logs for all the pods in the openshift-gitops namespace: Click Add a filter + . Select the kubernetes.namespace_name field. Select the is operator. Select the openshift-gitops value. Click Save . Optional: Add additional filters to narrow the search. For example, to retrieve logs for a particular pod, you can create another filter with kubernetes.pod_name as the field. View the filtered Argo CD logs in the Kibana dashboard. 1.1.2. Additional resources Installing the logging subsystem for Red Hat OpenShift using the web console
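If you only need the raw logs rather than the Kibana visualization, the same Argo CD logs can be read directly with oc. This is a sketch only; the namespace shown is the operator default, and the pod name placeholder must be replaced with a name from your own cluster. # List the Argo CD pods in the default GitOps namespace
oc get pods -n openshift-gitops
# Stream logs from a specific pod, for example the Argo CD application controller
oc logs -n openshift-gitops -f <argocd-application-controller-pod-name>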
null
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.11/html/observability/logging
Chapter 17. Upgrading to OpenShift Data Foundation
Chapter 17. Upgrading to OpenShift Data Foundation 17.1. Overview of the OpenShift Data Foundation update process This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. You can upgrade OpenShift Data Foundation and its components, either between minor releases like 4.13 and 4.14, or between z-stream updates like 4.14.0 and 4.14.1 by enabling automatic updates (if not done so during operator installation) or performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update OpenShift Data Foundation as well as Local Storage Operator when in use. For updating between minor releases , see Updating Red Hat OpenShift Data Foundation 4.13 to 4.14 . For updating between z-stream releases , see Updating Red Hat OpenShift Data Foundation 4.14.x to 4.14.y . For updating external mode deployments , you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret . If you use local storage, then update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Important If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure that you update all your clusters in the environment at the same time and avoid updating a single cluster. This is to avoid any potential issues and maintain best compatibility. It is also important to maintain consistency across all OpenShift Data Foundation DR instances. Update considerations Review the following important considerations before you begin. The Red Hat OpenShift Container Platform version is the same as Red Hat OpenShift Data Foundation. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode . The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. The flexible scaling feature is available only in new deployments of OpenShift Data Foundation. For more information, see Scaling storage guide . The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 17.2.
Updating Red Hat OpenShift Data Foundation 4.13 to 4.14 This chapter helps you to upgrade between the minor releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The Only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. We recommend upgrading RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Since we do not have a strong dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. See solution to know more about RHCS releases. Important Upgrading to 4.14 directly from any version older than 4.13 is unsupported. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.14.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of both Overview - Block and File and Object tabs. Green tick indicates that the storage cluster , object service and data resiliency are all healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select openshift-storage project. Click the OpenShift Data Foundation operator name. Click the Subscription tab and click the link under Update Channel . Select the stable-4.14 update channel and Save it. If the Upgrade status shows requires approval , click on requires approval . On the Install Plan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . Navigate to Operators Installed Operators . Select the openshift-storage project. Wait for the OpenShift Data Foundation Operator Status to change to Up to date . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. 
Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of the Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy. If verification steps fail, contact Red Hat Support . Important After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret . Additional Resources If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide . 17.3. Updating Red Hat OpenShift Data Foundation 4.14.x to 4.14.y This chapter helps you to upgrade between the z-stream releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service, while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. Hence, we recommend upgrading RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Because there is no strong dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by the RHCS upgrade, or vice versa. See this solution to know more about RHCS releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic . If the update strategy is set to Manual , then use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.14.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of the Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select the openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows requires approval , click the requires approval link.
On the Install Plan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . After the operator is successfully upgraded, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and the status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of the Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy. If verification steps fail, contact Red Hat Support . 17.4. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy set to Automatic . Changing the update approval strategy to Manual requires manual approval for each upgrade. Procedure Navigate to Operators Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Go to the Subscription tab. Click the pencil icon to change the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it.
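If you prefer to confirm the upgrade from the command line rather than the web console, the following is a minimal sketch of equivalent checks. It assumes the default openshift-storage install namespace and that the subscription is named odf-operator; the actual subscription name in your cluster may differ, so list the subscriptions first if you are unsure.

# List subscriptions, then confirm the channel and approval strategy on the operator subscription
oc get subscriptions -n openshift-storage
oc get subscription odf-operator -n openshift-storage -o jsonpath='{.spec.channel}{" "}{.spec.installPlanApproval}{"\n"}'

# Confirm that the operator ClusterServiceVersion reports the new version and a Succeeded phase
oc get csv -n openshift-storage

# Confirm that all pods in the namespace are Running or Completed and that the storage cluster is Ready
oc get pods -n openshift-storage
oc get storagecluster -n openshift-storage

These commands only read cluster state, so they are safe to run at any point before or after the upgrade.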
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/upgrading-your-cluster_osp
Chapter 86. DNS Component
Chapter 86. DNS Component Available as of Camel version 2.7 This is an additional component for Camel to run DNS queries, using DNSJava. The component is a thin layer on top of DNSJava . The component offers the following operations: ip, to resolve a domain by its IP address; lookup, to look up information about the domain; and dig, to run DNS queries. Note: Requires SUN JVM. The DNSJava library requires running on the SUN JVM. If you use Apache ServiceMix or Apache Karaf, you need to adjust the etc/jre.properties file to add sun.net.spi.nameservice to the list of Java platform packages exported. The server needs restarting before this change takes effect. Maven users need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-dns</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 86.1. URI format The URI scheme for a DNS component is as follows: dns://operation[?options] This component only supports producers. 86.2. Options The DNS component has no options. The DNS endpoint is configured using URI syntax: with the following path and query parameters: 86.2.1. Path Parameters (1 parameter): Name Description Default Type dnsType Required The type of the lookup. DnsType 86.2.2. Query Parameters (1 parameter): Name Description Default Type synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 86.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.dns.enabled Enable dns component true Boolean camel.component.dns.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 86.4. Headers Header Type Operations Description dns.domain String ip The domain name. Mandatory. dns.name String lookup The name to lookup. Mandatory. dns.type lookup, dig The type of the lookup. Should match the values of org.xbill.dns.Type . Optional. dns.class lookup, dig The DNS class of the lookup. Should match the values of org.xbill.dns.DClass . Optional. dns.query String dig The query itself. Mandatory. dns.server String dig The server in particular for the query. If none is given, the default one specified by the OS will be used. Optional. 86.5. Examples 86.5.1. IP lookup <route id="IPCheck"> <from uri="direct:start"/> <to uri="dns:ip"/> </route> This looks up a domain's IP address. For example, www.example.com resolves to 192.0.32.10. The domain name to look up must be provided in the header with key "dns.domain" . 86.5.2. DNS lookup <route id="IPCheck"> <from uri="direct:start"/> <to uri="dns:lookup"/> </route> This returns a set of DNS records associated with a domain. The name to look up must be provided in the header with key "dns.name" . 86.5.3. DNS Dig Dig is a Unix command-line utility to run DNS queries. <route id="IPCheck"> <from uri="direct:start"/> <to uri="dns:dig"/> </route> The query must be provided in the header with key "dns.query" . 86.6. DNS Activation Policy DnsActivationPolicy can be used to dynamically start and stop routes based on DNS state. If you have instances of the same component running in different regions, you can configure a route in each region to activate only if DNS is pointing to its region. For example:
You may have an instance in NYC and an instance in SFO. You would configure a service CNAME service.example.com to point to nyc-service.example.com to bring the NYC instance up and the SFO instance down. When you change the CNAME service.example.com to point to sfo-service.example.com, the NYC instance stops its routes and the SFO instance brings its routes up. This allows you to switch regions without restarting the actual components. <bean id="dnsActivationPolicy" class="org.apache.camel.component.dns.policy.DnsActivationPolicy"> <property name="hostname" value="service.example.com" /> <property name="resolvesTo" value="nyc-service.example.com" /> <property name="ttl" value="60000" /> <property name="stopRoutesOnException" value="false" /> </bean> <route id="routeId" autoStartup="false" routePolicyRef="dnsActivationPolicy"> </route>
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-dns</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "dns://operation[?options]", "dns:dnsType", "<route id=\"IPCheck\"> <from uri=\"direct:start\"/> <to uri=\"dns:ip\"/> </route>", "<route id=\"IPCheck\"> <from uri=\"direct:start\"/> <to uri=\"dns:lookup\"/> </route>", "<route id=\"IPCheck\"> <from uri=\"direct:start\"/> <to uri=\"dns:dig\"/> </route>", "<bean id=\"dnsActivationPolicy\" class=\"org.apache.camel.component.dns.policy.DnsActivationPolicy\"> <property name=\"hostname\" value=\"service.example.com\" /> <property name=\"resolvesTo\" value=\"nyc-service.example.com\" /> <property name=\"ttl\" value=\"60000\" /> <property name=\"stopRoutesOnException\" value=\"false\" /> </bean> <route id=\"routeId\" autoStartup=\"false\" routePolicyRef=\"dnsActivationPolicy\"> </route>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/dns-component
Chapter 6. Uninstalling OpenShift Data Foundation
Chapter 6. Uninstalling OpenShift Data Foundation 6.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledge base article on Uninstalling OpenShift Data Foundation .
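The knowledge base article walks through removing the storage cluster and its resources. As a rough, optional sketch, once you have completed those steps you can confirm that nothing is left behind with read-only commands such as the following; the openshift-storage namespace is the default install location and is an assumption here.

# After uninstalling, these should report that the resources are not found
oc get storagecluster -n openshift-storage
oc get csv -n openshift-storage
oc get project openshift-storage

If any of these still return objects, revisit the clean-up steps in the article before reinstalling.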
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_microsoft_azure/uninstalling_openshift_data_foundation
Chapter 116. REST
Chapter 116. REST Both producer and consumer are supported The REST component allows to define REST endpoints (consumer) using the Rest DSL and plugin to other Camel components as the REST transport. The rest component can also be used as a client (producer) to call REST services. 116.1. Dependencies When using rest with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-rest-starter</artifactId> </dependency> 116.2. URI format 116.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 116.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 116.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 116.4. Component Options The REST component supports 8 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean consumerComponentName (consumer) The Camel Rest component to use for (consumer) the REST transport, such as jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String apiDoc (producer) The swagger api doc resource to use. The resource is loaded from classpath by default and must be in JSON format. 
String componentName (producer) Deprecated The Camel Rest component to use for (producer) the REST transport, such as http, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used. String host (producer) Host and port of HTTP service to use (override host in swagger schema). String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean producerComponentName (producer) The Camel Rest component to use for (producer) the REST transport, such as http, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used. String autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 116.5. Endpoint Options The REST endpoint is configured using URI syntax: with the following path and query parameters: 116.5.1. Path Parameters (3 parameters) Name Description Default Type method (common) Required HTTP method to use. Enum values: get post put delete patch head trace connect options String path (common) Required The base path. String uriTemplate (common) The uri template. String 116.5.2. Query Parameters (16 parameters) Name Description Default Type consumes (common) Media type such as: 'text/xml', or 'application/json' this REST service accepts. By default we accept all kinds of types. String inType (common) To declare the incoming POJO binding type as a FQN class name. String outType (common) To declare the outgoing POJO binding type as a FQN class name. String produces (common) Media type such as: 'text/xml', or 'application/json' this REST service returns. String routeId (common) Name of the route this REST services creates. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean consumerComponentName (consumer) The Camel Rest component to use for (consumer) the REST transport, such as jetty, servlet, undertow. 
If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String description (consumer) Human description to document this REST service. String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern apiDoc (producer) The openapi api doc resource to use. The resource is loaded from classpath by default and must be in JSON format. String bindingMode (producer) Configures the binding mode for the producer. If set to anything other than 'off' the producer will try to convert the body of the incoming message from inType to the json or xml, and the response from json or xml to outType. Enum values: auto off json xml json_xml RestBindingMode host (producer) Host and port of HTTP service to use (override host in openapi schema). String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean producerComponentName (producer) The Camel Rest component to use for (producer) the REST transport, such as http, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used. String queryParameters (producer) Query parameters for the HTTP service to call. The query parameters can contain multiple parameters separated by ampersand such such as foo=123&bar=456. String 116.6. Supported rest components The following components support rest consumer (Rest DSL): camel-servlet camel-platform-http The following components support rest producer: camel-http 116.7. Path and uriTemplate syntax The path and uriTemplate option is defined using a REST syntax where you define the REST context path using support for parameters. Note If no uriTemplate is configured then path option works the same way. It does not matter if you configure only path or if you configure both options. Though configuring both a path and uriTemplate is a more common practice with REST. The following is a Camel route using a path only from("rest:get:hello") .transform().constant("Bye World"); And the following route uses a parameter which is mapped to a Camel header with the key "me". from("rest:get:hello/{me}") .transform().simple("Bye USD{header.me}"); The following examples have configured a base path as "hello" and then have two REST services configured using uriTemplates. 
from("rest:get:hello:/{me}") .transform().simple("Hi USD{header.me}"); from("rest:get:hello:/french/{me}") .transform().simple("Bonjour USD{header.me}"); Note The Rest endpoint path does not accept escaped characters, for example, the plus sign. This is default behavior of Apache Camel 3. 116.8. Rest producer examples You can use the rest component to call REST services like any other Camel component. For example to call a REST service on using hello/{me} you can do from("direct:start") .to("rest:get:hello/{me}"); And then the dynamic value {me} is mapped to Camel message with the same name. So to call this REST service you can send an empty message body and a header as shown: template.sendBodyAndHeader("direct:start", null, "me", "Donald Duck"); The Rest producer needs to know the hostname and port of the REST service, which you can configure using the host option as shown: from("direct:start") .to("rest:get:hello/{me}?host=myserver:8080/foo"); Instead of using the host option, you can configure the host on the restConfiguration as shown: restConfiguration().host("myserver:8080/foo"); from("direct:start") .to("rest:get:hello/{me}"); You can use the producerComponent to select which Camel component to use as the HTTP client, for example to use http you can do: restConfiguration().host("myserver:8080/foo").producerComponent("http"); from("direct:start") .to("rest:get:hello/{me}"); 116.9. Rest producer binding The REST producer supports binding using JSon or XML like the rest-dsl does. For example to use jetty with json binding mode turned on you can configure this in the rest configuration: restConfiguration().component("jetty").host("localhost").port(8080).bindingMode(RestBindingMode.json); from("direct:start") .to("rest:post:user"); Then when calling the REST service using rest producer it will automatic bind any POJOs to json before calling the REST service: UserPojo user = new UserPojo(); user.setId(123); user.setName("Donald Duck"); template.sendBody("direct:start", user); In the example above we send a POJO instance UserPojo as the message body. And because we have turned on JSon binding in the rest configuration, then the POJO will be marshalled from POJO to JSon before calling the REST service. However if you want to also perform binding for the response message (eg what the REST service send back as response) you would need to configure the outType option to specify what is the classname of the POJO to unmarshal from JSon to POJO. For example if the REST service returns a JSon payload that binds to com.foo.MyResponsePojo you can configure this as shown: restConfiguration().component("jetty").host("localhost").port(8080).bindingMode(RestBindingMode.json); from("direct:start") .to("rest:post:user?outType=com.foo.MyResponsePojo"); Note You must configure outType option if you want POJO binding to happen for the response messages received from calling the REST service. 116.10. More examples See Rest DSL which offers more examples and how you can use the Rest DSL to define those in a nicer RESTful way. There is a camel-example-servlet-rest-tomcat example in the Apache Camel distribution, that demonstrates how to use the Rest DSL with SERVLET as transport that can be deployed on Apache Tomcat, or similar web containers. 116.11. Spring Boot Auto-Configuration The component supports 12 options, which are listed below. Name Description Default Type camel.component.rest-api.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.rest-api.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.rest-api.enabled Whether to enable auto configuration of the rest-api component. This is enabled by default. Boolean camel.component.rest.api-doc The swagger api doc resource to use. The resource is loaded from classpath by default and must be in JSON format. String camel.component.rest.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.rest.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.rest.consumer-component-name The Camel Rest component to use for (consumer) the REST transport, such as jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String camel.component.rest.enabled Whether to enable auto configuration of the rest component. This is enabled by default. Boolean camel.component.rest.host Host and port of HTTP service to use (override host in swagger schema). String camel.component.rest.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.rest.producer-component-name The Camel Rest component to use for (producer) the REST transport, such as http, undertow. 
If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used. String camel.component.rest.component-name Deprecated The Camel Rest component to use for (producer) the REST transport, such as http, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used. String
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-rest-starter</artifactId> </dependency>", "rest://method:path[:uriTemplate]?[options]", "rest:method:path:uriTemplate", "from(\"rest:get:hello\") .transform().constant(\"Bye World\");", "from(\"rest:get:hello/{me}\") .transform().simple(\"Bye USD{header.me}\");", "from(\"rest:get:hello:/{me}\") .transform().simple(\"Hi USD{header.me}\"); from(\"rest:get:hello:/french/{me}\") .transform().simple(\"Bonjour USD{header.me}\");", "from(\"direct:start\") .to(\"rest:get:hello/{me}\");", "template.sendBodyAndHeader(\"direct:start\", null, \"me\", \"Donald Duck\");", "from(\"direct:start\") .to(\"rest:get:hello/{me}?host=myserver:8080/foo\");", "restConfiguration().host(\"myserver:8080/foo\"); from(\"direct:start\") .to(\"rest:get:hello/{me}\");", "restConfiguration().host(\"myserver:8080/foo\").producerComponent(\"http\"); from(\"direct:start\") .to(\"rest:get:hello/{me}\");", "restConfiguration().component(\"jetty\").host(\"localhost\").port(8080).bindingMode(RestBindingMode.json); from(\"direct:start\") .to(\"rest:post:user\");", "UserPojo user = new UserPojo(); user.setId(123); user.setName(\"Donald Duck\"); template.sendBody(\"direct:start\", user);", "restConfiguration().component(\"jetty\").host(\"localhost\").port(8080).bindingMode(RestBindingMode.json); from(\"direct:start\") .to(\"rest:post:user?outType=com.foo.MyResponsePojo\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-rest-component-starter
Chapter 37. Networking
Chapter 37. Networking Timeout policy not enabled in Red Hat Enterprise Linux 7.2 kernel The nfct timeout command is not supported in Red Hat Enterprise Linux 7.2. As a workaround, use the global timeout values available at /proc/sys/net/netfilter/nf_conntrack_*_timeout_* to set the timeout value. Verification of signatures using the MD5 hash algorithm is disabled in Red Hat Enterprise Linux 7 It is not possible to connect to any Wi-Fi Protected Access (WPA) Enterprise Access Point (AP) that requires MD5-signed certificates. To work around this problem, copy the wpa_supplicant.service file from the /usr/lib/systemd/system/ directory to the /etc/systemd/system/ directory and add the following line to the Service section of the file: Then run the systemctl daemon-reload command as root to reload the service file. Important Note that MD5 certificates are highly insecure and Red Hat does not recommend using them.
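The following is a minimal sketch of the workaround as shell commands, run as root. The unit file path and the Environment line come from the text above; restarting the wpa_supplicant service afterwards is an assumption, added so that the running daemon picks up the new environment variable.

# Copy the unit file so the change is not overwritten by package updates
cp /usr/lib/systemd/system/wpa_supplicant.service /etc/systemd/system/wpa_supplicant.service

# Edit /etc/systemd/system/wpa_supplicant.service and add this line to the [Service] section:
#   Environment=OPENSSL_ENABLE_MD5_VERIFY=1

# Reload systemd so it reads the modified unit, then restart the service
systemctl daemon-reload
systemctl restart wpa_supplicant.service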
[ "Environment=OPENSSL_ENABLE_MD5_VERIFY=1" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/known-issues-networking
Chapter 1. Preparing to install on OpenStack
Chapter 1. Preparing to install on OpenStack You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP). 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Choosing a method to install OpenShift Container Platform on OpenStack You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.2.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Red Hat OpenStack Platform (RHOSP) infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster on OpenStack with customizations : You can install a customized cluster on RHOSP. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on OpenStack with Kuryr : You can install a customized OpenShift Container Platform cluster on RHOSP that uses Kuryr SDN. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Installing a cluster on OpenStack in a restricted network : You can install OpenShift Container Platform on RHOSP in a restricted or disconnected network by creating an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 1.2.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on RHOSP infrastructure that you provision, by using one of the following methods: Installing a cluster on OpenStack on your own infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure. By using this installation method, you can integrate your cluster with existing infrastructure and modifications. For installations on user-provisioned infrastructure, you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. You can use the provided Ansible playbooks to assist with the deployment process. Installing a cluster on OpenStack with Kuryr on your own infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure that uses Kuryr SDN. 1.3. 
Scanning RHOSP endpoints for legacy HTTPS certificates Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. Run the following script to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field. Important OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the provided script to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction. Prerequisites On the machine where you run the script, have the following software: Bash version 4.0 or greater grep OpenStack client jq OpenSSL version 1.1.1l or greater Populate the machine with RHOSP credentials for the target cloud. Procedure Save the following script to your machine: #!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog="USD(mktemp)" san="USD(mktemp)" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints \ | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface=="public") | [USDname, .interface, .url] | join(" ")' \ | sort \ > "USDcatalog" while read -r name interface url; do # Ignore HTTP if [[ USD{url#"http://"} != "USDurl" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#"https://"} # If the schema was not HTTPS, error if [[ "USDnoschema" == "USDurl" ]]; then echo "ERROR (unknown schema): USDname USDinterface USDurl" exit 2 fi # Remove the path and only keep host and port noschema="USD{noschema%%/*}" host="USD{noschema%%:*}" port="USD{noschema##*:}" # Add the port if was implicit if [[ "USDport" == "USDhost" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername "USDhost" -connect "USDhost:USDport" </dev/null 2>/dev/null \ | openssl x509 -noout -ext subjectAltName \ > "USDsan" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ "USD(grep -c "Subject Alternative Name" "USDsan" || true)" -gt 0 ]]; then echo "PASS: USDname USDinterface USDurl" else invalid=USD((invalid+1)) echo "INVALID: USDname USDinterface USDurl" fi done < "USDcatalog" # clean up temporary files rm "USDcatalog" "USDsan" if [[ USDinvalid -gt 0 ]]; then echo "USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field." exit 1 else echo "All HTTPS certificates for this cloud are valid." fi Run the script. Replace any certificates that the script reports as INVALID with certificates that contain SAN fields. Important You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates will be rejected with the following message: x509: certificate relies on legacy Common Name field, use SANs instead 1.3.1. Scanning RHOSP endpoints for legacy HTTPS certificates manually Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. 
If you do not have access to the prerequisite tools that are listed in "Scanning RHOSP endpoints for legacy HTTPS certificates", perform the following steps to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field. Important OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the following steps to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction. Procedure On a command line, run the following command to view the URL of RHOSP public endpoints: USD openstack catalog list Record the URL for each HTTPS endpoint that the command returns. For each public endpoint, note the host and the port. Tip Determine the host of an endpoint by removing the scheme, the port, and the path. For each endpoint, run the following commands to extract the SAN field of the certificate: Set a host variable: USD host=<host_name> Set a port variable: USD port=<port_number> If the URL of the endpoint does not have a port, use the value 443 . Retrieve the SAN field of the certificate: USD openssl s_client -showcerts -servername "USDhost" -connect "USDhost:USDport" </dev/null 2>/dev/null \ | openssl x509 -noout -ext subjectAltName Example output X509v3 Subject Alternative Name: DNS:your.host.example.net For each endpoint, look for output that resembles the example. If there is no output for an endpoint, the certificate of that endpoint is invalid and must be re-issued. Important You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates are rejected with the following message: x509: certificate relies on legacy Common Name field, use SANs instead
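If you have several endpoints to check, the manual openssl command can be wrapped in a small loop. The sketch below assumes you have already collected the host:port pairs from the openstack catalog list output; the endpoint values shown are placeholders, not real service addresses.

# Replace these placeholder host:port pairs with the public endpoints from your catalog
endpoints="keystone.example.com:5000 glance.example.com:9292 neutron.example.com:9696"

for endpoint in $endpoints; do
  host="${endpoint%%:*}"
  port="${endpoint##*:}"
  san="$(openssl s_client -showcerts -servername "$host" -connect "$host:$port" </dev/null 2>/dev/null \
        | openssl x509 -noout -ext subjectAltName)"
  if echo "$san" | grep -q "Subject Alternative Name"; then
    echo "PASS: $host:$port"
  else
    echo "INVALID (no SAN field, certificate must be re-issued): $host:$port"
  fi
done

The pass and fail logic mirrors the scripted check earlier in this chapter: any endpoint whose certificate lacks a Subject Alternative Name section must have its certificate re-issued before you install or update the cluster.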
[ "#!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog=\"USD(mktemp)\" san=\"USD(mktemp)\" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface==\"public\") | [USDname, .interface, .url] | join(\" \")' | sort > \"USDcatalog\" while read -r name interface url; do # Ignore HTTP if [[ USD{url#\"http://\"} != \"USDurl\" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#\"https://\"} # If the schema was not HTTPS, error if [[ \"USDnoschema\" == \"USDurl\" ]]; then echo \"ERROR (unknown schema): USDname USDinterface USDurl\" exit 2 fi # Remove the path and only keep host and port noschema=\"USD{noschema%%/*}\" host=\"USD{noschema%%:*}\" port=\"USD{noschema##*:}\" # Add the port if was implicit if [[ \"USDport\" == \"USDhost\" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName > \"USDsan\" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ \"USD(grep -c \"Subject Alternative Name\" \"USDsan\" || true)\" -gt 0 ]]; then echo \"PASS: USDname USDinterface USDurl\" else invalid=USD((invalid+1)) echo \"INVALID: USDname USDinterface USDurl\" fi done < \"USDcatalog\" clean up temporary files rm \"USDcatalog\" \"USDsan\" if [[ USDinvalid -gt 0 ]]; then echo \"USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field.\" exit 1 else echo \"All HTTPS certificates for this cloud are valid.\" fi", "x509: certificate relies on legacy Common Name field, use SANs instead", "openstack catalog list", "host=<host_name>", "port=<port_number>", "openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName", "X509v3 Subject Alternative Name: DNS:your.host.example.net", "x509: certificate relies on legacy Common Name field, use SANs instead" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_openstack/preparing-to-install-on-openstack
7.3. Using the Red Hat Support Tool in Interactive Shell Mode
7.3. Using the Red Hat Support Tool in Interactive Shell Mode To start the tool in interactive mode, enter the following command: The tool can be run as an unprivileged user, with a consequently reduced set of commands, or as root . The commands can be listed by entering the ? character. The program or menu selection can be exited by entering the q or e character. You will be prompted for your Red Hat Customer Portal user name and password when you first search the Knowledgebase or support cases. Alternatively, set the user name and password for your Red Hat Customer Portal account using interactive mode, and optionally save them to the configuration file.
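As a short illustration, a first interactive session might look like the following. The search command shown is one of the tool's standard commands, the keywords are only an example, and the exact set of commands available depends on whether you run the tool as root or as an unprivileged user.

~]$ redhat-support-tool
Welcome to the Red Hat Support Tool.
Command (? for help): ?
Command (? for help): search kernel panic
Command (? for help): q

Entering ? prints the list of available commands, search queries the Knowledgebase (prompting for your Customer Portal credentials the first time), and q exits the tool.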
[ "~]USD redhat-support-tool Welcome to the Red Hat Support Tool. Command (? for help):" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-using_the_red_hat_support_tool_in_interactive_shell_mode
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_optaplanner/8.38/html/developing_solvers_with_red_hat_build_of_optaplanner/snip-conscious-language_developing-solvers
Chapter 9. Red Hat Developer Hub data telemetry capturing
Chapter 9. Red Hat Developer Hub data telemetry capturing Red Hat Developer Hub (RHDH) sends telemetry data to Red Hat using the backstage-plugin-analytics-provider-segment plug-in, which is enabled by default. This includes telemetry data from the Ansible plug-ins. Red Hat collects and analyzes the following data to improve your experience with Red Hat Developer Hub: Events of page visits and clicks on links or buttons. System-related information, for example, locale, timezone, user agent including browser and OS details. Page-related information, for example, title, category, extension name, URL, path, referrer, and search parameters. Anonymized IP addresses, recorded as 0.0.0.0. Anonymized username hashes, which are unique identifiers used solely to identify the number of unique users of the RHDH application. Feedback and sentiment provided in the Ansible plug-ins feedback form. With Red Hat Developer Hub, you can disable or customize the telemetry data collection feature. For more information, refer to the Telemetry data collection section of the Administration guide for Red Hat Developer Hub .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_ansible_plug-ins_for_red_hat_developer_hub/rhdh-configure-telemetry_aap-plugin-rhdh-installing
2.11. Post-Installation Script
2.11. Post-Installation Script Figure 2.16. Post-Installation Script You can also add commands to execute on the system after the installation is completed. If the network is properly configured in the kickstart file, the network is enabled, and the script can include commands to access resources on the network. To include a post-installation script, type it in the text area. Warning Do not include the %post command. It is added for you. For example, to change the message of the day for the newly installed system, add the following command to the %post section: Note More examples can be found in Section 1.7.1, "Examples" . 2.11.1. Chroot Environment To run the post-installation script outside of the chroot environment, select the checkbox for this option at the top of the Post-Installation window. This is equivalent to using the --nochroot option in the %post section. To make changes to the newly installed file system within the post-installation section, but outside of the chroot environment, you must prepend the directory name with /mnt/sysimage/ . For example, if you select Run outside of the chroot environment , the example must be changed to the following:
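For reference, the following is a rough sketch of how the two variants might appear in the generated kickstart file itself; the Kickstart Configurator adds the %post line for you, so you only type the commands in the text area. The %end closing line is used by newer kickstart syntax and may be omitted on older releases.

# Post-installation script run inside the chroot of the installed system (default)
%post
echo "Hackers will be punished!" > /etc/motd
%end

# Post-installation script run outside the chroot; the installed system is mounted at /mnt/sysimage
%post --nochroot
echo "Hackers will be punished!" > /mnt/sysimage/etc/motd
%end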
[ "echo \"Hackers will be punished!\" > /etc/motd", "echo \"Hackers will be punished!\" > /mnt/sysimage/etc/motd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/RHKSTOOL-Post_Installation_Script
Chapter 6. Working with nodes
Chapter 6. Working with nodes 6.1. Viewing and listing the nodes in your OpenShift Container Platform cluster You can list all the nodes in your cluster to obtain information such as status, age, memory usage, and details about the nodes. When you perform node management operations, the CLI interacts with node objects that are representations of actual node hosts. The master uses the information from node objects to validate nodes with health checks. 6.1.1. About listing all the nodes in a cluster You can get detailed information on the nodes in the cluster. The following command lists all nodes: USD oc get nodes The following example is a cluster with healthy nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.27.3 node1.example.com Ready worker 7h v1.27.3 node2.example.com Ready worker 7h v1.27.3 The following example is a cluster with one unhealthy node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.27.3 node1.example.com NotReady,SchedulingDisabled worker 7h v1.27.3 node2.example.com Ready worker 7h v1.27.3 The conditions that trigger a NotReady status are shown later in this section. The -o wide option provides additional information on nodes. USD oc get nodes -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.27.3 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.27.3-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.27.3 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.27.3-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.27.3 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.27.3-30.rhaos4.10.gitf2f339d.el8-dev The following command lists information about a single node: USD oc get node <node> For example: USD oc get node node1.example.com Example output NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.27.3 The following command provides more detailed information about a specific node, including the reason for the current condition: USD oc describe node <node> For example: USD oc describe node node1.example.com Example output Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has 
sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.27.3-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.27.3 Kube-Proxy Version: v1.27.3 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-sdn ovs-t4dsn 100m (6%) 0 (0%) 300Mi (4%) 0 (0%) openshift-sdn sdn-g79hg 100m (6%) 0 (0%) 200Mi (2%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #... 1 The name of the node. 2 The role of the node, either master or worker . 3 The labels applied to the node. 4 The annotations applied to the node. 5 The taints applied to the node. 6 The node conditions and status. The conditions stanza lists the Ready , PIDPressure , MemoryPressure , DiskPressure and OutOfDisk status. These condition are described later in this section. 7 The IP address and hostname of the node. 8 The pod resources and allocatable resources. 9 Information about the node host. 10 The pods on the node. 11 The events reported by the node. Note The control plane label is not automatically added to newly created or updated master nodes. If you want to use the control plane label for your nodes, you can manually configure the label. For more information, see Understanding how to update labels on nodes in the Additional resources section. Among the information shown for nodes, the following node conditions appear in the output of the commands shown in this section: Table 6.1. Node Conditions Condition Description Ready If true , the node is healthy and ready to accept pods. If false , the node is not healthy and is not accepting pods. If unknown , the node controller has not received a heartbeat from the node for the node-monitor-grace-period (the default is 40 seconds). DiskPressure If true , the disk capacity is low. MemoryPressure If true , the node memory is low. PIDPressure If true , there are too many processes on the node. OutOfDisk If true , the node has insufficient free space on the node for adding new pods. NetworkUnavailable If true , the network for the node is not correctly configured. NotReady If true , one of the underlying components, such as the container runtime or network, is experiencing issues or is not yet configured. SchedulingDisabled Pods cannot be scheduled for placement on the node. Additional resources Understanding how to update labels on nodes 6.1.2. Listing pods on a node in your cluster You can list all the pods on a specific node. Procedure To list all or selected pods on selected nodes: USD oc get pod --selector=<nodeSelector> USD oc get pod --selector=kubernetes.io/os Or: USD oc get pod -l=<nodeSelector> USD oc get pod -l kubernetes.io/os=linux To list all pods on a specific node, including terminated pods: USD oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename> 6.1.3. Viewing memory and CPU usage statistics on your nodes You can display usage statistics about nodes, which provide the runtime environments for containers. 
These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. Metrics must be installed to view the usage statistics. Procedure To view the usage statistics: USD oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72% To view the usage statistics for nodes with labels: USD oc adm top node --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != . 6.2. Working with nodes As an administrator, you can perform several tasks to make your clusters more efficient. 6.2.1. Understanding how to evacuate pods on nodes Evacuating pods allows you to migrate all or selected pods from a given node or nodes. You can only evacuate pods backed by a replication controller. The replication controller creates new pods on other nodes and removes the existing pods from the specified node(s). Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod-selector. Pod selectors are based on labels, so all the pods with the specified label will be evacuated. Procedure Mark the nodes unschedulable before performing the pod evacuation. Mark the node as unschedulable: USD oc adm cordon <node1> Example output node/<node1> cordoned Check that the node status is Ready,SchedulingDisabled : USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.27.3 Evacuate the pods using one of the following methods: Evacuate all or selected pods on one or more nodes: USD oc adm drain <node1> <node2> [--pod-selector=<pod_selector>] Force the deletion of bare pods using the --force option. When set to true , deletion continues even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set: USD oc adm drain <node1> <node2> --force=true Set a period of time in seconds for each pod to terminate gracefully, use --grace-period . If negative, the default value specified in the pod will be used: USD oc adm drain <node1> <node2> --grace-period=-1 Ignore pods managed by daemon sets using the --ignore-daemonsets flag set to true : USD oc adm drain <node1> <node2> --ignore-daemonsets=true Set the length of time to wait before giving up using the --timeout flag. A value of 0 sets an infinite length of time: USD oc adm drain <node1> <node2> --timeout=5s Delete pods even if there are pods using emptyDir volumes by setting the --delete-emptydir-data flag to true . Local data is deleted when the node is drained: USD oc adm drain <node1> <node2> --delete-emptydir-data=true List objects that will be migrated without actually performing the evacuation, using the --dry-run option set to true : USD oc adm drain <node1> <node2> --dry-run=true Instead of specifying specific node names (for example, <node1> <node2> ), you can use the --selector=<node_selector> option to evacuate pods on selected nodes. Mark the node as schedulable when done. USD oc adm uncordon <node1> 6.2.2. 
Understanding how to update labels on nodes You can update any label on a node. Node labels are not persisted after a node is deleted even if the node is backed up by a Machine. Note Any change to a MachineSet object is not applied to existing machines owned by the compute machine set. For example, labels edited or added to an existing MachineSet object are not propagated to existing machines and nodes associated with the compute machine set. The following command adds or updates labels on a node: USD oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n> For example: USD oc label nodes webconsole-7f7f6 unhealthy=true Tip You can alternatively apply the following YAML to apply the label: kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #... The following command updates all pods in the namespace: USD oc label pods --all <key_1>=<value_1> For example: USD oc label pods --all status=unhealthy 6.2.3. Understanding how to mark nodes as unschedulable or schedulable By default, healthy nodes with a Ready status are marked as schedulable, which means that you can place new pods on the node. Manually marking a node as unschedulable blocks any new pods from being scheduled on the node. Existing pods on the node are not affected. The following command marks a node or nodes as unschedulable: USD oc adm cordon <node> For example: USD oc adm cordon node1.example.com Example output node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled The following command marks a currently unschedulable node or nodes as schedulable: USD oc adm uncordon <node1> Alternatively, instead of specifying specific node names (for example, <node> ), you can use the --selector=<node_selector> option to mark selected nodes as schedulable or unschedulable. 6.2.4. Handling errors in single-node OpenShift clusters when the node reboots without draining application pods In single-node OpenShift clusters and in OpenShift Container Platform clusters in general, a situation can arise where a node reboot occurs without first draining the node. This can occur where an application pod requesting devices fails with the UnexpectedAdmissionError error. Deployment , ReplicaSet , or DaemonSet errors are reported because the application pods that require those devices start before the pod serving those devices. You cannot control the order of pod restarts. While this behavior is to be expected, it can cause a pod to remain on the cluster even though it has failed to deploy successfully. The pod continues to report UnexpectedAdmissionError . This issue is mitigated by the fact that application pods are typically included in a Deployment , ReplicaSet , or DaemonSet . If a pod is in this error state, it is of little concern because another instance should be running. Belonging to a Deployment , ReplicaSet , or DaemonSet guarantees the successful creation and execution of subsequent pods and ensures the successful deployment of the application. There is ongoing work upstream to ensure that such pods are gracefully terminated. Until that work is resolved, run the following command for a single-node OpenShift cluster to remove the failed pods: USD oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE> Note The option to drain the node is unavailable for single-node OpenShift clusters.
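If you want to review the affected pods before removing them, you can list the pods in the Failed phase first; a hedged example that uses the same placeholder namespace:
oc get pods --field-selector status.phase=Failed -n <POD_NAMESPACE>
Additional resources Understanding how to evacuate pods on nodes 6.2.5. Deleting nodes 6.2.5.1.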
Deleting nodes from a cluster To delete a node from the OpenShift Container Platform cluster, scale down the appropriate MachineSet object. Important When a cluster is integrated with a cloud provider, you must delete the corresponding machine to delete a node. Do not try to use the oc delete node command for this task. When you delete a node by using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods that are not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Note If you are running cluster on bare metal, you cannot delete a node by editing MachineSet objects. Compute machine sets are only available when a cluster is integrated with a cloud provider. Instead you must unschedule and drain the node before manually deleting it. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets -n openshift-machine-api The compute machine sets are listed in the form of <cluster-id>-worker-<aws-region-az> . Scale down the compute machine set by using one of the following methods: Specify the number of replicas to scale down to by running the following command: USD oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api Edit the compute machine set custom resource by running the following command: USD oc edit machineset <machine-set-name> -n openshift-machine-api Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # ... name: <machine-set-name> namespace: openshift-machine-api # ... spec: replicas: 2 1 # ... 1 Specify the number of replicas to scale down to. Additional resources Manually scaling a compute machine set 6.2.5.2. Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Procedure Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps: Mark the node as unschedulable: USD oc adm cordon <node_name> Drain all pods on the node: USD oc adm drain <node_name> --force=true This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed. Delete the node from the cluster: USD oc delete node <node_name> Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node . If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster. 6.3. Managing nodes OpenShift Container Platform uses a KubeletConfig custom resource (CR) to manage the configuration of nodes. By creating an instance of a KubeletConfig object, a managed machine config is created to override setting on the node. Note Logging in to remote machines for the purpose of changing their configuration is not supported. 6.3.1. 
Modifying nodes To make configuration changes to a cluster, or machine pool, you must create a custom resource definition (CRD), or kubeletConfig object. OpenShift Container Platform uses the Machine Config Controller to watch for changes introduced through the CRD to apply the changes to the cluster. Note Because the fields in a kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the validation of those fields is handled directly by the kubelet itself. Please refer to the relevant Kubernetes documentation for the valid values for these fields. Invalid values in the kubeletConfig object can render cluster nodes unusable. Procedure Obtain the label associated with the static CRD, Machine Config Pool, for the type of node you want to configure. Perform one of the following steps: Check current labels of the desired machine config pool. For example: USD oc get machineconfigpool --show-labels Example output NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False Add a custom label to the desired machine config pool. For example: USD oc label machineconfigpool worker custom-kubelet=enabled Create a kubeletconfig custom resource (CR) for your configuration change. For example: Sample configuration for a custom-config CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #... 1 Assign a name to CR. 2 Specify the label to apply the configuration change, this is the label you added to the machine config pool. 3 Specify the new value(s) you want to change. Create the CR object. USD oc create -f <file-name> For example: USD oc create -f master-kube-config.yaml Most Kubelet Configuration options can be set by the user. The following options are not allowed to be overwritten: CgroupDriver ClusterDNS ClusterDomain StaticPodPath Note If a single node contains more than 50 images, pod scheduling might be imbalanced across nodes. This is because the list of images on a node is shortened to 50 by default. You can disable the image limit by editing the KubeletConfig object and setting the value of nodeStatusMaxImages to -1 . 6.3.2. Configuring control plane nodes as schedulable You can configure control plane nodes to be schedulable, meaning that new pods are allowed for placement on the master nodes. By default, control plane nodes are not schedulable. You can set the masters to be schedulable, but must retain the worker nodes. Note You can deploy OpenShift Container Platform with no worker nodes on a bare metal cluster. In this case, the control plane nodes are marked schedulable by default. You can allow or disallow control plane nodes to be schedulable by configuring the mastersSchedulable field. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Procedure Edit the schedulers.config.openshift.io resource. USD oc edit schedulers.config.openshift.io cluster Configure the mastersSchedulable field. 
apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: "2019-09-10T03:04:05Z" generation: 1 name: cluster resourceVersion: "433" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #... 1 Set to true to allow control plane nodes to be schedulable, or false to disallow control plane nodes to be schedulable. Save the file to apply the changes. 6.3.3. Setting SELinux booleans OpenShift Container Platform allows you to enable and disable an SELinux boolean on a Red Hat Enterprise Linux CoreOS (RHCOS) node. The following procedure explains how to modify SELinux booleans on nodes using the Machine Config Operator (MCO). This procedure uses container_manage_cgroup as the example boolean. You can modify this value to whichever boolean you need. Prerequisites You have installed the OpenShift CLI (oc). Procedure Create a new YAML file with a MachineConfig object, displayed in the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #... Create the new MachineConfig object by running the following command: USD oc create -f 99-worker-setsebool.yaml Note Applying any changes to the MachineConfig object causes all affected nodes to gracefully reboot after the change is applied. 6.3.4. Adding kernel arguments to nodes In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and clear understanding of the implications of the arguments you set. Warning Improper use of kernel arguments can result in your systems becoming unbootable. Examples of kernel arguments you could set include: nosmt : Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance. systemd.unified_cgroup_hierarchy : Enables Linux control group version 2 (cgroup v2). cgroup v2 is the version of the kernel control group and offers multiple improvements. enforcing=0 : Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. See Kernel.org kernel parameters for a list and descriptions of kernel arguments. In the following procedure, you create a MachineConfig object that identifies: A set of machines to which you want to add the kernel argument. In this case, machines with a worker role. Kernel arguments that are appended to the end of the existing kernel arguments. 
A label that indicates where in the list of machine configs the change is applied. Prerequisites Have administrative privilege to a working OpenShift Container Platform cluster. Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3 1 Applies the new kernel argument only to worker nodes. 2 Named to identify where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode). 3 Identifies the exact kernel argument as enforcing=0 . Create the new machine config: USD oc create -f 05-worker-kernelarg-selinuxpermissive.yaml Check the machine configs to see that the new one was added: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.27.3 ip-10-0-136-243.ec2.internal Ready master 34m v1.27.3 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.27.3 ip-10-0-142-249.ec2.internal Ready master 34m v1.27.3 ip-10-0-153-11.ec2.internal Ready worker 28m v1.27.3 ip-10-0-153-150.ec2.internal Ready master 34m v1.27.3 You can see that scheduling on each worker node is disabled as the change is being applied. 
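Optionally, you can also watch the worker machine config pool until the update finishes rolling out; a hedged example:
oc get machineconfigpool worker --watch
When the UPDATED column for the worker pool reports True, the new machine config, including the kernel argument, has been applied to all worker nodes.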
Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16... coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit You should see the enforcing=0 argument added to the other kernel arguments. 6.3.5. Enabling swap memory use on nodes Important Enabling swap memory use on nodes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can enable swap memory use for OpenShift Container Platform workloads on a per-node basis. Warning Enabling swap memory can negatively impact workload performance and out-of-resource handling. Do not enable swap memory on control plane nodes. To enable swap memory, create a kubeletconfig custom resource (CR) to set the swapbehavior parameter. You can set limited or unlimited swap memory: Limited: Use the LimitedSwap value to limit how much swap memory workloads can use. Any workloads on the node that are not managed by OpenShift Container Platform can still use swap memory. The LimitedSwap behavior depends on whether the node is running with Linux control groups version 1 (cgroups v1) or version 2 (cgroup v2) : cgroup v1: OpenShift Container Platform workloads can use any combination of memory and swap, up to the pod's memory limit, if set. cgroup v2: OpenShift Container Platform workloads cannot use swap memory. Unlimited: Use the UnlimitedSwap value to allow workloads to use as much swap memory as they request, up to the system limit. Because the kubelet will not start in the presence of swap memory without this configuration, you must enable swap memory in OpenShift Container Platform before enabling swap memory on the nodes. If there is no swap memory present on a node, enabling swap memory in OpenShift Container Platform has no effect. Prerequisites You have a running OpenShift Container Platform cluster that uses version 4.10 or later. You are logged in to the cluster as a user with administrative privileges. You have enabled the TechPreviewNoUpgrade feature set on the cluster (see Nodes Working with clusters Enabling features using feature gates ). Note Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters. If cgroup v2 is enabled on a node, you must enable swap accounting on the node, by setting the swapaccount=1 kernel argument. Procedure Apply a custom label to the machine config pool where you want to allow swap memory. USD oc label machineconfigpool worker kubelet-swap=enabled Create a custom resource (CR) to enable and configure swap settings. 
apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #... 1 Set to false to enable swap memory use on the associated nodes. Set to true to disable swap memory use. 2 Specify the swap memory behavior. If unspecified, the default is LimitedSwap . Enable swap memory on the machines. 6.3.6. Migrating control plane nodes from one RHOSP host to another manually If control plane machine sets are not enabled on your cluster, you can run a script that moves a control plane node from one Red Hat OpenStack Platform (RHOSP) node to another. Note Control plane machine sets are not enabled on clusters that run on user-provisioned infrastructure. For information about control plane machine sets, see "Managing control plane machines with control plane machine sets". Prerequisites The environment variable OS_CLOUD refers to a clouds entry that has administrative credentials in a clouds.yaml file. The environment variable KUBECONFIG refers to a configuration that contains administrative OpenShift Container Platform credentials. Procedure From a command line, run the following script: #!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo "Usage: 'USD0 node_name'" exit 64 fi # Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo "The script needs OpenStack admin credentials. Exiting"; exit 77; } # Check for admin OpenShift credentials oc adm top node >/dev/null || { >&2 echo "The script needs OpenShift admin credentials. Exiting"; exit 77; } set -x declare -r node_name="USD1" declare server_id server_id="USD(openstack server list --all-projects -f value -c ID -c Name | grep "USDnode_name" | cut -d' ' -f1)" readonly server_id # Drain the node oc adm cordon "USDnode_name" oc adm drain "USDnode_name" --delete-emptydir-data --ignore-daemonsets --force # Power off the server oc debug "node/USD{node_name}" -- chroot /host shutdown -h 1 # Verify the server is shut off until openstack server show "USDserver_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Migrate the node openstack server migrate --wait "USDserver_id" # Resize the VM openstack server resize confirm "USDserver_id" # Wait for the resize confirm to finish until openstack server show "USDserver_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Restart the VM openstack server start "USDserver_id" # Wait for the node to show up as Ready: until oc get node "USDnode_name" | grep -q "^USD{node_name}[[:space:]]\+Ready"; do sleep 5; done # Uncordon the node oc adm uncordon "USDnode_name" # Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type "Degraded" }}{{ if ne .status "False" }}DEGRADED{{ end }}{{ else if eq .type "Progressing"}}{{ if ne .status "False" }}PROGRESSING{{ end }}{{ else if eq .type "Available"}}{{ if ne .status "True" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\(DEGRADED\|PROGRESSING\|NOTAVAILABLE\)'; do sleep 5; done If the script completes, the control plane machine is migrated to a new RHOSP node. Additional resources For information about control plane machine sets, see Managing control plane machines with control plane machine sets . 6.4. 
Managing the maximum number of pods per node In OpenShift Container Platform, you can configure the number of pods that can run on a node based on the number of processor cores on the node, a hard limit, or both. If you use both options, the lower of the two values limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node. Exhausting the pool of IP addresses. Resource overcommitting, leading to poor user application performance. Important In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running. Note Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when a large number of I/O intensive pods are running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload. The podsPerCore parameter sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40 . kubeletConfig: podsPerCore: 10 Setting podsPerCore to 0 disables this limit. The default is 0 . The value of the podsPerCore parameter cannot exceed the value of the maxPods parameter. The maxPods parameter sets the number of pods the node can run to a fixed value, regardless of the properties of the node. kubeletConfig: maxPods: 250 6.4.1. Configuring the maximum number of pods per node Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . If you use both options, the lower of the two limits the number of pods on a node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a max-pods CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #... 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Specify the number of pods the node can run based on the number of processor cores on the node. 4 Specify the number of pods the node can run to a fixed value, regardless of the properties of the node. Note Setting podsPerCore to 0 disables this limit.
In the above example, podsPerCore is set to 10 and maxPods is set to 250 . This means that unless the node has 25 cores or more, podsPerCore will be the limiting factor. Run the following command to create the CR: USD oc create -f <file_name>.yaml Verification List the MachineConfigPool CRDs to see if the change is applied. The UPDATING column reports True if the change is picked up by the Machine Config Controller: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False Once the change is complete, the UPDATED column reports True . USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False 6.5. Using the Node Tuning Operator Learn about the Node Tuning Operator and how you can use it to manage node-level tuning by orchestrating the tuned daemon. Purpose The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. Note Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. 6.5.1. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification.
Procedure Run the following command to access an example Node Tuning Operator specification: oc get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for the OpenShift Container Platform platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities. Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality will be deprecated in future versions of the Node Tuning Operator. 6.5.2. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of TuneD profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists TuneD profiles and their names. profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD # ... - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on a selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 
7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . 9 Turn reapply_sysctl functionality on or off for the TuneD daemon. Options are true for on and false for off. <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. Example: Node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. 
This profile is applied if the containerized TuneD pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. Example: Machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. Cloud provider-specific TuneD profiles With this functionality, all Cloud provider-specific nodes can conveniently be assigned a TuneD profile specifically tailored to a given Cloud provider on a OpenShift Container Platform cluster. This can be accomplished without adding additional node labels or grouping nodes into machine config pools. This functionality takes advantage of spec.providerID node object values in the form of <cloud-provider>://<cloud-provider-specific-id> and writes the file /var/lib/tuned/provider with the value <cloud-provider> in NTO operand containers. The content of this file is then used by TuneD to load provider-<cloud-provider> profile if such profile exists. The openshift profile that both openshift-control-plane and openshift-node profiles inherit settings from is now updated to use this functionality through the use of conditional profile loading. Neither NTO nor TuneD currently include any Cloud provider-specific profiles. However, it is possible to create a custom profile provider-<cloud-provider> that will be applied to all Cloud provider-specific cluster nodes. Example GCE Cloud provider profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce Note Due to profile inheritance, any setting specified in the provider-<cloud-provider> profile will be overwritten by the openshift profile and its child profiles. 6.5.3. Default profiles set on a cluster The following are the default profiles set on a cluster. apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40 Starting with OpenShift Container Platform 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. 
You can use the oc exec command to view the contents of these profiles: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \; 6.5.4. Supported TuneD daemon plugins Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm bootloader There is some dynamic tuning functionality provided by some of these plugins that is not supported. The following TuneD plugins are currently not supported: script systemd Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Additional resources Available TuneD Plugins Getting Started with TuneD 6.6. Remediating, fencing, and maintaining nodes When node-level failures occur, such as the kernel hangs or network interface controllers (NICs) fail, the work required from the cluster does not decrease, and workloads from affected nodes need to be restarted somewhere. Failures affecting these workloads risk data loss, corruption, or both. It is important to isolate the node, known as fencing , before initiating recovery of the workload, known as remediation , and recovery of the node. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. 6.7. Understanding node rebooting To reboot a node without causing an outage for applications running on the platform, it is important to first evacuate the pods. For pods that are made highly available by the routing tier, nothing else needs to be done. For other pods needing storage, typically databases, it is critical to ensure that they can remain in operation with one pod temporarily going offline. While implementing resiliency for stateful pods is different for each application, in all cases it is important to configure the scheduler to use node anti-affinity to ensure that the pods are properly spread across available nodes. Another challenge is how to handle nodes that are running critical infrastructure such as the router or the registry. The same node evacuation process applies, though it is important to understand certain edge cases. 6.7.1. About rebooting nodes running critical infrastructure When rebooting nodes that host critical OpenShift Container Platform infrastructure components, such as router pods, registry pods, and monitoring pods, ensure that there are at least three nodes available to run these components. The following scenario demonstrates how service interruptions can occur with applications running on OpenShift Container Platform when only two nodes are available: Node A is marked unschedulable and all pods are evacuated. The registry pod running on that node is now redeployed on node B. Node B is now running both registry pods. Node B is now marked unschedulable and is evacuated. The service exposing the two pod endpoints on node B loses all endpoints, for a brief period of time, until they are redeployed to node A. When using three nodes for infrastructure components, this process does not result in a service disruption. However, due to pod scheduling, the last node that is evacuated and brought back into rotation does not have a registry pod. One of the other nodes has two registry pods. 
To schedule the third registry pod on the last node, use pod anti-affinity to prevent the scheduler from locating two registry pods on the same node. Additional information For more information on pod anti-affinity, see Placing pods relative to other pods using affinity and anti-affinity rules . 6.7.2. Rebooting a node using pod anti-affinity Pod anti-affinity is slightly different than node anti-affinity. Node anti-affinity can be violated if there are no other suitable locations to deploy a pod. Pod anti-affinity can be set to either required or preferred. With this in place, if only two infrastructure nodes are available and one is rebooted, the container image registry pod is prevented from running on the other node. oc get pods reports the pod as unready until a suitable node is available. Once a node is available and all pods are back in ready state, the node can be restarted. Procedure To reboot a node using pod anti-affinity: Edit the node specification to configure pod anti-affinity: apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #... 1 Stanza to configure pod anti-affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with the highest weight is preferred. 4 Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label. 5 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . This example assumes the container image registry pod has a label of registry=default . Pod anti-affinity can use any Kubernetes match expression. Enable the MatchInterPodAffinity scheduler predicate in the scheduling policy file. Perform a graceful restart of the node. 6.7.3. Understanding how to reboot nodes running routers In most cases, a pod running an OpenShift Container Platform router exposes a host port. The PodFitsPorts scheduler predicate ensures that no router pods using the same port can run on the same node, and pod anti-affinity is achieved. If the routers are relying on IP failover for high availability, there is nothing else that is needed. For router pods relying on an external service such as AWS Elastic Load Balancing for high availability, it is that service's responsibility to react to router pod restarts. In rare cases, a router pod may not have a host port configured. In those cases, it is important to follow the recommended restart process for infrastructure nodes. 6.7.4. Rebooting a node gracefully Before rebooting a node, it is recommended to backup etcd data to avoid any data loss on the node. Note For single-node OpenShift clusters that require users to perform the oc login command rather than having the certificates in kubeconfig file to manage the cluster, the oc adm commands might not be available after cordoning and draining the node. This is because the openshift-oauth-apiserver pod is not running due to the cordon. You can use SSH to access the nodes as indicated in the following procedure. In a single-node OpenShift cluster, pods cannot be rescheduled when cordoning and draining. 
However, doing so gives the pods, especially your workload pods, time to properly stop and release associated resources. Procedure To perform a graceful restart of a node: Mark the node as unschedulable: USD oc adm cordon <node1> Drain the node to remove all the running pods: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force You might receive errors that pods associated with custom pod disruption budgets (PDB) cannot be evicted. Example error error when evicting pods/"rails-postgresql-example-1-72v2w" -n "rails" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. In this case, run the drain command again, adding the disable-eviction flag, which bypasses the PDB checks: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction Access the node in debug mode: USD oc debug node/<node1> Change your root directory to /host : USD chroot /host Restart the node: USD systemctl reboot In a moment, the node enters the NotReady state. Note With some single-node OpenShift clusters, the oc commands might not be available after you cordon and drain the node because the openshift-oauth-apiserver pod is not running. You can use SSH to connect to the node and perform the reboot. USD ssh core@<master-node>.<cluster_name>.<base_domain> USD sudo systemctl reboot After the reboot is complete, mark the node as schedulable by running the following command: USD oc adm uncordon <node1> Note With some single-node OpenShift clusters, the oc commands might not be available after you cordon and drain the node because the openshift-oauth-apiserver pod is not running. You can use SSH to connect to the node and uncordon it. USD ssh core@<target_node> USD sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig Verify that the node is ready: USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8 Additional information For information on etcd data backup, see Backing up etcd data . 6.8. Freeing node resources using garbage collection As an administrator, you can use OpenShift Container Platform to ensure that your nodes are running efficiently by freeing up resources through garbage collection. The OpenShift Container Platform node performs two types of garbage collection: Container garbage collection: Removes terminated containers. Image garbage collection: Removes images not referenced by any running pods. 6.8.1. Understanding how terminated containers are removed through garbage collection Container garbage collection removes terminated containers by using eviction thresholds. When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs . eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period. eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action. The following table lists the eviction thresholds: Table 6.2. 
Variables for configuring container garbage collection Node condition Eviction signal Description MemoryPressure memory.available The available memory on the node. DiskPressure nodefs.available nodefs.inodesFree imagefs.available imagefs.inodesFree The available disk space or inodes on the node root file system, nodefs , or image file system, imagefs . Note For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node would constantly oscillate between true and false . As a consequence, the scheduler could make poor scheduling decisions. To protect against this oscillation, use the eviction-pressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform will not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false. Note Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. You cannot set an eviction pressure transition period to zero seconds. 6.8.2. Understanding how images are removed through garbage collection Image garbage collection removes images that are not referenced by any running pods. OpenShift Container Platform determines which images to remove from a node based on the disk usage that is reported by cAdvisor . The policy for image garbage collection is based on two conditions: The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85 . The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. Default is 80 . For image garbage collection, you can modify any of the following variables using a custom resource. Table 6.3. Variables for configuring image garbage collection Setting Description imageMinimumGCAge The minimum age for an unused image before the image is removed by garbage collection. The default is 2m . imageGCHighThresholdPercent The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85 . This value must be greater than the imageGCLowThresholdPercent value. imageGCLowThresholdPercent The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80 . This value must be less than the imageGCHighThresholdPercent value. Two lists of images are retrieved in each garbage collector run: A list of images currently running in at least one pod. A list of images available on a host. As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from the previous runs. All images are then sorted by the time stamp. Once the collection starts, the oldest images get deleted first until the stopping criterion is met.
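If you want to see which images are currently stored on a node, and are therefore candidates for image garbage collection, you can inspect the node's container storage from a debug pod; a hedged sketch (the node name is a placeholder):
oc debug node/<node_name> -- chroot /host crictl images
6.8.3. Configuring garbage collection for containers and images As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool.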
Note OpenShift Container Platform supports only one kubeletConfig object for each machine config pool. You can configure any combination of the following: Soft eviction for containers Hard eviction for containers Eviction for images Container garbage collection removes terminated containers. Image garbage collection removes images that are not referenced by any running pods. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Important If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction. Sample configuration for a container garbage collection CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: evictionSoft: 3 memory.available: "500Mi" 4 nodefs.available: "10%" nodefs.inodesFree: "5%" imagefs.available: "15%" imagefs.inodesFree: "10%" evictionSoftGracePeriod: 5 memory.available: "1m30s" nodefs.available: "1m30s" nodefs.inodesFree: "1m30s" imagefs.available: "1m30s" imagefs.inodesFree: "1m30s" evictionHard: 6 memory.available: "200Mi" nodefs.available: "5%" nodefs.inodesFree: "4%" imagefs.available: "10%" imagefs.inodesFree: "5%" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #... 1 Name for the object. 2 Specify the label from the machine config pool. 3 For container garbage collection: Type of eviction: evictionSoft or evictionHard . 4 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. 5 For container garbage collection: Grace periods for the soft eviction. This parameter does not apply to eviction-hard . 6 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. 7 For container garbage collection: The duration to wait before transitioning out of an eviction pressure condition. Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. 8 For image garbage collection: The minimum age for an unused image before the image is removed by garbage collection. 9 For image garbage collection: Image garbage collection is triggered at the specified percent of disk usage (expressed as an integer). This value must be greater than the imageGCLowThresholdPercent value. 10 For image garbage collection: Image garbage collection attempts to free resources to the specified percent of disk usage (expressed as an integer). 
This value must be less than the imageGCHighThresholdPercent value. Run the following command to create the CR: USD oc create -f <file_name>.yaml For example: USD oc create -f gc-container.yaml Example output kubeletconfig.machineconfiguration.openshift.io/gc-container created Verification Verify that garbage collection is active by entering the following command. The Machine Config Pool you specified in the custom resource appears with UPDATING as true until the change is fully implemented: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True 6.9. Allocating resources for nodes in an OpenShift Container Platform cluster To provide more reliable scheduling and minimize node resource overcommitment, reserve a portion of the CPU and memory resources for use by the underlying node components, such as kubelet and kube-proxy , and the remaining system components, such as sshd and NetworkManager . By specifying the resources to reserve, you provide the scheduler with more information about the remaining CPU and memory resources that a node has available for use by pods. You can allow OpenShift Container Platform to automatically determine the optimal system-reserved CPU and memory resources for your nodes or you can manually determine and set the best resources for your nodes. Important To manually set resource values, you must use a kubelet config CR. You cannot use a machine config CR. 6.9.1. Understanding how to allocate resources for nodes CPU and memory resources reserved for node components in OpenShift Container Platform are based on two node settings: Setting Description kube-reserved This setting is not used with OpenShift Container Platform. Add the CPU and memory resources that you planned to reserve to the system-reserved setting. system-reserved This setting identifies the resources to reserve for the node components and system components, such as CRI-O and Kubelet. The default settings depend on the OpenShift Container Platform and Machine Config Operator versions. Confirm the default systemReserved parameter on the machine-config-operator repository. If a flag is not set, the defaults are used. If none of the flags are set, the allocated resource is set to the node's capacity as it was before the introduction of allocatable resources. Note Any CPUs specifically reserved using the reservedSystemCPUs parameter are not available for allocation using kube-reserved or system-reserved . 6.9.1.1. How OpenShift Container Platform computes allocated resources An allocated amount of a resource is computed based on the following formula: [Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds] Note The withholding of Hard-Eviction-Thresholds from Allocatable improves system reliability because the value for Allocatable is enforced for pods at the node level. If Allocatable is negative, it is set to 0 . Each node reports the system resources that are used by the container runtime and kubelet. To simplify configuring the system-reserved parameter, view the resource use for the node by using the node summary API. The node summary is available at /api/v1/nodes/<node>/proxy/stats/summary . 6.9.1.2. How nodes enforce resource constraints The node is able to limit the total amount of resources that pods can consume based on the configured allocatable value.
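Before tuning the system-reserved setting, you can inspect these figures directly. A short sketch, assuming a node named worker-0 (substitute one of your own node names) and cluster-admin access:

# Compare the reported machine capacity with the allocatable value enforced for pods
oc get node worker-0 -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'

# Query the node summary API for the resource use of the kubelet and container runtime
oc get --raw "/api/v1/nodes/worker-0/proxy/stats/summary"

The gap between capacity and allocatable is the reservation plus the hard eviction threshold, and the allocatable figure is the ceiling that the node enforces for pods.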
This feature significantly improves the reliability of the node by preventing pods from using CPU and memory resources that are needed by system services such as the container runtime and node agent. To improve node reliability, administrators should reserve resources based on a target for resource use. The node enforces resource constraints by using a new cgroup hierarchy that enforces quality of service. All pods are launched in a dedicated cgroup hierarchy that is separate from system daemons. Administrators should treat system daemons similar to pods that have a guaranteed quality of service. System daemons can burst within their bounding control groups and this behavior must be managed as part of cluster deployments. Reserve CPU and memory resources for system daemons by specifying the amount of CPU and memory resources in system-reserved . Enforcing system-reserved limits can prevent critical system services from receiving CPU and memory resources. As a result, a critical system service can be ended by the out-of-memory killer. The recommendation is to enforce system-reserved only if you have profiled the nodes exhaustively to determine precise estimates and you are confident that critical system services can recover if any process in that group is ended by the out-of-memory killer. 6.9.1.3. Understanding Eviction Thresholds If a node is under memory pressure, it can impact the entire node and all pods running on the node. For example, a system daemon that uses more than its reserved amount of memory can trigger an out-of-memory event. To avoid or reduce the probability of system out-of-memory events, the node provides out-of-resource handling. You can reserve some memory using the --eviction-hard flag. The node attempts to evict pods whenever memory availability on the node drops below the absolute value or percentage. If system daemons do not exist on a node, pods are limited to the memory capacity - eviction-hard . For this reason, resources set aside as a buffer for eviction before reaching out of memory conditions are not available for pods. The following is an example to illustrate the impact of node allocatable for memory: Node capacity is 32Gi --system-reserved is 3Gi --eviction-hard is set to 100Mi . For this node, the effective node allocatable value is 28.9Gi . If the node and system components use all their reservation, the memory available for pods is 28.9Gi , and kubelet evicts pods when it exceeds this threshold. If you enforce node allocatable, 28.9Gi , with top-level cgroups, then pods can never exceed 28.9Gi . Evictions are not performed unless system daemons consume more than 3.1Gi of memory. If system daemons do not use up all their reservation, with the above example, pods would face memcg OOM kills from their bounding cgroup before node evictions kick in. To better enforce QoS under this situation, the node applies the hard eviction thresholds to the top-level cgroup for all pods to be Node Allocatable + Eviction Hard Thresholds . If system daemons do not use up all their reservation, the node will evict pods whenever they consume more than 28.9Gi of memory. If eviction does not occur in time, a pod will be OOM killed if pods consume 29Gi of memory. 6.9.1.4. How the scheduler determines resource availability The scheduler uses the value of node.Status.Allocatable instead of node.Status.Capacity to decide if a node will become a candidate for pod scheduling. By default, the node will report its machine capacity as fully schedulable by the cluster. 6.9.2. 
Automatically allocating resources for nodes OpenShift Container Platform can automatically determine the optimal system-reserved CPU and memory resources for nodes associated with a specific machine config pool and update the nodes with those values when the nodes start. By default, the system-reserved CPU is 500m and system-reserved memory is 1Gi . To automatically determine and allocate the system-reserved resources on nodes, create a KubeletConfig custom resource (CR) to set the autoSizingReserved: true parameter. A script on each node calculates the optimal values for the respective reserved resources based on the installed CPU and memory capacity on each node. The script takes into account that increased capacity requires a corresponding increase in the reserved resources. Automatically determining the optimal system-reserved settings ensures that your cluster is running efficiently and prevents node failure due to resource starvation of system components, such as CRI-O and kubelet, without your needing to manually calculate and update the values. This feature is disabled by default. Prerequisites Obtain the label associated with the static MachineConfigPool object for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels . Tip If an appropriate label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change: Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 #... 1 Assign a name to CR. 2 Add the autoSizingReserved parameter set to true to allow OpenShift Container Platform to automatically determine and allocate the system-reserved resources on the nodes associated with the specified label. To disable automatic allocation on those nodes, set this parameter to false . 3 Specify the label from the machine config pool that you configured in the "Prerequisites" section. You can choose any desired labels for the machine config pool, such as custom-kubelet: small-pods , or the default label, pools.operator.machineconfiguration.openshift.io/worker: "" . The example enables automatic resource allocation on all worker nodes. OpenShift Container Platform drains the nodes, applies the kubelet config, and restarts the nodes. Create the CR by entering the following command: USD oc create -f <file_name>.yaml Verification Log in to a node you configured by entering the following command: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: # chroot /host View the /etc/node-sizing.env file: Example output SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08 The kubelet uses the system-reserved values in the /etc/node-sizing.env file. In the example, the worker nodes are allocated 0.08 CPU and 3 Gi of memory. It can take several minutes for the optimal values to appear. 6.9.3. Manually allocating resources for nodes OpenShift Container Platform supports the CPU and memory resource types for allocation. 
The ephemeral-resource resource type is also supported. For the cpu type, you specify the resource quantity in units of cores, such as 200m , 0.5 , or 1 . For memory and ephemeral-storage , you specify the resource quantity in units of bytes, such as 200Ki , 50Mi , or 5Gi . By default, the system-reserved CPU is 500m and system-reserved memory is 1Gi . As an administrator, you can set these values by using a kubelet config custom resource (CR) through a set of <resource_type>=<resource_quantity> pairs (e.g., cpu=200m,memory=512Mi ). Important You must use a kubelet config CR to manually set resource values. You cannot use a machine config CR. For details on the recommended system-reserved values, refer to the recommended system-reserved values . Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #... 1 Assign a name to CR. 2 Specify the label from the machine config pool. 3 Specify the resources to reserve for the node components and system components. Run the following command to create the CR: USD oc create -f <file_name>.yaml 6.10. Allocating specific CPUs for nodes in a cluster When using the static CPU Manager policy , you can reserve specific CPUs for use by specific nodes in your cluster. For example, on a system with 24 CPUs, you could reserve CPUs numbered 0 - 3 for the control plane allowing the compute nodes to use CPUs 4 - 23. 6.10.1. Reserving CPUs for nodes To explicitly define a list of CPUs that are reserved for specific nodes, create a KubeletConfig custom resource (CR) to define the reservedSystemCPUs parameter. This list supersedes the CPUs that might be reserved using the systemReserved parameter. Procedure Obtain the label associated with the machine config pool (MCP) for the type of node you want to configure: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #... 1 Get the MCP label. Create a YAML file for the KubeletConfig CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: "0,1,2,3" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 #... 1 Specify a name for the CR. 2 Specify the core IDs of the CPUs you want to reserve for the nodes associated with the MCP. 3 Specify the label from the MCP. 
Create the CR object: USD oc create -f <file_name>.yaml Additional resources For more information on the systemReserved parameter, see Allocating resources for nodes in an OpenShift Container Platform cluster . 6.11. Enabling TLS security profiles for the kubelet You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by the kubelet when it is acting as an HTTP server. The kubelet uses its HTTP/GRPC server to communicate with the Kubernetes API server, which sends commands to pods, gathers logs, and run exec commands on pods through the kubelet. A TLS security profile defines the TLS ciphers that the Kubernetes API server must use when connecting with the kubelet to protect communication between the kubelet and the Kubernetes API server. Note By default, when the kubelet acts as a client with the Kubernetes API server, it automatically negotiates the TLS parameters with the API server. 6.11.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 6.4. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 6.11.2. Configuring the TLS security profile for the kubelet To configure a TLS security profile for the kubelet when it is acting as an HTTP server, create a KubeletConfig custom resource (CR) to specify a predefined or custom TLS security profile for specific nodes. If a TLS security profile is not configured, the default TLS security profile is Intermediate . Sample KubeletConfig CR that configures the Old TLS security profile on worker nodes apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig # ... spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" # ... 
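Applying a KubeletConfig such as the sample above rolls the change out through the machine config pool, so the selected nodes are cordoned, updated, and rebooted one at a time. A brief sketch of applying and watching that rollout, assuming the sample is saved in a hypothetical file named old-tls-profile.yaml:

oc create -f old-tls-profile.yaml

# Watch the pool until UPDATED returns to True and UPDATING returns to False
oc get machineconfigpool worker -w

# Optionally watch the nodes cycle through SchedulingDisabled and back to Ready
oc get nodes -w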
You can see the ciphers and the minimum TLS version of the configured TLS security profile in the kubelet.conf file on a configured node. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Procedure Create a KubeletConfig CR to configure the TLS security profile: Sample KubeletConfig CR for a Custom profile apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 4 #... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. 4 Optional: Specify the machine config pool label for the nodes you want to apply the TLS security profile. Create the KubeletConfig object: USD oc create -f <filename> Depending on the number of worker nodes in the cluster, wait for the configured nodes to be rebooted one by one. Verification To verify that the profile is set, perform the following steps after the nodes are in the Ready state: Start a debug session for a configured node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host View the kubelet.conf file: sh-4.4# cat /etc/kubernetes/kubelet.conf Example output "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", #... "tlsCipherSuites": [ "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256" ], "tlsMinVersion": "VersionTLS12", #... 6.12. Machine Config Daemon metrics The Machine Config Daemon is a part of the Machine Config Operator. It runs on every node in the cluster. The Machine Config Daemon manages configuration changes and updates on each of the nodes. 6.12.1. Machine Config Daemon metrics Beginning with OpenShift Container Platform 4.3, the Machine Config Daemon provides a set of metrics. These metrics can be accessed using the Prometheus Cluster Monitoring stack. The following table describes this set of metrics. Some entries contain commands for getting specific logs. However, the most comprehensive set of logs is available using the oc adm must-gather command. Note Metrics marked with * in the Name and Description columns represent serious errors that might cause performance problems. Such problems might prevent updates and upgrades from proceeding. Table 6.5. MCO metrics Name Format Description Notes mcd_host_os_and_version []string{"os", "version"} Shows the OS that MCD is running on, such as RHCOS or RHEL. In case of RHCOS, the version is provided. mcd_drain_err* Logs errors received during failed drain. * While drains might need multiple tries to succeed, terminal failed drains prevent updates from proceeding. The drain_time metric, which shows how much time the drain took, might help with troubleshooting. 
For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_pivot_err* []string{"err", "node", "pivot_target"} Logs errors encountered during pivot. * Pivot errors might prevent OS upgrades from proceeding. For further investigation, run this command to see the logs from the machine-config-daemon container: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_state []string{"state", "reason"} State of Machine Config Daemon for the indicated node. Possible states are "Done", "Working", and "Degraded". In case of "Degraded", the reason is included. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_kubelet_state* Logs kubelet health failures. * This is expected to be empty, with failure count of 0. If failure count exceeds 2, the error indicating threshold is exceeded. This indicates a possible issue with the health of the kubelet. For further investigation, run this command to access the node and see all its logs: USD oc debug node/<node> - chroot /host journalctl -u kubelet mcd_reboot_err* []string{"message", "err", "node"} Logs the failed reboots and the corresponding errors. * This is expected to be empty, which indicates a successful reboot. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_update_state []string{"config", "err"} Logs success or failure of configuration updates and the corresponding errors. The expected value is rendered-master/rendered-worker-XXXX . If the update fails, an error is present. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon Additional resources About OpenShift Container Platform monitoring Gathering data about your cluster 6.13. Creating infrastructure nodes Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. 
This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 6.13.1. OpenShift Container Platform infrastructure components Each self-managed Red Hat OpenShift subscription includes entitlements for OpenShift Container Platform and other OpenShift-related components. These entitlements are included for running OpenShift Container Platform control plane and infrastructure workloads and do not need to be accounted for during sizing. To qualify as an infrastructure node and use the included entitlement, only components that are supporting the cluster, and not part of an end-user application, can run on those instances. Examples include the following components: Kubernetes and OpenShift Container Platform control plane services The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Red Hat Quay Red Hat OpenShift Data Foundation Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Red Hat OpenShift Service Mesh Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. To create an infrastructure node, you can use a machine set , label the node , or use a machine config pool . 6.13.1.1. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure, also called infra nodes, be provisioned. The installer only provides provisions for control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app , nodes through labeling. Procedure Add a label to the worker node that you want to act as application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra role and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. 
For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra="" 1 # ... 1 This example node selector deploys pods on infrastructure nodes by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources Moving resources to infrastructure machine sets
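To steer a workload onto the newly labeled nodes, give it a node selector for the infra role and, if you added the NoSchedule taint described earlier, a matching toleration. The following deployment is a minimal sketch; the name, image, and taint key are assumptions for illustration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-infra-workload          # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-infra-workload
  template:
    metadata:
      labels:
        app: example-infra-workload
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""    # run only on nodes labeled as infra
      tolerations:
      - key: node-role.kubernetes.io/infra   # assumed taint key; only needed if the infra nodes are tainted
        effect: NoSchedule
        operator: Exists
      containers:
      - name: example
        image: registry.access.redhat.com/ubi9/ubi-minimal   # placeholder image
        command: ["sleep", "infinity"]

Pods from this deployment schedule only onto nodes that carry the node-role.kubernetes.io/infra label and tolerate the infra taint if one is present.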
[ "oc get nodes", "oc get nodes", "NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.27.3 node1.example.com Ready worker 7h v1.27.3 node2.example.com Ready worker 7h v1.27.3", "oc get nodes", "NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.27.3 node1.example.com NotReady,SchedulingDisabled worker 7h v1.27.3 node2.example.com Ready worker 7h v1.27.3", "oc get nodes -o wide", "NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.27.3 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.27.3-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.27.3 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.27.3-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.27.3 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.27.3-30.rhaos4.10.gitf2f339d.el8-dev", "oc get node <node>", "oc get node node1.example.com", "NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.27.3", "oc describe node <node>", "oc describe node node1.example.com", "Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 
3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.27.3-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.27.3 Kube-Proxy Version: v1.27.3 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-sdn ovs-t4dsn 100m (6%) 0 (0%) 300Mi (4%) 0 (0%) openshift-sdn sdn-g79hg 100m (6%) 0 (0%) 200Mi (2%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. 
#", "oc get pod --selector=<nodeSelector>", "oc get pod --selector=kubernetes.io/os", "oc get pod -l=<nodeSelector>", "oc get pod -l kubernetes.io/os=linux", "oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename>", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72%", "oc adm top node --selector=''", "oc adm cordon <node1>", "node/<node1> cordoned", "oc get node <node1>", "NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.27.3", "oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]", "oc adm drain <node1> <node2> --force=true", "oc adm drain <node1> <node2> --grace-period=-1", "oc adm drain <node1> <node2> --ignore-daemonsets=true", "oc adm drain <node1> <node2> --timeout=5s", "oc adm drain <node1> <node2> --delete-emptydir-data=true", "oc adm drain <node1> <node2> --dry-run=true", "oc adm uncordon <node1>", "oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>", "oc label nodes webconsole-7f7f6 unhealthy=true", "kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #", "oc label pods --all <key_1>=<value_1>", "oc label pods --all status=unhealthy", "oc adm cordon <node>", "oc adm cordon node1.example.com", "node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled", "oc adm uncordon <node1>", "oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE>", "oc get machinesets -n openshift-machine-api", "oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api", "oc edit machineset <machine-set-name> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # name: <machine-set-name> namespace: openshift-machine-api # spec: replicas: 2 1 #", "oc adm cordon <node_name>", "oc adm drain <node_name> --force=true", "oc delete node <node_name>", "oc get machineconfigpool --show-labels", "NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False", "oc label machineconfigpool worker custom-kubelet=enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #", "oc create -f <file-name>", "oc create -f master-kube-config.yaml", "oc edit schedulers.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: \"2019-09-10T03:04:05Z\" generation: 1 name: cluster resourceVersion: \"433\" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 
3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #", "oc create -f 99-worker-setsebool.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3", "oc create -f 05-worker-kernelarg-selinuxpermissive.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.27.3 ip-10-0-136-243.ec2.internal Ready master 34m v1.27.3 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.27.3 ip-10-0-142-249.ec2.internal Ready master 34m v1.27.3 ip-10-0-153-11.ec2.internal Ready worker 28m v1.27.3 ip-10-0-153-150.ec2.internal Ready master 34m v1.27.3", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... 
ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit", "oc label machineconfigpool worker kubelet-swap=enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #", "#!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo \"Usage: 'USD0 node_name'\" exit 64 fi Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo \"The script needs OpenStack admin credentials. Exiting\"; exit 77; } Check for admin OpenShift credentials adm top node >/dev/null || { >&2 echo \"The script needs OpenShift admin credentials. Exiting\"; exit 77; } set -x declare -r node_name=\"USD1\" declare server_id server_id=\"USD(openstack server list --all-projects -f value -c ID -c Name | grep \"USDnode_name\" | cut -d' ' -f1)\" readonly server_id Drain the node adm cordon \"USDnode_name\" adm drain \"USDnode_name\" --delete-emptydir-data --ignore-daemonsets --force Power off the server debug \"node/USD{node_name}\" -- chroot /host shutdown -h 1 Verify the server is shut off until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Migrate the node openstack server migrate --wait \"USDserver_id\" Resize the VM openstack server resize confirm \"USDserver_id\" Wait for the resize confirm to finish until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Restart the VM openstack server start \"USDserver_id\" Wait for the node to show up as Ready: until oc get node \"USDnode_name\" | grep -q \"^USD{node_name}[[:space:]]\\+Ready\"; do sleep 5; done Uncordon the node adm uncordon \"USDnode_name\" Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type \"Degraded\" }}{{ if ne .status \"False\" }}DEGRADED{{ end }}{{ else if eq .type \"Progressing\"}}{{ if ne .status \"False\" }}PROGRESSING{{ end }}{{ else if eq .type \"Available\"}}{{ if ne .status \"True\" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\\(DEGRADED\\|PROGRESSING\\|NOTAVAILABLE\\)'; do sleep 5; done", "kubeletConfig: podsPerCore: 10", "kubeletConfig: maxPods: 250", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #", "oc create -f <file_name>.yaml", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False", "get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator", "profile: - name: 
tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. 
name: provider-gce", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;", "apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #", "oc adm cordon <node1>", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force", "error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction", "oc debug node/<node1>", "chroot /host", "systemctl reboot", "ssh core@<master-node>.<cluster_name>.<base_domain>", "sudo systemctl reboot", "oc adm uncordon <node1>", "ssh core@<target_node>", "sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig", "oc get node <node1>", "NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #", "oc create -f <file_name>.yaml", "oc create -f gc-container.yaml", "kubeletconfig.machineconfiguration.openshift.io/gc-container created", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "[Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds]", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: 
creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #", "oc create -f <file_name>.yaml", "oc debug node/<node_name>", "chroot /host", "SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #", "oc create -f <file_name>.yaml", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: \"0,1,2,3\" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #", "oc create -f <file_name>.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #", "oc create -f <filename>", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/kubernetes/kubelet.conf", "\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/nodes/working-with-nodes
Configuring the model registry component
Configuring the model registry component Red Hat OpenShift AI Cloud Service 1 Configuring the model registry component in Red Hat OpenShift AI Cloud Service
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/configuring_the_model_registry_component/index
Chapter 8. Creating infrastructure machine sets
Chapter 8. Creating infrastructure machine sets Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Red Hat OpenShift Service Mesh deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 8.1. OpenShift Container Platform infrastructure components Each self-managed Red Hat OpenShift subscription includes entitlements for OpenShift Container Platform and other OpenShift-related components. These entitlements are included for running OpenShift Container Platform control plane and infrastructure workloads and do not need to be accounted for during sizing. To qualify as an infrastructure node and use the included entitlement, only components that are supporting the cluster, and not part of an end-user application, can run on those instances. Examples include the following components: Kubernetes and OpenShift Container Platform control plane services The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Red Hat Quay Red Hat OpenShift Data Foundation Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Red Hat OpenShift Service Mesh Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. To create an infrastructure node, you can use a machine set , label the node , or use a machine config pool . 8.2. Creating infrastructure machine sets for production environments In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. 
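If you are not sure how your existing compute machine sets are distributed across availability zones, you can list them together with their zone before deciding where to place the infrastructure machine sets. The following command is a convenience sketch: the ZONE column path assumes an AWS providerSpec, and other platforms store the zone under a different field.
$ oc get machinesets -n openshift-machine-api \
  -o custom-columns=NAME:.metadata.name,ZONE:.spec.template.spec.providerSpec.value.placement.availabilityZone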
Red Hat OpenShift Service Mesh deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 8.2.1. Creating infrastructure machine sets for different clouds Use the sample compute machine set for your cloud. 8.2.1.1. Sample YAML for a compute machine set custom resource on AWS The sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) Local Zone and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra 3 machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 Specify the infrastructure ID, infra role node label, and zone. 3 Specify the infra role node label. 4 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-<role>-<zone> 5 Specify the zone name, for example, us-east-1a . 6 Specify the region, for example, us-east-1 . 
7 Specify the infrastructure ID and zone. 8 Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:[email protected] . Note Custom tags can also be specified during installation in the install-config.yml file. If the install-config.yml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yml file. 9 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on AWS support non-guaranteed Spot Instances . You can save on costs by using Spot Instances at a lower price compared to On-Demand Instances on AWS. Configure Spot Instances by adding spotMarketOptions to the MachineSet YAML file. 8.2.1.2. Sample YAML for a compute machine set custom resource on Azure This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and infra is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: infra 2 machine.openshift.io/cluster-api-machine-type: infra name: <infrastructure_id>-infra-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg sshPrivateKey: "" sshPublicKey: "" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: "1" 8 taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 Specify the infrastructure ID that is based on the cluster ID that you set when 
you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 Specify the infra node label. 3 Specify the infrastructure ID, infra node label, and region. 4 Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see "Using the Azure Marketplace offering". 5 Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 6 Specify the region to place machines on. 7 Optional: Specify custom tags in your machine set. Provide the tag name in <custom_tag_name> field and the corresponding tag value in <custom_tag_value> field. 8 Specify the zone within your region to place machines on. Ensure that your region supports the zone that you specify. Important If your region supports availability zones, you must specify the zone. Specifying the zone avoids volume node affinity failure when a pod requires a persistent volume attachment. To do this, you can create a compute machine set for each zone in the same region. 9 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on Azure support non-guaranteed Spot VMs . You can save on costs by using Spot VMs at a lower price compared to standard VMs on Azure. You can configure Spot VMs by adding spotVMOptions to the MachineSet YAML file. Additional resources Using the Azure Marketplace offering 8.2.1.3. Sample YAML for a compute machine set custom resource on Azure Stack Hub This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
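If you need to look up the availability set that the following sample references, you can list the availability sets in the cluster resource group with the Azure CLI. This is an optional convenience step; it assumes that the az CLI is installed and configured for your Azure Stack Hub environment, and that the resource group follows the installer's <infrastructure_id>-rg naming convention shown in the sample.
$ az vm availability-set list -g <infrastructure_id>-rg -o table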
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: "" sshPublicKey: "" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: "1" 22 1 5 7 14 16 17 18 21 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 11 19 20 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID, <infra> node label, and region. 12 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 15 Specify the region to place machines on. 13 Specify the availability set for the cluster. 22 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. Note Machine sets running on Azure Stack Hub do not support non-guaranteed Spot VMs. 8.2.1.4. 
Sample YAML for a compute machine set custom resource on IBM Cloud This sample YAML defines a compute machine set that runs in a specified IBM Cloud(R) zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 5 7 The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 16 The <infra> node label. 4 6 10 The infrastructure ID, <infra> node label, and region. 11 The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation. 12 The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify. 13 Specify the IBM Cloud(R) instance profile . 14 Specify the region to place machines on. 15 The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID. 17 The VPC name. 18 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 19 The taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 8.2.1.5. Sample YAML for a compute machine set custom resource on GCP This sample YAML defines a compute machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" , where infra is the node label to add. 
Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI. Infrastructure ID The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Image path The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \ get machineset/<infrastructure_id>-worker-a Sample GCP MachineSet values apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 7 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 2 For <infra> , specify the <infra> node label. 3 Specify the path to the image that is used in current compute machine sets. To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 For <project_name> , specify the name of the GCP project that you use for your cluster. 
6 Specifies a single service account. Multiple service accounts are not supported. 7 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on GCP support non-guaranteed preemptible VM instances . You can save on costs by using preemptible VM instances at a lower price compared to normal instances on GCP. You can configure preemptible VM instances by adding preemptible to the MachineSet YAML file. 8.2.1.6. Sample YAML for a compute machine set custom resource on Nutanix This sample YAML defines a Nutanix compute machine set that creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI ( oc ). Infrastructure ID The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> name: <infrastructure_id>-<infra>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: "" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14 taints: 15 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 2 Specify the <infra> node label. 3 Specify the infrastructure ID, <infra> node label, and zone. 4 Annotations for the cluster autoscaler. 5 Specifies the boot type that the compute machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . 
Valid values are Legacy , SecureBoot , or UEFI . The default is Legacy . Note You must use the Legacy boot type in OpenShift Container Platform 4.16. 6 Specify one or more Nutanix Prism categories to apply to compute machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management . 7 Specify a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid , so there is a uuid stanza. 8 Specify the image to use. Use an image from an existing default compute machine set for the cluster. 9 Specify the amount of memory for the cluster in Gi. 10 Specify the Nutanix project that you use for your cluster. In this example, the project type is name , so there is a name stanza. 11 Specify the size of the system disk in Gi. 12 Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that installation program populates in the default compute machine set. 13 Specify the number of vCPU sockets. 14 Specify the number of vCPUs per socket. 15 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 8.2.1.7. Sample YAML for a compute machine set custom resource on RHOSP This sample YAML defines a compute machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone> 1 5 7 14 16 17 18 19 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 20 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID and <infra> node label. 11 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 12 To set a server group policy for the MachineSet, enter the value that is returned from creating a server group . For most deployments, anti-affinity or soft-anti-affinity policies are recommended. 13 Required for deployments to multiple networks. If deploying to multiple networks, this list must include the network that is used as the primarySubnet value. 15 Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. 8.2.1.8. Sample YAML for a compute machine set custom resource on vSphere This sample YAML defines a compute machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
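If you are not sure which vSphere-specific values to use, you can copy them from an existing worker compute machine set. The following commands are a convenience sketch that follows the same pattern used for other platforms in this chapter; the machine set name is illustrative.
$ oc -n openshift-machine-api \
  -o jsonpath='{.spec.template.spec.providerSpec.value.template}{"\n"}' \
  get machineset/<infrastructure_id>-worker
$ oc -n openshift-machine-api \
  -o jsonpath='{.spec.template.spec.providerSpec.value.workspace.datastore}{"\n"}' \
  get machineset/<infrastructure_id>-worker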
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: "<vm_network_name>" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and <infra> node label. 6 7 9 Specify the <infra> node label. 10 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 11 Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster. 12 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 13 Specify the vCenter data center to deploy the compute machine set on. 14 Specify the vCenter datastore to deploy the compute machine set on. 15 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 16 Specify the vSphere resource pool for your VMs. 17 Specify the vCenter server IP or fully qualified domain name. 8.2.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. 
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 8.2.3. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure, also called infra nodes, be provisioned. The installer only provides provisions for control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app , nodes through labeling. Procedure Add a label to the worker node that you want to act as application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra role and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. 
This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra="" 1 # ... 1 This example node selector deploys pods on infrastructure nodes by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources Moving resources to infrastructure machine sets 8.2.4. Creating a machine config pool for infrastructure machines If you need infrastructure machines to have dedicated configurations, you must create an infra pool. Important Creating a custom machine configuration pool overrides default worker pool configurations if they refer to the same file or unit. Procedure Add a label to the node you want to assign as the infra node with a specific label: USD oc label node <node_name> <label> USD oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra= Create a machine config pool that contains both the worker role and your custom role as machine config selector: USD cat infra.mcp.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" 2 1 Add the worker role and your custom role. 2 Add the label you added to the node as a nodeSelector . Note Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool. 
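Optional: Before you create the pool, you can confirm which nodes the nodeSelector in infra.mcp.yaml matches. This check assumes that you applied the node-role.kubernetes.io/infra label as shown in the previous step.
$ oc get nodes -l node-role.kubernetes.io/infra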
After you have the YAML file, you can create the machine config pool: USD oc create -f infra.mcp.yaml Check the machine configs to ensure that the infrastructure configuration rendered successfully: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d You should see a new machine config, with the rendered-infra-* prefix. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra . Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes. Note After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration. Create a machine config: USD cat infra.mc.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra 1 Add the label you added to the node as a nodeSelector . 
Apply the machine config to the infra-labeled nodes: USD oc create -f infra.mc.yaml Confirm that your new machine config pool is available: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m In this example, a worker node was changed to an infra node. Additional resources See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool. 8.3. Assigning machine set resources to infrastructure nodes After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied. However, with an infra node being assigned as a worker, there is a chance user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods you want to control. 8.3.1. Binding infrastructure node workloads using taints and tolerations If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it. Important It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exists. The infra label communicates to the cluster that it does not count toward the total number of subscriptions. Prerequisites Configure additional MachineSet objects in your OpenShift Container Platform cluster. Procedure Add a taint to the infra node to prevent scheduling user workloads on it: Determine if the node has the taint: USD oc describe nodes <node_name> Sample output oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker ... Taints: node-role.kubernetes.io/infra:NoSchedule ... This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the step. If you have not configured a taint to prevent scheduling user workloads on it: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: node-role.kubernetes.io/infra effect: NoSchedule value: reserved ... This example places a taint on node1 that has key node-role.kubernetes.io/infra and taint effect NoSchedule . Nodes with the NoSchedule effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node. Note If a descheduler is used, pods violating node taints could be evicted from the cluster. 
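Optional: Confirm that the taint was applied before continuing. In this check, node1 is the example node name used above.
$ oc get node node1 -o jsonpath='{.spec.taints}{"\n"}'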
Add the taint with NoExecute Effect along with the above taint with NoSchedule Effect: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved ... This example places a taint on node1 that has the key node-role.kubernetes.io/infra and taint effect NoExecute . Nodes with the NoExecute effect schedule only pods that tolerate the taint. The effect will remove any existing pods from the node that do not have a matching toleration. Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads. Add the following code to the Pod object specification: tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 value: reserved 3 - effect: NoExecute 4 key: node-role.kubernetes.io/infra 5 operator: Equal 6 value: reserved 7 1 Specify the effect that you added to the node. 2 Specify the key that you added to the node. 3 Specify the value of the key-value pair taint that you added to the node. 4 Specify the effect that you added to the node. 5 Specify the key that you added to the node. 6 Specify the Equal Operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node. 7 Specify the value of the key-value pair taint that you added to the node. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node. Note Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details. Additional resources See Controlling pod placement using the scheduler for general information on scheduling a pod to a node. See Moving resources to infrastructure machine sets for instructions on scheduling pods to infra nodes. See Understanding taints and tolerations for more details about different effects of taints. 8.4. Moving resources to infrastructure machine sets Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created by adding the infrastructure node selector, as shown: spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Applying a specific node selector to all infrastructure components causes OpenShift Container Platform to schedule those workloads on nodes with that label . 8.4.1. Moving the router You can deploy the router pod to a different compute machine set. By default, the pod is deployed to a worker node. Prerequisites Configure additional compute machine sets in your OpenShift Container Platform cluster.
Procedure View the IngressController custom resource for the router Operator: USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml The command output resembles the following text: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: "11341" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: "True" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default Edit the ingresscontroller resource and change the nodeSelector to use the infra label: USD oc edit ingresscontroller default -n openshift-ingress-operator spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Confirm that the router pod is running on the infra node. View the list of router pods and note the node name of the running pod: USD oc get pod -n openshift-ingress -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none> In this example, the running pod is on the ip-10-0-217-226.ec2.internal node. View the node status of the running pod: USD oc get node <node_name> 1 1 Specify the <node_name> that you obtained from the pod list. Example output NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.29.4 Because the role list includes infra , the pod is running on the correct node. 8.4.2. Moving the default registry You configure the registry Operator to deploy its pods to different nodes. Prerequisites Configure additional compute machine sets in your OpenShift Container Platform cluster. Procedure View the config/instance object: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: "56174" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status: ... 
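Optional: You can also extract only the current nodeSelector from the object to confirm that none is set before you move the registry:
$ oc get configs.imageregistry.operator.openshift.io/cluster \
  -o jsonpath='{.spec.nodeSelector}{"\n"}'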
Edit the config/instance object: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Verify the registry pod has been moved to the infrastructure node. Run the following command to identify the node where the registry pod is located: USD oc get pods -o wide -n openshift-image-registry Confirm the node has the label you specified: USD oc describe node <node_name> Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list. 8.4.3. Moving the monitoring solution The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute metricsServer: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved
effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute monitoringPlugin: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Watch the monitoring pods move to the new machines: USD watch 'oc get pod -n openshift-monitoring -o wide' If a component has not moved to the infra node, delete the pod with this component: USD oc delete pod -n openshift-monitoring <pod> The component from the deleted pod is re-created on the infra node. 8.4.4. Moving the Vertical Pod Autoscaler Operator components The Vertical Pod Autoscaler Operator (VPA) consists of three components: the recommender, updater, and admission controller. The Operator and each component has its own pod in the VPA namespace on the control plane nodes. You can move the VPA Operator and component pods to infrastructure nodes by adding a node selector to the VPA subscription and the VerticalPodAutoscalerController CR. The following example shows the default deployment of the VPA pods to the control plane nodes. Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-master-1 <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-master-1 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-master-0 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-master-1 <none> <none> Procedure Move the VPA Operator pod by adding a node selector to the Subscription custom resource (CR) for the VPA Operator: Edit the CR: USD oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler Add a node selector to match the node role label on the infra node: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: "" name: vertical-pod-autoscaler # ... spec: config: nodeSelector: node-role.kubernetes.io/infra: "" 1 1 Specifies the node role of an infra node. Note If the infra node uses taints, you need to add a toleration to the Subscription CR. For example: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: "" name: vertical-pod-autoscaler # ... spec: config: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: 1 - key: "node-role.kubernetes.io/infra" operator: "Exists" effect: "NoSchedule" 1 Specifies a toleration for a taint on the infra node. 
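After you save the Subscription changes, Operator Lifecycle Manager redeploys the Operator pod. As an optional intermediate check (a sketch, not part of the original procedure), you can confirm that the Operator pod has been rescheduled before moving the remaining components: $ oc get pods -n openshift-vertical-pod-autoscaler -o wide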
Move each VPA component by adding node selectors to the VerticalPodAutoscalerController custom resource (CR): Edit the CR: USD oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler Add node selectors to match the node role label on the infra node: apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler # ... spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: "" 1 recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: "" 2 updater: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: "" 3 1 Optional: Specifies the node role for the VPA admission pod. 2 Optional: Specifies the node role for the VPA recommender pod. 3 Optional: Specifies the node role for the VPA updater pod. Note If a target node uses taints, you need to add a toleration to the VerticalPodAutoscalerController CR. For example: apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler # ... spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: "" tolerations: 1 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: "" tolerations: 2 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" updater: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: "" tolerations: 3 - key: "my-example-node-taint-key" operator: "Exists" effect: "NoSchedule" 1 Specifies a toleration for the admission controller pod for a taint on the infra node. 2 Specifies a toleration for the recommender pod for a taint on the infra node. 3 Specifies a toleration for the updater pod for a taint on the infra node. Verification You can verify the pods have moved by using the following command: USD oc get pods -n openshift-vertical-pod-autoscaler -o wide The pods are no longer deployed to the control plane nodes. Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-infra-eastus3-2bndt <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> Additional resources Moving monitoring components to different nodes Using node selectors to move logging resources Using taints and tolerations to control logging pod placement
[ "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra 3 machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: infra 2 machine.openshift.io/cluster-api-machine-type: infra name: <infrastructure_id>-infra-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: 
<infrastructure_id>-rg sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: \"1\" 8 taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: \"1\" 22", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 
6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 7 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> name: <infrastructure_id>-<infra>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: 
<infrastructure_id>-<infra>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14 taints: 15 - key: node-role.kubernetes.io/infra effect: NoSchedule", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone>", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: 
node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1", "oc label node <node_name> <label>", "oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=", "cat infra.mcp.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2", "oc create -f infra.mcp.yaml", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 
365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d", "cat infra.mc.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra", "oc create -f infra.mc.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m", "oc describe nodes <node_name>", "describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoSchedule value: reserved", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved", "tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 value: reserved 3 - effect: NoExecute 4 key: node-role.kubernetes.io/infra 5 operator: Equal 6 value: reserved 7", "spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get 
ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.29.4", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" 
tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute metricsServer: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute monitoringPlugin: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-master-1 <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-master-1 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-master-0 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-master-1 <none> <none>", "oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"node-role.kubernetes.io/infra\" operator: \"Exists\" effect: \"NoSchedule\"", "oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" 1 recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" 2 updater: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" 3", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: 
resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 2 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" updater: container: resources: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 3 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\"", "oc get pods -n openshift-vertical-pod-autoscaler -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-infra-eastus3-2bndt <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-infra-eastus1-lrgj8 <none> <none>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/machine_management/creating-infrastructure-machinesets
Chapter 6. Running System Containers
Chapter 6. Running System Containers System containers provide a way to containerize services that need to run before the docker daemon is running. They use different technologies than the Docker-formatted containers: ostree for storage, runc for runtime, skopeo for searching and systemd for service management. Previously, such services were provided in the system as packages, or as part of the ostree in Atomic Host. Excluding applications from the Atomic Host system and containerizing them makes the system itself smaller. Red Hat provides the etcd and flannel services as system containers. Note To use the system containers on Atomic Host, you need to have the atomic command-line tool version 1.12 or later, along with ostree and runc utilities (all of which are included on the latest version of Atomic Host). To use system containers on RHEL Server systems, you must be running at least RHEL 7.3.3 (because the ostree package was not available on RHEL server until that release). Because they are not Docker-formatted containers, you do not use the docker command for container management. The atomic command-line tool and systemd are used to pull, install and manage system containers. Here is a brief comparison between how you pull, install and run docker containers and system containers. docker docker pull rhel7/rsyslog atomic install rhel7/rsyslog atomic run rhel7/rsyslog system containers atomic pull --storage=ostree rhel7/etcd atomic install --system [--set=VARIABLE] rhel7/etcd (you will notice this command also runs systemctl start etcd ) The atomic install command supports several options to configure the settings for system containers. The --set option is used to pass variables which you would normally set for this service. These variables are stored in the manifest.json file. To uninstall a system image, use: System containers use runc as runtime, and docker and runc images are stored in different places on the system: /var/lib/containers/atomic/$NAME and /etc/systemd/system/$NAME.service respectively. Therefore, when you use docker images and docker ps you will only see the Docker-formatted containers. The atomic tool will show all containers on the system: Note that unlike docker containers, where the services are managed by the docker daemon, with system containers you have to manage the dependencies between the services yourself. For example, etcd is a dependency for flannel and when you run flannel, it checks whether etcd is set up (if it is not, flannel will wait). System containers require root privileges. Because runc requires root, containers also run as the root user. 6.1. Using the etcd System Container Image 6.1.1. Overview The etcd service provides a highly-available key value store that can be used by applications that need to access and share configuration and service discovery information. Applications that use etcd include Kubernetes , flannel , OpenShift , fleet , vulcand , and locksmith . The etcd container described here is what is referred to as a system container. A system container is designed to come up before the docker service or in a situation where no docker service is available. In this case, the etcd container can be used to bring up a keystore for the flannel system container, both of which can then be in place to provide networking services before the docker service comes up. Prior to RHEL Atomic 7.3.2, there were two containerized versions of the etcd services maintained by Red Hat: etcd 2 (etcd container) and etcd 3 (etcd3 container).
With 7.3.2, etcd 2 has been deprecated and etcd 3 is the only supported version of etcd. So the only available etcd container is: etcd : This is based on etcd version 3. Support for etcd Along with the etcd 3 container, the etcd3 rpm package is also deprecated. Going forward, Red Hat expects to maintain only one version of etcd at a time. For RHEL Atomic 7.3.2, system containers in general and the etcd container specifically are supported as Tech Preview only. Besides bypassing the docker service, this etcd container can also bypass the docker command and the storage area used to hold docker containers by default. To use the container, you need a combination of commands that include atomic (to pull, list, install, delete and uninstall the image), skopeo (to inspect the image), runc (to ultimately run the image) and systemctl to manage the image among your other systemd services. Here are some of the features of the etcd container: Supports atomic pull : Use the atomic pull command to pull the container to your system. Supports atomic install : Use the atomic install --system command to set up the etcd service to run as a systemd service. Configures the etcd service : When the etcd service starts, a set of ETCD environment variables are exported. Those variables identify the location of the etcd data directory and set the IP addresses and ports the etcd service listens on. System container : After you have used the atomic command to install the etcd container, you can use the systemd systemctl command to manage the service. 6.1.2. Getting and Running the etcd System Container To use an etcd system container image on a RHEL Atomic system, you need to pull it, install it and enable it. The identity of the currently supported etcd container is: The procedure below illustrates how to pull, install, and run the etcd container. Pull the etcd container : While logged into the RHEL Atomic system, get the etcd container by running the following command: This pulls the etcd system container from the Red Hat Registry to the ostree storage area on the local system. By setting ostree storage, the docker storage area is not used and the docker daemon and docker command won't see the pulled etcd container image. Install the etcd container : Type the following to do a default installation of the etcd container so it is set up as a systemd service. Note Before running atomic install , refer to "Configuring etcd" to see options you could add to the atomic install command to change it from the default install shown here. Start the etcd service : Use the systemctl command to start the installed etcd service as you would any other systemd service. Check etcd with runc : To make sure the etcd container is running, you can use the runc list command as you would use docker ps to see containers running under docker: Test that the etcd service is working : You can use the curl command to set and retrieve keys from your etcd service. This example assigns a value to a key called testkey , then retrieves that value: Note that the first action does a set to set the key and the second does a get to return the value of the key. The "Configuring etcd" section shows ways of setting up the etcd service in different ways. 6.1.3. Configuring etcd You can change how the etcd service is configured on the atomic install command line or after it is running using the runc command. 6.1.3.1. Configuring etcd during "atomic install" The correct way to configure the etcd container image is when you first run atomic install .
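For instance, a minimal sketch of such an install-time override (the variable and address shown here are illustrative, not a recommended configuration): $ atomic install --system --set ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" rhel7/etcd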
Settings that are defined initially in the /etc/etcd/etcd.conf file inside of the container can be overridden on the atomic install command line using the --set option. For example, this example shows how to reset the ETCD_ADVERTISE_CLIENT_URLS value: Here is the list of other values and settings in the etcd.conf file that you can change on the atomic install command line. See the etcd.conf.yaml.sample page for descriptions of these settings. 6.1.3.2. Configuring etcd security settings The etcd service is configured with authentication and encryption disabled by default. Because etcd is initially configured to listen to localhost only, the lack of security becomes much more of an issue when the etcd service is exposed to nodes that are outside of the local host. Remote attackers will have access to passwords and secret keys. In general, here is what you need to do to configure a secure, multi-node etcd cluster service: Create TLS certificates and a signed key pair for every member in a cluster, as described in The etcd Security Model . Identify the certificates and keys in the /etc/etcd/etcd.conf file. Open the firewall to allow access to TCP ports 2379 (client communication) and 2380 (server-to-server communication). Install and run the etcd service (see atomic install --system rhel7/etcd as described earlier). 6.1.3.3. Configuring etcd with "runc" With the etcd container running, you can configure settings in the etcd container using the runc exec command. For example, you could run the etcdctl command inside the etcd container to change the network range set by the Network value in the etcd keystore (used later by the flannel service) with the following command: The example just shown illustrates the runc exec command running etcdctl set at first to set the Network value. After that, runc executes the etcdctl get command to get configuration information. 6.1.4. Tips for Running etcd Container If you are done with the etcd container image, you can remove it with the atomic uninstall command: For more information on system containers, see Introduction to System Containers . 6.2. Using the flannel System Container Image 6.2.1. Overview The flannel service was designed to provide virtual subnets for use among container hosts. Using flannel, Kubernetes (or other container platforms) can ensure that each container pod has a unique address that is routable within a Kubernetes cluster. As a result, the job of finding ports and services between containers is simpler. The flannel container described here is what is referred to as a system container. A system container is designed to come up before the docker service or in a situation where no docker service is available. In this case, the flannel container is meant to be brought up after the etcd service (also available as a system container) and before docker and kubernetes services to provide virtual subnets that the later services can leverage. Besides bypassing the docker service, the flannel container can also bypass the docker command and the storage area used to hold docker containers by default. To use the container, you need a combination of commands that include atomic (to pull, list, install, delete and uninstall the image), skopeo (to inspect the image), runc (to ultimately run the image) and systemctl to manage the image among your other systemd services. Note For RHEL 7.3, system containers in general and the flannel container specifically are supported as Tech Preview only.
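Before pulling the image, you can use the skopeo command mentioned above to inspect its metadata; a sketch (the registry path is assumed from the rhel7/flannel image name used by the pull command later in this section): $ skopeo inspect docker://registry.access.redhat.com/rhel7/flannel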
Here are some of the features of the flannel container: Supports atomic pull : Use the atomic pull --storage=ostree command to pull the container to the ostree storage area, instead of default docker storage, on your system. Supports atomic install : Use the atomic install --system command to set up the flannel service to run as a systemd service. Configures the flannel service : When the flannel service starts, configuration data for flannel are stored in the etcd keystore. To configure flannel, you can use the runc command to run an etcdctl command to configure flannel settings inside the etcd container. System container : After you have used the atomic command to install the flannel container, you can use the systemd systemctl command to manage the service. 6.2.2. Getting and Running the RHEL flannel System Container To use the flannel system container image on a RHEL system, you need to pull it, install it and enable it, as described in the following procedure: Pull and run the etcd container : The flannel container is dependent on there being an available etcd keystore. See Using the etcd System Container Image for information on pulling, installing, and running the etcd system container before setting up the flannel system container. Pull the flannel container : While logged into the RHEL system, get the RHEL flannel container by running the following command: This pulls the flannel system container from the Red Hat registry to the ostree storage area on the local system. By setting ostree storage, the docker storage area is not used and the docker daemon and docker command won't see the pulled flannel container image. Install the flannel container : Type the following to do a default installation of the flannel container so it is set up as a systemd service. See "Configuring flannel" to see options you could add to the atomic install command to change it from the default install shown here. Start the flannel service : Use the systemctl command to start the installed flannel service as you would any other systemd service. Check etcd and flannel with runc : To make sure the flannel and etcd containers are running, you can use the runc list command as you would use docker ps to see containers running under docker: Test that the flannel service is working : If the flannel service is working properly, the next time you start up the docker0 network interface, the docker network interface should pick up an address range from those assigned by flannel. After starting flannel and before restarting docker, run these commands: Note that the docker0 interface picks up an address in the address range assigned by flannel and will, going forward, assign containers to addresses in the 10.40.4.0/24 address range. The "Configuring flannel" section shows ways of setting up the flannel service in different ways. 6.2.3. Configuring flannel You can change how the flannel service is configured on the atomic install command line or after it is running using the runc command. 6.2.3.1. Configuring flannel during "atomic install" Environment variables that are defined initially when the flannel container starts up can be overridden on the atomic install command line using the --set option. For example, this example shows how to reset the value of FLANNELD_ETCD_ENDPOINTS: This is how two of these variables are set by default: FLANNELD_ETCD_ENDPOINTS=http://127.0.0.1:2379 : Identifies the location of the etcd service IP address and port number.
FLANNELD_ETCD_PREFIX=/atomic.io/network : Identifies the location of flannel values in the etcd keystore. Here is the list of other values that you can change on the atomic install command line. See the Key Command Line Options and Environment Variables sections of the Flannel Github page for descriptions of these settings. 6.2.3.2. Configuring flannel with "runc" Flannel settings that are stored in the etcd keystore can be changed by executing etcdctl commands in the etcd container. Here's an example of how to change the Network value in the etcd keystore so that flannel uses a different set of IP address ranges. The example just shown illustrates the runc exec command running etcdctl set at first to set the Network value. After that, runc executes the etcdctl get command to get configuration information. 6.2.4. Tips for Running flannel Container If you are done with the flannel container image, you can remove it with the atomic uninstall command: For more information on system containers, see Introduction to System Containers . 6.3. Using the ovirt-guest-agent System Container Image for Red Hat Virtualization 6.3.1. Overview The ovirt-guest-agent container launches the Red Hat Virtualization (RHV) management agent. This container is made to be deployed on Red Hat Enterprise Linux virtual machines that are running in a RHV environment. The agent provides an interface to the RHV manager that supplies heart-beat and other run-time data from inside the guest VM. The RHV manager can send control commands to shut down, restart and otherwise change the state of the virtual machine through the agent. The ovirt-guest-agent is added automatically to the Red Hat Atomic Image for RHV, which is an OVA-formatted image made for RHEV environments. You can download the image from the Red Hat Enterprise Linux Atomic Host download page . Or, you can get and run the container image manually on a RHEL Server or RHEL Atomic Host virtual machine you install yourself. The ovirt-guest-agent container is a system container. System containers are designed to come up before the docker service or in a situation where no docker service is available. In this case, the ovirt-guest-agent allows the RHV manager to change the state of the virtual machine on which it is running whether the docker service is running or not. Here are some of the features of the ovirt-guest-agent container: Supports atomic pull : Use the atomic pull command to pull the ovirt-guest-agent container to your system. Supports atomic install : Use the atomic install --system command to set up the ovirt-guest-agent service to run as a systemd service. System container : After you have used the atomic command to install the ovirt-guest-agent container, you can use the systemd systemctl command to manage the service. Note that the ovirt-guest-agent container image is not made to run in environments other than a RHEL or RHEL Atomic virtual machine in a RHV environment. 6.3.2. Getting and Running the ovirt-guest-agent System Container To use an ovirt-guest-agent system container image on a RHEL Server or RHEL Atomic system, you need to pull it, install it and enable it. The identity of the currently supported ovirt-guest-agent container is: The procedure below illustrates how to pull, install, and run the ovirt-guest-agent container.
Pull the ovirt-guest-agent container : While logged into the RHEL or RHEL Atomic system, get the ovirt-guest-agent container by running the following command: This pulls the ovirt-guest-agent system container from the Red Hat Registry to the ostree storage area on the local system. By setting ostree storage, the docker storage area is not used and the docker daemon and docker command won't see the pulled ovirt-guest-agent container image. Install the ovirt-guest-agent container : Type the following to do a default installation of the ovirt-guest-agent container so it is set up as a systemd service. Start the ovirt-guest-agent service : Use the systemctl command to start and enable the installed ovirt-guest-agent service as you would any other systemd service. Check ovirt-guest-agent with runc : To make sure the ovirt-guest-agent container is running, you can use the runc list command as you would use docker ps to see containers running under docker: 6.3.3. Removing the ovirt-guest-agent Container and Image If you are done with the ovirt-guest-agent container image, you can stop and remove the container, then uninstall the image: For more information on system containers, see Introduction to System Containers . 6.4. Using the open-vm-tools System Container Image for VMware 6.4.1. Overview The open-vm-tools container provides services and modules that allow VMware technology to manage and otherwise work with Red Hat Enterprise Linux and RHEL Atomic Host virtual machines running in VMware environments. Kernel modules included in this container are made to improve performance of RHEL systems running as VMware guests. Services provided by this container include: Graceful power operations Script execution on guests during power operations Enhanced guest automation via custom programs or file system operations Guest authentication Guest network, memory, and disk usage information collection Guest heartbeat generation, used to determine if guests are available Guest, host, and client desktop clock synchronization Host access to obtain file-system-consistent guest file system snapshots Guest script execution associated with quiescing guest file systems (pre-freeze and post-thaw) Guest customization opportunities after guests power up File folder sharing between VMware (Workstation or Fusion) and guest system Text, graphics, and file pasting between guests, hosts and client desktops The open-vm-tools container is a system container, designed to come up before the docker service or in a situation where no docker service is available. In this case, the open-vm-tools container allows VMware technologies to manage the RHEL or RHEL Atomic virtual machines on which it is running whether the docker service is running or not. Here are some of the features of the open-vm-tools container on the RHEL guest system: Supports atomic pull : Use the atomic pull command to pull the open-vm-tools container to your system. Supports atomic install : Use the atomic install --system command to set up the open-vm-tools service to run as a systemd service. System container : After you have used the atomic command to install the open-vm-tools container, you can use the systemd systemctl command to manage the service. Note that the open-vm-tools container image is not made to run in environments other than a RHEL or RHEL Atomic virtual machine in a VMware environment. 6.4.2. 
Getting and Running the open-vm-tools System Container To use an open-vm-tools system container image on a RHEL Server or RHEL Atomic system, you need to pull it, install it and enable it. The identity of the currently supported open-vm-tools container is: The procedure below illustrates how to pull, install, and run the open-vm-tools container. Pull the open-vm-tools container : While logged into the RHEL or RHEL Atomic system, get the open-vm-tools container by running the following command: This pulls the open-vm-tools system container from the Red Hat Registry to the ostree storage area on the local system. By setting ostree storage, the docker storage area is not used and the docker daemon and docker command won't see the pulled open-vm-tools container image. Install the open-vm-tools container : Type the following to do a default installation of the open-vm-tools container so it is set up as a systemd service. Start the open-vm-tools service : Use the systemctl command to start and enable the installed open-vm-tools service as you would any other systemd service. Check open-vm-tools with runc : To make sure the open-vm-tools container is running, you can use the runc list command as you would use docker ps to see containers running under docker: 6.4.3. Removing the open-vm-tools Container and Image If you are done with the open-vm-tools container image, you can stop and remove the container, then uninstall the image: To learn more about how the open-vm-tools container was built, refer to Containerizing open-vm-tools . Using the instructions in that article allows you to build your own open-vm-tools container, using custom configuration settings. For more information on system containers, see Introduction to System Containers .
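For quick reference, the pull, install, run, and removal steps in sections 6.4.2 and 6.4.3 condense to a short command sequence. This is a sketch only: the image path and the container and service names below are assumptions based on the naming pattern of the other system containers in this chapter, not values confirmed by this document. $ atomic pull --storage=ostree registry.access.redhat.com/rhel7/open-vm-tools # assumed image path $ atomic install --system rhel7/open-vm-tools $ systemctl start open-vm-tools && systemctl enable open-vm-tools # assumed service name $ runc list # confirm the container is running $ atomic containers delete open-vm-tools && atomic uninstall rhel7/open-vm-tools # cleanup, as in 6.4.3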
[ "atomic containers delete rhel7/etcd atomic uninstall rhel7/etcd", "atomic containers list -a CONTAINER ID IMAGE COMMAND CREATED STATUS RUNTIME etcd rhel7/etcd /usr/bin/etcd-env.sh 2016-10-13 14:21 running runc flannel rhel7/flannel /usr/bin/flanneld-ru 2016-10-13 15:12 failed runc 1cf730472572 rhel7/cockpit-ws /container/atomic-ru 2016-10-13 17:55 exited Docker 9a2bb24e5978 rhel7/rsyslog /bin/rsyslog.sh 2016-10-13 17:49 created Docker 34f95af8f8f9 rhel7/cockpit-ws /container/atomic-ru 2016-09-27 19:10 exited Docker", "registry.access.redhat.com/rhel7/etcd", "atomic pull --storage=ostree registry.access.redhat.com/rhel7/etcd Image rhel7/etcd is being pulled to ostree Pulling layer 2bf01635e2a0f7ed3800c8cb3effc5ff46adc6b9b86f0e80743c956371efe553 Pulling layer 38bd6ce6e1f2271d48ecb41a70a86122060ea91871a154b37d54ec66f593706f Pulling layer 852368668be3e36086ae7a47c8b9e40b5ca87819b3200bc83d7a2f95b73f0f12 Pulling layer e5d06327f2054d371f725243b619d66982c8d4589c1caa19bfcc23a93cf6b4d2 Pulling layer 82e7326c732857423e13163ff1e41ad63b3e2bddef8809175f89dec25f58b6ee Pulling layer b65a93c9f67115dc4c9da8dfeee63b58ec52c6ea58ff7f727b00d932d1f4e8f5", "atomic install --system rhel7/etcd Extracting to /var/lib/containers/atomic/etcd.0 systemctl daemon-reload systemd-tmpfiles --create /etc/tmpfiles.d/etcd.conf systemctl enable etcd", "systemctl start etcd", "runc list ID PID STATUS BUNDLE CREATED etcd 4521 running /sysroot/ostree/deploy... 2016-10-25T22:58:13.756410403Z", "curl -L http://127.0.0.1:2379/v2/keys/testkey -XPUT -d value=\"testing my etcd\" {\"action\":\"set\",\"node\":{\"key\":\"/testkey\",\"value\":\"testing my etcd\",\"modifiedIndex\":6,\"createdIndex\":6}} curl -L http://127.0.0.1:2379/v2/keys/testkey {\"action\":\"get\",\"node\":{\"key\":\"/testkey\",\"value\":\"testing my etcd\",\"modifiedIndex\":6,\"createdIndex\":6}}", "atomic install --system --set ETCD_ADVERTISE_CLIENT_URLS=\"http://192.168.122.55:2379\" rhel/etcd", "[member] ETCD_NAME=default ETCD_DATA_DIR=\"/var/lib/etcd/default.etcd\" #ETCD_WAL_DIR=\"\" #ETCD_SNAPSHOT_COUNT=\"10000\" #ETCD_HEARTBEAT_INTERVAL=\"100\" #ETCD_ELECTION_TIMEOUT=\"1000\" #ETCD_LISTEN_PEER_URLS=\"http://localhost:2380\" ETCD_LISTEN_CLIENT_URLS=\"http://localhost:2379\" #ETCD_MAX_SNAPSHOTS=\"5\" #ETCD_MAX_WALS=\"5\" #ETCD_CORS=\"\" #[cluster] #ETCD_INITIAL_ADVERTISE_PEER_URLS=\"http://localhost:2380\" if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. 
\"test=http://...\" #ETCD_INITIAL_CLUSTER=\"default=http://localhost:2380\" #ETCD_INITIAL_CLUSTER_STATE=\"new\" #ETCD_INITIAL_CLUSTER_TOKEN=\"etcd-cluster\" ETCD_ADVERTISE_CLIENT_URLS=\"http://localhost:2379\" #ETCD_DISCOVERY=\"\" #ETCD_DISCOVERY_SRV=\"\" #ETCD_DISCOVERY_FALLBACK=\"proxy\" #ETCD_DISCOVERY_PROXY=\"\" #ETCD_STRICT_RECONFIG_CHECK=\"false\" #[proxy] #ETCD_PROXY=\"off\" #ETCD_PROXY_FAILURE_WAIT=\"5000\" #ETCD_PROXY_REFRESH_INTERVAL=\"30000\" #ETCD_PROXY_DIAL_TIMEOUT=\"1000\" #ETCD_PROXY_WRITE_TIMEOUT=\"5000\" #ETCD_PROXY_READ_TIMEOUT=\"0\" #[security] #ETCD_CERT_FILE=\"\" #ETCD_KEY_FILE=\"\" #ETCD_CLIENT_CERT_AUTH=\"false\" #ETCD_TRUSTED_CA_FILE=\"\" #ETCD_PEER_CERT_FILE=\"\" #ETCD_PEER_KEY_FILE=\"\" #ETCD_PEER_CLIENT_CERT_AUTH=\"false\" #ETCD_PEER_TRUSTED_CA_FILE=\"\" #[logging] #ETCD_DEBUG=\"false\" examples for -log-package-levels etcdserver=WARNING,security=DEBUG #ETCD_LOG_PACKAGE_LEVELS=\"\" #[profiling] #ETCD_ENABLE_PPROF=\"false\"", "runc exec etcd etcdctl set /atomic.io/network/config '{\"Network\":\"10.40.0.0/16\"}' runc exec etcd etcdctl get /atomic.io/network/config {\"Network\":\"10.40.0.0/16\"}", "atomic uninstall etcd", "atomic pull --storage=ostree rhel7/flannel Image rhel7/flannel is being pulled to ostree Pulling layer 2bf01635e2a0f7ed3800c8cb3effc5ff46adc6b9b86f0e80743c956371efe553 Pulling layer 38bd6ce6e1f2271d48ecb41a70a86122060ea91871a154b37d54ec66f593706f", "atomic install --system rhel7/flannel Extracting to /var/lib/containers/atomic/flannel.0 systemctl daemon-reload systemd-tmpfiles --create /etc/tmpfiles.d/flannel.conf systemctl enable flannel", "systemctl start flannel", "runc list ID PID STATUS BUNDLE CREATED etcd 4521 running /sysroot/ostree/deploy... 2016-10-25T22:58:13.756410403Z flannel 6562 running /sysroot/ostree/deploy... 2016-10-26T13:50:49.041148994Z", "ip a | grep docker | grep inet inet 172.17.0.1/16 scope global docker0 systemctl reboot ip a | grep docker | grep inet inet 10.40.4.1/24 scope global docker0", "atomic install --system --set FLANNELD_ETCD_ENDPOINTS=\"http://192.168.122.55:2379\" rhel7/flannel", "* *FLANNELD_PUBLIC_IP* * *FLANNELD_ETCD_ENDPOINTS* * *FLANNELD_ETCD_PREFIX* * *FLANNELD_ETCD_KEYFILE* * *FLANNELD_ETCD_CERTFILE* * *FLANNELD_ETCD_CAFILE* * *FLANNELD_IFACE* * *FLANNELD_SUBNET_FILE* * *FLANNELD_IP_MASQ* * *FLANNELD_LISTEN* * *FLANNELD_REMOTE* * *FLANNELD_REMOTE_KEYFILE* * *FLANNELD_REMOTE_CERTFILE* * *FLANNELD_REMOTE_CAFILE* * *FLANNELD_NETWORKS*", "runc exec etcd etcdctl set /atomic.io/network/config '{\"Network\":\"10.40.0.0/16\"}' runc exec etcd etcdctl get /atomic.io/network/config {\"Network\":\"10.40.0.0/16\"}", "atomic uninstall flannel", "registry.access.redhat.com/rhev4/ovirt-guest-agent", "atomic pull --storage=ostree registry.access.redhat.com/rhev4/ovirt-guest-agent", "atomic install --system rhel7/ovirt-guest-agent Extracting to /var/lib/containers/atomic/ovirt-guest-agent.0 systemctl daemon-reload systemd-tmpfiles --create /etc/tmpfiles.d/ovirt-guest-agent.conf systemctl enable ovirt-guest-agent", "systemctl start ovirt-guest-agent systemctl enable ovirt-guest-agent", "runc list ID PID STATUS BUNDLE CREATED ovirt-guest-agent 4521 running /sysroot/ostree/de... 2017-04-07T21:01:07.279104535Z", "atomic containers delete ovirt-guest-agent Do you wish to delete the following images? 
ID NAME IMAGE_NAME STORAGE ovirt-guest- ovirt-guest-agent registry.access.redhat.com ostree Confirm (y/N) y systemctl stop ovirt-guest-agent systemctl disable ovirt-guest-agent systemd-tmpfiles --remove /etc/tmpfiles.d/ovirt-guest-agent.conf atomic uninstall registry.access.redhat.com/rhev4/ovirt-guest-agent Do you wish to delete the following images? IMAGE STORAGE registry.access.redhat.com/rhev4/ovirt-guest-agent ostree Confirm (y/N) y", "registry.access.redhat.com/rhel7/open-vm-tools", "atomic pull --storage=ostree registry.access.redhat.com/rhel7/open-vm-tools", "atomic install --system rhel7/open-vm-tools Extracting to /var/lib/containers/atomic/open-vm-tools.0 systemctl daemon-reload systemd-tmpfiles --create /etc/tmpfiles.d/open-vm-tools.conf systemctl enable open-vm-tools", "systemctl start open-vm-tools systemctl enable open-vm-tools", "runc list ID PID STATUS BUNDLE CREATED open-vm-tools 4521 running /sysroot/ostree/de... 2017-04-07T18:03:01.913246491Z", "atomic containers delete open-vm-tools Do you wish to delete the following images? ID NAME IMAGE_NAME STORAGE ovirt-guest- open-vm-tools registry.access.redhat.com ostree Confirm (y/N) y systemctl stop open-vm-tools systemctl disable open-vm-tools systemd-tmpfiles --remove /etc/tmpfiles.d/open-vm-tools.conf atomic uninstall registry.access.redhat.com/rhel7/open-vm-tools Do you wish to delete the following images? IMAGE STORAGE registry.access.redhat.com/rhel7/open-vm-tools ostree Confirm (y/N) y" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_system_containers
6.4. Removing a Server from the Topology
6.4. Removing a Server from the Topology IdM does not allow removing a server from the topology if one of the following applies: the server being removed is the only server connecting other servers with the rest of the topology, which would cause the other servers to become isolated and is not allowed; or the server being removed is your last CA or DNS server. In these situations, the attempt fails with an error. For example, on the command line: 6.4.1. Web UI: Removing a Server from the Topology To remove a server from the topology without uninstalling the server components from the machine: Select IPA Server Topology IPA Servers . Click on the name of the server you want to delete. Figure 6.13. Selecting a Server Click Delete Server . 6.4.2. Command Line: Removing a Server from the Topology Important Removing a server is an irreversible action. If you remove a server, the only way to introduce it back into the topology is to install a new replica on the machine. To remove server1.example.com : On another server, run the ipa server-del command to remove server1.example.com . The command removes all topology segments pointing to the server: On server1.example.com , run the ipa server-install --uninstall command to uninstall the server components from the machine.
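As an illustrative follow-up check (the commands are standard IdM tools; adjust the suffix and host names for your deployment), you can confirm that the removed server no longer appears in the topology:

# List the IPA servers that remain in the topology
ipa server-find
# Inspect the replication segments in the domain suffix for any that still reference the removed server
ipa topologysegment-find domain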
[ "ipa server-del Server name: server1.example.com Removing server1.example.com from replication topology, please wait ipa: ERROR: Server removal aborted: Removal of 'server1.example.com' leads to disconnected topology in suffix 'domain': Topology does not allow server server2.example.com to replicate with servers: server3.example.com server4.example.com", "[user@server2 ~]USD ipa server-del Server name: server1.example.com Removing server1.example.com from replication topology, please wait ---------------------------------------------------------- Deleted IPA server \"server1.example.com\" ----------------------------------------------------------", "ipa server-install --uninstall" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-topology-remove
Chapter 2. Types of container images
Chapter 2. Types of container images The container image is a binary that includes all of the requirements for running a single container, and metadata describing its needs and capabilities. There are two types of container images: Red Hat Enterprise Linux Base Images (RHEL base images) Red Hat Universal Base Images (UBI images) Both types of container images are built from portions of Red Hat Enterprise Linux. By using these containers, users can benefit from great reliability, security, performance, and life cycles. The main difference between the two types of container images is that the UBI images allow you to share container images with others. You can build a containerized application using UBI, push it to your choice of registry server, easily share it with others, and even deploy it on non-Red Hat platforms. The UBI images are designed to be a foundation for cloud-native and web application use cases developed in containers. 2.1. General characteristics of RHEL container images The following characteristics apply to both RHEL base images and UBI images. In general, RHEL container images are: Supported : Supported by Red Hat for use with containerized applications. They contain the same secured, tested, and certified software packages found in Red Hat Enterprise Linux. Cataloged : Listed in the Red Hat Container Catalog , with descriptions, technical details, and a health index for each image. Updated : Offered with a well-defined update schedule. To get the latest software, see the Red Hat Container Image Updates article. Tracked : Tracked by Red Hat Product Errata to help understand the changes that are added in each update. Reusable : The container images need to be downloaded and cached in your production environment only once. Each container image can be reused by all containers that include it as their foundation. 2.2. Characteristics of UBI images The UBI images allow you to share container images with others. Four UBI images are offered: micro, minimal, standard, and init. Pre-built language runtime images and DNF repositories are available to build your applications. The following characteristics apply to UBI images: Built from a subset of RHEL content : Red Hat Universal Base images are built from a subset of normal Red Hat Enterprise Linux content. Redistributable : UBI images allow standardization for Red Hat customers, partners, ISVs, and others. With UBI images, you can build your container images on a foundation of official Red Hat software that can be freely shared and deployed. Provide a set of four base images : micro, minimal, standard, and init. Provide a set of pre-built language runtime container images : The runtime images based on Application Streams provide a foundation for applications that can benefit from standard, supported runtimes such as python, perl, php, dotnet, nodejs, and ruby. Provide a set of associated DNF repositories : DNF repositories include RPM packages and updates that allow you to add application dependencies and rebuild UBI container images. The ubi-9-baseos repository holds the redistributable subset of RHEL packages you can include in your container. The ubi-9-appstream repository holds Application Streams packages that you can add to a UBI image to help you standardize the environments you use with applications that require particular runtimes. Adding UBI RPMs : You can add RPM packages to UBI images from preconfigured UBI repositories.
If you happen to be in a disconnected environment, you must allowlist the UBI Content Delivery Network ( https://cdn-ubi.redhat.com ) to use that feature. For more information, see the Red Hat Knowledgebase solution Connect to https://cdn-ubi.redhat.com . Licensing : You are free to use and redistribute UBI images, provided you adhere to the Red Hat Universal Base Image End User Licensing Agreement . Note All of the layered images are based on UBI images. To check which UBI image your image is based on, display the Containerfile in the Red Hat Container Catalog and ensure that the UBI image contains all required content. Additional resources Introducing the Red Hat Universal Base Image Universal Base Images (UBI): Images, repositories, and packages All You Need to Know About Red Hat Universal Base Image FAQ - Universal Base Images 2.3. Understanding the UBI standard images The standard images (named ubi ) are designed for any application that runs on RHEL. The key features of UBI standard images include: init system : All the features of the systemd initialization system you need to manage systemd services are available in the standard base images. These init systems let you install RPM packages that are pre-configured to start up services automatically, such as a Web server ( httpd ) or FTP server ( vsftpd ). dnf : You have access to free dnf repositories for adding and updating software. You can use the standard set of dnf commands ( dnf , dnf-config-manager , dnfdownloader , and so on). utilities : Utilities include tar , dmidecode , gzip , getfacl and further acl commands, dmsetup and further device mapper commands, among other utilities not mentioned here. 2.4. Understanding the UBI init images The UBI init images, named ubi-init , contain the systemd initialization system, making them useful for building images in which you want to run systemd services, such as a web server or file server. The init image contents are less than what you get with the standard images, but more than what is in the minimal images. Note Because the ubi9-init image builds on top of the ubi9 image, their contents are mostly the same. However, there are a few critical differences: ubi9-init : CMD is set to /sbin/init to start the systemd Init service by default includes ps and process related commands ( procps-ng package) sets SIGRTMIN+3 as the StopSignal , as systemd in ubi9-init ignores normal signals to exit ( SIGTERM and SIGKILL ), but will terminate if it receives SIGRTMIN+3 ubi9 : CMD is set to /bin/bash does not include ps and process related commands ( procps-ng package) does not ignore normal signals to exit ( SIGTERM and SIGKILL ) 2.5. Understanding the UBI minimal images The UBI minimal images, named ubi-minimal , offer a minimized pre-installed content set and a package manager ( microdnf ). As a result, you can use a Containerfile while minimizing the dependencies included in the image. The key features of UBI minimal images include: Small size : Minimal images are about 92M on disk and 32M when compressed, which makes them less than half the size of the standard images. Software installation ( microdnf ) : Instead of including the fully-developed dnf facility for working with software repositories and RPM software packages, the minimal images include the microdnf utility. The microdnf utility is a scaled-down version of dnf that allows you to enable and disable repositories, remove and update packages, and clean out the cache after packages have been installed.
Based on RHEL packaging : Minimal images incorporate regular RHEL software RPM packages, with a few features removed. Minimal images do not include an initialization and service management system (such as systemd or System V init), a Python run-time environment, or some shell utilities. You can rely on RHEL repositories for building your images, while carrying the smallest possible amount of overhead. Modules for microdnf are supported : Modules used with the microdnf command let you install multiple versions of the same software, when available. You can use microdnf module enable , microdnf module disable , and microdnf module reset to enable, disable, and reset a module stream, respectively. For example, to enable the nodejs:14 module stream inside the UBI minimal container, enter: Red Hat only supports the latest version of UBI and does not support parking on a dot release. If you need to park on a specific dot release, take a look at Extended Update Support . 2.6. Understanding the UBI micro images The ubi-micro image is the smallest possible UBI image, obtained by excluding a package manager and all of its dependencies, which are normally included in a container image. This minimizes the attack surface of container images based on the ubi-micro image and is suitable for minimal applications, even if you use UBI Standard, Minimal, or Init for other applications. A container image without the Linux distribution packaging is called a Distroless container image.
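The following is a minimal sketch of trying a UBI minimal image locally; it assumes podman is available and uses the ubi9 minimal image from the Red Hat registry, with tar chosen only as an illustrative package:

# Pull the UBI minimal image
podman pull registry.access.redhat.com/ubi9/ubi-minimal
# Start a throwaway container and install a package with microdnf from the UBI repositories
podman run --rm registry.access.redhat.com/ubi9/ubi-minimal /bin/sh -c 'microdnf install -y tar && tar --version'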
[ "microdnf module enable nodejs:14 Downloading metadata Enabling module streams: nodejs:14 Running transaction test" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/building_running_and_managing_containers/assembly_types-of-container-images_building-running-and-managing-containers
Chapter 65. Process engine in Red Hat Process Automation Manager
Chapter 65. Process engine in Red Hat Process Automation Manager The process engine implements the Business Process Management (BPM) paradigm in Red Hat Process Automation Manager. BPM is a business methodology that enables modeling, measuring, and optimizing processes within an enterprise. In BPM, a repeatable business process is represented as a workflow diagram. The Business Process Model and Notation (BPMN) specification defines the available elements of this diagram. The process engine implements a large subset of the BPMN 2.0 specification. With the process engine, business analysts can develop the diagram itself. Developers can implement the business logic of every element of the flow in code, making an executable business process. Users can execute the business process and interact with it as necessary. Analysts can generate metrics that reflect the efficiency of the process. The workflow diagram consists of a number of nodes. The BPMN specification defines many kinds of nodes, including the following principal types: Event : Nodes representing something happening in the process or outside of the process. Typical events are the start and the end of a process. An event can throw messages to other processes and catch such messages. Circles on the diagram represent events. Activity : Nodes representing an action that must be taken (whether automatically or with user involvement). Typical activities are a task, which represents an action taken within the process, and a call to a sub-process. Rounded rectangles on the diagram represent activities. Gateway : A branching or merging node. A typical gateway evaluates an expression and, depending on the result, continues to one of several execution paths. Diamond shapes on the diagram represent gateways. When a user starts the process, a process instance is created. The process instance contains a set of data, or context , stored in process variables. The state of a process instance includes all the context data and also the current active node (or, in some cases, several active nodes). Some of these variables can be initialized when a user starts the process. An activity can read from process variables and write to process variables. A gateway can evaluate process variables to determine the execution path. For example, a purchase process in a shop can be a business process. The content of the user's cart can be the initial process context. At the end of execution, the process context can contain the payment confirmation and shipment tracking details. Optionally, you can use the BPMN data modeler in Business Central to design the model for the data in process variables. The workflow diagram is represented in code by an XML business process definition . The logic of events, gateways, and sub-process calls is defined within the business process definition. Some task types (for example, script tasks and the standard decision engine rule task) are implemented in the engine. For other task types, including all custom tasks, when the task must be executed, the process engine executes a call using the Work Item Handler API . Code external to the engine can implement this API, providing a flexible mechanism for implementing various tasks. The process engine includes a number of predefined types of tasks. These types include a script task that runs user Java code, a service task that calls a Java method or a Web Service, a decision task that calls a decision engine service, and other custom tasks (for example, REST and database calls).
Another predefined type of task is a user task , which includes interaction with a user. User tasks in the process can be assigned to users and groups. The process engine uses the KIE API to interact with other software components. You can run business processes as services on a KIE Server and interact with them using a REST implementation of the KIE API. Alternatively, you can embed business processes in your application and interact with them using KIE API Java calls. In this case, you can run the process engine in any Java environment. Business Central includes a user interface for users executing human tasks and a form modeler for creating the web forms for human tasks. However, you can also implement a custom user interface that interacts with the process engine using the KIE API. The process engine supports the following additional features: Support for persistence of the process information using the JPA standard. Persistence preserves the state and context (data in process variables) of every process instance, so that they are not lost in case any components are restarted or taken offline for some time. You can use an SQL database engine to store the persistence information. Pluggable support for transactional execution of process elements using the JTA standard. If you use a JTA transaction manager, every element of the business process starts as a transaction. If the element does not complete, the context of the process instance is restored to the state in which it was before the element started. Support for custom extension code, including new node types and other process languages. Support for custom listener classes that are notified about various events. Support for migrating running process instances to a new version of their process definition The process engine can also be integrated with other independent core services: The human task service can manage user tasks when human actors need to participate in the process. It is fully pluggable and the default implementation is based on the WS-HumanTask specification. The human task service manages the lifecycle of the tasks, task lists, task forms, and some more advanced features like escalation, delegation, and rule-based assignments. The history log can store all information about the execution of all the processes in the process engine. While runtime persistence stores the current state of all active process instances, you need the history log to ensure access to historic information. The history log contains all current and historic states of all active and completed process instances. You can use the log to query for any information related to the execution of process instances for monitoring and analysis. Additional resources Designing business processes using BPMN models Interacting with Red Hat Process Automation Manager using KIE APIs Java documentation for the public KIE API
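Referring back to the REST interaction mentioned above, the following is a hedged sketch of starting a process instance on a KIE Server; the host, credentials, container ID, process ID, and payload are placeholders, and the exact endpoint layout may differ in your deployment:

# Start a new process instance in a deployed KIE container (illustrative names only)
curl -u user:password -H "Content-Type: application/json" \
  -X POST "http://localhost:8080/kie-server/services/rest/server/containers/my-container/processes/my-process/instances" \
  -d '{"orderId": "12345"}'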
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/processengine-overview-con
Chapter 119. AclRule schema reference
Chapter 119. AclRule schema reference Used in: KafkaUserAuthorizationSimple Full list of AclRule schema properties Configures access control rules for a KafkaUser when brokers are using simple authorization. Example KafkaUser configuration with simple authorization apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # ... authorization: type: simple acls: - resource: type: topic name: "*" patternType: literal operations: - Read - Describe - resource: type: group name: my-group patternType: prefix operations: - Read Use the resource property to specify the resource that the rule applies to. Simple authorization supports four resource types, which are specified in the type property: Topics ( topic ) Consumer Groups ( group ) Clusters ( cluster ) Transactional IDs ( transactionalId ) For Topic, Group, and Transactional ID resources you can specify the name of the resource the rule applies to in the name property. Cluster type resources have no name. A name is specified as a literal or a prefix using the patternType property. Literal names are taken exactly as they are specified in the name field. Prefix names use the name value as a prefix and then apply the rule to all resources with names starting with that value. When patternType is set as literal , you can set the name to * to indicate that the rule applies to all resources. For more details about simple authorization, ACLs, and supported combinations of resources and operations, see Authorization and ACLs . 119.1. AclRule schema properties Property Property type Description type string (one of [allow, deny]) The type of the rule. Currently the only supported type is allow . ACL rules with type allow are used to allow user to execute the specified operations. Default value is allow . resource AclRuleTopicResource , AclRuleGroupResource , AclRuleClusterResource , AclRuleTransactionalIdResource Indicates the resource for which given ACL rule applies. host string The host from which the action described in the ACL rule is allowed or denied. If not set, it defaults to * , allowing or denying the action from any host. operation string (one of [Read, Write, Delete, Alter, Describe, All, IdempotentWrite, ClusterAction, Create, AlterConfigs, DescribeConfigs]) The operation property has been deprecated, and should now be configured using spec.authorization.acls[*].operations . Operation which will be allowed or denied. Supported operations are: Read, Write, Create, Delete, Alter, Describe, ClusterAction, AlterConfigs, DescribeConfigs, IdempotentWrite and All. operations string (one or more of [Read, Write, Delete, Alter, Describe, All, IdempotentWrite, ClusterAction, Create, AlterConfigs, DescribeConfigs]) array List of operations to allow or deny. Supported operations are: Read, Write, Create, Delete, Alter, Describe, ClusterAction, AlterConfigs, DescribeConfigs, IdempotentWrite and All. Only certain operations work with the specified resource.
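As a hedged command-line example of applying such a user (the namespace, cluster, user, and topic names are placeholders), the ACL rules above map directly onto a KafkaUser manifest:

# Create a KafkaUser with simple-authorization ACL rules (illustrative names)
oc apply -n myproject -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operations:
          - Read
          - Describe
EOF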
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # authorization: type: simple acls: - resource: type: topic name: \"*\" patternType: literal operations: - Read - Describe - resource: type: group name: my-group patternType: prefix operations: - Read" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-aclrule-reference
5.5. Backend Settings
5.5. Backend Settings The backend settings specify the real server IP addresses as well as the load balancer scheduling algorithm. The following example shows a typical backend section: The backend is named app . The balance setting specifies the load balancer scheduling algorithm to be used, which in this case is Round Robin ( roundrobin ), but can be any scheduler supported by HAProxy. For more information on configuring schedulers in HAProxy, see Section 5.1, "HAProxy Scheduling Algorithms" . The server lines specify the servers available in the back end. app1 to app4 are the names assigned internally to each real server. Log files will specify server messages by name. The address is the assigned IP address. The value after the colon in the IP address is the port number to which the connection occurs on the particular server. The check option flags a server for periodic health checks to ensure that it is available and able to receive and send data and take session requests. Server app3 also configures the health check interval to two seconds ( inter 2s ), the number of checks app3 has to pass to determine if the server is considered healthy ( rise 4 ), and the number of times a server consecutively fails a check before it is considered failed ( fall 3 ).
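After editing the backend section, a quick sanity check such as the following can catch syntax errors before they take effect; it assumes the default configuration path on Red Hat Enterprise Linux:

# Validate the configuration file
haproxy -c -f /etc/haproxy/haproxy.cfg
# Reload HAProxy so the new backend settings are applied
systemctl reload haproxy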
[ "backend app balance roundrobin server app1 192.168.1.1:80 check server app2 192.168.1.2:80 check server app3 192.168.1.3:80 check inter 2s rise 4 fall 3 server app4 192.168.1.4:80 backup" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-haproxy-setup-backend
Chapter 2. Image Registry Operator in OpenShift Container Platform
Chapter 2. Image Registry Operator in OpenShift Container Platform 2.1. Image Registry on cloud platforms and OpenStack The Image Registry Operator installs a single instance of the OpenShift image registry, and manages all registry configuration, including setting up registry storage. Note Storage is only automatically configured when you install an installer-provisioned infrastructure cluster on AWS, Azure, GCP, IBM(R), or OpenStack. When you install or upgrade an installer-provisioned infrastructure cluster on AWS, Azure, GCP, IBM(R), or OpenStack, the Image Registry Operator sets the spec.storage.managementState parameter to Managed . If the spec.storage.managementState parameter is set to Unmanaged , the Image Registry Operator takes no action related to storage. After the control plane deploys, the Operator creates a default configs.imageregistry.operator.openshift.io resource instance based on configuration detected in the cluster. If insufficient information is available to define a complete configs.imageregistry.operator.openshift.io resource, the incomplete resource is defined and the Operator updates the resource status with information about what is missing. The Image Registry Operator runs in the openshift-image-registry namespace, and manages the registry instance in that location as well. All configuration and workload resources for the registry reside in that namespace. Important The Image Registry Operator's behavior for managing the pruner is orthogonal to the managementState specified on the ClusterOperator object for the Image Registry Operator. If the Image Registry Operator is not in the Managed state, the image pruner can still be configured and managed by the Pruning custom resource. However, the managementState of the Image Registry Operator alters the behavior of the deployed image pruner job: Managed : the --prune-registry flag for the image pruner is set to true . Removed : the --prune-registry flag for the image pruner is set to false , meaning it only prunes image metadata in etcd. 2.2. Image Registry on bare metal, Nutanix, and vSphere 2.2.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.3. Image Registry Operator distribution across availability zones The default configuration of the Image Registry Operator spreads image registry pods across topology zones to prevent delayed recovery times in case of a complete zone failure where all pods are impacted. 
The Image Registry Operator defaults to the following when deployed with a zone-related topology constraint: Image Registry Operator deployed with a zone related topology constraint topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule The Image Registry Operator defaults to the following when deployed without a zone-related topology constraint, which applies to bare metal and vSphere instances: Image Registry Operator deployed without a zone related topology constraint topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule A cluster administrator can override the default topologySpreadConstraints by configuring the configs.imageregistry.operator.openshift.io/cluster spec file. In that case, only the constraints you provide apply. 2.4. Additional resources Configuring pod topology spread constraints 2.5. Image Registry Operator configuration parameters The configs.imageregistry.operator.openshift.io resource offers the following configuration parameters. Parameter Description managementState Managed : The Operator updates the registry as configuration resources are updated. Unmanaged : The Operator ignores changes to the configuration resources. Removed : The Operator removes the registry instance and tear down any storage that the Operator provisioned. logLevel Sets logLevel of the registry instance. Defaults to Normal . The following values for logLevel are supported: Normal Debug Trace TraceAll httpSecret Value needed by the registry to secure uploads, generated by default. operatorLogLevel The operatorLogLevel configuration parameter provides intent-based logging for the Operator itself and a simple way to manage coarse-grained logging choices that Operators must interpret for themselves. This configuration parameter defaults to Normal . It does not provide fine-grained control. The following values for operatorLogLevel are supported: Normal Debug Trace TraceAll proxy Defines the Proxy to be used when calling master API and upstream registries. affinity You can use the affinity parameter to configure pod scheduling preferences and constraints for Image Registry Operator pods. Affinity settings can use the podAffinity or podAntiAffinity spec. Both options can use either preferredDuringSchedulingIgnoredDuringExecution rules or requiredDuringSchedulingIgnoredDuringExecution rules. storage Storagetype : Details for configuring registry storage, for example S3 bucket coordinates. Normally configured by default. readOnly Indicates whether the registry instance should reject attempts to push new images or delete existing ones. requests API Request Limit details. Controls how many parallel requests a given registry instance will handle before queuing additional requests. defaultRoute Determines whether or not an external route is defined using the default hostname. If enabled, the route uses re-encrypt encryption. Defaults to false . 
routes Array of additional routes to create. You provide the hostname and certificate for the route. rolloutStrategy Defines rollout strategy for the image registry deployment. Defaults to RollingUpdate . replicas Replica count for the registry. disableRedirect Controls whether to route all data through the registry, rather than redirecting to the back end. Defaults to false . spec.storage.managementState The Image Registry Operator sets the spec.storage.managementState parameter to Managed on new installations or upgrades of clusters using installer-provisioned infrastructure on AWS or Azure. Managed : Determines that the Image Registry Operator manages underlying storage. If the Image Registry Operator's managementState is set to Removed , then the storage is deleted. If the managementState is set to Managed , the Image Registry Operator attempts to apply some default configuration on the underlying storage unit. For example, if set to Managed , the Operator tries to enable encryption on the S3 bucket before making it available to the registry. If you do not want the default settings to be applied on the storage you are providing, make sure the managementState is set to Unmanaged . Unmanaged : Determines that the Image Registry Operator ignores the storage settings. If the Image Registry Operator's managementState is set to Removed , then the storage is not deleted. If you provided an underlying storage unit configuration, such as a bucket or container name, and the spec.storage.managementState is not yet set to any value, then the Image Registry Operator configures it to Unmanaged . 2.6. Enable the Image Registry default route with the Custom Resource Definition In OpenShift Container Platform, the Registry Operator controls the OpenShift image registry feature. The Operator is defined by the configs.imageregistry.operator.openshift.io Custom Resource Definition (CRD). If you need to automatically enable the Image Registry default route, patch the Image Registry Operator CRD. Procedure Patch the Image Registry Operator CRD: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"defaultRoute":true}}' 2.7. Configuring additional trust stores for image registry access The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access. Prerequisites The certificate authorities (CA) must be PEM-encoded. Procedure You can create a config map in the openshift-config namespace and use its name in AdditionalTrustedCA in the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries. The config map key is the hostname of a registry with the port for which this CA is to be trusted, and the PEM certificate content is the value, for each additional registry CA to trust. Image registry CA config map example apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . You can configure additional CAs with the following procedure. 
To configure an additional CA: USD oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config USD oc edit image.config.openshift.io cluster spec: additionalTrustedCA: name: registry-config 2.8. Configuring storage credentials for the Image Registry Operator In addition to the configs.imageregistry.operator.openshift.io and ConfigMap resources, storage credential configuration is provided to the Operator by a separate secret resource located within the openshift-image-registry namespace. The image-registry-private-configuration-user secret provides credentials needed for storage access and management. It overrides the default credentials used by the Operator, if default credentials were found. Procedure Create an OpenShift Container Platform secret that contains the required keys. USD oc create secret generic image-registry-private-configuration-user --from-literal=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry 2.9. Additional resources Configuring the registry for AWS user-provisioned infrastructure Configuring the registry for GCP user-provisioned infrastructure Configuring the registry for Azure user-provisioned infrastructure Configuring the registry for bare metal Configuring the registry for vSphere Configuring the registry for RHOSP Configuring the registry for Red Hat OpenShift Data Foundation Configuring the registry for Nutanix
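Pulling the earlier steps together, the following is a hedged sketch of switching the registry out of the Removed state described in Section 2.2.1 and reviewing the result; resource names follow the defaults used in this chapter and output varies by cluster:

# Switch the registry management state from Removed to Managed
oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"managementState":"Managed"}}'
# Review the resulting configuration, including the spec.storage settings
oc get configs.imageregistry.operator.openshift.io/cluster -o yaml
# Confirm the registry pods are running
oc get pods -n openshift-image-registry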
[ "topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule", "topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule", "oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\":{\"defaultRoute\":true}}'", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config", "oc edit image.config.openshift.io cluster", "spec: additionalTrustedCA: name: registry-config", "oc create secret generic image-registry-private-configuration-user --from-literal=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/registry/configuring-registry-operator
16.3. USB Devices
16.3. USB Devices This section gives the commands required for handling USB devices. 16.3.1. Assigning USB Devices to Guest Virtual Machines Most devices, such as web cameras, card readers, disk drives, keyboards, and mice, are connected to a computer using a USB port and cable. There are two ways to pass such devices to a guest virtual machine: Using USB passthrough - this requires the device to be physically connected to the host physical machine that is hosting the guest virtual machine. SPICE is not needed in this case. USB devices on the host can be passed to the guest in the command line or virt-manager . See Section 19.3.2, "Attaching USB Devices to a Guest Virtual Machine" for virt-manager directions. Note that the virt-manager directions are not suitable for hot plugging or hot unplugging devices. If you want to hot plug or hot unplug a USB device, see Procedure 20.4, "Hot plugging USB devices for use by the guest virtual machine" . Using USB re-direction - USB re-direction is best used in cases where there is a host physical machine that is running in a data center. The user connects to their guest virtual machine from a local machine or thin client. On this local machine there is a SPICE client. The user can attach any USB device to the thin client and the SPICE client will redirect the device to the host physical machine in the data center so it can be used by the guest virtual machine that the user accesses from the thin client. For instructions using virt-manager , see Section 19.3.3, "USB Redirection" . 16.3.2. Setting a Limit on USB Device Redirection To filter out certain devices from redirection, pass the filter property to -device usb-redir . The filter property takes a string consisting of filter rules; the format for a rule is: Use the value -1 to accept any value for a particular field. You may use multiple rules on the same command line using | as a separator. Note that if a device matches none of the passed-in rules, redirecting it will not be allowed. Example 16.1. An example of limiting redirection with a guest virtual machine Prepare a guest virtual machine. Add the following code excerpt to the guest virtual machine's domain XML file: Start the guest virtual machine and confirm the setting changes by running the following: Plug a USB device into a host physical machine, and use virt-manager to connect to the guest virtual machine. Click USB device selection in the menu, which will produce the following message: "Some USB devices are blocked by host policy". Click OK to confirm and continue. The filter takes effect. To make sure that the filter captures properly, check the USB device vendor and product, then make the following changes in the guest virtual machine's domain XML to allow for USB redirection. Restart the guest virtual machine, then use virt-viewer to connect to the guest virtual machine. The USB device will now redirect traffic to the guest virtual machine.
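Before writing a filter rule, you can look up the vendor and product values of the device on the host; this sketch assumes the device is attached to the host and that the guest is named guest1 , which is a placeholder:

# Identify the USB device's vendor and product IDs (shown as vendor:product)
lsusb
# Edit the guest's domain XML to add the <redirdev> and <redirfilter> elements shown above
virsh edit guest1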
[ "<class>:<vendor>:<product>:<version>:<allow>", "<redirdev bus='usb' type='spicevmc'> <alias name='redir0'/> <address type='usb' bus='0' port='3'/> </redirdev> <redirfilter> <usbdev class='0x08' vendor='0x1234' product='0xBEEF' version='2.0' allow='yes'/> <usbdev class='-1' vendor='-1' product='-1' version='-1' allow='no'/> </redirfilter>", "ps -ef | grep USDguest_name", "-device usb-redir,chardev=charredir0,id=redir0, / filter=0x08:0x1234:0xBEEF:0x0200:1|-1:-1:-1:-1:0,bus=usb.0,port=3", "<redirfilter> <usbdev class='0x08' vendor='0x0951' product='0x1625' version='2.0' allow='yes'/> <usbdev allow='no'/> </redirfilter>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_device_configuration-usb_devices
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/updating_openshift_data_foundation/providing-feedback-on-red-hat-documentation_rhodf
23.6. Memory Backing
23.6. Memory Backing Memory backing allows the hypervisor to properly manage large pages within the guest virtual machine. <domain> ... <memoryBacking> <hugepages> <page size="1" unit="G" nodeset="0-3,5"/> <page size="2" unit="M" nodeset="4"/> </hugepages> <nosharepages/> <locked/> </memoryBacking> ... </domain> Figure 23.8. Memory backing For detailed information on memoryBacking elements, see the libvirt upstream documentation .
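As a hedged sketch of putting this to use (the guest name and page count are placeholders, and the available huge page sizes depend on your hardware), you would reserve huge pages on the host and then add the <memoryBacking> element to the guest definition:

# Reserve four 1 GiB huge pages on NUMA node 0 of the host
echo 4 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
# Open the guest's domain XML and insert the <memoryBacking> element shown above
virsh edit guest1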
[ "<domain> <memoryBacking> <hugepages> <page size=\"1\" unit=\"G\" nodeset=\"0-3,5\"/> <page size=\"2\" unit=\"M\" nodeset=\"4\"/> </hugepages> <nosharepages/> <locked/> </memoryBacking> </domain>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-memory_backing
Chapter 16. Securing access to a Kafka cluster
Chapter 16. Securing access to a Kafka cluster Secure connections by configuring Kafka and Kafka users. Through configuration, you can implement encryption, authentication, and authorization mechanisms. Kafka configuration To establish secure access to Kafka, configure the Kafka resource to set up the following configurations based on your specific requirements: Listeners with specified authentication types to define how clients authenticate TLS encryption for communication between Kafka and clients Supported TLS versions and cipher suites for additional security Authorization for the entire Kafka cluster Network policies for restricting access Super users for unconstrained access to brokers Authentication is configured independently for each listener, while authorization is set up for the whole Kafka cluster. For more information on access configuration for Kafka, see the Kafka schema reference and GenericKafkaListener schema reference . User (client-side) configuration To enable secure client access to Kafka, configure KafkaUser resources. These resources represent clients and determine how they authenticate and authorize with the Kafka cluster. Configure the KafkaUser resource to set up the following configurations based on your specific requirements: Authentication that must match the enabled listener authentication Supported TLS versions and cipher suites that must match the Kafka configuration Simple authorization to apply Access Control List (ACL) rules ACLs for fine-grained control over user access to topics and actions Quotas to limit client access based on byte rates or CPU utilization The User Operator creates the user representing the client and the security credentials used for client authentication, based on the chosen authentication type. For more information on access configuration for users, see the KafkaUser schema reference . 16.1. Configuring client authentication on listeners Configure client authentication for Kafka brokers when creating listeners. Specify the listener authentication type using the Kafka.spec.kafka.listeners.authentication property in the Kafka resource. For clients inside the OpenShift cluster, you can create plain (without encryption) or tls internal listeners. The internal listener type use a headless service and the DNS names given to the broker pods. As an alternative to the headless service, you can also create a cluster-ip type of internal listener to expose Kafka using per-broker ClusterIP services. For clients outside the OpenShift cluster, you create external listeners and specify a connection mechanism, which can be nodeport , loadbalancer , ingress (Kubernetes only), or route (OpenShift only). For more information on the configuration options for connecting an external client, see Chapter 15, Setting up client access to a Kafka cluster . Supported authentication options: mTLS authentication (only on the listeners with TLS enabled encryption) SCRAM-SHA-512 authentication OAuth 2.0 token-based authentication Custom authentication TLS versions and cipher suites If you're using OAuth 2.0 for client access management, user authentication and authorization credentials are handled through the authorization server. The authentication option you choose depends on how you wish to authenticate client access to Kafka brokers. Note Try exploring the standard authentication options before using custom authentication. Custom authentication allows for any type of Kafka-supported authentication. It can provide more flexibility, but also adds complexity. 
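As a brief, hedged sketch of reviewing what is currently configured before making changes (the cluster and namespace names are placeholders), you can inspect the Kafka resource's listener and authorization settings from the command line:

# Review the listeners and authorization configured for the cluster
oc get kafka my-cluster -n myproject -o yaml
# Once the cluster is ready, the status section lists the bootstrap addresses for each listener
oc get kafka my-cluster -n myproject -o jsonpath='{.status.listeners}'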
Figure 16.1. Kafka listener authentication options The listener authentication property is used to specify an authentication mechanism specific to that listener. If no authentication property is specified then the listener does not authenticate clients which connect through that listener. The listener will accept all connections without authentication. Authentication must be configured when using the User Operator to manage KafkaUsers . The following example shows: A plain listener configured for SCRAM-SHA-512 authentication A tls listener with mTLS authentication An external listener with mTLS authentication Each listener is configured with a unique name and port within a Kafka cluster. Important When configuring listeners for client access to brokers, you can use port 9092 or higher (9093, 9094, and so on), but with a few exceptions. The listeners cannot be configured to use the ports reserved for interbroker communication (9090 and 9091), Prometheus metrics (9404), and JMX (Java Management Extensions) monitoring (9999). Example listener authentication configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... listeners: - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls # ... 16.1.1. mTLS authentication mTLS authentication is always used for the communication between Kafka brokers and ZooKeeper pods. Streams for Apache Kafka can configure Kafka to use TLS (Transport Layer Security) to provide encrypted communication between Kafka brokers and clients either with or without mutual authentication. For mutual, or two-way, authentication, both the server and the client present certificates. When you configure mTLS authentication, the broker authenticates the client (client authentication) and the client authenticates the broker (server authentication). mTLS listener configuration in the Kafka resource requires the following: tls: true to specify TLS encryption and server authentication authentication.type: tls to specify the client authentication When a Kafka cluster is created by the Cluster Operator, it creates a new secret with the name <cluster_name>-cluster-ca-cert . The secret contains a CA certificate. The CA certificate is in PEM and PKCS #12 format . To verify a Kafka cluster, add the CA certificate to the truststore in your client configuration. To verify a client, add a user certificate and key to the keystore in your client configuration. For more information on configuring a client for mTLS, see Section 16.3.2, "Configuring user authentication" . Note TLS authentication is more commonly one-way, with one party authenticating the identity of another. For example, when HTTPS is used between a web browser and a web server, the browser obtains proof of the identity of the web server. 16.1.2. SCRAM-SHA-512 authentication SCRAM (Salted Challenge Response Authentication Mechanism) is an authentication protocol that can establish mutual authentication using passwords. Streams for Apache Kafka can configure Kafka to use SASL (Simple Authentication and Security Layer) SCRAM-SHA-512 to provide authentication on both unencrypted and encrypted client connections. When SCRAM-SHA-512 authentication is used with a TLS connection, the TLS protocol provides the encryption, but is not used for authentication. 
The following properties of SCRAM make it safe to use SCRAM-SHA-512 even on unencrypted connections: The passwords are not sent in the clear over the communication channel. Instead the client and the server are each challenged by the other to offer proof that they know the password of the authenticating user. The server and client each generate a new challenge for each authentication exchange. This means that the exchange is resilient against replay attacks. When KafkaUser.spec.authentication.type is configured with scram-sha-512 the User Operator will generate a random 32-character password consisting of upper and lowercase ASCII letters and numbers. 16.1.3. Restricting access to listeners with network policies Control listener access by configuring the networkPolicyPeers property in the Kafka resource. By default, Streams for Apache Kafka automatically creates a NetworkPolicy resource for every enabled Kafka listener, allowing connections from all namespaces. To restrict listener access to specific applications or namespaces at the network level, configure the networkPolicyPeers property. Each listener can have its own networkPolicyPeers configuration . For more information on network policy peers, refer to the NetworkPolicyPeer API reference . If you want to use custom network policies, you can set the STRIMZI_NETWORK_POLICY_GENERATION environment variable to false in the Cluster Operator configuration. For more information, see Section 10.7, "Configuring the Cluster Operator" . Note Your configuration of OpenShift must support ingress NetworkPolicies in order to use network policies. Prerequisites An OpenShift cluster with support for Ingress NetworkPolicies. The Cluster Operator is running. Procedure Configure the networkPolicyPeers property to define the application pods or namespaces allowed to access the Kafka cluster. This example shows configuration for a tls listener to allow connections only from application pods with the label app set to kafka-client : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - podSelector: matchLabels: app: kafka-client # ... zookeeper: # ... Apply the changes to the Kafka resource configuration. Additional resources networkPolicyPeers configuration NetworkPolicyPeer API reference 16.1.4. Using custom listener certificates for TLS encryption This procedure shows how to configure custom server certificates for TLS listeners or external listeners which have TLS encryption enabled. By default, Kafka listeners use certificates signed by Streams for Apache Kafka's internal CA (certificate authority). The Cluster Operator automatically generates a CA certificate when creating a Kafka cluster. To configure a client for TLS, the CA certificate is included in its truststore configuration to authenticate the Kafka cluster. Alternatively, you have the option to install and use your own CA certificates . However, if you prefer more granular control by using your own custom certificates at the listener-level, you can configure listeners using brokerCertChainAndKey properties. You create a secret with your own private key and server certificate, then specify them in the brokerCertChainAndKey configuration. User-provided certificates allow you to leverage existing security infrastructure. You can use a certificate signed by a public (external) CA or a private CA. 
Kafka clients need to trust the CA which was used to sign the listener certificate. If signed by a public CA, you usually won't need to add it to a client's truststore configuration. Custom certificates are not managed by Streams for Apache Kafka, so you need to renew them manually. Note Listener certificates are used for TLS encryption and server authentication only. They are not used for TLS client authentication. If you want to use your own certificate for TLS client authentication as well, you must install and use your own clients CA . Prerequisites The Cluster Operator is running. Each listener requires the following: A compatible server certificate signed by an external CA. (Provide an X.509 certificate in PEM format.) You can use one listener certificate for multiple listeners. Subject Alternative Names (SANs) are specified in the certificate for each listener. For more information, see Section 16.1.5, "Specifying SANs for custom listener certificates" . If you are not using a self-signed certificate, you can provide a certificate that includes the whole CA chain in the certificate. You can only use the brokerCertChainAndKey properties if TLS encryption ( tls: true ) is configured for the listener. Note Streams for Apache Kafka does not support the use of encrypted private keys for TLS. The private key stored in the secret must be unencrypted for this to work. Procedure Create a Secret containing your private key and server certificate: oc create secret generic <my_secret> --from-file=<my_listener_key.key> --from-file=<my_listener_certificate.crt> Edit the Kafka resource for your cluster. Configure the listener to use your Secret , certificate file, and private key file in the configuration.brokerCertChainAndKey property. Example configuration for a loadbalancer external listener with TLS encryption enabled # ... listeners: - name: plain port: 9092 type: internal tls: false - name: external3 port: 9094 type: loadbalancer tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key # ... Example configuration for a TLS listener # ... listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key # ... Apply the changes to the Kafka resource configuration. The Cluster Operator starts a rolling update of the Kafka cluster, which updates the configuration of the listeners. Note A rolling update is also started if you update a Kafka listener certificate in a Secret that is already used by a listener. 16.1.5. Specifying SANs for custom listener certificates In order to use TLS hostname verification with custom Kafka listener certificates , you must specify the correct Subject Alternative Names (SANs) for each listener. The certificate SANs must specify hostnames for the following: All of the Kafka brokers in your cluster The Kafka cluster bootstrap service You can use wildcard certificates if they are supported by your CA. 16.1.5.1. Examples of SANs for internal listeners Use the following examples to help you specify hostnames of the SANs in your certificates for your internal listeners. Replace <cluster-name> with the name of the Kafka cluster and <namespace> with the OpenShift namespace where the cluster is running. 
Wildcards example for a type: internal listener //Kafka brokers *.<cluster_name>-kafka-brokers *.<cluster_name>-kafka-brokers.<namespace>.svc // Bootstrap service <cluster_name>-kafka-bootstrap <cluster_name>-kafka-bootstrap.<namespace>.svc Non-wildcards example for a type: internal listener // Kafka brokers <cluster_name>-kafka-0.<cluster_name>-kafka-brokers <cluster_name>-kafka-0.<cluster_name>-kafka-brokers.<namespace>.svc <cluster_name>-kafka-1.<cluster_name>-kafka-brokers <cluster_name>-kafka-1.<cluster_name>-kafka-brokers.<namespace>.svc # ... // Bootstrap service <cluster_name>-kafka-bootstrap <cluster_name>-kafka-bootstrap.<namespace>.svc Non-wildcards example for a type: cluster-ip listener // Kafka brokers <cluster_name>-kafka-<listener-name>-0 <cluster_name>-kafka-<listener-name>-0.<namespace>.svc <cluster_name>-kafka-<listener-name>-1 <cluster_name>-kafka-<listener-name>-1.<namespace>.svc # ... // Bootstrap service <cluster_name>-kafka-<listener-name>-bootstrap <cluster_name>-kafka-<listener-name>-bootstrap.<namespace>.svc 16.1.5.2. Examples of SANs for external listeners For external listeners which have TLS encryption enabled, the hostnames you need to specify in certificates depend on the external listener type . Table 16.1. SANs for each type of external listener External listener type In the SANs, specify... ingress Addresses of all Kafka broker Ingress resources and the address of the bootstrap Ingress . You can use a matching wildcard name. route Addresses of all Kafka broker Routes and the address of the bootstrap Route . You can use a matching wildcard name. loadbalancer Addresses of all Kafka broker loadbalancers and the bootstrap loadbalancer address. You can use a matching wildcard name. nodeport Addresses of all OpenShift worker nodes that the Kafka broker pods might be scheduled to. You can use a matching wildcard name. 16.2. Configuring authorized access to Kafka Configure authorized access to a Kafka cluster using the Kafka.spec.kafka.authorization property in the Kafka resource. If the authorization property is missing, no authorization is enabled and clients have no restrictions. When enabled, authorization is applied to all enabled listeners. The authorization method is defined in the type field. Supported authorization options: Simple authorization OAuth 2.0 authorization (if you are using OAuth 2.0 token based authentication) Open Policy Agent (OPA) authorization Custom authorization Figure 16.2. Kafka cluster authorization options 16.2.1. Designating super users Super users can access all resources in your Kafka cluster regardless of any access restrictions, and are supported by all authorization mechanisms. To designate super users for a Kafka cluster, add a list of user principals to the superUsers property. If a user uses mTLS authentication, the username is the common name from the TLS certificate subject prefixed with CN= . If you are not using the User Operator and using your own certificates for mTLS, the username is the full certificate subject. A full certificate subject can include the following fields: CN=<common_name> OU=<organizational_unit> O=<organization> L=<locality> ST=<state> C=<country_code> Omit any fields that are not applicable. An example configuration with super users apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ...
authorization: type: simple superUsers: - CN=user-1 - user-2 - CN=user-3 - CN=user-4,OU=my-ou,O=my-org,L=my-location,ST=my-state,C=US - CN=user-5,OU=my-ou,O=my-org,C=GB - CN=user-6,O=my-org # ... 16.3. Configuring user (client-side) security mechanisms When configuring security mechanisms in clients, the clients are represented as users. Use the KafkaUser resource to configure the authentication, authorization, and access rights for Kafka clients. Authentication permits user access, and authorization constrains user access to permissible actions. You can also create super users that have unconstrained access to Kafka brokers. The authentication and authorization mechanisms must match the specification for the listener used to access the Kafka brokers . For more information on configuring a KafkaUser resource to access Kafka brokers securely, see Section 16.4, "Example: Setting up secure client access" . 16.3.1. Associating users with Kafka clusters A KafkaUser resource includes a label that defines the name of the Kafka cluster (derived from the name of the Kafka resource) to which it belongs. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster The label enables the User Operator to identify the KafkaUser resource and create and manage the user. If the label does not match the Kafka cluster, the User Operator cannot identify the KafkaUser , and the user is not created. If the status of the KafkaUser resource remains empty, check your label configuration. 16.3.2. Configuring user authentication Use the KafkaUser custom resource to configure authentication credentials for users (clients) that require access to a Kafka cluster. Configure the credentials using the authentication property in KafkaUser.spec . By specifying a type , you control what credentials are generated. Supported authentication types: tls for mTLS authentication tls-external for mTLS authentication using external certificates scram-sha-512 for SCRAM-SHA-512 authentication If tls or scram-sha-512 is specified, the User Operator creates authentication credentials when it creates the user. If tls-external is specified, the user still uses mTLS, but no authentication credentials are created. Use this option when you're providing your own certificates. When no authentication type is specified, the User Operator does not create the user or its credentials. You can use tls-external to authenticate with mTLS using a certificate issued outside the User Operator. The User Operator does not generate a TLS certificate or a secret. You can still manage ACL rules and quotas through the User Operator in the same way as when you're using the tls mechanism. This means that you use the CN=USER-NAME format when specifying ACL rules and quotas. USER-NAME is the common name given in a TLS certificate. 16.3.2.1. mTLS authentication To use mTLS authentication, you set the type field in the KafkaUser resource to tls . Example user with mTLS authentication enabled apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls # ... The authentication type must match the equivalent configuration for the Kafka listener used to access the Kafka cluster. When the user is created by the User Operator, it creates a new secret with the same name as the KafkaUser resource. The secret contains a private and public key for mTLS.
The public key is contained in a user certificate, which is signed by a clients CA (certificate authority) when it is created. All keys are in X.509 format. Note If you are using the clients CA generated by the Cluster Operator, the user certificates generated by the User Operator are also renewed when the clients CA is renewed by the Cluster Operator. The user secret provides keys and certificates in PEM and PKCS #12 formats . Example secret with user credentials apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA used to sign this user certificate user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store When you configure a client, you specify the following: Truststore properties for the public cluster CA certificate to verify the identity of the Kafka cluster Keystore properties for the user authentication credentials to verify the client The configuration depends on the file format (PEM or PKCS #12). This example uses PKCS #12 stores, and the passwords required to access the credentials in the stores. Example client configuration using mTLS in PKCS #12 format bootstrap.servers=<kafka_cluster_name>-kafka-bootstrap:9093 1 security.protocol=SSL 2 ssl.truststore.location=/tmp/ca.p12 3 ssl.truststore.password=<truststore_password> 4 ssl.keystore.location=/tmp/user.p12 5 ssl.keystore.password=<keystore_password> 6 1 The bootstrap server address to connect to the Kafka cluster. 2 The security protocol option when using TLS for encryption. 3 The truststore location contains the public key certificate ( ca.p12 ) for the Kafka cluster. A cluster CA certificate and password is generated by the Cluster Operator in the <cluster_name>-cluster-ca-cert secret when the Kafka cluster is created. 4 The password ( ca.password ) for accessing the truststore. 5 The keystore location contains the public key certificate ( user.p12 ) for the Kafka user. 6 The password ( user.password ) for accessing the keystore. 16.3.2.2. mTLS authentication using a certificate issued outside the User Operator To use mTLS authentication using a certificate issued outside the User Operator, you set the type field in the KafkaUser resource to tls-external . A secret and credentials are not created for the user. Example user with mTLS authentication that uses a certificate issued outside the User Operator apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls-external # ... 16.3.2.3. SCRAM-SHA-512 authentication To use the SCRAM-SHA-512 authentication mechanism, you set the type field in the KafkaUser resource to scram-sha-512 . Example user with SCRAM-SHA-512 authentication enabled apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 # ... When the user is created by the User Operator, it creates a new secret with the same name as the KafkaUser resource. The secret contains the generated password in the password key, which is encoded with base64. In order to use the password, it must be decoded. 
Example secret with user credentials apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: password: Z2VuZXJhdGVkcGFzc3dvcmQ= 1 sasl.jaas.config: b3JnLmFwYWNoZS5rYWZrYS5jb21tb24uc2VjdXJpdHkuc2NyYW0uU2NyYW1Mb2dpbk1vZHVsZSByZXF1aXJlZCB1c2VybmFtZT0ibXktdXNlciIgcGFzc3dvcmQ9ImdlbmVyYXRlZHBhc3N3b3JkIjsK 2 1 The generated password, base64 encoded. 2 The JAAS configuration string for SASL SCRAM-SHA-512 authentication, base64 encoded. Decoding the generated password: 16.3.2.3.1. Custom password configuration When a user is created, Streams for Apache Kafka generates a random password. You can use your own password instead of the one generated by Streams for Apache Kafka. To do so, create a secret with the password and reference it in the KafkaUser resource. Example user with a password set for SCRAM-SHA-512 authentication apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 password: valueFrom: secretKeyRef: name: my-secret 1 key: my-password 2 # ... 1 The name of the secret containing the predefined password. 2 The key for the password stored inside the secret. 16.3.3. Configuring user authorization Use the KafkaUser custom resource to configure authorization rules for users (clients) that require access to a Kafka cluster. Configure the rules using the authorization property in KafkaUser.spec . By specifying a type , you control what rules are used. To use simple authorization, you set the type property to simple in KafkaUser.spec.authorization . The simple authorization uses the Kafka Admin API to manage the ACL rules inside your Kafka cluster. Whether ACL management in the User Operator is enabled or not depends on your authorization configuration in the Kafka cluster. For simple authorization, ACL management is always enabled. For OPA authorization, ACL management is always disabled. Authorization rules are configured in the OPA server. For Red Hat build of Keycloak authorization, you can manage the ACL rules directly in Red Hat build of Keycloak. You can also delegate authorization to the simple authorizer as a fallback option in the configuration. When delegation to the simple authorizer is enabled, the User Operator will enable management of ACL rules as well. For custom authorization using a custom authorization plugin, use the supportsAdminApi property in the .spec.kafka.authorization configuration of the Kafka custom resource to enable or disable the support. Authorization is cluster-wide. The authorization type must match the equivalent configuration in the Kafka custom resource. If ACL management is not enabled, Streams for Apache Kafka rejects a resource if it contains any ACL rules. If you're using a standalone deployment of the User Operator, ACL management is enabled by default. You can disable it using the STRIMZI_ACLS_ADMIN_API_SUPPORTED environment variable. If no authorization is specified, the User Operator does not provision any access rights for the user. Whether such a KafkaUser can still access resources depends on the authorizer being used. For example, for simple authorization, this is determined by the allow.everyone.if.no.acl.found configuration in the Kafka cluster. 16.3.3.1. ACL rules simple authorization uses ACL rules to manage access to Kafka brokers. ACL rules grant access rights to the user, which you specify in the acls property. 
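As an illustration, the following sketch grants a user read access to a single topic and its consumer group through the acls property; the resource names and operations shown are examples only.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # Allow the user to describe and read one topic
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operations:
          - Describe
          - Read
      # Allow the user to read as part of one consumer group
      - resource:
          type: group
          name: my-group
          patternType: literal
        operations:
          - Read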
For more information about the AclRule object, see the AclRule schema reference . 16.3.3.2. Super user access to Kafka brokers If a user is added to a list of super users in a Kafka broker configuration, the user is allowed unlimited access to the cluster regardless of any authorization constraints defined in ACLs in KafkaUser . For more information on configuring super user access to brokers, see Kafka authorization . 16.3.4. Configuring user quotas Configure the spec for the KafkaUser resource to enforce quotas so that a user does not overload Kafka brokers. Set size-based network usage and time-based CPU utilization thresholds. Partition mutations occur in response to the following types of user requests: Creating partitions for a new topic Adding partitions to an existing topic Deleting partitions from a topic You can also add a partition mutation quota to control the rate at which requests to change partitions are accepted. Example KafkaUser with user quotas apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # ... quotas: producerByteRate: 1048576 1 consumerByteRate: 2097152 2 requestPercentage: 55 3 controllerMutationRate: 10 4 1 Byte-per-second quota on the amount of data the user can push to a Kafka broker. 2 Byte-per-second quota on the amount of data the user can fetch from a Kafka broker. 3 CPU utilization limit as a percentage of time for a client group. 4 Number of concurrent partition creation and deletion operations (mutations) allowed per second. Using quotas for Kafka clients might be useful in a number of situations. Consider a wrongly configured Kafka producer which is sending requests at too high a rate. Such misconfiguration can cause a denial of service to other clients, so the problematic client ought to be blocked. By using a network limiting quota, it is possible to prevent this situation from significantly impacting other clients. Note Streams for Apache Kafka supports user-level quotas, but not client-level quotas. 16.4. Example: Setting up secure client access This procedure shows how to configure client access to a Kafka cluster from outside OpenShift or from another OpenShift cluster. It's split into two parts: Securing Kafka brokers Securing user access to Kafka Resource configuration Client access to the Kafka cluster is secured with the following configuration: An external listener is configured with TLS encryption and mutual TLS (mTLS) authentication in the Kafka resource, as well as simple authorization. A KafkaUser is created for the client, utilizing mTLS authentication, and Access Control Lists (ACLs) are defined for simple authorization. At least one listener supporting the desired authentication must be configured for the KafkaUser . Listeners can be configured for mutual TLS , SCRAM-SHA-512 , or OAuth authentication. While mTLS always uses encryption, it's also recommended when using SCRAM-SHA-512 and OAuth 2.0 authentication. Authorization options for Kafka include simple , OAuth , OPA , or custom . When enabled, authorization is applied to all enabled listeners. 
To ensure compatibility between Kafka and clients, configuration of the following authentication and authorization mechanisms must align: For type: tls and type: scram-sha-512 authentication types, Kafka.spec.kafka.listeners[*].authentication must match KafkaUser.spec.authentication For type: simple authorization, Kafka.spec.kafka.authorization must match KafkaUser.spec.authorization For example, mTLS authentication for a user is only possible if it's also enabled in the Kafka configuration. Automation and certificate management Streams for Apache Kafka operators automate the configuration process and create the certificates required for authentication: The Cluster Operator creates the listeners and sets up the cluster and client certificate authority (CA) certificates to enable authentication within the Kafka cluster. The User Operator creates the user representing the client and the security credentials used for client authentication, based on the chosen authentication type. You add the certificates to your client configuration. In this procedure, the CA certificates generated by the Cluster Operator are used. Alternatively, you can replace them by installing your own custom CA certificates . You can also configure listeners to use Kafka listener certificates managed by an external CA . Certificates are available in PEM (.crt) and PKCS #12 (.p12) formats. This procedure uses PEM certificates. Use PEM certificates with clients that support the X.509 certificate format. Note For internal clients in the same OpenShift cluster and namespace, you can mount the cluster CA certificate in the pod specification. For more information, see Configuring internal clients to trust the cluster CA . Prerequisites The Kafka cluster is available for connection by a client running outside the OpenShift cluster The Cluster Operator and User Operator are running in the cluster 16.4.1. Securing Kafka brokers Configure the Kafka cluster with a Kafka listener. Define the authentication required to access the Kafka broker through the listener. Enable authorization on the Kafka broker. Example listener configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... listeners: 1 - name: external1 2 port: 9094 3 type: <listener_type> 4 tls: true 5 authentication: type: tls 6 configuration: 7 #... authorization: 8 type: simple superUsers: - super-user-name 9 # ... 1 Configuration options for enabling external listeners are described in the Generic Kafka listener schema reference . 2 Name to identify the listener. Must be unique within the Kafka cluster. 3 Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. 4 External listener type specified as route (OpenShift only), loadbalancer , nodeport or ingress (Kubernetes only). An internal listener is specified as internal or cluster-ip . 5 Required. TLS encryption on the listener. For route and ingress type listeners it must be set to true . For mTLS authentication, also use the authentication property. 6 Client authentication mechanism on the listener. For server and client authentication using mTLS, you specify tls: true and authentication.type: tls . 
7 (Optional) Depending on the requirements of the listener type, you can specify additional listener configuration . 8 Authorization specified as simple , which uses the AclAuthorizer and StandardAuthorizer Kafka plugins. 9 (Optional) Super users can access all brokers regardless of any access restrictions defined in ACLs. Warning An OpenShift route address comprises the Kafka cluster name, the listener name, the project name, and the domain of the router. For example, my-cluster-kafka-external1-bootstrap-my-project.domain.com (<cluster_name>-kafka-<listener_name>-bootstrap-<namespace>.<domain>). Each DNS label (between periods ".") must not exceed 63 characters, and the total length of the address must not exceed 255 characters. Apply the changes to the Kafka resource configuration. The Kafka cluster is configured with a Kafka broker listener using mTLS authentication. A service is created for each Kafka broker pod. A service is created to serve as the bootstrap address for connection to the Kafka cluster. A service is also created as the external bootstrap address for external connection to the Kafka cluster using nodeport listeners. The cluster CA certificate to verify the identity of the kafka brokers is also created in the secret <cluster_name>-cluster-ca-cert . Note If you scale your Kafka cluster while using external listeners, it might trigger a rolling update of all Kafka brokers. This depends on the configuration. Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the Kafka resource. oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name=="<listener_name>")].bootstrapServers}{"\n"}' For example: oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external")].bootstrapServers}{"\n"}' Use the bootstrap address in your Kafka client to connect to the Kafka cluster. 16.4.2. Securing user access to Kafka Create or modify a user representing the client that requires access to the Kafka cluster. Specify the same authentication type as the Kafka listener. Specify the authorization ACLs for simple authorization. Example user configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster 1 spec: authentication: type: tls 2 authorization: type: simple acls: 3 - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read - resource: type: group name: my-group patternType: literal operations: - Read 1 The label must match the label of the Kafka cluster. 2 Authentication specified as mutual tls . 3 Simple authorization requires an accompanying list of ACL rules to apply to the user. The rules define the operations allowed on Kafka resources based on the username ( my-user ). Apply the changes to the KafkaUser resource configuration. The user is created, as well as a secret with the same name as the KafkaUser resource. The secret contains a public and private key for mTLS authentication. 
Example secret with user credentials apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA used to sign this user certificate user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store Extract the cluster CA certificate from the <cluster_name>-cluster-ca-cert secret of the Kafka cluster. oc get secret <cluster_name>-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Extract the user certificate from the <user_name> secret. oc get secret <user_name> -o jsonpath='{.data.user\.crt}' | base64 -d > user.crt Extract the private key of the user from the <user_name> secret. oc get secret <user_name> -o jsonpath='{.data.user\.key}' | base64 -d > user.key Configure your client with the bootstrap address hostname and port for connecting to the Kafka cluster: props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<hostname>:<port>"); Configure your client with the truststore credentials to verify the identity of the Kafka cluster. Specify the public cluster CA certificate. Example truststore configuration props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL"); props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "PEM"); props.put(SslConfigs.SSL_TRUSTSTORE_CERTIFICATES_CONFIG, "<ca.crt_file_content>"); SSL is the specified security protocol for mTLS authentication. Specify SASL_SSL for SCRAM-SHA-512 authentication over TLS. PEM is the file format of the truststore. Configure your client with the keystore credentials to verify the user when connecting to the Kafka cluster. Specify the public certificate and private key. Example keystore configuration props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL"); props.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, "PEM"); props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, "<user.crt_file_content>"); props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, "<user.key_file_content>"); Add the keystore certificate and the private key directly to the configuration. Add them in a single-line format. Between the BEGIN CERTIFICATE and END CERTIFICATE delimiters, start with a newline character ( \n ). End each line from the original certificate with \n too. Example keystore configuration props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, "-----BEGIN CERTIFICATE-----\n<user_certificate_content_line_1>\n<user_certificate_content_line_n>\n-----END CERTIFICATE-----"); props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, "-----BEGIN PRIVATE KEY-----\n<user_key_content_line_1>\n<user_key_content_line_n>\n-----END PRIVATE KEY-----"); 16.5. Troubleshooting TLS hostname verification with node ports Off-cluster access using node ports with TLS encryption enabled does not support TLS hostname verification. This is because Streams for Apache Kafka does not know the address of the node where the broker pod is scheduled and cannot include it in the broker certificate. Consequently, clients that perform hostname verification will fail to connect. For example, a Java client will fail with the following exception: Exception for TLS hostname verification Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found ... To connect, you must disable hostname verification.
In the Java client, set the ssl.endpoint.identification.algorithm configuration option to an empty string. When configuring the client using a properties file, you can do it this way: ssl.endpoint.identification.algorithm= When configuring the client directly in Java, set the configuration option to an empty string: props.put("ssl.endpoint.identification.algorithm", ""); Alternatively, if you know the addresses of the worker nodes where the brokers are scheduled, you can add them as additional SANs (Subject Alternative Names) to the broker certificates manually. For example, this might apply if your cluster is running on a bare metal deployment with a limited number of available worker nodes. Use the alternativeNames property to specify additional SANs.
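As a sketch, assuming your release exposes the alternativeNames field under the listener's bootstrap configuration, the additional hostnames can be added to a nodeport listener as follows; the worker node addresses shown are placeholders.

# ...
listeners:
  - name: external4
    port: 9095
    type: nodeport
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        alternativeNames:
          # Placeholder worker node addresses to include as extra SANs
          - worker-node-1.example.com
          - worker-node-2.example.com
# ...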
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - podSelector: matchLabels: app: kafka-client # zookeeper: #", "create secret generic <my_secret> --from-file=<my_listener_key.key> --from-file=<my_listener_certificate.crt>", "listeners: - name: plain port: 9092 type: internal tls: false - name: external3 port: 9094 type: loadbalancer tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key", "listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key", "//Kafka brokers *.<cluster_name>-kafka-brokers *.<cluster_name>-kafka-brokers.<namespace>.svc // Bootstrap service <cluster_name>-kafka-bootstrap <cluster_name>-kafka-bootstrap.<namespace>.svc", "// Kafka brokers <cluster_name>-kafka-0.<cluster_name>-kafka-brokers <cluster_name>-kafka-0.<cluster_name>-kafka-brokers.<namespace>.svc <cluster_name>-kafka-1.<cluster_name>-kafka-brokers <cluster_name>-kafka-1.<cluster_name>-kafka-brokers.<namespace>.svc // Bootstrap service <cluster_name>-kafka-bootstrap <cluster_name>-kafka-bootstrap.<namespace>.svc", "// Kafka brokers <cluster_name>-kafka-<listener-name>-0 <cluster_name>-kafka-<listener-name>-0.<namespace>.svc <cluster_name>-kafka-_listener-name>-1 <cluster_name>-kafka-<listener-name>-1.<namespace>.svc // Bootstrap service <cluster_name>-kafka-<listener-name>-bootstrap <cluster_name>-kafka-<listener-name>-bootstrap.<namespace>.svc", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: simple superUsers: - CN=user-1 - user-2 - CN=user-3 - CN=user-4,OU=my-ou,O=my-org,L=my-location,ST=my-state,C=US - CN=user-5,OU=my-ou,O=my-org,C=GB - CN=user-6,O=my-org #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls #", "apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA used to sign this user certificate user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store", "bootstrap.servers=<kafka_cluster_name>-kafka-bootstrap:9093 1 security.protocol=SSL 2 ssl.truststore.location=/tmp/ca.p12 3 ssl.truststore.password=<truststore_password> 4 ssl.keystore.location=/tmp/user.p12 5 ssl.keystore.password=<keystore_password> 6", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: 
tls-external #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 #", "apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: password: Z2VuZXJhdGVkcGFzc3dvcmQ= 1 sasl.jaas.config: b3JnLmFwYWNoZS5rYWZrYS5jb21tb24uc2VjdXJpdHkuc2NyYW0uU2NyYW1Mb2dpbk1vZHVsZSByZXF1aXJlZCB1c2VybmFtZT0ibXktdXNlciIgcGFzc3dvcmQ9ImdlbmVyYXRlZHBhc3N3b3JkIjsK 2", "echo \"Z2VuZXJhdGVkcGFzc3dvcmQ=\" | base64 --decode", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 password: valueFrom: secretKeyRef: name: my-secret 1 key: my-password 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # quotas: producerByteRate: 1048576 1 consumerByteRate: 2097152 2 requestPercentage: 55 3 controllerMutationRate: 10 4", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # listeners: 1 - name: external1 2 port: 9094 3 type: <listener_type> 4 tls: true 5 authentication: type: tls 6 configuration: 7 # authorization: 8 type: simple superUsers: - super-user-name 9 #", "get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==\"<listener_name>\")].bootstrapServers}{\"\\n\"}'", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external\")].bootstrapServers}{\"\\n\"}'", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster 1 spec: authentication: type: tls 2 authorization: type: simple acls: 3 - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read - resource: type: group name: my-group patternType: literal operations: - Read", "apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA used to sign this user certificate user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store", "get secret <cluster_name>-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt", "get secret <user_name> -o jsonpath='{.data.user\\.crt}' | base64 -d > user.crt", "get secret <user_name> -o jsonpath='{.data.user\\.key}' | base64 -d > user.key", "props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, \"<hostname>:<port>\");", "props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, \"SSL\"); props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, \"PEM\"); props.put(SslConfigs.SSL_TRUSTSTORE_CERTIFICATES_CONFIG, \"<ca.crt_file_content>\");", "props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, \"SSL\"); props.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, \"PEM\"); props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, \"<user.crt_file_content>\"); props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, \"<user.key_file_content>\");", "props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, \"-----BEGIN CERTIFICATE----- \\n<user_certificate_content_line_1>\\n<user_certificate_content_line_n>\\n-----END CERTIFICATE---\"); props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, \"----BEGIN PRIVATE 
KEY-----\\n<user_key_content_line_1>\\n<user_key_content_line_n>\\n-----END PRIVATE KEY-----\");", "Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found", "ssl.endpoint.identification.algorithm=", "props.put(\"ssl.endpoint.identification.algorithm\", \"\");" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/assembly-securing-access-str
Chapter 8. Additional resources
Chapter 8. Additional resources Decision Model and Notation specification DMN Technology Compatibility Kit Packaging and deploying an Red Hat Process Automation Manager project Interacting with Red Hat Process Automation Manager using KIE APIs
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/additional_resources
Chapter 1. Developing clients overview
Chapter 1. Developing clients overview Develop Kafka client applications for your Streams for Apache Kafka installation that can produce messages, consume messages, or do both. You can develop client applications for use with Streams for Apache Kafka on OpenShift or Streams for Apache Kafka on RHEL. Messages comprise an optional key and a value that contains the message data, plus headers and related metadata. The key identifies the subject of the message, or a property of the message. You must use the same key if you need to process a group of messages in the same order as they are sent. Messages are delivered in batches. Messages contain headers and metadata that provide details that are useful for filtering and routing by clients, such as the timestamp and offset position for the message. Kafka provides client APIs for developing client applications. Kafka producer and consumer APIs are the primary means of interacting with a Kafka cluster in a client application. The APIs control the flow of messages. The producer API sends messages to Kafka topics, while the consumer API reads messages from topics. Streams for Apache Kafka supports clients written in Java. How you develop your clients depends on your specific use case. Data durability might be a priority, or high throughput. These demands can be met through configuration of your clients and brokers. All clients, however, must be able to connect to all brokers in a given Kafka cluster. 1.1. Supporting an HTTP client As an alternative to using the Kafka producer and consumer APIs in your client, you can set up and use the Streams for Apache Kafka Bridge. The Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. It offers the advantages of a web API connection to Streams for Apache Kafka, without the need for client applications to interpret the Kafka protocol. Kafka uses a binary protocol over TCP. For more information, see Using the Streams for Apache Kafka Bridge . 1.2. Tuning your producers and consumers You can add more configuration properties to optimize the performance of your Kafka clients. You probably want to do this when you've had some time to analyze how your client and broker configuration performs. For more information, see Kafka configuration tuning . 1.3. Monitoring client interaction Distributed tracing facilitates the end-to-end tracking of messages. You can enable tracing in Kafka consumer and producer client applications. For more information, see the documentation for distributed tracing in the following guides: Deploying and Upgrading Streams for Apache Kafka on OpenShift Using Streams for Apache Kafka on RHEL in KRaft mode Using Streams for Apache Kafka on RHEL with ZooKeeper Note When we use the term client application, we're specifically referring to applications that use Kafka producers and consumers to send and receive messages to and from a Kafka cluster. We are not referring to other Kafka components, such as Kafka Connect or Kafka Streams, which have their own distinct use cases and functionality.
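To make the producer API concrete, the following is a minimal sketch of a Java producer; the bootstrap address and topic name are placeholders, and a production client would also add the security and tuning properties covered elsewhere in this documentation.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder bootstrap address; replace with your cluster's listener address
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Using the same key keeps related messages on the same partition, preserving their order
            producer.send(new ProducerRecord<>("my-topic", "my-key", "hello"));
        }
    }
}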
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/developing_kafka_client_applications/con-client-dev-intro-str
5.4. Logging Attributes
5.4. Logging Attributes 5.4.1. About Log Levels Log levels are an ordered set of enumerated values that indicate the nature and severity of a log message. The level of a given log message is specified by the developer using the appropriate methods of their chosen logging framework to send the message. Red Hat JBoss Data Grid supports all the log levels used by the supported application logging frameworks. The six most commonly used log levels are (ordered by lowest to highest severity): TRACE DEBUG INFO WARN ERROR FATAL Log levels are used by log categories and handlers to limit the messages they are responsible for. Each log level has an assigned numeric value which indicates its order relative to other log levels. Log categories and handlers are assigned a log level and they only process log messages of that numeric value or higher. For example, a log handler with the level of WARN will only record messages of the levels WARN , ERROR and FATAL . 5.4.2. Supported Log Levels The following table lists log levels that are supported in Red Hat JBoss Data Grid. Each entry includes the log level, its value and description. The log level values indicate each log level's relative value to other log levels. Additionally, log levels in different frameworks may be named differently, but have a log value consistent with the provided list. Table 5.2. Supported Log Levels Log Level Value Description FINEST 300 - FINER 400 - TRACE 400 Used for messages that provide detailed information about the running state of an application. TRACE level log messages are captured when the server runs with the TRACE level enabled. DEBUG 500 Used for messages that indicate the progress of individual requests or activities of an application. DEBUG level log messages are captured when the server runs with the DEBUG level enabled. FINE 500 - CONFIG 700 - INFO 800 Used for messages that indicate the overall progress of the application. Used for application start up, shut down and other major lifecycle events. WARN 900 Used to indicate a situation that is not in error but is not considered ideal. Indicates circumstances that can lead to errors in the future. WARNING 900 - ERROR 1000 Used to indicate an error that has occurred that could prevent the current activity or request from completing but will not prevent the application from running. SEVERE 1000 - FATAL 1100 Used to indicate events that could cause critical service failure and application shutdown and possibly cause JBoss Data Grid to shut down. 5.4.3. About Log Categories Log categories define a set of log messages to capture and one or more log handlers which will process the messages. The log messages to capture are defined by their Java package of origin and log level. Messages from classes in that package and of that log level or higher (with greater or equal numeric value) are captured by the log category and sent to the specified log handlers. For example, the WARNING log level results in messages with the log values 900 , 1000 and 1100 being captured. Log categories can optionally use the log handlers of the root logger instead of their own handlers. 5.4.4. About the Root Logger The root logger captures all log messages sent to the server (of a specified level) that are not captured by a log category. These messages are then sent to one or more log handlers.
By default, the root logger is configured to use a console and a periodic log handler. The periodic log handler is configured to write to the file server.log . This file is sometimes referred to as the server log. 5.4.5. About Log Handlers Log handlers define how captured log messages are recorded by Red Hat JBoss Data Grid. The six types of log handlers configurable in JBoss Data Grid are: Console File Periodic Size Async Custom Log handlers direct specified log objects to a variety of outputs (including the console or specified log files). Some log handlers used in JBoss Data Grid are wrapper log handlers, used to direct other log handlers' behavior. Log handlers are used to direct log outputs to specific files for easier sorting or to write logs for specific intervals of time. They are primarily useful to specify the kind of logs required and where they are stored or displayed or the logging behavior in JBoss Data Grid. 5.4.6. Log Handler Types The following table lists the different types of log handlers available in Red Hat JBoss Data Grid: Table 5.3. Log Handler Types Log Handler Type Description Use Case Console Console log handlers write log messages to either the host operating system's standard out ( stdout ) or standard error ( stderr ) stream. These messages are displayed when JBoss Data Grid is run from a command line prompt. The Console log handler is preferred when JBoss Data Grid is administered using the command line. In such a case, the messages from a Console log handler are not saved unless the operating system is configured to capture the standard out or standard error stream. File File log handlers are the simplest log handlers. Their primary use is to write log messages to a specified file. File log handlers are most useful if the requirement is to store all log entries according to the time in one place. Periodic Periodic file handlers write log messages to a named file until a specified period of time has elapsed. Once the time period has elapsed, the specified time stamp is appended to the file name. The handler then continues to write into the newly created log file with the original name. The Periodic file handler can be used to accumulate log messages on a weekly, daily, hourly or other basis depending on the requirements of the environment. Size Size log handlers write log messages to a named file until the file reaches a specified size. When the file reaches a specified size, it is renamed with a numeric prefix and the handler continues to write into a newly created log file with the original name. Each size log handler must specify the maximum number of files to be kept in this fashion. The Size handler is best suited to an environment where the log file size must be consistent. Async Async log handlers are wrapper log handlers that provide asynchronous behavior for one or more other log handlers. These are useful for log handlers that have high latency or other performance problems such as writing a log file to a network file system. The Async log handlers are best suited to an environment where high latency is a problem or when writing to a network file system. Custom Custom log handlers enable you to configure new types of log handlers that have been implemented. A custom handler must be implemented as a Java class that extends java.util.logging.Handler and be contained in a module. Custom log handlers create customized log handler types and are recommended for advanced users.
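As an illustration of the Custom type, a custom handler is a Java class that extends java.util.logging.Handler. The class name and output target below are hypothetical, and the class must be packaged in a module before the server configuration can reference it.

import java.util.logging.Handler;
import java.util.logging.LogRecord;

// Illustrative sketch of a custom log handler that writes to standard out
public class StdoutHandler extends Handler {
    @Override
    public void publish(LogRecord record) {
        if (isLoggable(record)) {
            // A formatter could be applied here; this sketch prints the raw level and message
            System.out.println(record.getLevel() + " " + record.getMessage());
        }
    }

    @Override
    public void flush() {
        System.out.flush();
    }

    @Override
    public void close() throws SecurityException {
        // No resources to release in this sketch
    }
}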
5.4.7. Selecting Log Handlers The following are the most common uses for each of the log handler types available for Red Hat JBoss Data Grid: The Console log handler is preferred when JBoss Data Grid is administered using the command line. In such a case, errors and log messages appear on the console window and are not saved unless separately configured to do so. The File log handler is used to direct log entries into a specified file. This simplicity is useful if the requirement is to store all log entries according to the time in one place. The Periodic log handler is similar to the File handler but creates files according to the specified period. As an example, this handler can be used to accumulate log messages on a weekly, daily, hourly or other basis depending on the requirements of the environment. The Size log handler also writes log messages to a specified file, but only while the log file size is within a specified limit. Once the file size reaches the specified limit, log files are written to a new log file. This handler is best suited to an environment where the log file size must be consistent. The Async log handler is a wrapper that forces other log handlers to operate asynchronously. This is best suited to an environment where high latency is a problem or when writing to a network file system. The Custom log handler creates new, customized types of log handlers. This is an advanced log handler. 5.4.8. About Log Formatters A log formatter is the configuration property of a log handler. The log formatter defines the appearance of log messages that originate from the relevant log handler. The log formatter is a string that uses the same syntax as the java.util.Formatter class. See http://docs.oracle.com/javase/6/docs/api/java/util/Formatter.html for more information.
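As an illustrative example only, a formatter is typically attached to a handler in the server's logging configuration with a pattern string similar to the following; the element names and pattern assume the standard logging subsystem configuration, so check your server configuration for the exact defaults.

<!-- Illustrative pattern-formatter attached to a handler -->
<formatter>
    <!-- %d = date, %-5p = level padded to five characters, %c = category,
         %t = thread, %s = message, %n = newline -->
    <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%n"/>
</formatter>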
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-logging_attributes
Chapter 15. Messaging Parameters
Chapter 15. Messaging Parameters Parameter Description RpcPassword The password for messaging backend. RpcPort The network port for messaging backend. The default value is 5672 . RpcUseSSL Messaging client subscriber parameter to specify an SSL connection to the messaging host. The default value is False . RpcUserName The username for messaging backend. The default value is guest .
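These parameters are typically set in a custom environment file that you pass to the overcloud deployment command; the following sketch uses example values, and the file name is arbitrary.

# messaging-settings.yaml (example environment file)
parameter_defaults:
  RpcUserName: overcloud-messaging   # example user name
  RpcPassword: <messaging_password>  # placeholder; supply a generated secret
  RpcPort: 5672
  RpcUseSSL: true                    # enable SSL for the messaging client subscriber

You can then include the file in a deployment with, for example, openstack overcloud deploy --templates -e messaging-settings.yaml.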
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/overcloud_parameters/messaging-parameters
Chapter 7. Observing the network traffic
Chapter 7. Observing the network traffic As an administrator, you can observe the network traffic in the OpenShift Container Platform console for detailed troubleshooting and analysis. This feature helps you get insights from different graphical representations of traffic flow. There are several available views to observe the network traffic. 7.1. Observing the network traffic from the Overview view The Overview view displays the overall aggregated metrics of the network traffic flow on the cluster. As an administrator, you can monitor the statistics with the available display options. 7.1.1. Working with the Overview view As an administrator, you can navigate to the Overview view to see the graphical representation of the flow rate statistics. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Overview tab. You can configure the scope of each flow rate data by clicking the menu icon. 7.1.2. Configuring advanced options for the Overview view You can customize the graphical view by using advanced options. To access the advanced options, click Show advanced options . You can configure the details in the graph by using the Display options drop-down menu. The options available are as follows: Scope : Select to view the components that network traffic flows between. You can set the scope to Node , Namespace , Owner , Zones , Cluster or Resource . Owner is an aggregation of resources. Resource can be a pod, service, node, in case of host-network traffic, or an unknown IP address. The default value is Namespace . Truncate labels : Select the required width of the label from the drop-down list. The default value is M . 7.1.2.1. Managing panels and display You can select the required panels to be displayed, reorder them, and focus on a specific panel. To add or remove panels, click Manage panels . The following panels are shown by default: Top X average bytes rates Top X bytes rates stacked with total Other panels can be added in Manage panels : Top X average packets rates Top X packets rates stacked with total Query options allows you to choose whether to show the Top 5 , Top 10 , or Top 15 rates. 7.1.3. Packet drop tracking You can configure graphical representation of network flow records with packet loss in the Overview view. By employing eBPF tracepoint hooks, you can gain valuable insights into packet drops for TCP, UDP, SCTP, ICMPv4, and ICMPv6 protocols, which can result in the following actions: Identification: Pinpoint the exact locations and network paths where packet drops are occurring. Determine whether specific devices, interfaces, or routes are more prone to drops. Root cause analysis: Examine the data collected by the eBPF program to understand the causes of packet drops. For example, are they a result of congestion, buffer issues, or specific network events? Performance optimization: With a clearer picture of packet drops, you can take steps to optimize network performance, such as adjust buffer sizes, reconfigure routing paths, or implement Quality of Service (QoS) measures. When packet drop tracking is enabled, you can see the following panels in the Overview by default: Top X packet dropped state stacked with total Top X packet dropped cause stacked with total Top X average dropped packets rates Top X dropped packets rates stacked with total Other packet drop panels are available to add in Manage panels : Top X average dropped bytes rates Top X dropped bytes rates stacked with total 7.1.3.1. 
Types of packet drops Two kinds of packet drops are detected by Network Observability: host drops and OVS drops. Host drops are prefixed with SKB_DROP and OVS drops are prefixed with OVS_DROP . Dropped flows are shown in the side panel of the Traffic flows table along with a link to a description of each drop type. Examples of host drop reasons are as follows: SKB_DROP_REASON_NO_SOCKET : the packet dropped due to a missing socket. SKB_DROP_REASON_TCP_CSUM : the packet dropped due to a TCP checksum error. Examples of OVS drops reasons are as follows: OVS_DROP_LAST_ACTION : OVS packets dropped due to an implicit drop action, for example due to a configured network policy. OVS_DROP_IP_TTL : OVS packets dropped due to an expired IP TTL. See the Additional resources of this section for more information about enabling and working with packet drop tracking. Additional resources Working with packet drops Network Observability metrics 7.1.4. DNS tracking You can configure graphical representation of Domain Name System (DNS) tracking of network flows in the Overview view. Using DNS tracking with extended Berkeley Packet Filter (eBPF) tracepoint hooks can serve various purposes: Network Monitoring: Gain insights into DNS queries and responses, helping network administrators identify unusual patterns, potential bottlenecks, or performance issues. Security Analysis: Detect suspicious DNS activities, such as domain name generation algorithms (DGA) used by malware, or identify unauthorized DNS resolutions that might indicate a security breach. Troubleshooting: Debug DNS-related issues by tracing DNS resolution steps, tracking latency, and identifying misconfigurations. By default, when DNS tracking is enabled, you can see the following non-empty metrics represented in a donut or line chart in the Overview : Top X DNS Response Code Top X average DNS latencies with overall Top X 90th percentile DNS latencies Other DNS tracking panels can be added in Manage panels : Bottom X minimum DNS latencies Top X maximum DNS latencies Top X 99th percentile DNS latencies This feature is supported for IPv4 and IPv6 UDP and TCP protocols. See the Additional resources in this section for more information about enabling and working with this view. Additional resources Working with DNS tracking Network Observability metrics 7.1.5. Round-Trip Time You can use TCP smoothed Round-Trip Time (sRTT) to analyze network flow latencies. You can use RTT captured from the fentry/tcp_rcv_established eBPF hookpoint to read sRTT from the TCP socket to help with the following: Network Monitoring: Gain insights into TCP latencies, helping network administrators identify unusual patterns, potential bottlenecks, or performance issues. Troubleshooting: Debug TCP-related issues by tracking latency and identifying misconfigurations. By default, when RTT is enabled, you can see the following TCP RTT metrics represented in the Overview : Top X 90th percentile TCP Round Trip Time with overall Top X average TCP Round Trip Time with overall Bottom X minimum TCP Round Trip Time with overall Other RTT panels can be added in Manage panels : Top X maximum TCP Round Trip Time with overall Top X 99th percentile TCP Round Trip Time with overall See the Additional resources in this section for more information about enabling and working with this view. Additional resources Working with RTT tracing 7.1.6. eBPF flow rule filter You can use rule-based filtering to control the volume of packets cached in the eBPF flow table. 
For example, a filter can specify that only packets coming from port 100 should be recorded. Then only the packets that match the filter are cached and the rest are not cached. 7.1.6.1. Ingress and egress traffic filtering CIDR notation efficiently represents IP address ranges by combining the base IP address with a prefix length. For both ingress and egress traffic, the source IP address is first used to match filter rules configured with CIDR notation. If there is a match, then the filtering proceeds. If there is no match, then the destination IP is used to match filter rules configured with CIDR notation. After matching either the source IP or the destination IP CIDR, you can pinpoint specific endpoints using the peerIP to differentiate the destination IP address of the packet. Based on the provisioned action, the flow data is either cached in the eBPF flow table or not cached. 7.1.6.2. Dashboard and metrics integrations When this option is enabled, the Netobserv/Health dashboard for eBPF agent statistics now has the Filtered flows rate view. Additionally, in Observe Metrics you can query netobserv_agent_filtered_flows_total to observe metrics with the reason in FlowFilterAcceptCounter , FlowFilterNoMatchCounter , or FlowFilterRejectCounter . 7.1.6.3. Flow filter configuration parameters The flow filter rules consist of required and optional parameters. Table 7.1. Required configuration parameters Parameter Description enable Set enable to true to enable the eBPF flow filtering feature. cidr Provides the IP address and CIDR mask for the flow filter rule. Supports both IPv4 and IPv6 address formats. If you want to match against any IP, you can use 0.0.0.0/0 for IPv4 or ::/0 for IPv6. action Describes the action that is taken for the flow filter rule. The possible values are Accept or Reject . For the Accept action matching rule, the flow data is cached in the eBPF table and updated with the global metric, FlowFilterAcceptCounter . For the Reject action matching rule, the flow data is dropped and not cached in the eBPF table. The flow data is updated with the global metric, FlowFilterRejectCounter . If the rule is not matched, the flow is cached in the eBPF table and updated with the global metric, FlowFilterNoMatchCounter . Table 7.2. Optional configuration parameters Parameter Description direction Defines the direction of the flow filter rule. Possible values are Ingress or Egress . protocol Defines the protocol of the flow filter rule. Possible values are TCP , UDP , SCTP , ICMP , and ICMPv6 . tcpFlags Defines the TCP flags to filter flows. Possible values are SYN , SYN-ACK , ACK , FIN , RST , PSH , URG , ECE , CWR , FIN-ACK , and RST-ACK . ports Defines the ports to use for filtering flows. It can be used for either source or destination ports. To filter a single port, set a single port as an integer value. For example, ports: 80 . To filter a range of ports, use a "start-end" range in string format. For example, ports: "80-100" . sourcePorts Defines the source ports to use for filtering flows. To filter a single port, set a single port as an integer value, for example sourcePorts: 80 . To filter a range of ports, use a "start-end" range in string format, for example sourcePorts: "80-100" . destPorts Defines the destination ports to use for filtering flows. To filter a single port, set a single port as an integer value, for example destPorts: 80 . To filter a range of ports, use a "start-end" range in string format, for example destPorts: "80-100" .
icmpType Defines the ICMP type to use for filtering flows. icmpCode Defines the ICMP code to use for filtering flows. peerIP Defines the IP address to use for filtering flows, for example: 10.10.10.10 . Additional resources Filtering eBPF flow data with rules Network Observability metrics Health dashboards 7.2. Observing the network traffic from the Traffic flows view The Traffic flows view displays the data of the network flows and the amount of traffic in a table. As an administrator, you can monitor the amount of traffic across the application by using the traffic flow table. 7.2.1. Working with the Traffic flows view As an administrator, you can navigate to Traffic flows table to see network flow information. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Traffic flows tab. You can click on each row to get the corresponding flow information. 7.2.2. Configuring advanced options for the Traffic flows view You can customize and export the view by using Show advanced options . You can set the row size by using the Display options drop-down menu. The default value is Normal . 7.2.2.1. Managing columns You can select the required columns to be displayed, and reorder them. To manage columns, click Manage columns . 7.2.2.2. Exporting the traffic flow data You can export data from the Traffic flows view. Procedure Click Export data . In the pop-up window, you can select the Export all data checkbox to export all the data, and clear the checkbox to select the required fields to be exported. Click Export . 7.2.3. Working with conversation tracking As an administrator, you can group network flows that are part of the same conversation. A conversation is defined as a grouping of peers that are identified by their IP addresses, ports, and protocols, resulting in an unique Conversation Id . You can query conversation events in the web console. These events are represented in the web console as follows: Conversation start : This event happens when a connection is starting or TCP flag intercepted Conversation tick : This event happens at each specified interval defined in the FlowCollector spec.processor.conversationHeartbeatInterval parameter while the connection is active. Conversation end : This event happens when the FlowCollector spec.processor.conversationEndTimeout parameter is reached or the TCP flag is intercepted. Flow : This is the network traffic flow that occurs within the specified interval. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. Configure the FlowCollector custom resource so that spec.processor.logTypes , conversationEndTimeout , and conversationHeartbeatInterval parameters are set according to your observation needs. A sample configuration is as follows: Configure FlowCollector for conversation tracking apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: logTypes: Flows 1 advanced: conversationEndTimeout: 10s 2 conversationHeartbeatInterval: 30s 3 1 When logTypes is set to Flows , only the Flow event is exported. If you set the value to All , both conversation and flow events are exported and visible in the Network Traffic page. To focus only on conversation events, you can specify Conversations which exports the Conversation start , Conversation tick and Conversation end events; or EndedConversations exports only the Conversation end events. 
Storage requirements are highest for All and lowest for EndedConversations . 2 The Conversation end event represents the point when the conversationEndTimeout is reached or the TCP flag is intercepted. 3 The Conversation tick event represents each specified interval defined in the FlowCollector conversationHeartbeatInterval parameter while the network connection is active. Note If you update the logType option, the flows from the selection do not clear from the console plugin. For example, if you initially set logType to Conversations for a span of time until 10 AM and then move to EndedConversations , the console plugin shows all conversation events before 10 AM and only ended conversations after 10 AM. Refresh the Network Traffic page on the Traffic flows tab. Notice there are two new columns, Event/Type and Conversation Id . All the Event/Type fields are Flow when Flow is the selected query option. Select Query Options and choose the Log Type , Conversation . Now the Event/Type shows all of the desired conversation events. you can filter on a specific conversation ID or switch between the Conversation and Flow log type options from the side panel. 7.2.4. Working with packet drops Packet loss occurs when one or more packets of network flow data fail to reach their destination. You can track these drops by editing the FlowCollector to the specifications in the following YAML example. Important CPU and memory usage increases when this feature is enabled. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster , and then select the YAML tab. Configure the FlowCollector custom resource for packet drops, for example: Example FlowCollector configuration apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketDrop 1 privileged: true 2 1 You can start reporting the packet drops of each network flow by listing the PacketDrop parameter in the spec.agent.ebpf.features specification list. 2 The spec.agent.ebpf.privileged specification value must be true for packet drop tracking. Verification When you refresh the Network Traffic page, the Overview , Traffic Flow , and Topology views display new information about packet drops: Select new choices in Manage panels to choose which graphical visualizations of packet drops to display in the Overview . Select new choices in Manage columns to choose which packet drop information to display in the Traffic flows table. In the Traffic Flows view, you can also expand the side panel to view more information about packet drops. Host drops are prefixed with SKB_DROP and OVS drops are prefixed with OVS_DROP . In the Topology view, red lines are displayed where drops are present. 7.2.5. Working with DNS tracking Using DNS tracking, you can monitor your network, conduct security analysis, and troubleshoot DNS issues. You can track DNS by editing the FlowCollector to the specifications in the following YAML example. Important CPU and memory usage increases are observed in the eBPF agent when this feature is enabled. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for Network Observability , select Flow Collector . Select cluster then select the YAML tab. Configure the FlowCollector custom resource. 
A sample configuration is as follows: Configure FlowCollector for DNS tracking apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - DNSTracking 1 sampling: 1 2 1 You can set the spec.agent.ebpf.features parameter list to enable DNS tracking of each network flow in the web console. 2 You can set sampling to a value of 1 for more accurate metrics and to capture DNS latency . For a sampling value greater than 1, you can observe flows with DNS Response Code and DNS Id , and it is unlikely that DNS Latency can be observed. When you refresh the Network Traffic page, there are new DNS representations you can choose to view in the Overview and Traffic Flow views and new filters you can apply. Select new DNS choices in Manage panels to display graphical visualizations and DNS metrics in the Overview . Select new choices in Manage columns to add DNS columns to the Traffic Flows view. Filter on specific DNS metrics, such as DNS Id , DNS Error DNS Latency and DNS Response Code , and see more information from the side panel. The DNS Latency and DNS Response Code columns are shown by default. Note TCP handshake packets do not have DNS headers. TCP protocol flows without DNS headers are shown in the traffic flow data with DNS Latency , ID , and Response code values of "n/a". You can filter out flow data to view only flows that have DNS headers using the Common filter "DNSError" equal to "0". 7.2.6. Working with RTT tracing You can track RTT by editing the FlowCollector to the specifications in the following YAML example. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster , and then select the YAML tab. Configure the FlowCollector custom resource for RTT tracing, for example: Example FlowCollector configuration apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - FlowRTT 1 1 You can start tracing RTT network flows by listing the FlowRTT parameter in the spec.agent.ebpf.features specification list. Verification When you refresh the Network Traffic page, the Overview , Traffic Flow , and Topology views display new information about RTT: In the Overview , select new choices in Manage panels to choose which graphical visualizations of RTT to display. In the Traffic flows table, the Flow RTT column can be seen, and you can manage display in Manage columns . In the Traffic Flows view, you can also expand the side panel to view more information about RTT. Example filtering Click the Common filters Protocol . Filter the network flow data based on TCP , Ingress direction, and look for FlowRTT values greater than 10,000,000 nanoseconds (10ms). Remove the Protocol filter. Filter for Flow RTT values greater than 0 in the Common filters. In the Topology view, click the Display option dropdown. Then click RTT in the edge labels drop-down list. 7.2.6.1. Using the histogram You can click Show histogram to display a toolbar view for visualizing the history of flows as a bar chart. The histogram shows the number of logs over time. You can select a part of the histogram to filter the network flow data in the table that follows the toolbar. 7.2.7. Working with availability zones You can configure the FlowCollector to collect information about the cluster availability zones. 
This allows you to enrich network flow data with the topology.kubernetes.io/zone label value applied to the nodes. Procedure In the web console, go to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. Configure the FlowCollector custom resource so that the spec.processor.addZone parameter is set to true . A sample configuration is as follows: Configure FlowCollector for availability zones collection apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: # ... processor: addZone: true # ... Verification When you refresh the Network Traffic page, the Overview , Traffic Flow , and Topology views display new information about availability zones: In the Overview tab, you can see Zones as an available Scope . In Network Traffic Traffic flows , Zones are viewable under the SrcK8S_Zone and DstK8S_Zone fields. In the Topology view, you can set Zones as Scope or Group . 7.2.8. Filtering eBPF flow data using a global rule You can configure the FlowCollector to filter eBPF flows using a global rule to control the flow of packets cached in the eBPF flow table. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for Network Observability , select Flow Collector . Select cluster , then select the YAML tab. Configure the FlowCollector custom resource, similar to the following sample configurations: Example 7.1. Filter Kubernetes service traffic to a specific Pod IP endpoint apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 172.210.150.1/24 2 protocol: SCTP direction: Ingress destPortRange: 80-100 peerIP: 10.10.10.10 enable: true 3 1 The required action parameter describes the action that is taken for the flow filter rule. Possible values are Accept or Reject . 2 The required cidr parameter provides the IP address and CIDR mask for the flow filter rule and supports IPv4 and IPv6 address formats. If you want to match against any IP address, you can use 0.0.0.0/0 for IPv4 or ::/0 for IPv6. 3 You must set spec.agent.ebpf.flowFilter.enable to true to enable this feature. Example 7.2. See flows to any addresses outside the cluster apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 0.0.0.0/0 2 protocol: TCP direction: Egress sourcePort: 100 peerIP: 192.168.127.12 3 enable: true 4 1 You can Accept flows based on the criteria in the flowFilter specification. 2 The cidr value of 0.0.0.0/0 matches against any IP address. 3 See flows after peerIP is configured with 192.168.127.12 . 4 You must set spec.agent.ebpf.flowFilter.enable to true to enable the feature. 7.2.9. Endpoint translation (xlat) You can gain visibility into the endpoints serving traffic in a consolidated view using Network Observability and extended Berkeley Packet Filter (eBPF). Typically, when traffic flows through a service, egressIP, or load balancer, the traffic flow information is abstracted as it is routed to one of the available pods. If you try to get information about the traffic, you can only view service related info, such as service IP and port, and not information about the specific pod that is serving the request. 
Often the information for both the service traffic and the virtual service endpoint is captured as two separate flows, which complicates troubleshooting. To solve this, endpoint xlat can help in the following ways: Capture the network flows at the kernel level, which has a minimal impact on performance. Enrich the network flows with translated endpoint information, showing not only the service but also the specific backend pod, so you can see which pod served a request. As network packets are processed, the eBPF hook enriches flow logs with metadata about the translated endpoint that includes the following pieces of information that you can view in the Network Traffic page in a single row: Source Pod IP Source Port Destination Pod IP Destination Port Conntrack Zone ID 7.2.10. Working with endpoint translation (xlat) You can use Network Observability and eBPF to enrich network flows from a Kubernetes service with translated endpoint information, gaining insight into the endpoints serving traffic. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster , and then select the YAML tab. Configure the FlowCollector custom resource for PacketTranslation , for example: Example FlowCollector configuration apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketTranslation 1 1 You can start enriching network flows with translated packet information by listing the PacketTranslation parameter in the spec.agent.ebpf.features specification list. Example filtering When you refresh the Network Traffic page you can filter for information about translated packets: Filter the network flow data based on Destination kind: Service . You can see the xlat column, which distinguishes where translated information is displayed, and the following default columns: Xlat Zone ID Xlat Src Kubernetes Object Xlat Dst Kubernetes Object You can manage the display of additional xlat columns in Manage columns . 7.3. Observing the network traffic from the Topology view The Topology view provides a graphical representation of the network flows and the amount of traffic. As an administrator, you can monitor the traffic data across the application by using the Topology view. 7.3.1. Working with the Topology view As an administrator, you can navigate to the Topology view to see the details and metrics of the component. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Topology tab. You can click each component in the Topology to view the details and metrics of the component. 7.3.2. Configuring the advanced options for the Topology view You can customize and export the view by using Show advanced options . The advanced options view has the following features: Find in view : To search the required components in the view. Display options : To configure the following options: Edge labels : To show the specified measurements as edge labels. The default is to show the Average rate in Bytes . Scope : To select the scope of components between which the network traffic flows. The default value is Namespace . Groups : To enhance the understanding of ownership by grouping the components. The default value is None . Layout : To select the layout of the graphical representation. The default value is ColaNoForce . Show : To select the details that need to be displayed. 
All the options are checked by default. The options available are: Edges , Edges label , and Badges . Truncate labels : To select the required width of the label from the drop-down list. The default value is M . Collapse groups : To expand or collapse the groups. The groups are expanded by default. This option is disabled if Groups has the value of None . 7.3.2.1. Exporting the topology view To export the view, click Export topology view . The view is downloaded in PNG format. 7.4. Filtering the network traffic By default, the Network Traffic page displays the traffic flow data in the cluster based on the default filters configured in the FlowCollector instance. You can use the filter options to observe the required data by changing the preset filter. Query Options You can use Query Options to optimize the search results, as listed below: Log Type : The available options Conversation and Flows provide the ability to query flows by log type, such as flow log, new conversation, completed conversation, and a heartbeat, which is a periodic record with updates for long conversations. A conversation is an aggregation of flows between the same peers. Match filters : You can determine the relation between different filter parameters selected in the advanced filter. The available options are Match all and Match any . Match all provides results that match all the values, and Match any provides results that match any of the values entered. The default value is Match all . Datasource : You can choose the datasource to use for queries: Loki , Prometheus , or Auto . Notable performance improvements can be realized when using Prometheus as a datasource rather than Loki, but Prometheus supports a limited set of filters and aggregations. The default datasource is Auto , which uses Prometheus on supported queries or uses Loki if the query does not support Prometheus. Drops filter : You can view different levels of dropped packets with the following query options: Fully dropped shows flow records with fully dropped packets. Containing drops shows flow records that contain drops but can be sent. Without drops shows records that contain sent packets. All shows all the aforementioned records. Limit : The data limit for internal backend queries. Depending upon the matching and the filter settings, the number of traffic flow records displayed stays within the specified limit. Quick filters The default values in the Quick filters drop-down menu are defined in the FlowCollector configuration; a sample definition is sketched at the end of this section. You can modify the options from the console. Advanced filters You can set the advanced filters, Common , Source , or Destination , by selecting the parameter to be filtered from the drop-down list. The flow data is filtered based on the selection. To enable or disable the applied filter, you can click on the applied filter listed below the filter options. You can toggle between One way and Back and forth filtering. The One way filter shows only Source and Destination traffic according to your filter selections. You can use Swap to change the directional view of the Source and Destination traffic. The Back and forth filter includes return traffic with the Source and Destination filters. The directional flow of network traffic is shown in the Direction column in the Traffic flows table as Ingress or Egress for inter-node traffic and Inner for traffic inside a single node. You can click Reset defaults to remove the existing filters, and apply the filter defined in the FlowCollector configuration.
Note To understand the rules of specifying the text value, click Learn More . Alternatively, you can access the traffic flow data in the Network Traffic tab of the Namespaces , Services , Routes , Nodes , and Workloads pages which provide the filtered data of the corresponding aggregations. Additional resources Configuring Quick Filters Flow Collector sample resource
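As a reference for the Quick filters described in this section, the following sketch shows how quick filter entries can be defined in the FlowCollector resource. The spec.consolePlugin.quickFilters layout and the filter keys used here are assumptions modeled on a typical default configuration rather than an authoritative schema; check the Flow Collector sample resource linked above for the exact fields supported by your version.

apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  consolePlugin:
    quickFilters:            # assumed location of the quick filter definitions
    - name: Applications     # label shown in the Quick filters drop-down menu
      filter:                # each key/value pair becomes a preset filter
        src_namespace!: 'openshift-,netobserv'
        dst_namespace!: 'openshift-,netobserv'
      default: true          # applied until you click Reset defaults
    - name: Pods network
      filter:
        src_kind: 'Pod'
        dst_kind: 'Pod'
      default: false

In this sketch, the Applications entry excludes infrastructure namespaces by default, while the Pods network entry is available in the drop-down menu but is not applied until you select it.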
[ "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: logTypes: Flows 1 advanced: conversationEndTimeout: 10s 2 conversationHeartbeatInterval: 30s 3", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketDrop 1 privileged: true 2", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - DNSTracking 1 sampling: 1 2", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - FlowRTT 1", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: addZone: true", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 172.210.150.1/24 2 protocol: SCTP direction: Ingress destPortRange: 80-100 peerIP: 10.10.10.10 enable: true 3", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 0.0.0.0/0 2 protocol: TCP direction: Egress sourcePort: 100 peerIP: 192.168.127.12 3 enable: true 4", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketTranslation 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_observability/nw-observe-network-traffic
Preface
Preface Red Hat Quay is an enterprise-quality container registry. Use Red Hat Quay to build and store container images, then make them available to deploy across your enterprise. The Red Hat Quay Operator provides a simple method to deploy and manage Red Hat Quay on an OpenShift cluster. With the release of Red Hat Quay 3.4.0, the Red Hat Quay Operator was re-written to offer an enhanced experience and to add more support for Day 2 operations. As a result, the Red Hat Quay Operator is now simpler to use and is more opinionated. The key differences from versions prior to Red Hat Quay 3.4.0 include the following: The QuayEcosystem custom resource has been replaced with the QuayRegistry custom resource. The default installation options produce a fully supported Red Hat Quay environment, with all managed dependencies (database, caches, object storage, and so on) supported for production use. Note Some components might not be highly available. A new validation library for Red Hat Quay's configuration, which is shared by the Red Hat Quay application and config tool for consistency. Object storage can now be managed by the Red Hat Quay Operator using the ObjectBucketClaim Kubernetes API. Note Red Hat OpenShift Data Foundation can be used to provide a supported implementation of this API on OpenShift Container Platform. Customization of the container images used by deployed pods for testing and development scenarios.
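For illustration only, a minimal QuayRegistry custom resource might look like the following sketch. The component names and the managed flag shown here are assumptions based on a typical deployment; consult the QuayRegistry API documentation for the exact list of components supported by your Operator version.

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry        # hypothetical registry name
  namespace: quay-enterprise    # hypothetical namespace
spec:
  components:                   # assumed layout: each managed dependency can be toggled
    - kind: objectstorage       # backed by the ObjectBucketClaim API when managed
      managed: true
    - kind: postgres            # managed database
      managed: true
    - kind: redis               # managed cache
      managed: true

Leaving a component managed lets the Operator deploy and maintain it; setting managed to false indicates that you provide that dependency yourself.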
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/pr01
7.2. Displaying Information on Failed Devices
7.2. Displaying Information on Failed Devices You can use the -P argument of the lvs or vgs command to display information about a failed volume that would otherwise not appear in the output. This argument permits some operations even though the metadata is not completely consistent internally. For example, if one of the devices that made up the volume group vg failed, the vgs command might show the following output. If you specify the -P argument of the vgs command, the volume group is still unusable but you can see more information about the failed device. In this example, the failed device caused both a linear and a striped logical volume in the volume group to fail. The lvs command without the -P argument shows the following output. Using the -P argument shows the logical volumes that have failed. The following examples show the output of the pvs and lvs commands with the -P argument specified when a leg of a mirrored logical volume has failed.
[ "vgs -o +devices Volume group \"vg\" not found", "vgs -P -o +devices Partial mode. Incomplete volume groups will be activated read-only. VG #PV #LV #SN Attr VSize VFree Devices vg 9 2 0 rz-pn- 2.11T 2.07T unknown device(0) vg 9 2 0 rz-pn- 2.11T 2.07T unknown device(5120),/dev/sda1(0)", "lvs -a -o +devices Volume group \"vg\" not found", "lvs -P -a -o +devices Partial mode. Incomplete volume groups will be activated read-only. LV VG Attr LSize Origin Snap% Move Log Copy% Devices linear vg -wi-a- 20.00G unknown device(0) stripe vg -wi-a- 20.00G unknown device(5120),/dev/sda1(0)", "vgs -a -o +devices -P Partial mode. Incomplete volume groups will be activated read-only. VG #PV #LV #SN Attr VSize VFree Devices corey 4 4 0 rz-pnc 1.58T 1.34T my_mirror_mimage_0(0),my_mirror_mimage_1(0) corey 4 4 0 rz-pnc 1.58T 1.34T /dev/sdd1(0) corey 4 4 0 rz-pnc 1.58T 1.34T unknown device(0) corey 4 4 0 rz-pnc 1.58T 1.34T /dev/sdb1(0)", "lvs -a -o +devices -P Partial mode. Incomplete volume groups will be activated read-only. LV VG Attr LSize Origin Snap% Move Log Copy% Devices my_mirror corey mwi-a- 120.00G my_mirror_mlog 1.95 my_mirror_mimage_0(0),my_mirror_mimage_1(0) [my_mirror_mimage_0] corey iwi-ao 120.00G unknown device(0) [my_mirror_mimage_1] corey iwi-ao 120.00G /dev/sdb1(0) [my_mirror_mlog] corey lwi-ao 4.00M /dev/sdd1(0)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/partial_output
probe::socket.sendmsg
probe::socket.sendmsg Name probe::socket.sendmsg - Message is currently being sent on a socket. Synopsis socket.sendmsg Values family Protocol family value name Name of this probe protocol Protocol value state Socket state value flags Socket flags value type Socket type value size Message size in bytes Context The message sender Description Fires at the beginning of sending a message on a socket via the sock_sendmsg function
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-socket-sendmsg
Chapter 28. Installing into a Disk Image
Chapter 28. Installing into a Disk Image This chapter describes the process of creating custom, bootable images of several different types, and other related topics. The image creation and installation process can be either performed manually in a procedure similar to a normal hard drive installation, or it can be automated using a Kickstart file and the livemedia-creator tool. Note Creating custom images using livemedia-creator is currently supported only on AMD64 and Intel 64 (x86_64) and IBM POWER (big endian) systems. Additionally, Red Hat only supports creating custom images of Red Hat Enterprise Linux 7. If you choose the manual approach, you will be able to perform the installation interactively, using the graphical installation program. The process is similar to installing using Red Hat Enterprise Linux bootable media and the graphical installation program; however, before you begin the installation, you must create one or more empty image files manually. Automated disk image installations using livemedia-creator are somewhat similar to Kickstart installations with network boot. To use this approach, you must prepare a valid Kickstart file, which will be used by livemedia-creator to perform the installation. The disk image file will be created automatically. Both approaches to disk image installations require a separate installation source. In most cases, the best approach is to use an ISO image of the binary Red Hat Enterprise Linux DVD. See Chapter 2, Downloading Red Hat Enterprise Linux for information about obtaining installation ISO images. Important It is not currently possible to use an installation ISO image of Red Hat Enterprise Linux without any additional preparation. The installation source for a disk image installation must be prepared the same way it would be prepared when performing a normal installation. See Section 3.3, "Preparing Installation Sources" for information about preparing installation sources. 28.1. Manual Disk Image Installation A manual installation into a disk image is performed by executing the Anaconda installation program on an existing system and specifying one or more disk image files as installation targets. Additional options can also be used to configure Anaconda further. A list of available options can be obtained by using the anaconda -h command. Warning Image installation using Anaconda is potentially dangerous, because it uses the installation program on an already installed system. While no bugs are known at this moment which could cause any problems, it is possible that this process could render the entire system unusable. Installation into disk images should always be performed on systems or virtual machines specifically reserved for this purpose, and not on systems containing any valuable data. This section provides information about creating empty disk images and using the Anaconda installation program to install Red Hat Enterprise Linux into these images. 28.1.1. Preparing a Disk Image The first step in manual disk image installation is creating one or more image files, which will later be used as installation targets similar to physical storage devices. On Red Hat Enterprise Linux, a disk image file can be created using the following command: Replace size with a value representing the size of the image (such as 10G or 5000M ), and name with the file name of the image to be created. 
For example, to create a disk image file named myimage.raw with the size of 30GB, use the following command: Note The fallocate command allows you to specify the size of the file to be created in different ways, depending on the suffix used. For details about specifying the size, see the fallocate(1) man page. The size of the disk image file you create will limit the maximum capacity of file systems created during the installation. The image must always have a minimum size of 3GB, but in most cases, the space requirements will be larger. The exact size you will need for your installation will vary depending on the software you want to install, the amount of swap space, and the amount of space you will need to be available after the installation. More details about partitioning are available in: Section 8.14.4.4, "Recommended Partitioning Scheme" for 64-bit AMD, Intel, and ARM systems Section 13.15.4.4, "Recommended Partitioning Scheme" for IBM Power Systems servers After you create one or more empty disk image files, continue with Section 28.1.2, "Installing Red Hat Enterprise Linux into a Disk Image" . 28.1.2. Installing Red Hat Enterprise Linux into a Disk Image Important Set Security Enhanced Linux ( SELinux ) to permissive (or disabled) mode before creating custom images with Anaconda . See Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide for information on setting SELinux modes. To start the installation into a disk image file, execute the following command as root : Replace /path/to/image/file with the full path to the image file you created earlier. After executing this command, Anaconda will start on your system. The installation interface will be the same as if you performed the installation normally (booting the system from Red Hat Enterprise Linux media), but the graphical installation will start directly, skipping the boot menu. This means that boot options must be specified as additional arguments to the anaconda command. You can view the full list of supported commands by executing anaconda -h on a command line. One of the most important options is --repo= , which allows you to specify an installation source. This option uses the same syntax as the inst.repo= boot option. See Section 23.1, "Configuring the Installation System at the Boot Menu" for more information. When you use the --image= option, only the disk image file specified will be available as the installation target. No other devices will be visible in the Installation Destination dialog. If you want to use multiple disk images, you must specify the --image= option separately for each image file. For example: The above command will start Anaconda , and in the Installation Destination screen, both image files specified will be available as installation targets. Optionally, you can also assign custom names to the disk image files used in the installation. To assign a name to a disk image file, append : name to the end of the disk image file name. For example, to use a disk image file located in /home/testuser/diskinstall/image1.raw and assign the name myimage to it, execute the following command:
[ "fallocate -l size name", "fallocate -l 30G myimage.raw", "anaconda --image= /path/to/image/file", "anaconda --image=/home/testuser/diskinstall/image1.raw --image=/home/testuser/diskinstall/image2.raw", "anaconda --image=/home/testuser/diskinstall/image1.raw:myimage" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/chap-disk-image-installation
function::user_int32
function::user_int32 Name function::user_int32 - Retrieves a 32-bit integer value stored in user space Synopsis Arguments addr the user space address to retrieve the 32-bit integer from Description Returns the 32-bit integer value from a given user space address. Returns zero when user space data is not accessible.
[ "user_int32:long(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-int32
Chapter 10. File-based configuration
Chapter 10. File-based configuration AMQ JavaScript can read the configuration options used to establish connections from a local file named connect.json . This enables you to configure connections in your application at the time of deployment. The library attempts to read the file when the application calls the container connect method without supplying any connection options. 10.1. File locations If set, AMQ JavaScript uses the value of the MESSAGING_CONNECT_FILE environment variable to locate the configuration file. If MESSAGING_CONNECT_FILE is not set, AMQ JavaScript searches for a file named connect.json at the following locations and in the order shown. It stops at the first match it encounters. On Linux: USDPWD/connect.json , where USDPWD is the current working directory of the client process USDHOME/.config/messaging/connect.json , where USDHOME is the current user home directory /etc/messaging/connect.json On Windows: %cd%/connect.json , where %cd% is the current working directory of the client process If no connect.json file is found, the library uses default values for all options. 10.2. The file format The connect.json file contains JSON data, with additional support for JavaScript comments. All of the configuration attributes are optional or have default values, so a simple example need only provide a few details: Example: A simple connect.json file { "host": "example.com", "user": "alice", "password": "secret" } SASL and SSL/TLS options are nested under "sasl" and "tls" namespaces: Example: A connect.json file with SASL and SSL/TLS options { "host": "example.com", "user": "ortega", "password": "secret", "sasl": { "mechanisms": ["SCRAM-SHA-1", "SCRAM-SHA-256"] }, "tls": { "cert": "/home/ortega/cert.pem", "key": "/home/ortega/key.pem" } } 10.3. Configuration options The option keys containing a dot (.) represent attributes nested inside a namespace. Table 10.1. Configuration options in connect.json Key Value type Default value Description scheme string "amqps" "amqp" for cleartext or "amqps" for SSL/TLS host string "localhost" The hostname or IP address of the remote host port string or number "amqps" A port number or port literal user string None The user name for authentication password string None The password for authentication sasl.mechanisms list or string None (system defaults) A JSON list of enabled SASL mechanisms. A bare string represents one mechanism. If none are specified, the client uses the default mechanisms provided by the system. sasl.allow_insecure boolean false Enable mechanisms that send cleartext passwords tls.cert string None The filename or database ID of the client certificate tls.key string None The filename or database ID of the private key for the client certificate tls.ca string None The filename, directory, or database ID of the CA certificate tls.verify boolean true Require a valid server certificate with a matching hostname
[ "{ \"host\": \"example.com\", \"user\": \"alice\", \"password\": \"secret\" }", "{ \"host\": \"example.com\", \"user\": \"ortega\", \"password\": \"secret\", \"sasl\": { \"mechanisms\": [\"SCRAM-SHA-1\", \"SCRAM-SHA-256\"] }, \"tls\": { \"cert\": \"/home/ortega/cert.pem\", \"key\": \"/home/ortega/key.pem\" } }" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_javascript_client/file_based_configuration
20.26. Sending a Keystroke Combination to a Specified Guest Virtual machine
20.26. Sending a Keystroke Combination to a Specified Guest Virtual Machine The virsh send-key domain --codeset --holdtime keycode command allows you to send a sequence as a keycode to a specific guest virtual machine. Each keycode can either be a numeric value or a symbolic name from the corresponding codeset below. If a --holdtime is given, each keystroke will be held for the specified amount in milliseconds. The --codeset allows you to specify a code set, the default being Linux , but the following options are permitted: linux - choosing this option causes the symbolic names to match the corresponding Linux key constant macro names and the numeric values are those offered by the Linux generic input event subsystems. xt - this will send a value that is defined by the XT keyboard controller. No symbolic names are provided. atset1 - the numeric values are those that are defined by the AT keyboard controller, set1 (XT compatible set). Extended keycodes from the atset1 may differ from extended keycodes in the XT codeset. No symbolic names are provided. atset2 - The numeric values are those defined by the AT keyboard controller, set 2. No symbolic names are provided. atset3 - The numeric values are those defined by the AT keyboard controller, set 3 (PS/2 compatible). No symbolic names are provided. os_x - The numeric values are those defined by the OS-X keyboard input subsystem. The symbolic names match the corresponding OS-X key constant macro names. xt_kbd - The numeric values are those defined by the Linux KBD device. These are a variant on the original XT codeset, but often with different encoding for extended keycodes. No symbolic names are provided. win32 - The numeric values are those defined by the Win32 keyboard input subsystem. The symbolic names match the corresponding Win32 key constant macro names. usb - The numeric values are those defined by the USB HID specification for keyboard input. No symbolic names are provided. rfb - The numeric values are those defined by the RFB extension for sending raw keycodes. These are a variant on the XT codeset, but extended keycodes have the low bit of the second byte set, instead of the high bit of the first byte. No symbolic names are provided. Example 20.53. How to send a keystroke combination to a guest virtual machine The following example sends the Left Ctrl , Left Alt , and Delete in the Linux encoding to the guest1 virtual machine, and holds them for 1 second. These keys are all sent simultaneously, and may be received by the guest in a random order: # virsh send-key guest1 --codeset Linux --holdtime 1000 KEY_LEFTCTRL KEY_LEFTALT KEY_DELETE Note If multiple keycodes are specified, they are all sent simultaneously to the guest virtual machine and as such may be received in random order. If you need distinct keycodes, you must run the virsh send-key command multiple times in the order you want the sequences to be sent.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-editing_a_guest_virtual_machines_configuration_file-sending_keystoke_combinations_to_a_specified_domain
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To provide feedback, you can highlight the text in a document and add comments. This section explains how to submit feedback. Prerequisites You are logged in to the Red Hat Customer Portal. In the Red Hat Customer Portal, view the document in HTML format. Procedure To provide your feedback, perform the following steps: Click the Feedback button in the top-right corner of the document to see existing feedback. Note The feedback feature is enabled only in the HTML format. Highlight the section of the document where you want to provide feedback. Click the Add Feedback pop-up that appears near the highlighted text. A text box appears in the feedback section on the right side of the page. Enter your feedback in the text box and click Submit . A documentation issue is created. To view the issue, click the issue tracker link in the feedback view.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/release_notes_for_eclipse_vert.x_4.3/proc_providing-feedback-on-red-hat-documentation
probe::netdev.get_stats
probe::netdev.get_stats Name probe::netdev.get_stats - Called when someone asks the device statistics Synopsis netdev.get_stats Values dev_name The device that is going to provide the statistics
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-netdev-get-stats
Chapter 5. Clustering
Chapter 5. Clustering Pacemaker does not update the fail count when on-fail=ignore is used When a resource in a Pacemaker cluster failed to start, Pacemaker updated the resource's last failure time and fail count, even if the on-fail=ignore option was used. This could cause unwanted resource migrations. Now, Pacemaker does not update the fail count when on-fail=ignore is used. As a result, the failure is displayed in the cluster status output, but is properly ignored and thus does not cause resource migration. (BZ# 1200853 ) pacemaker and other Corosync clients again connect successfully Previously, the libqb library had a limited buffer size when building names for IPC sockets. If the process IDs on the system exceeded 5 digits, they were truncated and the IPC socket names could become non-unique. As a consequence, clients of the Corosync cluster manager could fail to connect and could exit, assuming the cluster services were unavailable. This could include pacemaker which could fail, leaving no cluster services running. This update increases the buffer size used for building IPC socket names to cover the maximum possible process ID number. As a result, pacemaker and other Corosync clients start consistently and continue running regardless of the process ID size. (BZ#1276345) Security features added to the luci interface to prevent clickjacking Previously, luci was not defended against clickjacking, a technique to attack a web site in which a user is tricked into performing unintended or malicious actions through purposefully injected elements on top of the genuine web page. To guard against this type of attack, luci is now served with X-Frame-Options: DENY and Content-Security-Policy: frame-ancestors 'none' headers that are intended to prevent luci pages from being contained within external, possibly malicious, web pages. Additionally, when a user configures luci to use a custom certificate and is properly anchored with a recognized CA certificate, a Strict-Transport-Security mechanism with a validity period of 7 days is enforced in newer web browsers, also by means of a dedicated HTTP header. These new static HTTP headers can be deactivated, should it be necessary to overcome incompatibilites, and a user can add custom static HTTP headers in the /etc/sysconfig/luci file, which provides examples. (BZ#1270958) glusterfs can now properly recover from failed synchronization of cached writes to backend Previously, if synchronization of cached writes to a Gluster backend failed due to a lack of space, write-behind marked the file descriptor ( fd ) as bad. This meant virtual machines could not recover and could not be restarted after synchronization to backend failed for any reason. With this update, glusterfs retries synchronization to backend on error until synchronization succeeds until a flush. Additionally, file descriptors are not marked as bad in this scenario, and only operations overlapping with regions with failed synchronizations fail until the synchronization is successful. Virtual machines can therefore be resumed normally once the underlying error condition is fixed and synchronization to backend succeeds. (BZ#1171261) Fixed an AVC denial error when setting up Gluster storage on NFS Ganesha clusters Attempting to set up Gluster storage on an NFS-Ganesha cluster previously failed due to an Access Vector Cache (AVC) denial error. The responsible SELinux policy has been adjusted to allow handling of volumes mounted by NFS-Ganesha, and the described failure no longer occurs. 
(BZ# 1241386 ) Installing glusterfs no longer affects default logrotate settings When installing the glusterfs packages on Red Hat Enterprise Linux 6, the glusterfs-logrotate and glusterfs-georep-logrotate files were previously installed with several global logrotate options. Consequently, the global options affected the default settings in the /etc/logrotate.conf file. The glusterfs RPMs have been rebuilt to prevent the default settings from being overridden. As a result, global settings in /etc/logrotate.conf continue to function as configured without being overridden by settings from glusterfs logrotate files. (BZ# 1171865 ) Fence agent for DM Multipath no longer loses SCSI keys on non-cluster reboot Previously, the fence agent for DM Multipath lost SCSI keys when the node was not rebooted using cluster methods. This resulted in an error when the cluster tried to fence the node. With this update, keys are properly regenerated after each reboot in this situation. (BZ#1254183) Fence agent for HP Integrated Lights-Out (iLo) now uses TLS1.0 automatically when connection over SSL v3 fails Previously, the fence agent for HP Integrated Lights-Out (iLO) required the tls1.0 argument in order to use TLS1.0 instead of SSL v3. With this update, TLS1.0 is used automatically when the connection over SSL v3 fails. (BZ#1256902)
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/bug_fixes_clustering
function::mem_page_size
function::mem_page_size Name function::mem_page_size - Number of bytes in a page for this architecture Synopsis Arguments None
[ "mem_page_size:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-mem-page-size
API overview
API overview OpenShift Container Platform 4.15 Overview content for the OpenShift Container Platform API Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/api_overview/index
Chapter 3. ClusterRole [rbac.authorization.k8s.io/v1]
Chapter 3. ClusterRole [rbac.authorization.k8s.io/v1] Description ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding. Type object 3.1. Specification Property Type Description aggregationRule object AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. rules array Rules holds all the PolicyRules for this ClusterRole rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. 3.1.1. .aggregationRule Description AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole Type object Property Type Description clusterRoleSelectors array (LabelSelector) ClusterRoleSelectors holds a list of selectors which will be used to find ClusterRoles and create the rules. If any of the selectors match, then the ClusterRole's permissions will be added 3.1.2. .rules Description Rules holds all the PolicyRules for this ClusterRole Type array 3.1.3. .rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups. nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. '*' represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs. 3.2. API endpoints The following API endpoints are available: /apis/rbac.authorization.k8s.io/v1/clusterroles DELETE : delete collection of ClusterRole GET : list or watch objects of kind ClusterRole POST : create a ClusterRole /apis/rbac.authorization.k8s.io/v1/watch/clusterroles GET : watch individual changes to a list of ClusterRole. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/rbac.authorization.k8s.io/v1/clusterroles/{name} DELETE : delete a ClusterRole GET : read the specified ClusterRole PATCH : partially update the specified ClusterRole PUT : replace the specified ClusterRole /apis/rbac.authorization.k8s.io/v1/watch/clusterroles/{name} GET : watch changes to an object of kind ClusterRole. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /apis/rbac.authorization.k8s.io/v1/clusterroles HTTP method DELETE Description delete collection of ClusterRole Table 3.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ClusterRole Table 3.3. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterRole Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body ClusterRole schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 202 - Accepted ClusterRole schema 401 - Unauthorized Empty 3.2.2. /apis/rbac.authorization.k8s.io/v1/watch/clusterroles HTTP method GET Description watch individual changes to a list of ClusterRole. deprecated: use the 'watch' parameter with a list operation instead. Table 3.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/rbac.authorization.k8s.io/v1/clusterroles/{name} Table 3.8. Global path parameters Parameter Type Description name string name of the ClusterRole HTTP method DELETE Description delete a ClusterRole Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed Table 3.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterRole Table 3.11. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterRole Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterRole Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.15. Body parameters Parameter Type Description body ClusterRole schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty 3.2.4. /apis/rbac.authorization.k8s.io/v1/watch/clusterroles/{name} Table 3.17. 
Global path parameters Parameter Type Description name string name of the ClusterRole HTTP method GET Description watch changes to an object of kind ClusterRole. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
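To make the specification above concrete, the following is a minimal ClusterRole manifest applied with the oc client; the role name and the permissions it grants are illustrative placeholders, not values defined by this API reference:
oc apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-pod-reader        # illustrative name
rules:
- apiGroups: [""]                 # "" selects the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
EOF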
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/rbac_apis/clusterrole-rbac-authorization-k8s-io-v1
Chapter 3. Configuring Red Hat build of OpenJDK to run with customized heap size
Chapter 3. Configuring Red Hat build of OpenJDK to run with customized heap size Red Hat build of OpenJDK 11 for Microsoft Windows can be configured to use a customized heap size. Prerequisites Installed Java Runtime Procedure Run the application by adding the maximum heap size option to your java command line. For example, to set the maximum heap size to 100 megabytes, use the -Xmx100m option. Additional resources For reference, see https://docs.oracle.com/javase/8/docs/technotes/tools/windows/java.html#BABDJJFI Revised on 2024-05-09 16:46:07 UTC
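A minimal sketch of the procedure, assuming a hypothetical application packaged as my-app.jar (the JAR name and the -Xms value are illustrative and not part of this document):
# Set an initial heap of 64 MB (-Xms) and cap the maximum heap at 100 MB (-Xmx)
java -Xms64m -Xmx100m -jar my-app.jar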
[ "java -Xmx100m <your-main-class>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/configuring_red_hat_build_of_openjdk_11_for_windows/openjdk11-windows-config-custom-heap
Chapter 5. Creating a Red Hat OpenShift Service on AWS cluster with egress lockdown
Chapter 5. Creating a Red Hat OpenShift Service on AWS cluster with egress lockdown Creating a Red Hat OpenShift Service on AWS cluster with egress lockdown provides a way to enhance your cluster's stability and security by allowing your cluster to use the image registry in the local region if the cluster cannot access the Internet. Your cluster will try to pull the images from Quay, but when Quay cannot be reached, it instead pulls the images from the image registry in the local region. Important You can only use egress lockdown on clusters that use the following AWS regions: us-west-1 us-west-2 us-east-1 us-east-2 ap-northeast-1 ap-northeast-2 ap-northeast-3 ap-south-1 ap-southeast-1 ap-southeast-2 ca-central-1 eu-central-1 eu-north-1 eu-west-1 eu-west-2 eu-west-3 sa-east-1 All public and private clusters with egress lockdown get their Red Hat container images from a registry that is located in the local region of the cluster instead of gathering these images from various endpoints and registries on the Internet. You can create a fully operational cluster that does not require public egress by configuring a virtual private cloud (VPC) and using the --properties zero_egress:true flag when creating your cluster. Important Egress lockdown is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have an AWS account with sufficient permissions to create VPCs, subnets, and other required infrastructure. You have installed the Terraform v1.4.0+ CLI. You have installed the ROSA v1.2.45+ CLI. You have installed and configured the AWS CLI with the necessary credentials. You have installed the git CLI. Important You can use egress lockdown on all supported versions of Red Hat OpenShift Service on AWS that use the hosted control plane architecture; however, Red Hat suggests using the latest available z-stream release for each OpenShift Container Platform version. While you may install and upgrade your clusters as you would a regular cluster, due to an upstream issue with how the internal image registry functions in disconnected environments, your cluster that uses egress lockdown will not be able to fully use all platform components, such as the image registry. You can restore these features by using the latest ROSA version when upgrading or installing your cluster. 5.1. Creating a Virtual Private Cloud for your egress lockdown ROSA with HCP clusters You must have a Virtual Private Cloud (VPC) to create ROSA with HCP clusters. You can use one of the following methods to create a VPC: Create a VPC by using a Terraform template Manually create the VPC resources in the AWS console Note The Terraform instructions are for testing and demonstration purposes. Your own installation requires modifications to the VPC for your specific needs and constraints. You should also ensure that you run the following Terraform script in the same region where you intend to install your cluster. 5.1.1.
Creating a Virtual Private Cloud using Terraform Terraform is a tool that allows you to create various resources using an established template. The following process uses the default options as required to create a ROSA with HCP cluster. For more information about using Terraform, see the additional resources. Prerequisites You have installed Terraform version 1.4.0 or newer on your machine. You have installed Git on your machine. Procedure Open a shell prompt and clone the Terraform VPC repository by running the following command: USD git clone https://github.com/openshift-cs/terraform-vpc-example Navigate to the created directory by running the following command: USD cd terraform-vpc-example/zero-egress Initialize the Terraform working directory by running the following command: USD terraform init A message confirming the initialization appears when this process completes. To build your VPC Terraform plan based on the existing Terraform template, run the plan command. You must include your AWS region, availability zones, CIDR blocks, and private subnets. You can choose to specify a cluster name. A rosa-zero-egress.tfplan file is added to the hypershift-tf directory after the terraform plan completes. For more detailed options, see the Terraform VPC repository's README file . USD terraform plan -out rosa-zero-egress.tfplan -var region=<aws_region> \ 1 -var 'availability_zones=["aws_region_1a","aws_region_1b","aws_region_1c"]'\ 2 -var vpc_cidr_block=10.0.0.0/16 \ 3 -var 'private_subnets=["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]' 4 1 Enter your AWS region. Important You can only use egress lockdown on clusters that use the us-west-1, us-west-2, us-east-1, us-east-2, ap-northeast-1, ap-northeast-2, ap-northeast-3, ap-south-1, ap-southeast-1, ap-southeast-2, ca-central-1, eu-central-1, eu-north-1, eu-west-1, eu-west-2, eu-west-3 , and sa-east-1 AWS regions. 2 Enter the availability zones for the VPC. For example, for a VPC that uses ap-southeast-1 , you would use the following as availability zones: ["ap-southeast-1a", "ap-southeast-1b", "ap-southeast-1c"] . 3 Enter the CIDR block for your VPC. 4 Enter each of the subnets that are created for the VPC. Apply this plan file to build your VPC by running the following command: USD terraform apply rosa-zero-egress.tfplan Additional resources See the Zero Egress Terraform VPC Example repository for a detailed list of all options available when customizing the VPC for your needs. 5.1.2. Creating a Virtual Private Cloud manually If you choose to manually create your Virtual Private Cloud (VPC) instead of using Terraform, go to the VPC page in the AWS console . Your VPC must meet the requirements shown in the following table. Table 5.1. Requirements for your VPC Requirement Details VPC name You need to have the specific VPC name and ID when creating your cluster. CIDR range Your VPC CIDR range should match your machine CIDR. Availability zone You need one availability zone for a single zone, and you need three availability zones for multi-zone. Public subnet You must have one public subnet with a NAT gateway for public clusters. Private clusters do not need a public subnet. DNS hostname and resolution You must ensure that the DNS hostname and resolution are enabled. Tagging your subnets Before you can use your VPC to create a ROSA with HCP cluster, you must tag your VPC subnets. Automated service preflight checks verify that these resources are tagged correctly before you can use these resources.
The following table shows how your resources should be tagged: Resource Key Value Public subnet kubernetes.io/role/elb 1 or no value Private subnet kubernetes.io/role/internal-elb 1 or no value Note You must tag at least one private subnet and, if applicable, one public subnet. Prerequisites You have created a VPC. You have installed the aws CLI. Procedure Tag your resources in your terminal by running the following commands: For public subnets, run: USD aws ec2 create-tags --resources <public-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/elb,Value=1 For private subnets, run: USD aws ec2 create-tags --resources <private-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/internal-elb,Value=1 Verification Verify that the tag is correctly applied by running the following command: USD aws ec2 describe-tags --filters "Name=resource-id,Values=<subnet_id>" Example output TAGS Name <subnet-id> subnet <prefix>-subnet-public1-us-east-1a TAGS kubernetes.io/role/elb <subnet-id> subnet 1 Configuring AWS security groups and PrivateLink connections After creating your VPC, create your AWS security groups and VPC endpoints. Procedure Create the AWS security group by running the following command: USD aws ec2 create-security-group \ --group-name allow-inbound-traffic \ --description "allow inbound traffic" \ --vpc-id <vpc_id> \ 1 --region <aws_region> \ 2 1 Enter your VPC's ID. 2 Enter the AWS region where the VPC was installed. Grant access to the security group's ingress by running the following command: USD aws ec2 authorize-security-group-ingress \ --group-id <group_id> \ 1 --protocol -1 \ --port 0-0 \ --cidr <vpc_cidr> \ 2 --region <aws_region> \ 3 1 --group-id uses the ID of the security group created by the preceding command. 2 Enter the CIDR of your VPC. 3 The AWS region where you installed your VPC. Create your STS VPC endpoint by running the following command: USD aws ec2 create-vpc-endpoint \ --vpc-id <vpc_id> \ 1 --service-name com.amazonaws.<aws_region>.sts \ 2 --vpc-endpoint-type Interface 1 Enter your VPC's ID. 2 Enter the AWS region where the VPC was installed. Create your ECR VPC endpoints by running the following command: USD aws ec2 create-vpc-endpoint \ --vpc-id <vpc_id> \ --service-name com.amazonaws.<aws_region>.ecr.dkr \ 1 --vpc-endpoint-type Interface 1 Enter the AWS region where the VPC is located. Create your S3 VPC endpoint by running the following command: USD aws ec2 create-vpc-endpoint \ --vpc-id <vpc_id> \ --service-name com.amazonaws.<aws_region>.s3 5.2. Creating the account-wide STS roles and policies Before using the Red Hat OpenShift Service on AWS (ROSA) CLI ( rosa ) to create Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters, create the required account-wide roles and policies, including the Operator policies. Note ROSA with HCP clusters require account and Operator roles with AWS managed policies attached. Customer managed policies are not supported. For more information regarding AWS managed policies for ROSA with HCP clusters, see AWS managed policies for ROSA account roles . Prerequisites You have completed the AWS prerequisites for ROSA with HCP. You have available AWS service quotas. You have enabled the ROSA service in the AWS Console. You have installed and configured the latest ROSA CLI ( rosa ) on your installation host. You have logged in to your Red Hat account by using the ROSA CLI.
Procedure If they do not exist in your AWS account, create the required account-wide STS roles and attach the policies by running the following command: USD rosa create account-roles --hosted-cp Ensure that your worker role has the correct AWS policy by running the following command: USD aws iam attach-role-policy \ --role-name ManagedOpenShift-HCP-ROSA-Worker-Role \ 1 --policy-arn "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly" 1 This role needs to include the prefix that was created in the preceding step. Optional: Set your prefix as an environment variable by running the following command: USD export ACCOUNT_ROLES_PREFIX=<account_role_prefix> View the value of the variable by running the following command: USD echo USDACCOUNT_ROLES_PREFIX Example output ManagedOpenShift For more information regarding AWS managed IAM policies for ROSA, see AWS managed IAM policies for ROSA . 5.3. Creating an OpenID Connect configuration When using a Red Hat OpenShift Service on AWS cluster, you can create the OpenID Connect (OIDC) configuration prior to creating your cluster. This configuration is registered to be used with OpenShift Cluster Manager. Prerequisites You have completed the AWS prerequisites for Red Hat OpenShift Service on AWS. You have installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , on your installation host. Procedure To create your OIDC configuration alongside the AWS resources, run the following command: USD rosa create oidc-config --mode=auto --yes This command returns the following information. Example output ? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes I: Setting up managed OIDC configuration I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice: rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b If you are going to create a Hosted Control Plane cluster please include '--hosted-cp' I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName' ? Create the OIDC provider? Yes I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b' When creating your cluster, you must supply the OIDC config ID. The CLI output provides this value for --mode auto , otherwise you must determine these values based on aws CLI output for --mode manual . Optional: you can save the OIDC configuration ID as a variable to use later. Run the following command to save the variable: USD export OIDC_ID=<oidc_config_id> 1 1 In the example output above, the OIDC configuration ID is 13cdr6b. View the value of the variable by running the following command: USD echo USDOIDC_ID Example output 13cdr6b Verification You can list the possible OIDC configurations available for your clusters that are associated with your user organization. Run the following command: USD rosa list oidc-config Example output ID MANAGED ISSUER URL SECRET ARN 2330dbs0n8m3chkkr25gkkcd8pnj3lk2 true https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2 233hvnrjoqu14jltk6lhbhf2tj11f8un false https://oidc-r7u1.s3.us-east-1.amazonaws.com aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN 5.4. Creating Operator roles and policies When using a ROSA with HCP cluster, you must create the Operator IAM roles that are required for Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) deployments.
The cluster Operators use the Operator roles to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage, cloud provider credentials, and external access to a cluster. Prerequisites You have completed the AWS prerequisites for ROSA with HCP. You have installed and configured the latest Red Hat OpenShift Service on AWS ROSA CLI ( rosa ), on your installation host. You created the account-wide AWS roles. Procedure Set your prefix name to an environment variable using the following command: USD export OPERATOR_ROLES_PREFIX=<prefix_name> To create your Operator roles, run the following command: USD rosa create operator-roles --hosted-cp --prefix=USDOPERATOR_ROLES_PREFIX --oidc-config-id=USDOIDC_ID --installer-role-arn arn:aws:iam::USD{AWS_ACCOUNT_ID}:role/USD{ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role The following breakdown provides options for the Operator role creation. USD rosa create operator-roles --hosted-cp --prefix=USDOPERATOR_ROLES_PREFIX 1 --oidc-config-id=USDOIDC_ID 2 --installer-role-arn arn:aws:iam::USD{AWS_ACCOUNT_ID}:role/USD{ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role 3 1 You must supply a prefix when creating these Operator roles. Failing to do so produces an error. See the Additional resources of this section for information on the Operator prefix. 2 This value is the OIDC configuration ID that you created for your ROSA with HCP cluster. 3 This value is the installer role ARN that you created when you created the ROSA account roles. You must include the --hosted-cp parameter to create the correct roles for ROSA with HCP clusters. This command returns the following information. Example output ? Role creation mode: auto ? Operator roles prefix: <pre-filled_prefix> 1 ? OIDC Configuration ID: 23soa2bgvpek9kmes9s7os0a39i13qm4 | https://dvbwgdztaeq9o.cloudfront.net/23soa2bgvpek9kmes9s7os0a39i13qm4 2 ? Create hosted control plane operator roles: Yes W: More than one Installer role found ? Installer role ARN: arn:aws:iam::4540112244:role/<prefix>-HCP-ROSA-Installer-Role ? Permissions boundary ARN (optional): I: Reusable OIDC Configuration detected. 
Validating trusted relationships to operator roles: I: Creating roles using 'arn:aws:iam::4540112244:user/<userName>' I: Created role '<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials' I: Created role '<prefix>-openshift-cloud-network-config-controller-cloud-credenti' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti' I: Created role '<prefix>-kube-system-kube-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager' I: Created role '<prefix>-kube-system-capa-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager' I: Created role '<prefix>-kube-system-control-plane-operator' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator' I: Created role '<prefix>-kube-system-kms-provider' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider' I: Created role '<prefix>-openshift-image-registry-installer-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials' I: Created role '<prefix>-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials' I: To create a cluster with these roles, run the following command: rosa create cluster --sts --oidc-config-id 23soa2bgvpek9kmes9s7os0a39i13qm4 --operator-roles-prefix <prefix> --hosted-cp 1 This field is prepopulated with the prefix that you set in the initial creation command. 2 This field requires you to select an OIDC configuration that you created for your ROSA with HCP cluster. The Operator roles are now created and ready to use for creating your ROSA with HCP cluster. Verification You can list the Operator roles associated with your ROSA account. Run the following command: USD rosa list operator-roles Example output I: Fetching operator roles ROLE PREFIX AMOUNT IN BUNDLE <prefix> 8 ? Would you like to detail a specific prefix Yes 1 ? Operator Role Prefix: <prefix> ROLE NAME ROLE ARN VERSION MANAGED <prefix>-kube-system-capa-controller-manager arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager 4.13 No <prefix>-kube-system-control-plane-operator arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator 4.13 No <prefix>-kube-system-kms-provider arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider 4.13 No <prefix>-kube-system-kube-controller-manager arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager 4.13 No <prefix>-openshift-cloud-network-config-controller-cloud-credenti arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti 4.13 No <prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials 4.13 No <prefix>-openshift-image-registry-installer-cloud-credentials arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials 4.13 No <prefix>-openshift-ingress-operator-cloud-credentials arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials 4.13 No 1 After the command runs, it displays all the prefixes associated with your AWS account and notes how many roles are associated with this prefix. 
If you need to see all of these roles and their details, enter "Yes" on the detail prompt to have these roles listed out with specifics. 5.5. Creating a ROSA with HCP cluster with egress lockdown using the CLI When using the Red Hat OpenShift Service on AWS (ROSA) command-line interface (CLI), rosa , to create a cluster, you can select the default options to create the cluster quickly. Prerequisites You have completed the AWS prerequisites for ROSA with HCP. You have available AWS service quotas. You have enabled the ROSA service in the AWS Console. You have installed and configured the latest ROSA CLI ( rosa ) on your installation host. Run rosa version to see your currently installed version of the ROSA CLI. If a newer version is available, the CLI provides a link to download this upgrade. You have logged in to your Red Hat account by using the ROSA CLI. You have created an OIDC configuration. You have verified that the AWS Elastic Load Balancing (ELB) service role exists in your AWS account. Procedure Use one of the following commands to create your ROSA with HCP cluster: Note When creating a ROSA with HCP cluster, the default machine Classless Inter-Domain Routing (CIDR) is 10.0.0.0/16 . If this does not correspond to the CIDR range for your VPC subnets, add --machine-cidr <address_block> to the following commands. To learn more about the default CIDR ranges for Red Hat OpenShift Service on AWS, see the CIDR range definitions. If you did not set environment variables, run the following command: USD rosa create cluster --cluster-name=<cluster_name> \ <.> --mode=auto --hosted-cp [--private] \ <.> --operator-roles-prefix <operator-role-prefix> \ <.> --oidc-config-id <id-of-oidc-configuration> \ --subnet-ids=<private-subnet-id> --region <region> \ --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 \ --pod-cidr 10.128.0.0/14 --host-prefix 23 \ --billing-account <root-acct-id> \ <.> --properties zero_egress:true <.> Specify the name of your cluster. If your cluster name is longer than 15 characters, it will contain an autogenerated domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. To customize the subdomain, use the --domain-prefix flag. The domain prefix cannot be longer than 15 characters, must be unique, and cannot be changed after cluster creation. <.> By default, the cluster-specific Operator role names are prefixed with the cluster name and a random 4-digit hash. You can optionally specify a custom prefix to replace <cluster_name>-<hash> in the role names. The prefix is applied when you create the cluster-specific Operator IAM roles. For information about the prefix, see About custom Operator IAM role prefixes . + Note If you specified custom ARN paths when you created the associated account-wide roles, the custom path is automatically detected. The custom path is applied to the cluster-specific Operator roles when you create them in a later step. <.> Provide the AWS account that is responsible for all billing. 
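The environment-variable form of the command that follows also relies on a SUBNET_IDS variable, which is not set elsewhere in this procedure; a minimal sketch with a placeholder value (substitute the private subnet ID, or a comma-separated list of IDs, from your VPC):
# Placeholder value; replace with your own private subnet ID(s).
export SUBNET_IDS=<private-subnet-id>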
If you set the environment variables, create a cluster with egress lockdown that has a single, initial machine pool, using a privately available API, and a privately available Ingress by running the following command: USD rosa create cluster --private --cluster-name=<cluster_name> \ --mode=auto --hosted-cp --operator-roles-prefix=USDOPERATOR_ROLES_PREFIX \ --oidc-config-id=USDOIDC_ID --subnet-ids=USDSUBNET_IDS \ --region <region> --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 \ --pod-cidr 10.128.0.0/14 --host-prefix 23 --billing-account <root-acct-id> \ --private --properties zero_egress:true Check the status of your cluster by running the following command: USD rosa describe cluster --cluster=<cluster_name> The following State field changes are listed in the output as cluster installation progresses: pending (Preparing account) installing (DNS setup in progress) installing ready Note If the installation fails or the State field does not change to ready after more than 10 minutes, check the installation troubleshooting documentation for details. For more information, see Troubleshooting installations . For steps to contact Red Hat Support for assistance, see Getting support for Red Hat OpenShift Service on AWS . Track the cluster creation progress by watching the Red Hat OpenShift Service on AWS installation program logs. To check the logs, run the following command: USD rosa logs install --cluster=<cluster_name> --watch \ <.> <.> Optional: To watch for new log messages as the installation progresses, use the --watch argument.
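If you prefer polling over streaming the installer logs, a simple shell loop around the describe command shown above works as well; this is a convenience sketch, not a documented ROSA CLI option:
# Poll the cluster status every 60 seconds; stop with Ctrl+C once the State field reports ready.
while true; do
  rosa describe cluster --cluster=<cluster_name> | grep State
  sleep 60
done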
[ "git clone https://github.com/openshift-cs/terraform-vpc-example", "cd terraform-vpc-example/zero-egress", "terraform init", "terraform plan -out rosa-zero-egress.tfplan -var region=<aws_region> \\ 1 -var 'availability_zones=[\"aws_region_1a\",\"aws_region_1b\",\"aws_region_1c\"]'\\ 2 -var vpc_cidr_block=10.0.0.0/16 \\ 3 -var 'private_subnets=[\"10.0.0.0/24\", \"10.0.1.0/24\", \"10.0.2.0/24\"]' 4", "terraform apply rosa-zero-egress.tfplan", "aws ec2 create-tags --resources <public-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/elb,Value=1", "aws ec2 create-tags --resources <private-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/internal-elb,Value=1", "aws ec2 describe-tags --filters \"Name=resource-id,Values=<subnet_id>\"", "TAGS Name <subnet-id> subnet <prefix>-subnet-public1-us-east-1a TAGS kubernetes.io/role/elb <subnet-id> subnet 1", "aws ec2 create-security-group --group-name allow-inbound-traffic --description \"allow inbound traffic\" --vpc-id <vpc_id> \\ 1 --region <aws_region> \\ 2", "aws ec2 authorize-security-group-ingress --group-id <group_id> \\ 1 --protocol -1 --port 0-0 --cidr <vpc_cidr> \\ 2 --region <aws_region> \\ 3", "aws ec2 create-vpc-endpoint --vpc-id <vpc_id> \\ 1 --service-name com.amazonaws.<aws_region>.sts \\ 2 --vpc-endpoint-type Interface", "aws ec2 create-vpc-endpoint --vpc-id <vpc_id> --service-name com.amazonaws.<aws_region>.ecr.dkr \\ 1 --vpc-endpoint-type Interface", "aws ec2 create-vpc-endpoint --vpc-id <vpc_id> --service-name com.amazonaws.<aws_region>.s3", "rosa create account-roles --hosted-cp", "aws iam attach-role-policy --role-name ManagedOpenShift-HCP-ROSA-Worker-Role \\ 1 --policy-arn \"arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly\"", "export ACCOUNT_ROLES_PREFIX=<account_role_prefix>", "echo USDACCOUNT_ROLES_PREFIX", "ManagedOpenShift", "rosa create oidc-config --mode=auto --yes", "? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes I: Setting up managed OIDC configuration I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice: rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b If you are going to create a Hosted Control Plane cluster please include '--hosted-cp' I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName' ? Create the OIDC provider? Yes I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b'", "export OIDC_ID=<oidc_config_id> 1", "echo USDOIDC_ID", "13cdr6b", "rosa list oidc-config", "ID MANAGED ISSUER URL SECRET ARN 2330dbs0n8m3chkkr25gkkcd8pnj3lk2 true https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2 233hvnrjoqu14jltk6lhbhf2tj11f8un false https://oidc-r7u1.s3.us-east-1.amazonaws.com aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN", "export OPERATOR_ROLES_PREFIX=<prefix_name>", "rosa create operator-roles --hosted-cp --prefix=USDOPERATOR_ROLES_PREFIX --oidc-config-id=USDOIDC_ID --installer-role-arn arn:aws:iam::USD{AWS_ACCOUNT_ID}:role/USD{ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role", "rosa create operator-roles --hosted-cp --prefix=USDOPERATOR_ROLES_PREFIX 1 --oidc-config-id=USDOIDC_ID 2 --installer-role-arn arn:aws:iam::USD{AWS_ACCOUNT_ID}:role/USD{ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role 3", "? Role creation mode: auto ? Operator roles prefix: <pre-filled_prefix> 1 ? 
OIDC Configuration ID: 23soa2bgvpek9kmes9s7os0a39i13qm4 | https://dvbwgdztaeq9o.cloudfront.net/23soa2bgvpek9kmes9s7os0a39i13qm4 2 ? Create hosted control plane operator roles: Yes W: More than one Installer role found ? Installer role ARN: arn:aws:iam::4540112244:role/<prefix>-HCP-ROSA-Installer-Role ? Permissions boundary ARN (optional): I: Reusable OIDC Configuration detected. Validating trusted relationships to operator roles: I: Creating roles using 'arn:aws:iam::4540112244:user/<userName>' I: Created role '<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials' I: Created role '<prefix>-openshift-cloud-network-config-controller-cloud-credenti' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti' I: Created role '<prefix>-kube-system-kube-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager' I: Created role '<prefix>-kube-system-capa-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager' I: Created role '<prefix>-kube-system-control-plane-operator' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator' I: Created role '<prefix>-kube-system-kms-provider' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider' I: Created role '<prefix>-openshift-image-registry-installer-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials' I: Created role '<prefix>-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials' I: To create a cluster with these roles, run the following command: rosa create cluster --sts --oidc-config-id 23soa2bgvpek9kmes9s7os0a39i13qm4 --operator-roles-prefix <prefix> --hosted-cp", "rosa list operator-roles", "I: Fetching operator roles ROLE PREFIX AMOUNT IN BUNDLE <prefix> 8 ? Would you like to detail a specific prefix Yes 1 ? 
Operator Role Prefix: <prefix> ROLE NAME ROLE ARN VERSION MANAGED <prefix>-kube-system-capa-controller-manager arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager 4.13 No <prefix>-kube-system-control-plane-operator arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator 4.13 No <prefix>-kube-system-kms-provider arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider 4.13 No <prefix>-kube-system-kube-controller-manager arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager 4.13 No <prefix>-openshift-cloud-network-config-controller-cloud-credenti arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti 4.13 No <prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials 4.13 No <prefix>-openshift-image-registry-installer-cloud-credentials arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials 4.13 No <prefix>-openshift-ingress-operator-cloud-credentials arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials 4.13 No", "rosa create cluster --cluster-name=<cluster_name> \\ <.> --mode=auto --hosted-cp [--private] \\ <.> --operator-roles-prefix <operator-role-prefix> \\ <.> --oidc-config-id <id-of-oidc-configuration> --subnet-ids=<private-subnet-id> --region <region> --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23 --billing-account <root-acct-id> \\ <.> --properties zero_egress:true", "rosa create cluster --private --cluster-name=<cluster_name> --mode=auto --hosted-cp --operator-roles-prefix=USDOPERATOR_ROLES_PREFIX --oidc-config-id=USDOIDC_ID --subnet-ids=USDSUBNET_IDS --region <region> --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23 --billing-account <root-acct-id> --private --properties zero_egress:true", "rosa describe cluster --cluster=<cluster_name>", "rosa logs install --cluster=<cluster_name> --watch \\ <.>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/install_rosa_with_hcp_clusters/rosa-hcp-egress-lockdown-install
Standalone Deployment Guide
Standalone Deployment Guide Red Hat OpenStack Platform 16.2 Creating an all-in-one OpenStack cloud for test and proof-of-concept environments OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/standalone_deployment_guide/index
Chapter 117. Simple
Chapter 117. Simple The Simple Expression Language was a really simple language when it was created, but has since grown more powerful. It is primarily intended to be a very small and simple language for evaluating Expression or Predicate without requiring any new dependencies or knowledge of other scripting languages such as Groovy. The simple language is designed to cover almost all of the common use cases when there is little need for scripting in your Camel routes. However, for much more complex use cases, a more powerful language is recommended, such as: Groovy MVEL OGNL Note The simple language requires the camel-bean JAR as a classpath dependency if the simple language uses OGNL expressions, such as calling a method named myMethod on the message body: ${body.myMethod()} . At runtime the simple language will then use its built-in OGNL support, which requires the camel-bean component. The simple language uses ${body} placeholders for complex expressions or functions. Note See also the CSimple language which is compiled. Note Alternative syntax You can also use the alternative syntax which uses $simple{ } as placeholders. This can be used to avoid clashes when, for example, using Spring property placeholders together with Camel. 117.1. Dependencies When using simple with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency> 117.2. Simple Language options The Simple language supports 2 options, which are listed below. Name Default Java Type Description resultType String Sets the class name of the result type (type from output). trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 117.3. Variables Variable Type Description camelId String the CamelContext name camelContext. OGNL Object the CamelContext invoked using a Camel OGNL expression. exchange Exchange the Exchange exchange. OGNL Object the Exchange invoked using a Camel OGNL expression. exchangeId String the exchange id id String the message id messageTimestamp String the message timestamp (millis since epoch) that this message originates from. Some systems like JMS, Kafka, AWS have a timestamp on the event/message that Camel received. This method returns the timestamp, if a timestamp exists. The message timestamp and exchange created are not the same. An exchange always has a created timestamp, which is the local timestamp when Camel created the exchange. The message timestamp is only available in some Camel components when the consumer is able to extract the timestamp from the source event. If the message has no timestamp then 0 is returned. body Object the body body. OGNL Object the body invoked using a Camel OGNL expression. bodyAs( type ) Type Converts the body to the given type determined by its classname. The converted body can be null. bodyAs( type ). OGNL Object Converts the body to the given type determined by its classname and then invoke methods using a Camel OGNL expression. The converted body can be null. bodyOneLine String Converts the body to a String and removes all line-breaks so the string is in one line. mandatoryBodyAs( type ) Type Converts the body to the given type determined by its classname, and expects the body to be not null. mandatoryBodyAs( type ).
OGNL Object Converts the body to the given type determined by its classname and then invoke methods using a Camel OGNL expression. header.foo Object refer to the foo header header[foo] Object refer to the foo header headers.foo Object refer to the foo header headers:foo Object refer to the foo header headers[foo] Object refer to the foo header header.foo[bar] Object regard foo header as a map and perform lookup on the map with bar as key header.foo. OGNL Object refer to the foo header and invoke its value using a Camel OGNL expression. headerAs( key , type ) Type converts the header to the given type determined by its classname headers Map refer to the headers exchangeProperty.foo Object refer to the foo property on the exchange exchangeProperty[foo] Object refer to the foo property on the exchange exchangeProperty.foo. OGNL Object refer to the foo property on the exchange and invoke its value using a Camel OGNL expression. sys.foo String refer to the JVM system property sysenv.foo String refer to the system environment variable env.foo String refer to the system environment variable exception Object refer to the exception object on the exchange, is null if no exception set on exchange. Will fallback and grab caught exceptions ( Exchange.EXCEPTION_CAUGHT ) if the Exchange has any. exception. OGNL Object refer to the exchange exception invoked using a Camel OGNL expression object exception.message String refer to the exception.message on the exchange, is null if no exception set on exchange. Will fallback and grab caught exceptions ( Exchange.EXCEPTION_CAUGHT ) if the Exchange has any. exception.stacktrace String refer to the exception.stracktrace on the exchange, is null if no exception set on exchange. Will fallback and grab caught exceptions ( Exchange.EXCEPTION_CAUGHT ) if the Exchange has any. date:_command_ Date evaluates to a Date object. Supported commands are: now for current timestamp, exchangeCreated for the timestamp when the current exchange was created, header.xxx to use the Long/Date object header with the key xxx. exchangeProperty.xxx to use the Long/Date object in the exchange property with the key xxx. file for the last modified timestamp of the file (available with a File consumer). Command accepts offsets such as: now-24h or header.xxx+1h or even now+1h30m-100 . date:_command:pattern_ String Date formatting using java.text.SimpleDateFormat patterns. date-with-timezone:_command:timezone:pattern_ String Date formatting using java.text.SimpleDateFormat timezones and patterns. bean:_bean expression_ Object Invoking a bean expression using the language. Specifying a method name you must use dot as separator. We also support the ?method=methodname syntax that is used by the component. Camel will by default lookup a bean by the given name. However if you need to refer to a bean class (such as calling a static method) then you can prefix with type, such as bean:type:fqnClassName . properties:key:default String Lookup a property with the given key. If the key does not exists or has no value, then an optional default value can be specified. routeId String Returns the id of the current route the Exchange is being routed. stepId String Returns the id of the current step the Exchange is being routed. threadName String Returns the name of the current thread. Can be used for logging purpose. hostname String Returns the local hostname (may be empty if not possible to resolve). ref:xxx Object To lookup a bean from the Registry with the given id. 
type:name.field Object To refer to a type or field by its FQN name. To refer to a field you can append .FIELD_NAME. For example, you can refer to the constant field from Exchange as: org.apache.camel.Exchange.FILE_NAME null null represents a null random(value) Integer returns a random Integer between 0 (included) and value (excluded) random(min,max) Integer returns a random Integer between min (included) and max (excluded) collate(group) List The collate function iterates the message body and groups the data into sub lists of specified size. This can be used with the Splitter EIP to split a message body and group/batch the split sub-messages into a group of N sub lists. This method works similarly to the collate method in Groovy. skip(number) Iterator The skip function iterates the message body and skips the first number of items. This can be used with the Splitter EIP to split a message body and skip the first N number of items. messageHistory String The message history of the current exchange showing how it has been routed. This is similar to the route stack-trace message history the error handler logs in case of an unhandled exception. messageHistory(false) String As messageHistory but without the exchange details (only includes the route stack-trace). This can be used if you do not want to log sensitive data from the message itself. 117.4. OGNL expression support When using OGNL, the camel-bean JAR is required to be on the classpath. Camel's OGNL support is for invoking methods only. You cannot access fields. Camel supports accessing the length field of Java arrays. The Simple and Bean languages support a Camel OGNL notation for invoking beans in a chain-like fashion. Suppose the Message IN body contains a POJO which has a getAddress() method. Then you can use Camel OGNL notation to access the address object: simple("${body.address}") simple("${body.address.street}") simple("${body.address.zip}") Camel understands the shorthand names for getters, but you can invoke any method or use the real name such as: simple("${body.address}") simple("${body.getAddress.getStreet}") simple("${body.address.getZip}") simple("${body.doSomething}") You can also use the null safe operator ( ?. ) to avoid an NPE if, for example, the body does not have an address: simple("${body?.address?.street}") It is also possible to index in Map or List types, so you can do: simple("${body[foo].name}") This assumes the body is Map based, looks up the value with foo as the key, and invokes the getName method on that value. If the key contains a space, then you must enclose the key with quotes, for example 'foo bar': simple("${body['foo bar'].name}") You can access the Map or List objects directly using their key name (with or without dots): simple("${body[foo]}") simple("${body[this.is.foo]}") If there is no value with the key foo, you can use the null safe operator to avoid the NPE as shown: simple("${body[foo]?.name}") You can also access List types, for example to get lines from the address you can do: simple("${body.address.lines[0]}") simple("${body.address.lines[1]}") simple("${body.address.lines[2]}") There is a special last keyword which can be used to get the last value from a list.
simple("${body.address.lines[last]}") And to get the 2nd last you can subtract a number, so we can use last-1 to indicate this: simple("${body.address.lines[last-1]}") And the 3rd last is of course: simple("${body.address.lines[last-2]}") And you can call the size method on the list with simple("${body.address.lines.size}") Camel supports the length field for Java arrays as well, for example: String[] lines = new String[]{"foo", "bar", "cat"}; exchange.getIn().setBody(lines); simple("There are ${body.length} lines") And yes, you can combine this with the operator support as shown below: simple("${body.address.zip} > 1000") 117.5. Operator support The parser is limited to only support a single operator. To enable it, the left value must be enclosed in ${ }. The syntax is: ${leftValue} OP rightValue Where the rightValue can be a String literal enclosed in ' ' , null , a constant value or another expression enclosed in ${ } . Note There must be spaces around the operator. Camel will automatically type convert the rightValue type to the leftValue type, so it is able to, for example, convert a string into a numeric value, which means you can use > comparison for numeric values. The following operators are supported: Operator Description == equals =~ equals ignore case (will ignore case when comparing String values) > greater than >= greater than or equals < less than <= less than or equals != not equals !=~ not equals ignore case (will ignore case when comparing String values) contains For testing if contains in a string based value !contains For testing if not contains in a string based value ~~ For testing if contains by ignoring case sensitivity in a string based value !~~ For testing if not contains by ignoring case sensitivity in a string based value regex For matching against a given regular expression pattern defined as a String value !regex For not matching against a given regular expression pattern defined as a String value in For matching if in a set of values, each element must be separated by comma. If you want to include an empty value, then it must be defined using double comma, eg ',,bronze,silver,gold', which is a set of four values with an empty value and then the three medals. !in For matching if not in a set of values, each element must be separated by comma. If you want to include an empty value, then it must be defined using double comma, eg ',,bronze,silver,gold', which is a set of four values with an empty value and then the three medals. is For matching if the left hand side type is an instance of the value. !is For matching if the left hand side type is not an instance of the value. range For matching if the left hand side is within a range of values defined as numbers: from..to . !range For matching if the left hand side is not within a range of values defined as numbers: from..to . startsWith For testing if the left hand side string starts with the right hand string. starts with Same as the startsWith operator. endsWith For testing if the left hand side string ends with the right hand string. ends with Same as the endsWith operator. And the following unary operators can be used: Operator Description ++ To increment a number by one. The left hand side must be a function, otherwise parsed as literal. -- To decrement a number by one. The left hand side must be a function, otherwise parsed as literal. \n To use newline character. \t To use tab character. \r To use carriage return character. \} To use the } character as text.
This may be needed when building a JSON structure with the simple language. And the following logical operators can be used to group expressions: Operator Description && The logical and operator is used to group two expressions. || The logical or operator is used to group two expressions. The syntax for AND is: ${leftValue} OP rightValue && ${leftValue} OP rightValue And the syntax for OR is: ${leftValue} OP rightValue || ${leftValue} OP rightValue Some examples: // exact equals match simple("${header.foo} == 'foo'") // ignore case when comparing, so if the header has value FOO this will match simple("${header.foo} =~ 'foo'") // here Camel will type convert '100' into the type of header.bar and if it is an Integer '100' will also be converted to an Integer simple("${header.bar} == '100'") simple("${header.bar} == 100") // 100 will be converted to the type of header.bar so we can do > comparison simple("${header.bar} > 100") 117.5.1. Comparing with different types When you compare with different types such as String and int, then you have to take a bit of care. Camel will use the type from the left hand side as the first priority, and fall back to the right hand side type if both values could not be compared based on that type. This means you can flip the values to enforce a specific type. Suppose the bar value above is a String. Then you can flip the equation: simple("100 < ${header.bar}") which then ensures the int type is used as the first priority. This may change in the future if the Camel team improves the binary comparison operations to prefer numeric types to String based. It is most often the String type which causes problems when comparing with numbers. // testing for null simple("${header.baz} == null") // testing for not null simple("${header.baz} != null") And a bit more advanced example where the right value is another expression: simple("${header.date} == ${date:now:yyyyMMdd}") simple("${header.type} == ${bean:orderService?method=getOrderType}") And an example with contains, testing if the title contains the word Camel: simple("${header.title} contains 'Camel'") And an example with regex, testing if the number header is a 4 digit value: simple("${header.number} regex '\\d{4}'") And finally, an example that tests if the header equals any of the values in the list. Each element must be separated by a comma, with no spaces around the elements. This also works for numbers etc., as Camel will convert each element into the type of the left hand side. simple("${header.type} in 'gold,silver'") And for all the last 3 we also support the negate test using not: simple("${header.type} !in 'gold,silver'") And you can test if the type is a certain instance, for instance a String: simple("${header.type} is 'java.lang.String'") We have added a shorthand for all java.lang types so you can write it as: simple("${header.type} is 'String'") Ranges are also supported. The range interval requires numbers and both the from and to values are inclusive. For instance, to test whether a value is between 100 and 199: simple("${header.number} range 100..199") Notice we use .. in the range without spaces. It is based on the same syntax as Groovy. simple("${header.number} range '100..199'") As the XML DSL does not have all the power of the Java DSL with all its various builder methods, you previously had to resort to other languages for testing with simple operators. Now you can do this with the simple language.
In the sample below we want to test if the header is a widget order: <from uri="seda:orders"> <filter> <simple>${header.type} == 'widget'</simple> <to uri="bean:orderService?method=handleWidget"/> </filter> </from> 117.5.2. Using and / or If you have two expressions, you can combine them with the && or || operator. For instance: simple("${header.title} contains 'Camel' && ${header.type} == 'gold'") And of course the || is also supported. The sample would be: simple("${header.title} contains 'Camel' || ${header.type} == 'gold'") 117.6. Examples In the XML DSL sample below we filter based on a header value: <from uri="seda:orders"> <filter> <simple>${header.foo}</simple> <to uri="mock:fooOrders"/> </filter> </from> The Simple language can be used for the predicate test above in the Message Filter pattern, where we test if the in message has a foo header (a header with the key foo exists). If the expression evaluates to true then the message is routed to the mock:fooOrders endpoint, otherwise the message is dropped. The same example in Java DSL: from("seda:orders") .filter().simple("${header.foo}") .to("seda:fooOrders"); You can also use the simple language for simple text concatenations such as: from("direct:hello") .transform().simple("Hello ${header.user} how are you?") .to("mock:reply"); Notice that we must use ${ } placeholders in the expression to allow Camel to parse it correctly. And this sample uses the date command to output the current date: from("direct:hello") .transform().simple("Today is ${date:now:yyyyMMdd} and it is a great day.") .to("mock:reply"); And in the sample below we invoke the bean language to invoke a method on a bean to be included in the returned string: from("direct:order") .transform().simple("OrderId: ${bean:orderIdGenerator}") .to("mock:reply"); Where orderIdGenerator is the id of the bean registered in the Registry. If using Spring, then it is the Spring bean id. If we want to declare which method to invoke on the order id generator bean, we must append a dot followed by the method name, as below where we invoke the generateId method: from("direct:order") .transform().simple("OrderId: ${bean:orderIdGenerator.generateId}") .to("mock:reply"); We can also use the ?method=methodname option that we are familiar with from the Bean component itself: from("direct:order") .transform().simple("OrderId: ${bean:orderIdGenerator?method=generateId}") .to("mock:reply"); You can also convert the body to a given type, for example to ensure that it is a String you can do: <transform> <simple>Hello ${bodyAs(String)} how are you?</simple> </transform> There are a few types which have a shorthand notation, so we can use String instead of java.lang.String . These are: byte[], String, Integer, Long . All other types must use their FQN name, e.g. org.w3c.dom.Document . It is also possible to look up a value from a header Map : <transform> <simple>The gold value is ${header.type[gold]}</simple> </transform> In the code above we look up the header with the name type, regard it as a java.util.Map, and then look up the value with the key gold and return it. If the header is not convertible to Map an exception is thrown. If the header with the name type does not exist, null is returned. You can nest functions, such as shown below: <setHeader name="myHeader"> <simple>${properties:${header.someKey}}</simple> </setHeader> 117.7. Setting result type You can now provide a result type to the Simple expression, which means the result of the evaluation will be converted to the desired type.
This is most usable to define types such as booleans, integers, etc. For example to set a header as a boolean type you can do: .setHeader("cool", simple("true", Boolean.class)) And in XML DSL <setHeader name="cool"> <!-- use resultType to indicate that the type should be a java.lang.Boolean --> <simple resultType="java.lang.Boolean">true</simple> </setHeader> 117.8. Using new lines or tabs in XML DSLs It is easier to specify new lines or tabs in XML DSLs as you can escape the value now <transform> <simple>The following text\nis on a new line</simple> </transform> 117.9. Leading and trailing whitespace handling The trim attribute of the expression can be used to control whether the leading and trailing whitespace characters are removed or preserved. The default value is true, which removes the whitespace characters. <setBody> <simple trim="false">You get some trailing whitespace characters. </simple> </setBody> 117.10. Loading script from external resource You can externalize the script and have Camel load it from a resource such as "classpath:" , "file:" , or "http:" . This is done using the following syntax: "resource:scheme:location" , e.g. to refer to a file on the classpath you can do: .setHeader("myHeader").simple("resource:classpath:mysimple.txt") 117.11. Spring Boot Auto-Configuration The component supports 147 options, which are listed below. Name Description Default Type camel.cloud.consul.service-discovery.acl-token Sets the ACL token to be used with Consul. String camel.cloud.consul.service-discovery.block-seconds The seconds to wait for a watch event, default 10 seconds. 10 Integer camel.cloud.consul.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.consul.service-discovery.connect-timeout-millis Connect timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.datacenter The data center. String camel.cloud.consul.service-discovery.enabled Enable the component. true Boolean camel.cloud.consul.service-discovery.password Sets the password to be used for basic authentication. String camel.cloud.consul.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.consul.service-discovery.read-timeout-millis Read timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.url The Consul agent URL. String camel.cloud.consul.service-discovery.user-name Sets the username to be used for basic authentication. String camel.cloud.consul.service-discovery.write-timeout-millis Write timeout for OkHttpClient. Long camel.cloud.dns.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.dns.service-discovery.domain The domain name;. String camel.cloud.dns.service-discovery.enabled Enable the component. true Boolean camel.cloud.dns.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.dns.service-discovery.proto The transport protocol of the desired service. _tcp String camel.cloud.etcd.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.etcd.service-discovery.enabled Enable the component. 
true Boolean camel.cloud.etcd.service-discovery.password The password to use for basic authentication. String camel.cloud.etcd.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.etcd.service-discovery.service-path The path to look for for service discovery. /services/ String camel.cloud.etcd.service-discovery.timeout To set the maximum time an action could take to complete. Long camel.cloud.etcd.service-discovery.type To set the discovery type, valid values are on-demand and watch. on-demand String camel.cloud.etcd.service-discovery.uris The URIs the client can connect to. String camel.cloud.etcd.service-discovery.user-name The user name to use for basic authentication. String camel.cloud.kubernetes.service-discovery.api-version Sets the API version when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-data Sets the Certificate Authority data when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-file Sets the Certificate Authority data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-data Sets the Client Certificate data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-file Sets the Client Certificate data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-algo Sets the Client Keystore algorithm, such as RSA when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-data Sets the Client Keystore data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-file Sets the Client Keystore data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-passphrase Sets the Client Keystore passphrase when using client lookup. String camel.cloud.kubernetes.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.kubernetes.service-discovery.dns-domain Sets the DNS domain to use for DNS lookup. String camel.cloud.kubernetes.service-discovery.enabled Enable the component. true Boolean camel.cloud.kubernetes.service-discovery.lookup How to perform service lookup. Possible values: client, dns, environment. When using client, then the client queries the kubernetes master to obtain a list of active pods that provides the service, and then random (or round robin) select a pod. When using dns the service name is resolved as name.namespace.svc.dnsDomain. When using dnssrv the service name is resolved with SRV query for . ... svc... When using environment then environment variables are used to lookup the service. By default environment is used. environment String camel.cloud.kubernetes.service-discovery.master-url Sets the URL to the master when using client lookup. String camel.cloud.kubernetes.service-discovery.namespace Sets the namespace to use. Will by default use namespace from the ENV variable KUBERNETES_MASTER. String camel.cloud.kubernetes.service-discovery.oauth-token Sets the OAUTH token for authentication (instead of username/password) when using client lookup. String camel.cloud.kubernetes.service-discovery.password Sets the password for authentication when using client lookup. 
String camel.cloud.kubernetes.service-discovery.port-name Sets the Port Name to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.port-protocol Sets the Port Protocol to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.kubernetes.service-discovery.trust-certs Sets whether to turn on trust certificate check when using client lookup. false Boolean camel.cloud.kubernetes.service-discovery.username Sets the username for authentication when using client lookup. String camel.cloud.ribbon.load-balancer.client-name Sets the Ribbon client name. String camel.cloud.ribbon.load-balancer.configurations Define additional configuration definitions. Map camel.cloud.ribbon.load-balancer.enabled Enable the component. true Boolean camel.cloud.ribbon.load-balancer.namespace The namespace. String camel.cloud.ribbon.load-balancer.password The password. String camel.cloud.ribbon.load-balancer.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.ribbon.load-balancer.username The username. String camel.hystrix.allow-maximum-size-to-diverge-from-core-size Allows the configuration for maximumSize to take effect. That value can then be equal to, or higher, than coreSize. false Boolean camel.hystrix.circuit-breaker-enabled Whether to use a HystrixCircuitBreaker or not. If false no circuit-breaker logic will be used and all requests permitted. This is similar in effect to circuitBreakerForceClosed() except that continues tracking metrics and knowing whether it should be open/closed, this property results in not even instantiating a circuit-breaker. true Boolean camel.hystrix.circuit-breaker-error-threshold-percentage Error percentage threshold (as whole number such as 50) at which point the circuit breaker will trip open and reject requests. It will stay tripped for the duration defined in circuitBreakerSleepWindowInMilliseconds; The error percentage this is compared against comes from HystrixCommandMetrics.getHealthCounts(). 50 Integer camel.hystrix.circuit-breaker-force-closed If true the HystrixCircuitBreaker#allowRequest() will always return true to allow requests regardless of the error percentage from HystrixCommandMetrics.getHealthCounts(). The circuitBreakerForceOpen() property takes precedence so if it set to true this property does nothing. false Boolean camel.hystrix.circuit-breaker-force-open If true the HystrixCircuitBreaker.allowRequest() will always return false, causing the circuit to be open (tripped) and reject all requests. This property takes precedence over circuitBreakerForceClosed();. false Boolean camel.hystrix.circuit-breaker-request-volume-threshold Minimum number of requests in the metricsRollingStatisticalWindowInMilliseconds() that must exist before the HystrixCircuitBreaker will trip. If below this number the circuit will not trip regardless of error percentage. 20 Integer camel.hystrix.circuit-breaker-sleep-window-in-milliseconds The time in milliseconds after a HystrixCircuitBreaker trips open that it should wait before trying requests again. 
5000 Integer camel.hystrix.configurations Define additional configuration definitions. Map camel.hystrix.core-pool-size Core thread-pool size that gets passed to java.util.concurrent.ThreadPoolExecutor#setCorePoolSize(int). 10 Integer camel.hystrix.enabled Enable the component. true Boolean camel.hystrix.execution-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.run(). Requests beyond the concurrent limit will be rejected. Applicable only when executionIsolationStrategy == SEMAPHORE. 20 Integer camel.hystrix.execution-isolation-strategy What isolation strategy HystrixCommand.run() will be executed with. If THREAD then it will be executed on a separate thread and concurrent requests limited by the number of threads in the thread-pool. If SEMAPHORE then it will be executed on the calling thread and concurrent requests limited by the semaphore count. THREAD String camel.hystrix.execution-isolation-thread-interrupt-on-timeout Whether the execution thread should attempt an interrupt (using Future#cancel ) when a thread times out. Applicable only when executionIsolationStrategy() == THREAD. true Boolean camel.hystrix.execution-timeout-enabled Whether the timeout mechanism is enabled for this command. true Boolean camel.hystrix.execution-timeout-in-milliseconds Time in milliseconds at which point the command will timeout and halt execution. If executionIsolationThreadInterruptOnTimeout == true and the command is thread-isolated, the executing thread will be interrupted. If the command is semaphore-isolated and a HystrixObservableCommand, that command will get unsubscribed. 1000 Integer camel.hystrix.fallback-enabled Whether HystrixCommand.getFallback() should be attempted when failure occurs. true Boolean camel.hystrix.fallback-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.getFallback(). Requests beyond the concurrent limit will fail-fast and not attempt retrieving a fallback. 10 Integer camel.hystrix.group-key Sets the group key to use. The default value is CamelHystrix. CamelHystrix String camel.hystrix.keep-alive-time Keep-alive time in minutes that gets passed to ThreadPoolExecutor#setKeepAliveTime(long,TimeUnit). 1 Integer camel.hystrix.max-queue-size Max queue size that gets passed to BlockingQueue in HystrixConcurrencyStrategy.getBlockingQueue(int) This should only affect the instantiation of a threadpool - it is not eliglible to change a queue size on the fly. For that, use queueSizeRejectionThreshold(). -1 Integer camel.hystrix.maximum-size Maximum thread-pool size that gets passed to ThreadPoolExecutor#setMaximumPoolSize(int) . This is the maximum amount of concurrency that can be supported without starting to reject HystrixCommands. Please note that this setting only takes effect if you also set allowMaximumSizeToDivergeFromCoreSize. 10 Integer camel.hystrix.metrics-health-snapshot-interval-in-milliseconds Time in milliseconds to wait between allowing health snapshots to be taken that calculate success and error percentages and affect HystrixCircuitBreaker.isOpen() status. On high-volume circuits the continual calculation of error percentage can become CPU intensive thus this controls how often it is calculated. 500 Integer camel.hystrix.metrics-rolling-percentile-bucket-size Maximum number of values stored in each bucket of the rolling percentile. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 
10 Integer camel.hystrix.metrics-rolling-percentile-enabled Whether percentile metrics should be captured using HystrixRollingPercentile inside HystrixCommandMetrics. true Boolean camel.hystrix.metrics-rolling-percentile-window-buckets Number of buckets the rolling percentile window is broken into. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 6 Integer camel.hystrix.metrics-rolling-percentile-window-in-milliseconds Duration of percentile rolling window in milliseconds. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10000 Integer camel.hystrix.metrics-rolling-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-statistical-window-in-milliseconds This property sets the duration of the statistical rolling window, in milliseconds. This is how long metrics are kept for the thread pool. The window is divided into buckets and rolls by those increments. 10000 Integer camel.hystrix.queue-size-rejection-threshold Queue size rejection threshold is an artificial max size at which rejections will occur even if maxQueueSize has not been reached. This is done because the maxQueueSize of a BlockingQueue can not be dynamically changed and we want to support dynamically changing the queue size that affects rejections. This is used by HystrixCommand when queuing a thread for execution. 5 Integer camel.hystrix.request-log-enabled Whether HystrixCommand execution and events should be logged to HystrixRequestLog. true Boolean camel.hystrix.thread-pool-key Sets the thread pool key to use. Will by default use the same value as groupKey has been configured to use. CamelHystrix String camel.hystrix.thread-pool-rolling-number-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10 Integer camel.hystrix.thread-pool-rolling-number-statistical-window-in-milliseconds Duration of statistical rolling window in milliseconds. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10000 Integer camel.language.constant.enabled Whether to enable auto configuration of the constant language. This is enabled by default. Boolean camel.language.constant.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.csimple.enabled Whether to enable auto configuration of the csimple language. This is enabled by default. Boolean camel.language.csimple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.exchangeproperty.enabled Whether to enable auto configuration of the exchangeProperty language. This is enabled by default. Boolean camel.language.exchangeproperty.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.file.enabled Whether to enable auto configuration of the file language. This is enabled by default. Boolean camel.language.file.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.header.enabled Whether to enable auto configuration of the header language. This is enabled by default. Boolean camel.language.header.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. 
true Boolean camel.language.ref.enabled Whether to enable auto configuration of the ref language. This is enabled by default. Boolean camel.language.ref.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.simple.enabled Whether to enable auto configuration of the simple language. This is enabled by default. Boolean camel.language.simple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.tokenize.enabled Whether to enable auto configuration of the tokenize language. This is enabled by default. Boolean camel.language.tokenize.group-delimiter Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter. String camel.language.tokenize.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.resilience4j.automatic-transition-from-open-to-half-open-enabled Enables automatic transition from OPEN to HALF_OPEN state once the waitDurationInOpenState has passed. false Boolean camel.resilience4j.circuit-breaker-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreaker instance to lookup and use from the registry. When using this, then any other circuit breaker options are not in use. String camel.resilience4j.config-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreakerConfig instance to lookup and use from the registry. String camel.resilience4j.configurations Define additional configuration definitions. Map camel.resilience4j.enabled Enable the component. true Boolean camel.resilience4j.failure-rate-threshold Configures the failure rate threshold in percentage. If the failure rate is equal or greater than the threshold the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 50 percentage. Float camel.resilience4j.minimum-number-of-calls Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. For example, if minimumNumberOfCalls is 10, then at least 10 calls must be recorded, before the failure rate can be calculated. If only 9 calls have been recorded the CircuitBreaker will not transition to open even if all 9 calls have failed. Default minimumNumberOfCalls is 100. 100 Integer camel.resilience4j.permitted-number-of-calls-in-half-open-state Configures the number of permitted calls when the CircuitBreaker is half open. The size must be greater than 0. Default size is 10. 10 Integer camel.resilience4j.sliding-window-size Configures the size of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. slidingWindowSize configures the size of the sliding window. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. The slidingWindowSize must be greater than 0. The minimumNumberOfCalls must be greater than 0. If the slidingWindowType is COUNT_BASED, the minimumNumberOfCalls cannot be greater than slidingWindowSize . If the slidingWindowType is TIME_BASED, you can pick whatever you want. Default slidingWindowSize is 100. 
100 Integer camel.resilience4j.sliding-window-type Configures the type of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. Default slidingWindowType is COUNT_BASED. COUNT_BASED String camel.resilience4j.slow-call-duration-threshold Configures the duration threshold (seconds) above which calls are considered as slow and increase the slow calls percentage. Default value is 60 seconds. 60 Integer camel.resilience4j.slow-call-rate-threshold Configures a threshold in percentage. The CircuitBreaker considers a call as slow when the call duration is greater than slowCallDurationThreshold Duration. When the percentage of slow calls is equal or greater the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 100 percentage which means that all recorded calls must be slower than slowCallDurationThreshold. Float camel.resilience4j.wait-duration-in-open-state Configures the wait duration (in seconds) which specifies how long the CircuitBreaker should stay open, before it switches to half open. Default value is 60 seconds. 60 Integer camel.resilience4j.writable-stack-trace-enabled Enables writable stack traces. When set to false, Exception.getStackTrace returns a zero length array. This may be used to reduce log spam when the circuit breaker is open as the cause of the exceptions is already known (the circuit breaker is short-circuiting calls). true Boolean camel.rest.api-component The name of the Camel component to use as the REST API (such as swagger) If no API Component has been explicit configured, then Camel will lookup if there is a Camel component responsible for servicing and generating the REST API documentation, or if a org.apache.camel.spi.RestApiProcessorFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.api-context-path Sets a leading API context-path the REST API services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. String camel.rest.api-context-route-id Sets the route id to use for the route that services the REST API. The route will by default use an auto assigned route id. String camel.rest.api-host To use an specific hostname for the API documentation (eg swagger) This can be used to override the generated host with this configured hostname. String camel.rest.api-property Allows to configure as many additional properties for the api documentation (swagger). For example set property api.title to my cool stuff. Map camel.rest.api-vendor-extension Whether vendor extension is enabled in the Rest APIs. If enabled then Camel will include additional information as vendor extension (eg keys starting with x-) such as route ids, class names etc. Not all 3rd party API gateways and tools supports vendor-extensions when importing your API docs. false Boolean camel.rest.binding-mode Sets the binding mode to use. The default value is off. 
RestBindingMode camel.rest.client-request-validation Whether to enable validation of the client request to check whether the Content-Type and Accept headers from the client is supported by the Rest-DSL configuration of its consumes/produces settings. This can be turned on, to enable this check. In case of validation error, then HTTP Status codes 415 or 406 is returned. The default value is false. false Boolean camel.rest.component The Camel Rest component to use for the REST transport (consumer), such as netty-http, jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.component-property Allows to configure as many additional properties for the rest component in use. Map camel.rest.consumer-property Allows to configure as many additional properties for the rest consumer in use. Map camel.rest.context-path Sets a leading context-path the REST services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. Or for components such as camel-jetty or camel-netty-http that includes a HTTP server. String camel.rest.cors-headers Allows to configure custom CORS headers. Map camel.rest.data-format-property Allows to configure as many additional properties for the data formats in use. For example set property prettyPrint to true to have json outputted in pretty mode. The properties can be prefixed to denote the option is only for either JSON or XML and for either the IN or the OUT. The prefixes are: json.in. json.out. xml.in. xml.out. For example a key with value xml.out.mustBeJAXBElement is only for the XML data format for the outgoing. A key without a prefix is a common key for all situations. Map camel.rest.enable-cors Whether to enable CORS headers in the HTTP response. The default value is false. false Boolean camel.rest.endpoint-property Allows to configure as many additional properties for the rest endpoint in use. Map camel.rest.host The hostname to use for exposing the REST service. String camel.rest.host-name-resolver If no hostname has been explicit configured, then this resolver is used to compute the hostname the REST service will be using. RestHostNameResolver camel.rest.json-data-format Name of specific json data format to use. By default json-jackson will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.port The port number to use for exposing the REST service. Notice if you use servlet component then the port number configured here does not apply, as the port number in use is the actual port number the servlet component is using. eg if using Apache Tomcat its the tomcat http port, if using Apache Karaf its the HTTP service in Karaf that uses port 8181 by default etc. Though in those situations setting the port number here, allows tooling and JMX to know the port number, so its recommended to set the port number to the number that the servlet engine uses. String camel.rest.producer-api-doc Sets the location of the api document (swagger api) the REST producer will use to validate the REST uri and query parameters are valid accordingly to the api document. 
This requires adding camel-swagger-java to the classpath, and any miss configuration will let Camel fail on startup and report the error(s). The location of the api document is loaded from classpath by default, but you can use file: or http: to refer to resources to load from file or http url. String camel.rest.producer-component Sets the name of the Camel component to use as the REST producer. String camel.rest.scheme The scheme to use for exposing the REST service. Usually http or https is supported. The default value is http. String camel.rest.skip-binding-on-error-code Whether to skip binding on output if there is a custom HTTP error code header. This allows to build custom error messages that do not bind to json / xml etc, as success messages otherwise will do. false Boolean camel.rest.use-x-forward-headers Whether to use X-Forward headers for Host and related setting. The default value is true. true Boolean camel.rest.xml-data-format Name of specific XML data format to use. By default jaxb will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.api-context-id-pattern Deprecated Sets an CamelContext id pattern to only allow Rest APIs from rest services within CamelContext's which name matches the pattern. The pattern name refers to the CamelContext name, to match on the current CamelContext only. For any other value, the pattern uses the rules from PatternHelper#matchPattern(String,String). String camel.rest.api-context-listing Deprecated Sets whether listing of all available CamelContext's with REST services in the JVM is enabled. If enabled it allows to discover these contexts, if false then only the current CamelContext is in use. false Boolean
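To illustrate how these auto-configuration options are applied, the following is a minimal sketch of a Spring Boot application.properties file that sets a few of the options from the table above. The property names are taken directly from the table; the values shown are illustrative assumptions only, not recommended settings, and options you do not set keep their documented defaults.

# application.properties - illustrative values only
# Preserve leading and trailing whitespace in simple expressions
camel.language.simple.trim=false
# Tune the Resilience4j circuit breaker used by the Circuit Breaker EIP
camel.resilience4j.failure-rate-threshold=30
camel.resilience4j.minimum-number-of-calls=20
camel.resilience4j.wait-duration-in-open-state=30
# Expose the Rest DSL through the undertow component on port 8080
camel.rest.component=undertow
camel.rest.port=8080

Each property maps to a row in the table, so the same descriptions, types, and defaults apply.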
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency>", "simple(\"USD{body.address}\") simple(\"USD{body.address.street}\") simple(\"USD{body.address.zip}\")", "simple(\"USD{body.address}\") simple(\"USD{body.getAddress.getStreet}\") simple(\"USD{body.address.getZip}\") simple(\"USD{body.doSomething}\")", "simple(\"USD{body?.address?.street}\")", "simple(\"USD{body[foo].name}\")", "simple(\"USD{body['foo bar'].name}\")", "simple(\"USD{body[foo]}\") simple(\"USD{body[this.is.foo]}\")", "simple(\"USD{body[foo]?.name}\")", "simple(\"USD{body.address.lines[0]}\") simple(\"USD{body.address.lines[1]}\") simple(\"USD{body.address.lines[2]}\")", "simple(\"USD{body.address.lines[last]}\")", "simple(\"USD{body.address.lines[last-1]}\")", "simple(\"USD{body.address.lines[last-2]}\")", "simple(\"USD{body.address.lines.size}\")", "String[] lines = new String[]{\"foo\", \"bar\", \"cat\"}; exchange.getIn().setBody(lines); simple(\"There are USD{body.length} lines\")", "simple(\"USD{body.address.zip} > 1000\")", "USD{leftValue} OP rightValue", "USD{leftValue} OP rightValue && USD{leftValue} OP rightValue", "USD{leftValue} OP rightValue || USD{leftValue} OP rightValue", "// exact equals match simple(\"USD{header.foo} == 'foo'\") // ignore case when comparing, so if the header has value FOO this will match simple(\"USD{header.foo} =~ 'foo'\") // here Camel will type convert '100' into the type of header.bar and if it is an Integer '100' will also be converter to an Integer simple(\"USD{header.bar} == '100'\") simple(\"USD{header.bar} == 100\") // 100 will be converter to the type of header.bar so we can do > comparison simple(\"USD{header.bar} > 100\")", "simple(\"100 < USD{header.bar}\")", "// testing for null simple(\"USD{header.baz} == null\") // testing for not null simple(\"USD{header.baz} != null\")", "simple(\"USD{header.date} == USD{date:now:yyyyMMdd}\") simple(\"USD{header.type} == USD{bean:orderService?method=getOrderType}\")", "simple(\"USD{header.title} contains 'Camel'\")", "simple(\"USD{header.number} regex '\\\\d{4}'\")", "simple(\"USD{header.type} in 'gold,silver'\")", "simple(\"USD{header.type} !in 'gold,silver'\")", "simple(\"USD{header.type} is 'java.lang.String'\")", "simple(\"USD{header.type} is 'String'\")", "simple(\"USD{header.number} range 100..199\")", "simple(\"USD{header.number} range '100..199'\")", "<from uri=\"seda:orders\"> <filter> <simple>USD{header.type} == 'widget'</simple> <to uri=\"bean:orderService?method=handleWidget\"/> </filter> </from>", "simple(\"USD{header.title} contains 'Camel' && USD{header.type'} == 'gold'\")", "simple(\"USD{header.title} contains 'Camel' || USD{header.type'} == 'gold'\")", "<from uri=\"seda:orders\"> <filter> <simple>USD{header.foo}</simple> <to uri=\"mock:fooOrders\"/> </filter> </from>", "from(\"seda:orders\") .filter().simple(\"USD{header.foo}\") .to(\"seda:fooOrders\");", "from(\"direct:hello\") .transform().simple(\"Hello USD{header.user} how are you?\") .to(\"mock:reply\");", "from(\"direct:hello\") .transform().simple(\"The today is USD{date:now:yyyyMMdd} and it is a great day.\") .to(\"mock:reply\");", "from(\"direct:order\") .transform().simple(\"OrderId: USD{bean:orderIdGenerator}\") .to(\"mock:reply\");", "from(\"direct:order\") .transform().simple(\"OrderId: USD{bean:orderIdGenerator.generateId}\") .to(\"mock:reply\");", "from(\"direct:order\") .transform().simple(\"OrderId: USD{bean:orderIdGenerator?method=generateId}\") .to(\"mock:reply\");", "<transform> 
<simple>Hello USD{bodyAs(String)} how are you?</simple> </transform>", "<transform> <simple>The gold value is USD{header.type[gold]}</simple> </transform>", "<setHeader name=\"myHeader\"> <simple>USD{properties:USD{header.someKey}}</simple> </setHeader>", ".setHeader(\"cool\", simple(\"true\", Boolean.class))", "<setHeader name=\"cool\"> <!-- use resultType to indicate that the type should be a java.lang.Boolean --> <simple resultType=\"java.lang.Boolean\">true</simple> </setHeader>", "<transform> <simple>The following text\\nis on a new line</simple> </transform>", "<setBody> <simple trim=\"false\">You get some trailing whitespace characters. </simple> </setBody>", ".setHeader(\"myHeader\").simple(\"resource:classpath:mysimple.txt\")" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-simple-language-starter
6.10. Devices
6.10. Devices kernel component When using a large block size (1MB), the tape driver sometimes returns an EBUSY error. To work around this problem, use a smaller block size, that is, 256KB. kernel component On some of the older Broadcom tg3 devices, the default Maximum Read Request Size (MRRS) value of 512 bytes is known to cause lower performance. This is because these devices perform direct memory access (DMA) requests serially. A 1500-byte Ethernet packet will be broken into 3 PCIe read requests when using a 512-byte MRRS. When using a higher MRRS value, the DMA transfer can be faster because fewer requests are needed. However, the MRRS value is meant to be tuned by system software and not by the driver. PCIe Base spec 3.0 section 7.8.4 contains an implementation note that illustrates how system software might tune the MRRS for all devices in the system. As a result, Broadcom modified the tg3 driver to remove the code that sets the MRRS to 4K bytes so that any value selected by system software (BIOS) is preserved. kernel component The Brocade BFA Fibre Channel and FCoE driver does not currently support dynamic recognition of Logical Unit addition or removal using the sg3_utils utilities (for example, the sg_scan command) or similar functionality. Please consult Brocade directly for a Brocade equivalent of this functionality. kexec-tools component Starting with Red Hat Enterprise Linux 6.0, kexec kdump supports dumping core to the Btrfs file system. However, note that because the findfs utility in busybox does not support Btrfs yet, UUID/LABEL resolving is not functional. Avoid using the UUID/LABEL syntax when dumping core to Btrfs file systems. trace-cmd component The trace-cmd service does not start on 64-bit PowerPC and IBM System z systems because the sys_enter and sys_exit events do not get enabled on those systems. trace-cmd component The trace-cmd subcommand report does not work on IBM System z systems. This is because the CONFIG_FTRACE_SYSCALLS parameter is not set on IBM System z systems. libfprint component Red Hat Enterprise Linux 6 only has support for the first revision of the UPEK Touchstrip fingerprint reader (USB ID 147e:2016). Attempting to use a second revision device may cause the fingerprint reader daemon to crash. The following command returns the version of the device being used in an individual machine: kernel component The Emulex Fibre Channel/Fibre Channel-over-Ethernet (FCoE) driver in Red Hat Enterprise Linux 6 does not support DH-CHAP authentication. DH-CHAP authentication provides secure access between hosts and mass storage in Fibre Channel and FCoE SANs in compliance with the FC-SP specification. Note, however, that the Emulex driver ( lpfc ) does support DH-CHAP authentication on Red Hat Enterprise Linux 5, from version 5.4. Future Red Hat Enterprise Linux 6 releases may include DH-CHAP authentication. kernel component The recommended minimum HBA firmware revision for use with the mpt2sas driver is "Phase 5 firmware" (that is, with a version number in the form 05.xx.xx.xx ). Note that following this recommendation is especially important on complex SAS configurations involving multiple SAS expanders.
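The tape block size workaround and the MRRS note above can be illustrated with a couple of commands. This is only a sketch: the tape device /dev/st0 and the PCI address are placeholders for your own hardware, and applications that manage the tape drive themselves may set the block size on their own.

# Set the tape drive to a 256 KB (262144-byte) block size instead of 1 MB
mt -f /dev/st0 setblk 262144
# Check the Maximum Read Request Size currently programmed for a PCIe NIC
lspci -vv -s 02:00.0 | grep MaxReadReq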
[ "~]USD lsusb -v -d 147e:2016 | grep bcdDevice" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/devices_issues
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/high_availability_deployment_and_usage/proc_providing-feedback-on-red-hat-documentation
4.6. Configuring a Watchdog
4.6. Configuring a Watchdog 4.6.1. Adding a Watchdog Card to a Virtual Machine You can add a watchdog card to a virtual machine to monitor the operating system's responsiveness. Procedure Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the High Availability tab. Select the watchdog model to use from the Watchdog Model drop-down list. Select an action from the Watchdog Action drop-down list. This is the action that the virtual machine takes when the watchdog is triggered. Click OK . 4.6.2. Installing a Watchdog To activate a watchdog card attached to a virtual machine, you must install the watchdog package on that virtual machine and start the watchdog service. Installing Watchdogs Log in to the virtual machine to which the watchdog card is attached. Install the watchdog package and dependencies: # yum install watchdog Edit the /etc/watchdog.conf file and uncomment the following line: watchdog-device = /dev/watchdog Save the changes. Start the watchdog service and ensure that this service starts on boot: Red Hat Enterprise Linux 6: # service watchdog start # chkconfig watchdog on Red Hat Enterprise Linux 7: # systemctl start watchdog.service # systemctl enable watchdog.service 4.6.3. Confirming Watchdog Functionality Confirm that a watchdog card has been attached to a virtual machine and that the watchdog service is active. Warning This procedure is provided for testing the functionality of watchdogs only and must not be run on production machines. Confirming Watchdog Functionality Log in to the virtual machine to which the watchdog card is attached. Confirm that the watchdog card has been identified by the virtual machine: # lspci | grep watchdog -i Run one of the following commands to confirm that the watchdog is active: Trigger a kernel panic: # echo c > /proc/sysrq-trigger Terminate the watchdog service: # kill -9 `pgrep watchdog` The watchdog timer can no longer be reset, so the watchdog counter reaches zero after a short period of time. When the watchdog counter reaches zero, the action specified in the Watchdog Action drop-down menu for that virtual machine is performed. 4.6.4. Parameters for Watchdogs in watchdog.conf The following is a list of options for configuring the watchdog service available in the /etc/watchdog.conf file. To configure an option, you must uncomment that option and restart the watchdog service after saving the changes. Note For a more detailed explanation of options for configuring the watchdog service and using the watchdog command, see the watchdog man page. Table 4.2. watchdog.conf variables Variable name Default Value Remarks ping N/A An IP address that the watchdog attempts to ping to verify whether that address is reachable. You can specify multiple IP addresses by adding additional ping lines. interface N/A A network interface that the watchdog will monitor to verify the presence of network traffic. You can specify multiple network interfaces by adding additional interface lines. file /var/log/messages A file on the local system that the watchdog will monitor for changes. You can specify multiple files by adding additional file lines. change 1407 The number of watchdog intervals after which the watchdog checks for changes to files. A change line must be specified on the line directly after each file line, and applies to the file line directly above that change line. max-load-1 24 The maximum average load that the virtual machine can sustain over a one-minute period. If this average is exceeded, then the watchdog is triggered.
A value of 0 disables this feature. max-load-5 18 The maximum average load that the virtual machine can sustain over a five-minute period. If this average is exceeded, then the watchdog is triggered. A value of 0 disables this feature. By default, the value of this variable is set to a value approximately three quarters that of max-load-1 . max-load-15 12 The maximum average load that the virtual machine can sustain over a fifteen-minute period. If this average is exceeded, then the watchdog is triggered. A value of 0 disables this feature. By default, the value of this variable is set to a value approximately one half that of max-load-1 . min-memory 1 The minimum amount of virtual memory that must remain free on the virtual machine. This value is measured in pages. A value of 0 disables this feature. repair-binary /usr/sbin/repair The path and file name of a binary file on the local system that will be run when the watchdog is triggered. If the specified file resolves the issues preventing the watchdog from resetting the watchdog counter, then the watchdog action is not triggered. test-binary N/A The path and file name of a binary file on the local system that the watchdog will attempt to run during each interval. A test binary allows you to specify a file for running user-defined tests. test-timeout N/A The time limit, in seconds, for which user-defined tests can run. A value of 0 allows user-defined tests to continue for an unlimited duration. temperature-device N/A The path to and name of a device for checking the temperature of the machine on which the watchdog service is running. max-temperature 120 The maximum allowed temperature for the machine on which the watchdog service is running. The machine will be halted if this temperature is reached. Unit conversion is not taken into account, so you must specify a value that matches the watchdog card being used. admin root The email address to which email notifications are sent. interval 10 The interval, in seconds, between updates to the watchdog device. The watchdog device expects an update at least once every minute, and if there are no updates over a one-minute period, then the watchdog is triggered. This one-minute period is hard-coded into the drivers for the watchdog device, and cannot be configured. logtick 1 When verbose logging is enabled for the watchdog service, the watchdog service periodically writes log messages to the local system. The logtick value represents the number of watchdog intervals after which a message is written. realtime yes Specifies whether the watchdog is locked in memory. A value of yes locks the watchdog in memory so that it is not swapped out of memory, while a value of no allows the watchdog to be swapped out of memory. If the watchdog is swapped out of memory and is not swapped back in before the watchdog counter reaches zero, then the watchdog is triggered. priority 1 The schedule priority when the value of realtime is set to yes . pidfile /var/run/syslogd.pid The path and file name of a PID file that the watchdog monitors to see if the corresponding process is still active. If the corresponding process is not active, then the watchdog is triggered.
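As a summary of the table above, the following is a minimal sketch of an /etc/watchdog.conf file that enables a few of these options. The IP address and interface name are placeholders and the values are illustrative only; uncomment and adjust just the options you need, and restart the watchdog service after saving the changes.

# /etc/watchdog.conf - illustrative values only
watchdog-device = /dev/watchdog
# Trigger the watchdog if this address stops answering pings
ping = 192.168.1.254
# Monitor this interface for network traffic
interface = eth0
# Trigger the watchdog if the one-minute load average exceeds 24
max-load-1 = 24
# Send notification emails to this address
admin = root
# Update the watchdog device every 10 seconds
interval = 10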
[ "yum install watchdog", "watchdog-device = /dev/watchdog", "service watchdog start chkconfig watchdog on", "systemctl start watchdog.service systemctl enable watchdog.service", "lspci | grep watchdog -i", "echo c > /proc/sysrq-trigger", "kill -9 pgrep watchdog" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-configuring_a_watchdog
19.3. Security
19.3. Security Security Guide The Security Guide is designed to assist users and administrators in learning the processes and practices of securing workstations and servers against local and remote intrusion, exploitation, and malicious activity. SELinux User's and Administrator's Guide The SELinux User's and Administrator's Guide covers the management and use of Security-Enhanced Linux. Note that managing confined services, which was documented in a stand-alone book in Red Hat Enterprise Linux 6, is now part of the SELinux User's and Administrator's Guide.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-documentation-security
Chapter 13. Logging, events, and monitoring
Chapter 13. Logging, events, and monitoring 13.1. Reviewing Virtualization Overview The Virtualization Overview page provides a comprehensive view of virtualization resources, details, status, and top consumers. By gaining an insight into the overall health of OpenShift Virtualization, you can determine if intervention is required to resolve specific issues identified by examining the data. Use the Getting Started resources to access quick starts, read the latest blogs on virtualization, and learn how to use operators. Obtain complete information about alerts, events, inventory, and status of virtual machines. Customize the Top Consumer cards to obtain data on high utilization of a specific resource by projects, virtual machines, or nodes. Click View virtualization dashboard for quick access to the Dashboards page. 13.1.1. Prerequisites To use the vCPU wait metric in the Top Consumers card, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. See the OpenShift Container Platform machine configuration tasks documentation for more information on applying a kernel argument. 13.1.2. Resources monitored actively in the Virtualization Overview page The following table shows actively monitored resources, metrics, and fields in the Virtualization Overview page. This information is useful when you need to obtain relevant data and intervene to resolve a problem. Monitored resources, fields, and metrics Description Details A brief overview of service and version information for OpenShift Virtualization . Status Alerts for virtualization and networking. Activity Ongoing events for virtual machines. Messages are related to recent activity in the cluster, such as pod creation or virtual machine migration to another host. Running VMs by Template The donut chart displays a unique color for each virtual machine template and shows the number of running virtual machines that use each template. Inventory Total number of active virtual machines, templates, nodes, and networks. Status of VMs Current status of virtual machines: running , provisioning , starting , migrating , paused , stopping , terminating , and unknown . Permissions Tasks for which capabilities are enabled through permissions: Access to public templates , Access to public boot sources , Clone a VM , Attach VM to multiple networks , Upload a base image from local disk , and Share templates . 13.1.3. Resources monitored for top consumption The Top Consumers cards in Virtualization Overview page display projects, virtual machines or nodes with maximum consumption of a resource. You can select a project, a virtual machine, or a node and view the top five or top ten consumers of a specific resource. Note Viewing the maximum resource consumption is limited to the top five or top ten consumers within each Top Consumers card. The following table shows resources monitored for top consumers. Resources monitored for top consumption Description CPU Projects, virtual machines, or nodes consuming the most CPU. Memory Projects, virtual machines, or nodes consuming the most memory (in bytes). The unit of display (for example, MiB or GiB) is determined by the size of the resource consumption. Used filesystem Projects, virtual machines, or nodes with the highest consumption of filesystems (in bytes). 
The unit of display (for example, MiB or GiB) is determined by the size of the resource consumption. Memory swap Projects, virtual machines, or nodes consuming the most memory pressure when memory is swapped . vCPU wait Projects, virtual machines, or nodes experiencing the maximum wait time (in seconds) for the vCPUs. Storage throughput Projects, virtual machines, or nodes with the highest data transfer rate to and from the storage media (in mbps). Storage IOPS Projects, virtual machines, or nodes with the highest amount of storage IOPS (input/output operations per second) over a time period. 13.1.4. Reviewing top consumers for projects, virtual machines, and nodes You can view the top consumers of resources for a selected project, virtual machine, or node in the Virtualization Overview page. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective in the OpenShift Virtualization web console, navigate to Virtualization Overview . Navigate to the Top Consumers cards. From the drop-down menu, select Show top 5 or Show top 10 . For a Top Consumer card, select the type of resource from the drop-down menu: CPU , Memory , Used Filesystem , Memory Swap , vCPU Wait , or Storage Throughput . Select By Project , By VM , or By Node . A list of the top five or top ten consumers of the selected resource is displayed. 13.1.5. Additional resources Monitoring overview Reviewing monitoring dashboards Dashboards 13.2. Viewing virtual machine logs 13.2.1. About virtual machine logs Logs are collected for OpenShift Container Platform builds, deployments, and pods. In OpenShift Virtualization, virtual machine logs can be retrieved from the virtual machine launcher pod in either the web console or the CLI. The -f option follows the log output in real time, which is useful for monitoring progress and error checking. If the launcher pod is failing to start, use the -- option to see the logs of the last attempt. Warning ErrImagePull and ImagePullBackOff errors can be caused by an incorrect deployment configuration or problems with the images that are referenced. 13.2.2. Viewing virtual machine logs in the CLI Get virtual machine logs from the virtual machine launcher pod. Procedure Use the following command: USD oc logs <virt-launcher-name> 13.2.3. Viewing virtual machine logs in the web console Get virtual machine logs from the associated virtual machine launcher pod. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the virt-launcher-<name> pod in the Pod section to open the Pod details page. Click the Logs tab to view the pod logs. 13.3. Viewing events 13.3.1. About virtual machine events OpenShift Container Platform events are records of important life-cycle information in a namespace and are useful for monitoring and troubleshooting resource scheduling, creation, and deletion issues. OpenShift Virtualization adds events for virtual machines and virtual machine instances. These can be viewed from either the web console or the CLI. See also: Viewing system event information in an OpenShift Container Platform cluster . 13.3.2. Viewing the events for a virtual machine in the web console You can view streaming events for a running virtual machine on the VirtualMachine details page of the web console. Procedure Click Virtualization VirtualMachines from the side menu. 
Select a virtual machine to open the VirtualMachine details page. Click the Events tab to view streaming events for the virtual machine. The ▮▮ button pauses the events stream. The ▶ button resumes a paused events stream. 13.3.3. Viewing namespace events in the CLI Use the OpenShift Container Platform client to get the events for a namespace. Procedure In the namespace, use the oc get command: USD oc get events 13.3.4. Viewing resource events in the CLI Events are included in the resource description, which you can get using the OpenShift Container Platform client. Procedure In the namespace, use the oc describe command. The following example shows how to get the events for a virtual machine, a virtual machine instance, and the virt-launcher pod for a virtual machine: USD oc describe vm <vm> USD oc describe vmi <vmi> USD oc describe pod virt-launcher-<name> 13.4. Diagnosing data volumes using events and conditions Use the oc describe command to analyze and help resolve issues with data volumes. 13.4.1. About conditions and events Diagnose data volume issues by examining the output of the Conditions and Events sections generated by the command: USD oc describe dv <DataVolume> There are three Types in the Conditions section that display: Bound Running Ready The Events section provides the following additional information: Type of event Reason for logging Source of the event Message containing additional diagnostic information. The output from oc describe does not always contains Events . An event is generated when either Status , Reason , or Message changes. Both conditions and events react to changes in the state of the data volume. For example, if you misspell the URL during an import operation, the import generates a 404 message. That message change generates an event with a reason. The output in the Conditions section is updated as well. 13.4.2. Analyzing data volumes using conditions and events By inspecting the Conditions and Events sections generated by the describe command, you determine the state of the data volume in relation to persistent volume claims (PVCs), and whether or not an operation is actively running or completed. You might also receive messages that offer specific details about the status of the data volume, and how it came to be in its current state. There are many different combinations of conditions. Each must be evaluated in its unique context. Examples of various combinations follow. Bound - A successfully bound PVC displays in this example. Note that the Type is Bound , so the Status is True . If the PVC is not bound, the Status is False . When the PVC is bound, an event is generated stating that the PVC is bound. In this case, the Reason is Bound and Status is True . The Message indicates which PVC owns the data volume. Message , in the Events section, provides further details including how long the PVC has been bound ( Age ) and by what resource ( From ), in this case datavolume-controller : Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound Running - In this case, note that Type is Running and Status is False , indicating that an event has occurred that caused an attempted operation to fail, changing the Status from True to False . 
However, note that Reason is Completed and the Message field indicates Import Complete . In the Events section, the Reason and Message contain additional troubleshooting information about the failed operation. In this example, the Message displays an inability to connect due to a 404 , listed in the Events section's first Warning . From this information, you conclude that an import operation was running, creating contention for other operations that are attempting to access the data volume: Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found Ready - If Type is Ready and Status is True , then the data volume is ready to be used, as in the following example. If the data volume is not ready to be used, the Status is False : Example output Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready 13.5. Viewing information about virtual machine workloads You can view high-level information about your virtual machines by using the Virtual Machines dashboard in the OpenShift Container Platform web console. 13.5.1. About the Virtual Machines dashboard Access virtual machines (VMs) from the OpenShift Container Platform web console by navigating to the Virtualization VirtualMachines page and clicking a virtual machine (VM) to view the VirtualMachine details page. The Overview tab displays the following cards: Details provides identifying information about the virtual machine, including: Name Namespace Date of creation Node name IP address Inventory lists the virtual machine's resources, including: Network interface controllers (NICs) Disks Status includes: The current status of the virtual machine A note indicating whether or not the QEMU guest agent is installed on the virtual machine Utilization includes charts that display usage data for: CPU Memory Filesystem Network transfer Note Use the drop-down list to choose a duration for the utilization data. The available options are 1 Hour , 6 Hours , and 24 Hours . Events lists messages about virtual machine activity over the past hour. To view additional events, click View all . 13.6. Monitoring virtual machine health A virtual machine instance (VMI) can become unhealthy due to transient issues such as connectivity loss, deadlocks, or problems with external dependencies. A health check periodically performs diagnostics on a VMI by using any combination of the readiness and liveness probes. 13.6.1. About readiness and liveness probes Use readiness and liveness probes to detect and handle unhealthy virtual machine instances (VMIs). You can include one or more probes in the specification of the VMI to ensure that traffic does not reach a VMI that is not ready for it and that a new instance is created when a VMI becomes unresponsive. A readiness probe determines whether a VMI is ready to accept service requests. If the probe fails, the VMI is removed from the list of available endpoints until the VMI is ready. A liveness probe determines whether a VMI is responsive. If the probe fails, the VMI is deleted and a new instance is created to restore responsiveness. 
You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachineInstance object. These fields support the following tests: HTTP GET The probe determines the health of the VMI by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized. TCP socket The probe attempts to open a socket to the VMI. The VMI is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete. 13.6.2. Defining an HTTP readiness probe Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine instance (VMI) configuration. Procedure Include details of the readiness probe in the VMI configuration file. Sample readiness probe with an HTTP GET test # ... spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8 # ... 1 The HTTP GET request to perform to connect to the VMI. 2 The port of the VMI that the probe queries. In the above example, the probe queries port 1500. 3 The path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VMI is considered to be healthy. If the handler returns a failure code, the VMI is removed from the list of available endpoints. 4 The time, in seconds, after the VMI starts before the readiness probe is initiated. 5 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 6 The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . 7 The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready . 8 The number of times that the probe must report success, after a failure, to be considered successful. The default is 1. Create the VMI by running the following command: USD oc create -f <file_name>.yaml 13.6.3. Defining a TCP readiness probe Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine instance (VMI) configuration. Procedure Include details of the TCP readiness probe in the VMI configuration file. Sample readiness probe with a TCP socket test ... spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5 ... 1 The time, in seconds, after the VMI starts before the readiness probe is initiated. 2 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 3 The TCP action to perform. 4 The port of the VMI that the probe queries. 5 The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . Create the VMI by running the following command: USD oc create -f <file_name>.yaml 13.6.4. 
Defining an HTTP liveness probe Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine instance (VMI) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test. Procedure Include details of the HTTP liveness probe in the VMI configuration file. Sample liveness probe with an HTTP GET test # ... spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6 # ... 1 The time, in seconds, after the VMI starts before the liveness probe is initiated. 2 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 3 The HTTP GET request to perform to connect to the VMI. 4 The port of the VMI that the probe queries. In the above example, the probe queries port 1500. The VMI installs and runs a minimal HTTP server on port 1500 via cloud-init. 5 The path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VMI is considered to be healthy. If the handler returns a failure code, the VMI is deleted and a new instance is created. 6 The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . Create the VMI by running the following command: USD oc create -f <file_name>.yaml 13.6.5. Template: Virtual machine configuration file for defining health checks apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-fedora name: vm-fedora spec: template: metadata: labels: special: vm-fedora spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M readinessProbe: httpGet: port: 1500 initialDelaySeconds: 120 periodSeconds: 20 timeoutSeconds: 10 failureThreshold: 3 successThreshold: 3 terminationGracePeriodSeconds: 180 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-registry-disk-demo - cloudInitNoCloud: userData: |- #cloud-config password: fedora chpasswd: { expire: False } bootcmd: - setenforce 0 - dnf install -y nmap-ncat - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\n\\nHello World!' name: cloudinitdisk 13.6.6. Additional resources Monitoring application health by using health checks 13.7. Using the OpenShift Container Platform dashboard to get cluster information Access the OpenShift Container Platform dashboard, which captures high-level information about the cluster, by clicking Home > Dashboards > Overview from the OpenShift Container Platform web console. The OpenShift Container Platform dashboard provides various cluster information, captured in individual dashboard cards . 13.7.1. About the OpenShift Container Platform dashboards page The OpenShift Container Platform dashboard consists of the following cards: Details provides a brief overview of informational cluster details. Status include ok , error , warning , in progress , and unknown . Resources can add custom status names. Cluster ID Provider Version Cluster Inventory details number of resources and associated statuses. 
It is helpful when intervention is required to resolve problems, including information about: Number of nodes Number of pods Persistent storage volume claims Virtual machines (available if OpenShift Virtualization is installed) Bare metal hosts in the cluster, listed according to their state (only available in metal3 environment). Cluster Health summarizes the current health of the cluster as a whole, including relevant alerts and descriptions. If OpenShift Virtualization is installed, the overall health of OpenShift Virtualization is diagnosed as well. If more than one subsystem is present, click See All to view the status of each subsystem. Cluster Capacity charts help administrators understand when additional resources are required in the cluster. The charts contain an inner ring that displays current consumption, while an outer ring displays thresholds configured for the resource, including information about: CPU time Memory allocation Storage consumed Network resources consumed Cluster Utilization shows the capacity of various resources over a specified period of time, to help administrators understand the scale and frequency of high resource consumption. Events lists messages related to recent activity in the cluster, such as pod creation or virtual machine migration to another host. Top Consumers helps administrators understand how cluster resources are consumed. Click on a resource to jump to a detailed page listing pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage). 13.8. Reviewing resource usage by virtual machines Dashboards in the OpenShift Container Platform web console provide visual representations of cluster metrics to help you to quickly understand the state of your cluster. Dashboards belong to the Monitoring overview that provides monitoring for core platform components. The OpenShift Virtualization dashboard provides data on resource consumption for virtual machines and associated pods. The visualization metrics displayed in the OpenShift Virtualization dashboard are based on Prometheus Query Language (PromQL) queries . A monitoring role is required to monitor user-defined namespaces in the OpenShift Virtualization dashboard. 13.8.1. About reviewing top consumers In the OpenShift Virtualization dashboard, you can select a specific time period and view the top consumers of resources within that time period. Top consumers are virtual machines or virt-launcher pods that are consuming the highest amount of resources. The following table shows resources monitored in the dashboard and describes the metrics associated with each resource for top consumers. Monitored resources Description Memory swap traffic Virtual machines consuming the most memory pressure when swapping memory. vCPU wait Virtual machines experiencing the maximum wait time (in seconds) for their vCPUs. CPU usage by pod The virt-launcher pods that are using the most CPU. Network traffic Virtual machines that are saturating the network by receiving the most amount of network traffic (in bytes). Storage traffic Virtual machines with the highest amount (in bytes) of storage-related traffic. Storage IOPS Virtual machines with the highest amount of I/O operations per second over a time period. Memory usage The virt-launcher pods that are using the most memory (in bytes). Note Viewing the maximum resource consumption is limited to the top five consumers. 13.8.2. 
Reviewing top consumers In the Administrator perspective, you can view the OpenShift Virtualization dashboard where top consumers of resources are displayed. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective in the OpenShift Virtualization web console, navigate to Observe Dashboards . Select the KubeVirt/Infrastructure Resources/Top Consumers dashboard from the Dashboard list. Select a predefined time period from the drop-down menu for Period. You can review the data for top consumers in the tables. Optional: Click Inspect to view or edit the Prometheus Query Language (PromQL) query associated with the top consumers for a table. 13.8.3. Additional resources Monitoring overview Reviewing monitoring dashboards 13.9. OpenShift Container Platform cluster monitoring, logging, and Telemetry OpenShift Container Platform provides various resources for monitoring at the cluster level. 13.9.1. About OpenShift Container Platform monitoring OpenShift Container Platform includes a pre-configured, pre-installed, and self-updating monitoring stack that provides monitoring for core platform components . OpenShift Container Platform delivers monitoring best practices out of the box. A set of alerts are included by default that immediately notify cluster administrators about issues with a cluster. Default dashboards in the OpenShift Container Platform web console include visual representations of cluster metrics to help you to quickly understand the state of your cluster. After installing OpenShift Container Platform 4.10, cluster administrators can optionally enable monitoring for user-defined projects . By using this feature, cluster administrators, developers, and other users can specify how services and pods are monitored in their own projects. You can then query metrics, review dashboards, and manage alerting rules and silences for your own projects in the OpenShift Container Platform web console. Note Cluster administrators can grant developers and other users permission to monitor their own projects. Privileges are granted by assigning one of the predefined monitoring roles. 13.9.2. About logging subsystem components The logging subsystem components include a collector deployed to each node in the OpenShift Container Platform cluster that collects all node and container logs and writes them to a log store. You can use a centralized web UI to create rich visualizations and dashboards with the aggregated data. The major components of the logging subsystem are: collection - This is the component that collects logs from the cluster, formats them, and forwards them to the log store. The current implementation is Fluentd. log store - This is where the logs are stored. The default implementation is Elasticsearch. You can use the default Elasticsearch log store or forward logs to external log stores. The default log store is optimized and tested for short-term storage. visualization - This is the UI component you can use to view logs, graphs, charts, and so forth. The current implementation is Kibana. For more information on OpenShift Logging, see the OpenShift Logging documentation. 13.9.3. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document. 
This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Container Platform upgrades to customers to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Container Platform better and more intuitive to use. 13.9.3.1. Information collected by Telemetry The following information is collected by Telemetry: 13.9.3.1.1. System information Version information, including the OpenShift Container Platform cluster version and installed update details that are used to determine update version availability Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update The unique random identifier that is generated during an installation Configuration details that help Red Hat Support to provide beneficial support for customers, including node configuration at the cloud infrastructure level, hostnames, IP addresses, Kubernetes pod names, namespaces, and services The OpenShift Container Platform framework components installed in a cluster and their condition and status Events for all namespaces listed as "related objects" for a degraded Operator Information about degraded software Information about the validity of certificates The name of the provider platform that OpenShift Container Platform is deployed on and the data center location 13.9.3.1.2. Sizing Information Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each The number of running virtual machine instances in a cluster The number of etcd members and the number of objects stored in the etcd cluster Number of application builds by build strategy type 13.9.3.1.3. Usage information Usage information about components, features, and extensions Usage details about Technology Previews and unsupported configurations Telemetry does not collect identifying information such as usernames or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the Red Hat Privacy Statement for more information about Red Hat's privacy practices. 13.9.4. CLI troubleshooting and debugging commands For a list of the oc client troubleshooting and debugging commands, see the OpenShift Container Platform CLI tools documentation. 13.10. Prometheus queries for virtual resources OpenShift Virtualization provides metrics for monitoring how infrastructure resources are consumed in the cluster. The metrics cover the following resources: vCPU Network Storage Guest memory swapping Use the OpenShift Container Platform monitoring dashboard to query virtualization metrics. 13.10.1. Prerequisites To use the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. 
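A minimal sketch of how the kernel argument can be applied follows, assuming the virtual machines are scheduled on worker nodes; the resource name 99-worker-schedstats is illustrative, not a required value. Note that the Machine Config Operator reboots the matching nodes in a rolling fashion when this object is created.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-schedstats   # illustrative name; any unique name works
  labels:
    machineconfiguration.openshift.io/role: worker   # adjust the role if your VMs run on other nodes
spec:
  kernelArguments:
    - schedstats=enable   # enables the scheduler statistics used by the vCPU wait metric

Apply the object with USD oc create -f <file_name>.yaml and wait for the affected machine config pool to finish updating before querying the vCPU metric.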
See the OpenShift Container Platform machine configuration tasks documentation for more information on applying a kernel argument. For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests. 13.10.2. Querying metrics The OpenShift Container Platform monitoring dashboard enables you to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring. As a cluster administrator , you can query metrics for all core OpenShift Container Platform and user-defined projects. As a developer , you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project. 13.10.2.1. Querying metrics for all projects as a cluster administrator As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI. Note Only cluster administrators have access to the third-party UIs provided with OpenShift Container Platform Monitoring. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective within the OpenShift Container Platform web console, select Observe Metrics . Select Insert Metric at Cursor to view a list of predefined queries. To create a custom query, add your Prometheus Query Language (PromQL) query to the Expression field. To add multiple queries, select Add Query . To delete a query, select to the query, then choose Delete query . To disable a query from being run, select to the query and choose Disable query . Select Run Queries to run the queries that you have created. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message. Note Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs. Optional: The page URL now contains the queries you ran. To use this set of queries again in the future, save this URL. Additional resources See the Prometheus query documentation for more information about creating PromQL queries. 13.10.2.2. Querying metrics for user-defined projects as a developer You can access metrics for a user-defined project as a developer or as a user with view permissions for the project. In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project. Note Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time. Developers cannot access the third-party UIs provided with OpenShift Container Platform monitoring that are for core platform components. Instead, use the Metrics UI for your user-defined project. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. 
You have enabled monitoring for user-defined projects. You have deployed a service in a user-defined project. You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored. Procedure From the Developer perspective in the OpenShift Container Platform web console, select Observe Metrics . Select the project that you want to view metrics for in the Project: list. Choose a query from the Select Query list, or run a custom PromQL query by selecting Show PromQL . Note In the Developer perspective, you can only run one query at a time. Additional resources See the Prometheus query documentation for more information about creating PromQL queries. 13.10.3. Virtualization metrics The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. Note The following examples use topk queries that specify a time period. If virtual machines are deleted during that time period, they can still appear in the query output. 13.10.3.1. vCPU metrics The following query can identify virtual machines that are waiting for Input/Output (I/O): kubevirt_vmi_vcpu_wait_seconds Returns the wait time (in seconds) for a virtual machine's vCPU. A value above '0' means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O. Note To query the vCPU metric, the schedstats=enable kernel argument must first be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. Example vCPU wait time query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0 1 1 This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period. 13.10.3.2. Network metrics The following queries can identify virtual machines that are saturating the network: kubevirt_vmi_network_receive_bytes_total Returns the total amount of traffic received (in bytes) on the virtual machine's network. kubevirt_vmi_network_transmit_bytes_total Returns the total amount of traffic transmitted (in bytes) on the virtual machine's network. Example network traffic query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1 1 This query returns the top 3 VMs transmitting the most network traffic at every given moment over a six-minute time period. 13.10.3.3. Storage metrics 13.10.3.3.1. Storage-related traffic The following queries can identify VMs that are writing large amounts of data: kubevirt_vmi_storage_read_traffic_bytes_total Returns the total amount (in bytes) of the virtual machine's storage-related traffic. kubevirt_vmi_storage_write_traffic_bytes_total Returns the total amount of storage writes (in bytes) of the virtual machine's storage-related traffic. Example storage-related traffic query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1 1 This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period. 13.10.3.3.2. 
I/O performance The following queries can determine the I/O performance of storage devices: kubevirt_vmi_storage_iops_read_total Returns the amount of read I/O operations the virtual machine is performing per second. kubevirt_vmi_storage_iops_write_total Returns the amount of write I/O operations the virtual machine is performing per second. Example I/O performance query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1 1 This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period. 13.10.3.4. Guest memory swapping metrics The following queries can identify which swap-enabled guests are performing the most memory swapping: kubevirt_vmi_memory_swap_in_traffic_bytes_total Returns the total amount (in bytes) of memory the virtual guest is swapping in. kubevirt_vmi_memory_swap_out_traffic_bytes_total Returns the total amount (in bytes) of memory the virtual guest is swapping out. Example memory swapping query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0 1 1 This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period. Note Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue. 13.10.4. Additional resources Monitoring overview 13.11. Exposing custom metrics for virtual machines OpenShift Container Platform includes a pre-configured, pre-installed, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics. In addition to using the OpenShift Container Platform monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service. 13.11.1. Configuring the node exporter service The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines. Prerequisites Install the OpenShift Container Platform CLI oc . Log in to the cluster as a user with cluster-admin privileges. Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project. Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project by setting enableUserWorkload to true . Procedure Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml . kind: Service apiVersion: v1 metadata: name: node-exporter-service 1 namespace: dynamation 2 labels: servicetype: metrics 3 spec: ports: - name: exmet 4 protocol: TCP port: 9100 5 targetPort: 9100 6 type: ClusterIP selector: monitor: metrics 7 1 The node-exporter service that exposes the metrics from the virtual machines. 2 The namespace where the service is created. 3 The label for the service. The ServiceMonitor uses this label to match this service.
4 The name given to the port that exposes metrics on port 9100 for the ClusterIP service. 5 The target port used by node-exporter-service to listen for requests. 6 The TCP port number of the virtual machine that is configured with the monitor label. 7 The label used to match the virtual machine's pods. In this example, any virtual machine's pod with the label monitor and a value of metrics will be matched. Create the node-exporter service: USD oc create -f node-exporter-service.yaml 13.11.2. Configuring a virtual machine with the node exporter service Download the node-exporter file on to the virtual machine. Then, create a systemd service that runs the node-exporter service when the virtual machine boots. Prerequisites The pods for the component are running in the openshift-user-workload-monitoring project. Grant the monitoring-edit role to users who need to monitor this user-defined project. Procedure Log on to the virtual machine. Download the node-exporter file on to the virtual machine by using the directory path that applies to the version of node-exporter file. USD wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz Extract the executable and place it in the /usr/bin directory. USD sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz \ --directory /usr/bin --strip 1 "*/node_exporter" Create a node_exporter.service file in this directory path: /etc/systemd/system . This systemd service file runs the node-exporter service when the virtual machine reboots. [Unit] Description=Prometheus Metrics Exporter After=network.target StartLimitIntervalSec=0 [Service] Type=simple Restart=always RestartSec=1 User=root ExecStart=/usr/bin/node_exporter [Install] WantedBy=multi-user.target Enable and start the systemd service. USD sudo systemctl enable node_exporter.service USD sudo systemctl start node_exporter.service Verification Verify that the node-exporter agent is reporting metrics from the virtual machine. USD curl http://localhost:9100/metrics Example output go_gc_duration_seconds{quantile="0"} 1.5244e-05 go_gc_duration_seconds{quantile="0.25"} 3.0449e-05 go_gc_duration_seconds{quantile="0.5"} 3.7913e-05 13.11.3. Creating a custom monitoring label for virtual machines To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine's YAML file. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Access to the web console for stop and restart a virtual machine. Procedure Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics . spec: template: metadata: labels: monitor: metrics Stop and restart the virtual machine to create a new pod with the label name given to the monitor label. 13.11.3.1. Querying the node-exporter service for metrics Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing. Prerequisites You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role. You have enabled monitoring for the user-defined project by configuring the node-exporter service. 
Procedure Obtain the HTTP service endpoint by specifying the namespace for the service: USD oc get service -n <namespace> <node-exporter-service> To list all available metrics for the node-exporter service, query the metrics resource. USD curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^USD" Example output node_arp_entries{device="eth0"} 1 node_boot_time_seconds 1.643153218e+09 node_context_switches_total 4.4938158e+07 node_cooling_device_cur_state{name="0",type="Processor"} 0 node_cooling_device_max_state{name="0",type="Processor"} 0 node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0 node_cpu_guest_seconds_total{cpu="0",mode="user"} 0 node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06 node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61 node_cpu_seconds_total{cpu="0",mode="irq"} 233.91 node_cpu_seconds_total{cpu="0",mode="nice"} 551.47 node_cpu_seconds_total{cpu="0",mode="softirq"} 87.3 node_cpu_seconds_total{cpu="0",mode="steal"} 86.12 node_cpu_seconds_total{cpu="0",mode="system"} 464.15 node_cpu_seconds_total{cpu="0",mode="user"} 1075.2 node_disk_discard_time_seconds_total{device="vda"} 0 node_disk_discard_time_seconds_total{device="vdb"} 0 node_disk_discarded_sectors_total{device="vda"} 0 node_disk_discarded_sectors_total{device="vdb"} 0 node_disk_discards_completed_total{device="vda"} 0 node_disk_discards_completed_total{device="vdb"} 0 node_disk_discards_merged_total{device="vda"} 0 node_disk_discards_merged_total{device="vdb"} 0 node_disk_info{device="vda",major="252",minor="0"} 1 node_disk_info{device="vdb",major="252",minor="16"} 1 node_disk_io_now{device="vda"} 0 node_disk_io_now{device="vdb"} 0 node_disk_io_time_seconds_total{device="vda"} 174 node_disk_io_time_seconds_total{device="vdb"} 0.054 node_disk_io_time_weighted_seconds_total{device="vda"} 259.79200000000003 node_disk_io_time_weighted_seconds_total{device="vdb"} 0.039 node_disk_read_bytes_total{device="vda"} 3.71867136e+08 node_disk_read_bytes_total{device="vdb"} 366592 node_disk_read_time_seconds_total{device="vda"} 19.128 node_disk_read_time_seconds_total{device="vdb"} 0.039 node_disk_reads_completed_total{device="vda"} 5619 node_disk_reads_completed_total{device="vdb"} 96 node_disk_reads_merged_total{device="vda"} 5 node_disk_reads_merged_total{device="vdb"} 0 node_disk_write_time_seconds_total{device="vda"} 240.66400000000002 node_disk_write_time_seconds_total{device="vdb"} 0 node_disk_writes_completed_total{device="vda"} 71584 node_disk_writes_completed_total{device="vdb"} 0 node_disk_writes_merged_total{device="vda"} 19761 node_disk_writes_merged_total{device="vdb"} 0 node_disk_written_bytes_total{device="vda"} 2.007924224e+09 node_disk_written_bytes_total{device="vdb"} 0 13.11.4. Creating a ServiceMonitor resource for the node exporter service You can use a Prometheus client library and scrape metrics from the /metrics endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor custom resource definition (CRD) to monitor the node exporter service. Prerequisites You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role. You have enabled monitoring for the user-defined project by configuring the node-exporter service. Procedure Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the label metrics and queries the exmet port every 30 seconds. 
apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: node-exporter-metrics-monitor name: node-exporter-metrics-monitor 1 namespace: dynamation 2 spec: endpoints: - interval: 30s 3 port: exmet 4 scheme: http selector: matchLabels: servicetype: metrics 1 The name of the ServiceMonitor . 2 The namespace where the ServiceMonitor is created. 3 The interval at which the port will be queried. 4 The name of the port that is queried every 30 seconds Create the ServiceMonitor configuration for the node-exporter service. USD oc create -f node-exporter-metrics-monitor.yaml 13.11.4.1. Accessing the node exporter service outside the cluster You can access the node-exporter service outside the cluster and view the exposed metrics. Prerequisites You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role. You have enabled monitoring for the user-defined project by configuring the node-exporter service. Procedure Expose the node-exporter service. USD oc expose service -n <namespace> <node_exporter_service_name> Obtain the FQDN (Fully Qualified Domain Name) for the route. USD oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host Example output NAME DNS node-exporter-service node-exporter-service-dynamation.apps.cluster.example.org Use the curl command to display metrics for the node-exporter service. USD curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics Example output go_gc_duration_seconds{quantile="0"} 1.5382e-05 go_gc_duration_seconds{quantile="0.25"} 3.1163e-05 go_gc_duration_seconds{quantile="0.5"} 3.8546e-05 go_gc_duration_seconds{quantile="0.75"} 4.9139e-05 go_gc_duration_seconds{quantile="1"} 0.000189423 13.11.5. Additional resources Configuring the monitoring stack Enabling monitoring for user-defined projects Managing metrics Reviewing monitoring dashboards Monitoring application health by using health checks Creating and using config maps Controlling virtual machine states 13.12. OpenShift Virtualization critical alerts Important OpenShift Virtualization critical alerts is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Virtualization has alerts that inform you when a problem occurs. Critical alerts require immediate attention. Each alert has a corresponding description of the problem, a reason for why the alert is occurring, a troubleshooting process to diagnose the source of the problem, and steps for resolving the alert. 13.12.1. Network alerts Network alerts provide information about problems for the OpenShift Virtualization Network Operator. 13.12.1.1. KubeMacPoolDown alert Description The KubeMacPool component allocates MAC addresses and prevents MAC address conflicts. Reason If the KubeMacPool-manager pod is down, then the creation of VirtualMachine objects fails. Troubleshoot Determine the Kubemacpool-manager pod namespace and name. 
USD export KMP_NAMESPACE="USD(oc get pod -A --no-headers -l control-plane=mac-controller-manager | awk '{print USD1}')" USD export KMP_NAME="USD(oc get pod -A --no-headers -l control-plane=mac-controller-manager | awk '{print USD2}')" Check the Kubemacpool-manager pod description and logs to determine the source of the problem. USD oc describe pod -n USDKMP_NAMESPACE USDKMP_NAME USD oc logs -n USDKMP_NAMESPACE USDKMP_NAME Resolution Open a support issue and provide the information gathered in the troubleshooting process. 13.12.2. SSP alerts SSP alerts provide information about problems for the OpenShift Virtualization SSP Operator. 13.12.2.1. SSPFailingToReconcile alert Description The SSP Operator's pod is up, but the pod's reconcile cycle consistently fails. This failure includes failure to update the resources for which it is responsible, failure to deploy the template validator, or failure to deploy or update the common templates. Reason If the SSP Operator fails to reconcile, then the deployment of dependent components fails, reconciliation of component changes fails, or both. Additionally, the updates to the common templates and template validator reset and fail. Troubleshoot Check the ssp-operator pod's logs for errors: USD export NAMESPACE="USD(oc get deployment -A | grep ssp-operator | awk '{print USD1}')" USD oc -n USDNAMESPACE describe pods -l control-plane=ssp-operator USD oc -n USDNAMESPACE logs --tail=-1 -l control-plane=ssp-operator Verify that the template validator is up. If the template validator is not up, then check the pod's logs for errors. USD export NAMESPACE="USD(USD oc get deployment -A | grep ssp-operator | awk '{print USD1}')" USD oc -n USDNAMESPACE get pods -l name=virt-template-validator USD oc -n USDNAMESPACE describe pods -l name=virt-template-validator USD oc -n USDNAMESPACE logs --tail=-1 -l name=virt-template-validator Resolution Open a support issue and provide the information gathered in the troubleshooting process. 13.12.2.2. SSPOperatorDown alert Description The SSP Operator deploys and reconciles the common templates and the template validator. Reason If the SSP Operator is down, then the deployment of dependent components fails, reconciliation of component changes fails, or both. Additionally, the updates to the common template and template validator reset and fail. Troubleshoot Check ssp-operator's pod namespace: USD export NAMESPACE="USD(oc get deployment -A | grep ssp-operator | awk '{print USD1}')" Verify that the ssp-operator's pod is currently down. USD oc -n USDNAMESPACE get pods -l control-plane=ssp-operator Check the ssp-operator's pod description and logs. USD oc -n USDNAMESPACE describe pods -l control-plane=ssp-operator USD oc -n USDNAMESPACE logs --tail=-1 -l control-plane=ssp-operator Resolution Open a support issue and provide the information gathered in the troubleshooting process. 13.12.2.3. SSPTemplateValidatorDown alert Description The template validator validates that virtual machines (VMs) do not violate their assigned templates. Reason If every template validator pod is down, then the template validator fails to validate VMs against their assigned templates. Troubleshoot Check the namespaces of the ssp-operator pods and the virt-template-validator pods. USD export NAMESPACE_SSP="USD(oc get deployment -A | grep ssp-operator | awk '{print USD1}')" USD export NAMESPACE="USD(oc get deployment -A | grep virt-template-validator | awk '{print USD1}')" Verify that the virt-template-validator's pod is currently down. 
USD oc -n USDNAMESPACE get pods -l name=virt-template-validator Check the pod description and logs of the ssp-operator and the virt-template-validator. USD oc -n USDNAMESPACE_SSP describe pods -l name=ssp-operator USD oc -n USDNAMESPACE_SSP logs --tail=-1 -l name=ssp-operator USD oc -n USDNAMESPACE describe pods -l name=virt-template-validator USD oc -n USDNAMESPACE logs --tail=-1 -l name=virt-template-validator Resolution Open a support issue and provide the information gathered in the troubleshooting process. 13.12.3. Virt alerts Virt alerts provide information about problems for the OpenShift Virtualization Virt Operator. 13.12.3.1. NoLeadingVirtOperator alert Description In the past 10 minutes, no virt-operator pod holds the leader lease, despite one or more virt-operator pods being in Ready state. The alert suggests no operating virt-operator pod exists. Reason The virt-operator is the first Kubernetes Operator active in a OpenShift Container Platform cluster. Its primary responsibilities are: Installation Live-update Live-upgrade of a cluster Monitoring the lifecycle of top-level controllers such as virt-controller, virt-handler, and virt-launcher Managing the reconciliation of top-level controllers In addition, the virt-operator is responsible for cluster-wide tasks such as certificate rotation and some infrastructure management. The virt-operator deployment has a default replica of two pods with one leader pod holding a leader lease, indicating an operating virt-operator pod. This alert indicates a failure at the cluster level. Critical cluster-wide management functionalities such as certification rotation, upgrade, and reconciliation of controllers may be temporarily unavailable. Troubleshoot Determine a virt-operator pod's leader status from the pod logs. The log messages containing Started leading and acquire leader indicate the leader status of a given virt-operator pod. Additionally, always check if there are any running virt-operator pods and the pods' statuses with these commands: USD export NAMESPACE="USD(oc get kubevirt -A -o custom-columns="":.metadata.namespace)" USD oc -n USDNAMESPACE get pods -l kubevirt.io=virt-operator USD oc -n USDNAMESPACE logs <pod-name> USD oc -n USDNAMESPACE describe pod <pod-name> Leader pod example: USD oc -n USDNAMESPACE logs <pod-name> |grep lead Example output {"component":"virt-operator","level":"info","msg":"Attempting to acquire leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:18.635387Z"} I1130 12:15:18.635452 1 leaderelection.go:243] attempting to acquire leader lease <namespace>/virt-operator... I1130 12:15:19.216582 1 leaderelection.go:253] successfully acquired lease <namespace>/virt-operator {"component":"virt-operator","level":"info","msg":"Started leading","pos":"application.go:385","timestamp":"2021-11-30T12:15:19.216836Z"} Non-leader pod example: USD oc -n USDNAMESPACE logs <pod-name> |grep lead Example output {"component":"virt-operator","level":"info","msg":"Attempting to acquire leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:20.533696Z"} I1130 12:15:20.533792 1 leaderelection.go:243] attempting to acquire leader lease <namespace>/virt-operator... Resolution There are several reasons for no virt-operator pod holding the leader lease, despite one or more virt-operator pods being in Ready state. Identify the root cause and take appropriate action. Otherwise, open a support issue and provide the information gathered in the troubleshooting process. 13.12.3.2. 
NoReadyVirtController alert Description The virt-controller monitors virtual machine instances (VMIs). The virt-controller also manages the associated pods by creating and managing the lifecycle of the pods associated with the VMI objects. A VMI object always associates with a pod during its lifetime. However, the pod instance can change over time because of VMI migration. This alert occurs when no ready virt-controller has been detected for five minutes. Reason If the virt-controller fails, then VM lifecycle management completely fails. Lifecycle management tasks include launching a new VMI or shutting down an existing VMI. Troubleshoot Check the deployment status of the virt-controller for available replicas and conditions. USD oc -n USDNAMESPACE get deployment virt-controller -o yaml Check if the virt-controller pods exist and check their statuses. USD oc get pods -n USDNAMESPACE | grep virt-controller Check the virt-controller pods' events. USD oc -n USDNAMESPACE describe pods <virt-controller pod> Check the virt-controller pods' logs. USD oc -n USDNAMESPACE logs <virt-controller pod> Check if there are issues with the nodes, such as whether the nodes are in a NotReady state. USD oc get nodes Resolution There are several reasons for no virt-controller pods being in a Ready state. Identify the root cause and take appropriate action. Otherwise, open a support issue and provide the information gathered in the troubleshooting process. 13.12.3.3. NoReadyVirtOperator alert Description No virt-operator pod in the Ready state has been detected in the past 10 minutes. The virt-operator deployment has a default replica of two pods. Reason The virt-operator is the first Kubernetes Operator active in an OpenShift Container Platform cluster. Its primary responsibilities are: Installation Live-update Live-upgrade of a cluster Monitoring the lifecycle of top-level controllers such as virt-controller, virt-handler, and virt-launcher Managing the reconciliation of top-level controllers In addition, the virt-operator is responsible for cluster-wide tasks such as certificate rotation and some infrastructure management. Note Virt-operator is not directly responsible for virtual machines in the cluster. Virt-operator's unavailability does not affect the custom workloads. This alert indicates a failure at the cluster level. Critical cluster-wide management functionalities such as certificate rotation, upgrade, and reconciliation of controllers are temporarily unavailable. Troubleshoot Check the deployment status of the virt-operator for available replicas and conditions. USD oc -n USDNAMESPACE get deployment virt-operator -o yaml Check the virt-operator pods' events. USD oc -n USDNAMESPACE describe pods <virt-operator pod> Check the virt-operator pods' logs. USD oc -n USDNAMESPACE logs <virt-operator pod> Check if there are issues with the control plane nodes, such as whether they are in a NotReady state. USD oc get nodes Resolution There are several reasons for no virt-operator pods being in a Ready state. Identify the root cause and take appropriate action. Otherwise, open a support issue and provide the information gathered in the troubleshooting process. 13.12.3.4. VirtAPIDown alert Description All OpenShift Container Platform API servers are down. Reason If all OpenShift Container Platform API servers are down, then no API calls for OpenShift Container Platform entities occur. Troubleshoot Modify the environment variable NAMESPACE .
USD export NAMESPACE="USD(oc get kubevirt -A -o custom-columns="":.metadata.namespace)" Verify if there are any running virt-api pods. USD oc -n USDNAMESPACE get pods -l kubevirt.io=virt-api View the pods' logs using oc logs and the pods' statuses using oc describe . Check the status of the virt-api deployment. Use these commands to learn about related events and show if there are any issues with pulling an image, a crashing pod, or other similar problems. USD oc -n USDNAMESPACE get deployment virt-api -o yaml USD oc -n USDNAMESPACE describe deployment virt-api Check if there are issues with the nodes, such as whether the nodes are in a NotReady state. USD oc get nodes Resolution Virt-api pods can be down for several reasons. Identify the root cause and take appropriate action. Otherwise, open a support issue and provide the information gathered in the troubleshooting process. 13.12.3.5. VirtApiRESTErrorsBurst alert Description More than 80% of the REST calls failed in virt-api in the last five minutes. Reason A very high rate of failed REST calls to virt-api causes slow response, slow execution of API calls, or even complete dismissal of API calls. Troubleshoot Modify the environment variable NAMESPACE . USD export NAMESPACE="USD(oc get kubevirt -A -o custom-columns="":.metadata.namespace)" Check to see how many running virt-api pods exist. USD oc -n USDNAMESPACE get pods -l kubevirt.io=virt-api View the pods' logs using oc logs and the pods' statuses using oc describe . Check the status of the virt-api deployment to find out more information. These commands provide the associated events and show if there are any issues with pulling an image or a crashing pod. USD oc -n USDNAMESPACE get deployment virt-api -o yaml USD oc -n USDNAMESPACE describe deployment virt-api Check if there are issues with the nodes, such as whether the nodes are overloaded or in a NotReady state. USD oc get nodes Resolution There are several reasons for a high rate of failed REST calls. Identify the root cause and take appropriate action. Node resource exhaustion Not enough memory on the cluster Nodes are down The API server is overloaded (for example, when the scheduler is not 100% available) Networking issues Otherwise, open a support issue and provide the information gathered in the troubleshooting process. 13.12.3.6. VirtControllerDown alert Description No running virt-controller has been detected in the past five minutes. The virt-controller deployment has a default replica of two pods. Reason If the virt-controller fails, then VM lifecycle management tasks, such as launching a new VMI or shutting down an existing VMI, completely fail. Troubleshoot Modify the environment variable NAMESPACE . USD export NAMESPACE="USD(oc get kubevirt -A -o custom-columns="":.metadata.namespace)" Check the status of the virt-controller deployment. USD oc get deployment -n USDNAMESPACE virt-controller -o yaml Check the virt-controller pods' events. USD oc -n USDNAMESPACE describe pods <virt-controller pod> Check the virt-controller pods' logs. USD oc -n USDNAMESPACE logs <virt-controller pod> Check the manager pod's logs to determine why creating the virt-controller pods fails. USD oc logs <virt-controller-pod> An example of a virt-controller pod name in the logs is virt-controller-7888c64d66-dzc9p . However, there may be several pods that run virt-controller. Resolution There are several known reasons why no running virt-controller might be detected. Identify the root cause from the list of possible reasons and take appropriate action.
Node resource exhaustion Not enough memory on the cluster Nodes are down The API server is overloaded (for example, when the scheduler is not 100% available) Networking issues Otherwise, open a support issue and provide the information gathered in the troubleshooting process. 13.12.3.7. VirtControllerRESTErrorsBurst alert Description More than 80% of the REST calls failed in virt-controller in the last five minutes. Reason Virt-controller has potentially lost connectivity to the API server. This loss does not affect running workloads, but propagation of status updates and actions like migrations cannot occur. Troubleshoot There are two common error types associated with virt-controller REST call failure: The API server is overloaded, causing timeouts. Check the API server metrics and details like response times and overall calls. The virt-controller pod cannot reach the API server. Common causes are: DNS issues on the node Networking connectivity issues Resolution Check the virt-controller logs to determine if the virt-controller pod cannot connect to the API server at all. If so, delete the pod to force a restart. Additionally, verify if node resource exhaustion or not having enough memory on the cluster is causing the connection failure. The issue normally relates to DNS or CNI issues outside of the scope of this alert. Otherwise, open a support issue and provide the information gathered in the troubleshooting process. 13.12.3.8. VirtHandlerRESTErrorsBurst alert Description More than 80% of the REST calls failed in virt-handler in the last five minutes. Reason Virt-handler lost the connection to the API server. Running workloads on the affected node still run, but status updates cannot propagate and actions such as migrations cannot occur. Troubleshoot There are two common error types associated with virt-handler REST call failure: The API server is overloaded, causing timeouts. Check the API server metrics and details like response times and overall calls. The virt-handler pod cannot reach the API server. Common causes are: DNS issues on the node Networking connectivity issues Resolution If the virt-handler cannot connect to the API server, delete the pod to force a restart. The issue normally relates to DNS or CNI issues outside of the scope of this alert. Identify the root cause and take appropriate action. Otherwise, open a support issue and provide the information gathered in the troubleshooting process. 13.12.3.9. VirtOperatorDown alert Description This alert occurs when no virt-operator pod has been in the Running state in the past 10 minutes. The virt-operator deployment has a default replica of two pods. Reason The virt-operator is the first Kubernetes Operator active in an OpenShift Container Platform cluster. Its primary responsibilities are: Installation Live-update Live-upgrade of a cluster Monitoring the lifecycle of top-level controllers such as virt-controller, virt-handler, and virt-launcher Managing the reconciliation of top-level controllers In addition, the virt-operator is responsible for cluster-wide tasks such as certificate rotation and some infrastructure management. Note The virt-operator is not directly responsible for virtual machines in the cluster. The virt-operator's unavailability does not affect the custom workloads. This alert indicates a failure at the cluster level. Critical cluster-wide management functionalities such as certificate rotation, upgrade, and reconciliation of controllers are temporarily unavailable.
Troubleshoot Modify the environment variable NAMESPACE. $ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)" Check the status of the virt-operator deployment. $ oc get deployment -n $NAMESPACE virt-operator -o yaml Check the virt-operator pods' events. $ oc -n $NAMESPACE describe pods <virt-operator pod> Check the virt-operator pods' logs. $ oc -n $NAMESPACE logs <virt-operator pod> Check the manager pod's logs to determine why creating the virt-operator pods fails. $ oc logs <virt-operator-pod> An example of a virt-operator pod name in the logs is virt-operator-7888c64d66-dzc9p. However, there may be several pods that run virt-operator. Resolution There are several known reasons why no running virt-operator is detected. Identify the root cause from the following list of possible causes and take appropriate action: Node resource exhaustion Not enough memory on the cluster Nodes are down The API server is overloaded, such as when the scheduler is not 100% available Networking issues If you cannot identify the root cause, open a support issue and provide the information gathered in the troubleshooting process. 13.12.3.10. VirtOperatorRESTErrorsBurst alert Description More than 80% of the REST calls failed in virt-operator in the last five minutes. Reason Virt-operator lost the connection to the API server. Cluster-level actions such as upgrading and controller reconciliation do not function. There is no effect on customer workloads such as VMs and VMIs. Troubleshoot There are two common error types associated with virt-operator REST call failure: The API server is overloaded, causing timeouts. Check the API server metrics and details, such as response times and overall calls. The virt-operator pod cannot reach the API server. Common causes are network connectivity problems and DNS issues on the node. Check the virt-operator logs to verify that the pod can connect to the API server at all. $ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)" $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator $ oc -n $NAMESPACE logs <pod-name> $ oc -n $NAMESPACE describe pod <pod-name> Resolution If the virt-operator cannot connect to the API server, delete the pod to force a restart. The issue normally relates to DNS or CNI issues outside of the scope of this alert. Identify the root cause and take appropriate action. If you cannot identify the root cause, open a support issue and provide the information gathered in the troubleshooting process. 13.12.4. Additional resources Getting support 13.13. Collecting data for Red Hat Support When you submit a support case to Red Hat Support, it is helpful to provide debugging information for OpenShift Container Platform and OpenShift Virtualization by using the following tools: must-gather tool The must-gather tool collects diagnostic information, including resource definitions and service logs. Prometheus Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing. Alertmanager The Alertmanager service handles alerts received from Prometheus. The Alertmanager is also responsible for sending the alerts to external notification systems. 13.13.1. Collecting data about your environment Collecting data about your environment minimizes the time required to analyze and determine the root cause. Prerequisites Set the retention time for Prometheus metrics data to a minimum of seven days.
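How you set the retention time depends on your monitoring configuration. The following is a minimal sketch that assumes the default OpenShift Container Platform monitoring stack, where retention for the platform Prometheus instance is configured through the cluster-monitoring-config config map in the openshift-monitoring namespace; the 7d value shown here is illustrative and not taken from this guide:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      # Keep at least seven days of metrics so that support data can be collected.
      retention: 7d
You can save this config map to a file and apply it with oc apply -f <file_name>.yaml.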
Configure the Alertmanager to capture relevant alerts and to send them to a dedicated mailbox so that they can be viewed and persisted outside the cluster. Record the exact number of affected nodes and virtual machines. Procedure Collect must-gather data for the cluster by using the default must-gather image. Collect must-gather data for Red Hat OpenShift Data Foundation, if necessary. Collect must-gather data for OpenShift Virtualization by using the OpenShift Virtualization must-gather image. Collect Prometheus metrics for the cluster. 13.13.1.1. Additional resources Configuring the retention time for Prometheus metrics data Configuring the Alertmanager to send alert notifications to external systems Collecting must-gather data for OpenShift Container Platform Collecting must-gather data for Red Hat OpenShift Data Foundation Collecting must-gather data for OpenShift Virtualization Collecting Prometheus metrics for all projects as a cluster administrator 13.13.2. Collecting data about virtual machines Collecting data about malfunctioning virtual machines (VMs) minimizes the time required to analyze and determine the root cause. Prerequisites Windows VMs: Record the Windows patch update details for Red Hat Support. Install the latest version of the VirtIO drivers. The VirtIO drivers include the QEMU guest agent. If Remote Desktop Protocol (RDP) is enabled, try to connect to the VMs with RDP to determine whether there is a problem with the connection software. Procedure Collect detailed must-gather data about the malfunctioning VMs. Collect screenshots of crashed VMs before you restart them. Record factors that the malfunctioning VMs have in common. For example, the VMs have the same host or network. 13.13.2.1. Additional resources Installing VirtIO drivers on Windows VMs Downloading and installing VirtIO drivers on Windows VMs without host access Connecting to Windows VMs with RDP using the web console or the command line Collecting must-gather data about virtual machines 13.13.3. Using the must-gather tool for OpenShift Virtualization You can collect data about OpenShift Virtualization resources by running the must-gather command with the OpenShift Virtualization image. The default data collection includes information about the following resources: OpenShift Virtualization Operator namespaces, including child objects OpenShift Virtualization custom resource definitions Namespaces that contain virtual machines Basic virtual machine definitions Procedure Run the following command to collect data about OpenShift Virtualization: $ oc adm must-gather --image-stream=openshift/must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 13.13.3.1. must-gather tool options You can specify a combination of scripts and environment variables for the following options: Collecting detailed virtual machine (VM) information from a namespace Collecting detailed information about specified VMs Collecting image and image stream information Limiting the maximum number of parallel processes used by the must-gather tool 13.13.3.1.1. Parameters Environment variables You can specify environment variables for a compatible script. NS=<namespace_name> Collect virtual machine information, including virt-launcher pod details, from the namespace that you specify. The VirtualMachine and VirtualMachineInstance CR data is collected for all namespaces. VM=<vm_name> Collect details about a particular virtual machine.
To use this option, you must also specify a namespace by using the NS environment variable. PROS=<number_of_processes> Modify the maximum number of parallel processes that the must-gather tool uses. The default value is 5. Important Using too many parallel processes can cause performance issues. Increasing the maximum number of parallel processes is not recommended. Scripts Each script is only compatible with certain environment variable combinations. gather_vms_details Collect VM log files, VM definitions, and namespaces (and their child objects) that belong to OpenShift Virtualization resources. If you use this parameter without specifying a namespace or VM, the must-gather tool collects this data for all VMs in the cluster. This script is compatible with all environment variables, but you must specify a namespace if you use the VM variable. gather Use the default must-gather script, which collects cluster data from all namespaces and includes only basic VM information. This script is only compatible with the PROS variable. gather_images Collect image and image stream custom resource information. This script is only compatible with the PROS variable. 13.13.3.1.2. Usage and examples Environment variables are optional. You can run a script by itself or with one or more compatible environment variables. Table 13.1. Compatible parameters: the gather_vms_details script is compatible with NS=<namespace_name> (for a namespace), with VM=<vm_name> together with NS=<namespace_name> (for a VM), and with PROS=<number_of_processes>; the gather script is compatible only with PROS=<number_of_processes>; the gather_images script is compatible only with PROS=<number_of_processes>. To customize the data that must-gather collects, you append a double dash (--) to the command, followed by a space and one or more compatible parameters. Syntax $ oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 \ -- <environment_variable_1> <environment_variable_2> <script_name> Detailed VM information The following command collects detailed VM information for the my-vm VM in the mynamespace namespace: $ oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 \ -- NS=mynamespace VM=my-vm gather_vms_details The NS environment variable is mandatory if you use the VM environment variable. Default data collection limited to three parallel processes The following command collects default must-gather information by using a maximum of three parallel processes: $ oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 \ -- PROS=3 gather Image and image stream information The following command collects image and image stream information from the cluster: $ oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 \ -- gather_images 13.13.3.2. Additional resources About the must-gather tool
[ "oc logs <virt-launcher-name>", "oc get events", "oc describe vm <vm>", "oc describe vmi <vmi>", "oc describe pod virt-launcher-<name>", "oc describe dv <DataVolume>", "Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready", "spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8", "oc create -f <file_name>.yaml", "spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5", "oc create -f <file_name>.yaml", "spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-fedora name: vm-fedora spec: template: metadata: labels: special: vm-fedora spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M readinessProbe: httpGet: port: 1500 initialDelaySeconds: 120 periodSeconds: 20 timeoutSeconds: 10 failureThreshold: 3 successThreshold: 3 terminationGracePeriodSeconds: 180 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-registry-disk-demo - cloudInitNoCloud: userData: |- #cloud-config password: fedora chpasswd: { expire: False } bootcmd: - setenforce 0 - dnf install -y nmap-ncat - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\\\n\\\\nHello World!' 
name: cloudinitdisk", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0 1", "kind: Service apiVersion: v1 metadata: name: node-exporter-service 1 namespace: dynamation 2 labels: servicetype: metrics 3 spec: ports: - name: exmet 4 protocol: TCP port: 9100 5 targetPort: 9100 6 type: ClusterIP selector: monitor: metrics 7", "oc create -f node-exporter-service.yaml", "wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz", "sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz --directory /usr/bin --strip 1 \"*/node_exporter\"", "[Unit] Description=Prometheus Metrics Exporter After=network.target StartLimitIntervalSec=0 [Service] Type=simple Restart=always RestartSec=1 User=root ExecStart=/usr/bin/node_exporter [Install] WantedBy=multi-user.target", "sudo systemctl enable node_exporter.service sudo systemctl start node_exporter.service", "curl http://localhost:9100/metrics", "go_gc_duration_seconds{quantile=\"0\"} 1.5244e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.0449e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.7913e-05", "spec: template: metadata: labels: monitor: metrics", "oc get service -n <namespace> <node-exporter-service>", "curl http://<172.30.226.162:9100>/metrics | grep -vE \"^#|^USD\"", "node_arp_entries{device=\"eth0\"} 1 node_boot_time_seconds 1.643153218e+09 node_context_switches_total 4.4938158e+07 node_cooling_device_cur_state{name=\"0\",type=\"Processor\"} 0 node_cooling_device_max_state{name=\"0\",type=\"Processor\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"nice\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"user\"} 0 node_cpu_seconds_total{cpu=\"0\",mode=\"idle\"} 1.10586485e+06 node_cpu_seconds_total{cpu=\"0\",mode=\"iowait\"} 37.61 node_cpu_seconds_total{cpu=\"0\",mode=\"irq\"} 233.91 node_cpu_seconds_total{cpu=\"0\",mode=\"nice\"} 551.47 node_cpu_seconds_total{cpu=\"0\",mode=\"softirq\"} 87.3 node_cpu_seconds_total{cpu=\"0\",mode=\"steal\"} 86.12 node_cpu_seconds_total{cpu=\"0\",mode=\"system\"} 464.15 node_cpu_seconds_total{cpu=\"0\",mode=\"user\"} 1075.2 node_disk_discard_time_seconds_total{device=\"vda\"} 0 node_disk_discard_time_seconds_total{device=\"vdb\"} 0 node_disk_discarded_sectors_total{device=\"vda\"} 0 node_disk_discarded_sectors_total{device=\"vdb\"} 0 node_disk_discards_completed_total{device=\"vda\"} 0 node_disk_discards_completed_total{device=\"vdb\"} 0 node_disk_discards_merged_total{device=\"vda\"} 0 node_disk_discards_merged_total{device=\"vdb\"} 0 node_disk_info{device=\"vda\",major=\"252\",minor=\"0\"} 1 node_disk_info{device=\"vdb\",major=\"252\",minor=\"16\"} 1 node_disk_io_now{device=\"vda\"} 0 node_disk_io_now{device=\"vdb\"} 0 node_disk_io_time_seconds_total{device=\"vda\"} 174 node_disk_io_time_seconds_total{device=\"vdb\"} 
0.054 node_disk_io_time_weighted_seconds_total{device=\"vda\"} 259.79200000000003 node_disk_io_time_weighted_seconds_total{device=\"vdb\"} 0.039 node_disk_read_bytes_total{device=\"vda\"} 3.71867136e+08 node_disk_read_bytes_total{device=\"vdb\"} 366592 node_disk_read_time_seconds_total{device=\"vda\"} 19.128 node_disk_read_time_seconds_total{device=\"vdb\"} 0.039 node_disk_reads_completed_total{device=\"vda\"} 5619 node_disk_reads_completed_total{device=\"vdb\"} 96 node_disk_reads_merged_total{device=\"vda\"} 5 node_disk_reads_merged_total{device=\"vdb\"} 0 node_disk_write_time_seconds_total{device=\"vda\"} 240.66400000000002 node_disk_write_time_seconds_total{device=\"vdb\"} 0 node_disk_writes_completed_total{device=\"vda\"} 71584 node_disk_writes_completed_total{device=\"vdb\"} 0 node_disk_writes_merged_total{device=\"vda\"} 19761 node_disk_writes_merged_total{device=\"vdb\"} 0 node_disk_written_bytes_total{device=\"vda\"} 2.007924224e+09 node_disk_written_bytes_total{device=\"vdb\"} 0", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: node-exporter-metrics-monitor name: node-exporter-metrics-monitor 1 namespace: dynamation 2 spec: endpoints: - interval: 30s 3 port: exmet 4 scheme: http selector: matchLabels: servicetype: metrics", "oc create -f node-exporter-metrics-monitor.yaml", "oc expose service -n <namespace> <node_exporter_service_name>", "oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host", "NAME DNS node-exporter-service node-exporter-service-dynamation.apps.cluster.example.org", "curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics", "go_gc_duration_seconds{quantile=\"0\"} 1.5382e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.1163e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.8546e-05 go_gc_duration_seconds{quantile=\"0.75\"} 4.9139e-05 go_gc_duration_seconds{quantile=\"1\"} 0.000189423", "export KMP_NAMESPACE=\"USD(oc get pod -A --no-headers -l control-plane=mac-controller-manager | awk '{print USD1}')\"", "export KMP_NAME=\"USD(oc get pod -A --no-headers -l control-plane=mac-controller-manager | awk '{print USD2}')\"", "oc describe pod -n USDKMP_NAMESPACE USDKMP_NAME", "oc logs -n USDKMP_NAMESPACE USDKMP_NAME", "export NAMESPACE=\"USD(oc get deployment -A | grep ssp-operator | awk '{print USD1}')\"", "oc -n USDNAMESPACE describe pods -l control-plane=ssp-operator", "oc -n USDNAMESPACE logs --tail=-1 -l control-plane=ssp-operator", "export NAMESPACE=\"USD(USD oc get deployment -A | grep ssp-operator | awk '{print USD1}')\"", "oc -n USDNAMESPACE get pods -l name=virt-template-validator", "oc -n USDNAMESPACE describe pods -l name=virt-template-validator", "oc -n USDNAMESPACE logs --tail=-1 -l name=virt-template-validator", "export NAMESPACE=\"USD(oc get deployment -A | grep ssp-operator | awk '{print USD1}')\"", "oc -n USDNAMESPACE get pods -l control-plane=ssp-operator", "oc -n USDNAMESPACE describe pods -l control-plane=ssp-operator", "oc -n USDNAMESPACE logs --tail=-1 -l control-plane=ssp-operator", "export NAMESPACE_SSP=\"USD(oc get deployment -A | grep ssp-operator | awk '{print USD1}')\"", "export NAMESPACE=\"USD(oc get deployment -A | grep virt-template-validator | awk '{print USD1}')\"", "oc -n USDNAMESPACE get pods -l name=virt-template-validator", "oc -n USDNAMESPACE_SSP describe pods -l name=ssp-operator", "oc -n USDNAMESPACE_SSP logs --tail=-1 -l name=ssp-operator", "oc -n USDNAMESPACE describe pods -l name=virt-template-validator", "oc -n USDNAMESPACE logs --tail=-1 -l 
name=virt-template-validator", "export NAMESPACE=\"USD(oc get kubevirt -A -o custom-columns=\"\":.metadata.namespace)\"", "oc -n USDNAMESPACE get pods -l kubevirt.io=virt-operator", "oc -n USDNAMESPACE logs <pod-name>", "oc -n USDNAMESPACE describe pod <pod-name>", "oc -n USDNAMESPACE logs <pod-name> |grep lead", "{\"component\":\"virt-operator\",\"level\":\"info\",\"msg\":\"Attempting to acquire leader status\",\"pos\":\"application.go:400\",\"timestamp\":\"2021-11-30T12:15:18.635387Z\"} I1130 12:15:18.635452 1 leaderelection.go:243] attempting to acquire leader lease <namespace>/virt-operator I1130 12:15:19.216582 1 leaderelection.go:253] successfully acquired lease <namespace>/virt-operator", "{\"component\":\"virt-operator\",\"level\":\"info\",\"msg\":\"Started leading\",\"pos\":\"application.go:385\",\"timestamp\":\"2021-11-30T12:15:19.216836Z\"}", "oc -n USDNAMESPACE logs <pod-name> |grep lead", "{\"component\":\"virt-operator\",\"level\":\"info\",\"msg\":\"Attempting to acquire leader status\",\"pos\":\"application.go:400\",\"timestamp\":\"2021-11-30T12:15:20.533696Z\"} I1130 12:15:20.533792 1 leaderelection.go:243] attempting to acquire leader lease <namespace>/virt-operator", "oc -n USDNAMESPACE get deployment virt-controller -o yaml", "get pods -n USDNAMESPACE |grep virt-controller", "oc -n USDNAMESPACE describe pods <virt-controller pod>", "oc -n USDNAMESPACE logs <virt-controller pod>", "oc get nodes", "oc -n USDNAMESPACE get deployment virt-operator -o yaml", "oc -n USDNAMESPACE describe pods <virt-operator pod>", "oc -n USDNAMESPACE logs <virt-operator pod>", "oc get nodes", "export NAMESPACE=\"USD(oc get kubevirt -A -o custom-columns=\"\":.metadata.namespace)\"", "oc -n USDNAMESPACE get pods -l kubevirt.io=virt-api", "oc -n USDNAMESPACE get deployment virt-api -o yaml", "oc -n USDNAMESPACE describe deployment virt-api", "oc get nodes", "export NAMESPACE=\"USD(oc get kubevirt -A -o custom-columns=\"\":.metadata.namespace)\"", "oc -n USDNAMESPACE get pods -l kubevirt.io=virt-api", "oc -n USDNAMESPACE get deployment virt-api -o yaml", "oc -n USDNAMESPACE describe deployment virt-api", "oc get nodes", "export NAMESPACE=\"USD(oc get kubevirt -A -o custom-columns=\"\":.metadata.namespace)\"", "oc get deployment -n USDNAMESPACE virt-controller -o yaml", "oc -n USDNAMESPACE describe pods <virt-controller pod>", "oc -n USDNAMESPACE logs <virt-controller pod>", "oc get logs <virt-controller-pod>", "export NAMESPACE=\"USD(oc get kubevirt -A -o custom-columns=\"\":.metadata.namespace)\"", "oc get deployment -n USDNAMESPACE virt-operator -o yaml", "oc -n USDNAMESPACE describe pods <virt-operator pod>", "oc -n USDNAMESPACE logs <virt-operator pod>", "oc get logs <virt-operator-pod>", "export NAMESPACE=\"USD(oc get kubevirt -A -o custom-columns=\"\":.metadata.namespace)\"", "oc -n USDNAMESPACE get pods -l kubevirt.io=virt-operator", "oc -n USDNAMESPACE logs <pod-name>", "oc -n USDNAMESPACE describe pod <pod-name>", "oc adm must-gather --image-stream=openshift/must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 -- <environment_variable_1> <environment_variable_2> <script_name>", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 -- NS=mynamespace VM=my-vm gather_vms_details 1", "oc adm must-gather 
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 -- PROS=3 gather", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 -- gather_images" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/virtualization/logging-events-and-monitoring