25.5.2. Creating a New Directory for rsyslog Log Files
25.5.2. Creating a New Directory for rsyslog Log Files Rsyslog runs as the syslogd daemon and is managed by SELinux. Therefore, all files to which rsyslog is required to write must have the appropriate SELinux file context. Procedure 25.4. Creating a New Working Directory If you need to use a different directory to store working files, create the directory as follows: Install the utilities to manage SELinux policy: Set the SELinux directory context type to be the same as the /var/lib/rsyslog/ directory: Apply the SELinux context: If required, check the SELinux context as follows: Create subdirectories as required. For example: The subdirectories are created with the same SELinux context as the parent directory. Add the following line in /etc/rsyslog.conf immediately before the point where it is required to take effect: This setting remains in effect until the next WorkDirectory directive is encountered while parsing the configuration files.
[ "~]# mkdir /rsyslog", "~]# yum install policycoreutils-python", "~]# semanage fcontext -a -t syslogd_var_lib_t /rsyslog", "~]# restorecon -R -v /rsyslog restorecon reset /rsyslog context unconfined_u:object_r:default_t:s0->unconfined_u:object_r:syslogd_var_lib_t:s0", "~]# ls -Zd /rsyslog drwxr-xr-x. root root system_u:object_r:syslogd_var_lib_t:s0 /rsyslog", "~]# mkdir /rsyslog/work", "USDWorkDirectory /rsyslog/work" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-creating_a_new_directory_for_rsyslog_log_files
11.3. Understanding the Predictable Network Interface Device Names
11.3. Understanding the Predictable Network Interface Device Names The names have two-character prefixes based on the type of interface: en for Ethernet, wl for wireless LAN (WLAN), ww for wireless wide area network (WWAN). The names have the following types: o<index> on-board device index number s<slot>[f<function>][d<dev_id>] hotplug slot index number. All multi-function PCI devices will carry the [f<function>] number in the device name, including the function 0 device. x<MAC> MAC address [P<domain>]p<bus>s<slot>[f<function>][d<dev_id>] PCI geographical location. In PCI geographical location, the [P<domain>] number is only mentioned if the value is not 0. For example: ID_NET_NAME_PATH=P1enp5s0 [P<domain>]p<bus>s<slot>[f<function>][u<port>][..][c<config>][i<interface>] USB port number chain. For USB devices, the full chain of port numbers of hubs is composed. If the name gets longer than the maximum number of 15 characters, the name is not exported. If there are multiple USB devices in the chain, the default values for USB configuration descriptors (c1) and USB interface descriptors (i0) are suppressed.
null
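As an illustration of how these name types appear on a running system, the udev net_id builtin can be queried for an interface; the interface name enp5s0 below is only an example and must be replaced with a device present on your system.

# List the interface names currently assigned on the system.
ip link show

# Ask the udev net_id builtin which predictable names it would assign to an
# interface; the output contains properties such as ID_NET_NAME_ONBOARD,
# ID_NET_NAME_PATH, and ID_NET_NAME_MAC (debug messages are discarded).
udevadm test-builtin net_id /sys/class/net/enp5s0 2>/dev/null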
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-understanding_the_predictable_network_interface_device_names
Chapter 4. Creating a bootable installation medium for RHEL
Chapter 4. Creating a bootable installation medium for RHEL You can download the ISO file from the Customer Portal to prepare the bootable physical installation medium, such as a USB or DVD. Starting with RHEL 8, Red Hat no longer provides separate variants for Server and Workstation . Red Hat Enterprise Linux for x86_64 includes both Server and Workstation capabilities. The distinction between Server and Workstation is managed through the System Purpose Role during the installation or configuration process. After downloading an ISO file from the Customer Portal, create a bootable physical installation medium, such as a USB or DVD to continue the installation process. For secure environment cases where USB drives are prohibited, consider using the Image Builder to create and deploy reference images. This method ensures compliance with security policies while maintaining system integrity. For more details, refer to the Image builder documentation . Note By default, the inst.stage2= boot option is used on the installation medium and is set to a specific label, for example, inst.stage2=hd:LABEL=RHEL8\x86_64 . If you modify the default label of the file system containing the runtime image, or if you use a customized procedure to boot the installation system, verify that the label is set to the correct value. 4.1. Installation boot media options There are several options available to boot the Red Hat Enterprise Linux installation program. Full installation DVD or USB flash drive Create a full installation DVD or USB flash drive using the DVD ISO image. The DVD or USB flash drive can be used as a boot device and as an installation source for installing software packages. Minimal installation DVD, CD, or USB flash drive Create a minimal installation CD, DVD, or USB flash drive using the Boot ISO image, which contains only the minimum files necessary to boot the system and start the installation program. If you are not using the Content Delivery Network (CDN) to download the required software packages, the Boot ISO image requires an installation source that contains the required software packages. PXE Server A preboot execution environment (PXE) server allows the installation program to boot over the network. After a system boot, you must complete the installation from a different installation source, such as a local disk or a network location. Image builder With image builder, you can create customized system and cloud images to install Red Hat Enterprise Linux in virtual and cloud environments. 4.2. Creating a bootable DVD You can create a bootable installation DVD by using a burning software and a DVD burner. The exact steps to produce a DVD from an ISO image file vary greatly, depending on the operating system and disc burning software installed. Consult your system's burning software documentation for the exact steps to burn a DVD from an ISO image file. Warning You can create a bootable DVD using either the DVD ISO image (full install) or the Boot ISO image (minimal install). However, the DVD ISO image is larger than 4.7 GB, and as a result, it might not fit on a single or dual-layer DVD. Check the size of the DVD ISO image file before you proceed. Use a USB flash drive when using the DVD ISO image to create bootable installation media. For the environment cases where USB drives are prohibited, see Image builder documentation . 4.3. Creating a bootable USB device on Linux You can create a bootable USB device which you can then use to install Red Hat Enterprise Linux on other machines. 
This procedure overwrites the existing data on the USB drive without any warning. Back up any data or use an empty flash drive. A bootable USB drive cannot be used for storing data. Prerequisites You have downloaded the full installation DVD ISO or minimal installation Boot ISO image from the Product Downloads page. You have a USB flash drive with enough capacity for the ISO image. The required size varies, but the recommended USB size is 8 GB. Procedure Connect the USB flash drive to the system. Open a terminal window and display a log of recent events. Messages resulting from the attached USB flash drive are displayed at the bottom of the log. Record the name of the connected device. Log in as a root user: Enter your root password when prompted. Find the device node assigned to the drive. In this example, the drive name is sdd . If the inserted USB device mounts automatically, unmount it before continuing with the steps. For unmounting, use the umount command. For more information, see Unmounting a file system with umount . Write the ISO image directly to the USB device: Replace /image_directory/image.iso with the full path to the ISO image file that you downloaded, Replace device with the device name that you retrieved with the dmesg command. In this example, the full path to the ISO image is /home/testuser/Downloads/rhel-8-x86_64-boot.iso , and the device name is sdd : Partition names are usually device names with a numerical suffix. For example, sdd is a device name, and sdd1 is the name of a partition on the device sdd . Wait for the dd command to finish writing the image to the device. Run the sync command to synchronize cached writes to the device. The data transfer is complete when the # prompt appears. When you see the prompt, log out of the root account and unplug the USB drive. The USB drive is now ready to use as a boot device. 4.4. Creating a bootable USB device on Windows You can create a bootable USB device on a Windows system with various tools. You can use Fedora Media Writer, available for download at https://github.com/FedoraQt/MediaWriter/releases . Fedora Media Writer is a community product and is not supported by Red Hat. You can report any issues with the tool at https://github.com/FedoraQt/MediaWriter/issues . Creating a bootable drive overwrites existing data on the USB drive without any warning. Back up any data or use an empty flash drive. A bootable USB drive cannot be used for storing data. Prerequisites You have downloaded the full installation DVD ISO or minimal installation Boot ISO image from the Product Downloads page. You have a USB flash drive with enough capacity for the ISO image. The required size varies. Procedure Download and install Fedora Media Writer from https://github.com/FedoraQt/MediaWriter/releases . Connect the USB flash drive to the system. Open Fedora Media Writer. From the main window, click Custom Image and select the previously downloaded Red Hat Enterprise Linux ISO image. From the Write Custom Image window, select the drive that you want to use. Click Write to disk . The boot media creation process starts. Do not unplug the drive until the operation completes. The operation may take several minutes, depending on the size of the ISO image, and the write speed of the USB drive. When the operation completes, unmount the USB drive. The USB drive is now ready to be used as a boot device. 4.5. 
Creating a bootable USB device on macOS You can create a bootable USB device which you can then use to install Red Hat Enterprise Linux on other machines. Creating a bootable USB drive overwrites any data previously stored on the USB drive without any warning. Back up any data or use an empty flash drive. A bootable USB drive cannot be used for storing data. Prerequisites You have downloaded the full installation DVD ISO or minimal installation Boot ISO image from the Product Downloads page. You have a USB flash drive with enough capacity for the ISO image. The required size varies. Procedure Connect the USB flash drive to the system. Identify the device path with the diskutil list command. The device path has the format of /dev/disknumber , where number is the number of the disk. The disks are numbered starting at zero (0). Typically, disk0 is the OS X recovery disk, and disk1 is the main OS X installation. In the following example, the USB device is disk2 : Identify your USB flash drive by comparing the NAME, TYPE and SIZE columns to your flash drive. For example, the NAME should be the title of the flash drive icon in the Finder tool. You can also compare these values to those in the information panel of the flash drive. Unmount the flash drive's file system volumes: When the command completes, the icon for the flash drive disappears from your desktop. If the icon does not disappear, you may have selected the wrong disk. Attempting to unmount the system disk accidentally returns a failed to unmount error. Write the ISO image to the flash drive. macOS provides both a block ( /dev/disk* ) and character device ( /dev/rdisk* ) file for each storage device. Writing an image to the /dev/rdisknumber character device is faster than writing to the /dev/disknumber block device. For example, to write the /Users/user_name/Downloads/rhel-8-x86_64-boot.iso file to the /dev/rdisk2 device, enter the following command: if= - Path to the installation image. of= - The raw disk device (/dev/rdisknumber) representing the target disk. bs=512K - Sets the block size to 512 KB for faster data transfer. status=progress - Displays a progress indicator during the operation. Wait for the dd command to finish writing the image to the device. The data transfer is complete when the # prompt appears. When the prompt is displayed, log out of the root account and unplug the USB drive. The USB drive is now ready to be used as a boot device. Additional resources Configuring System Purpose ISO for RHEL 8/9 Server or Workstation (Red Hat Knowledgebase)
[ "dmesg|tail", "su -", "dmesg|tail [288954.686557] usb 2-1.8: New USB device strings: Mfr=0, Product=1, SerialNumber=2 [288954.686559] usb 2-1.8: Product: USB Storage [288954.686562] usb 2-1.8: SerialNumber: 000000009225 [288954.712590] usb-storage 2-1.8:1.0: USB Mass Storage device detected [288954.712687] scsi host6: usb-storage 2-1.8:1.0 [288954.712809] usbcore: registered new interface driver usb-storage [288954.716682] usbcore: registered new interface driver uas [288955.717140] scsi 6:0:0:0: Direct-Access Generic STORAGE DEVICE 9228 PQ: 0 ANSI: 0 [288955.717745] sd 6:0:0:0: Attached scsi generic sg4 type 0 [288961.876382] sd 6:0:0:0: sdd Attached SCSI removable disk", "dd if=/image_directory/image.iso of=/dev/device", "dd if=/home/testuser/Downloads/rhel-8-x86_64-boot.iso of=/dev/sdd", "diskutil list /dev/disk0 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *500.3 GB disk0 1: EFI EFI 209.7 MB disk0s1 2: Apple_CoreStorage 400.0 GB disk0s2 3: Apple_Boot Recovery HD 650.0 MB disk0s3 4: Apple_CoreStorage 98.8 GB disk0s4 5: Apple_Boot Recovery HD 650.0 MB disk0s5 /dev/disk1 #: TYPE NAME SIZE IDENTIFIER 0: Apple_HFS YosemiteHD *399.6 GB disk1 Logical Volume on disk0s1 8A142795-8036-48DF-9FC5-84506DFBB7B2 Unlocked Encrypted /dev/disk2 #: TYPE NAME SIZE IDENTIFIER 0: FDisk_partition_scheme *8.1 GB disk2 1: Windows_NTFS SanDisk USB 8.1 GB disk2s1", "diskutil unmountDisk /dev/disknumber Unmount of all volumes on disknumber was successful", "sudo dd if= /Users/user_name/Downloads/rhel-8-x86_64-boot.iso of= /dev/rdisk2 bs= 512K status= progress" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/assembly_creating-a-bootable-installation-medium_rhel-installer
Chapter 65. Virtualization
Chapter 65. Virtualization Booting OVMF guests fails Attempting to boot a guest virtual machine that uses the Open Virtual Machine Firmware (OVMF) on a Red Hat Enterprise Linux host using the qemu-kvm package currently fails, with the guest becoming unresponsive and displaying a blank screen. (BZ#1174132) Bridge creation with virsh iface-bridge fails When installing Red Hat Enterprise Linux 7 from other sources than the network, network device names are not specified by default in the interface configuration files (this is done with a DEVICE= line). As a consequence, creating a network bridge by using the virsh iface-bridge command fails with an error message. To work around the problem, add DEVICE= lines into the /etc/sysconfig/network-scripts/ifcfg-* files. For more information, see the Red Hat Knowledgebase: https://access.redhat.com/solutions/2792701 (BZ#1100588) Guests sometimes fail to boot on ESXi 5.5 When running Red Hat Enterprise Linux 7 guests with 12 GB RAM or above on a VMware ESXi 5.5 hypervisor, certain components currently initialize with incorrect memory type range register (MTRR) values or incorrectly reconfigure MTRR values across boots. This sometimes causes the guest kernel to panic or the guest to become unresponsive during boot. To work around this problem, add the disable_mtrr_trim option to the guest's kernel command line, which enables the guest to continue booting when MTRRs are configured incorrectly. Note that with this option, the guest prints WARNING: BIOS bug messages during boot, which you can safely ignore. (BZ#1429792) The STIG for Red Hat Virtualization Hypervisor profile is not displayed in Anaconda The oscap-anaconda-addon module is currently not able to properly parse the STIG for Red Hat Virtualization Hypervisor security hardening profile. As a consequence, the profile's name is shown as DISA STIG for Red Hat Enterprise Linux 7 or United States Government Configuration Baseline (USGCB / STIG) - DRAFT in the Anaconda interface selection. However, this is only a display problem, and you can safely use the DISA STIG for Red Hat Enterprise Linux 7 profile instead of the STIG for Red Hat Virtualization Hypervisor profile. (BZ# 1437106 )
null
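As a hedged illustration of the two workarounds described above, the following commands could be run inside the affected guest; the interface name eth0 and the use of grubby to edit the kernel command line are assumptions for the example and are not taken from the original text.

# Workaround for the virsh iface-bridge failure: add a DEVICE= line to the
# interface configuration file (eth0 is an example interface name).
echo "DEVICE=eth0" >> /etc/sysconfig/network-scripts/ifcfg-eth0

# Workaround for the ESXi 5.5 MTRR issue: add disable_mtrr_trim to the guest
# kernel command line for all installed kernels.
grubby --update-kernel=ALL --args="disable_mtrr_trim"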
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/known_issues_virtualization
Red Hat Data Grid
Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_rest_api/red-hat-data-grid
4.7.3. Related Books
4.7.3. Related Books The following books discuss various issues related to resource monitoring, and are good resources for Red Hat Enterprise Linux system administrators: The System Administrators Guide; Red Hat, Inc -- Includes a chapter on many of the resource monitoring tools described here. Linux Performance Tuning and Capacity Planning by Jason R. Fink and Matthew D. Sherer; Sams -- Provides more in-depth overviews of the resource monitoring tools presented here and includes others that might be appropriate for more specific resource monitoring needs. Red Hat Linux Security and Optimization by Mohammed J. Kabir; Red Hat Press -- Approximately the first 150 pages of this book discuss performance-related issues. This includes chapters dedicated to performance issues specific to network, Web, email, and file servers. Linux Administration Handbook by Evi Nemeth, Garth Snyder, and Trent R. Hein; Prentice Hall -- Provides a short chapter similar in scope to this book, but includes an interesting section on diagnosing a system that has suddenly slowed down. Linux System Administration: A User's Guide by Marcel Gagne; Addison Wesley Professional -- Contains a small chapter on performance monitoring and tuning. Essential System Administration (3rd Edition) by Aeleen Frisch; O'Reilly & Associates -- The chapter on managing system resources contains good overall information, with some Linux specifics included. System Performance Tuning (2nd Edition) by Gian-Paolo D. Musumeci and Mike Loukides; O'Reilly & Associates -- Although heavily oriented toward more traditional UNIX implementations, there are many Linux-specific references throughout the book.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-memory-addres-books
Chapter 7. Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure
Chapter 7. Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure You can remove a cluster that you deployed in your VMware vSphere instance by using installer-provisioned infrastructure. Note When you run the openshift-install destroy cluster command to uninstall OpenShift Container Platform, vSphere volumes are not automatically deleted. The cluster administrator must manually find the vSphere volumes and delete them. 7.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: $ ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory>, specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn, debug, or error instead of info. Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_vsphere/uninstalling-cluster-vsphere-installer-provisioned
9.2. Setting a Minimum Strength Factor
9.2. Setting a Minimum Strength Factor For additional security, the Directory Server can be configured to require a certain level of encryption before it allows a connection. The Directory Server can define and require a specific Security Strength Factor (SSF) for any connection. The SSF sets a minimum encryption level, defined by its key strength, for any connection or operation. To require a minimum SSF for any and all directory operations, set the nsslapd-minssf configuration attribute. When enforcing a minimum SSF, Directory Server looks at each available encryption type for an operation - TLS or SASL - and determines which has the higher SSF value and then compares the higher value to the minimum SSF. It is possible for both SASL authentication and TLS to be configured for some server-to-server connections, such as replication. Note Alternatively, use the nsslapd-minssf-exclude-rootdse configuration attribute. This sets a minimum SSF setting for all connections to the Directory Server except for queries against the root DSE. A client may need to obtain information about the server configuration, like its default naming context, before initiating an operation. The nsslapd-minssf-exclude-rootdse attribute allows the client to get that information without having to establish a secure connection first. The SSF for a connection is evaluated when the first operation is initiated on a connection. This allows STARTTLS and SASL binds to succeed, even though those two connections initially open a regular connection. After the TLS or SASL session is opened, then the SSF is evaluated. Any connection which does not meet the SSF requirements is closed with an LDAP unwilling to perform error. Set a minimum SSF to disable insecure connections to a directory. Warning If you connect to the directory using the unencrypted LDAP protocol without SASL, the first LDAP message can contain the bind request. In this case, the credentials are sent unencrypted over the network before the server cancels the connection, because the SSF did not meet the minimum value set. Use the LDAPS protocol or SASL binds to ensure that the credentials are never sent unencrypted. The default nsslapd-minssf attribute value is 0, which means there is no minimum SSF for server connections. The value can be set to any reasonable positive integer. The value represents the required key strength for any secure connection. The following example sets the nsslapd-minssf parameter to 128: Note An ACI can be set to require an SSF for a specific type of operation, as in Section 18.11.2.4, "Requiring a Certain Level of Security in Connections". Secure connections can be required for bind operations by turning on the nsslapd-require-secure-binds attribute, as in Section 20.12.1, "Requiring Secure Binds".
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-minssf=128 Successfully replaced \"nsslapd-minssf\"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/setting_a_minimum_strength_factor
Chapter 9. Applying security policies
Chapter 9. Applying security policies During the in-place upgrade process, certain security policies must remain disabled. Furthermore, RHEL 8 introduces the new concept of system-wide cryptographic policies, and security profiles might also change between major releases. To make your system more secure, switch SELinux to enforcing mode and set a system-wide cryptographic policy. You may also want to remediate the system to be compliant with a specific security profile. 9.1. Changing SELinux mode to enforcing During the in-place upgrade process, the Leapp utility sets SELinux mode to permissive. When the system is successfully upgraded, you must manually change SELinux mode back to enforcing. Prerequisites The system has been upgraded and you have performed the verification described in Verifying the post-upgrade state of the RHEL 8 system. Procedure Ensure that there are no SELinux denials, for example, by using the ausearch utility: Note that this step covers only the most common scenario. To check for all possible SELinux denials, see the Identifying SELinux denials section in the Using SELinux title, which provides a complete procedure. Open the /etc/selinux/config file in a text editor of your choice, for example: Configure the SELINUX=enforcing option: Save the change, and restart the system: Verification After the system restarts, confirm that the getenforce command returns Enforcing: Additional resources Troubleshooting problems related to SELinux Changing SELinux states and modes 9.2. Setting system-wide cryptographic policies System-wide cryptographic policies are a system component that configures the core cryptographic subsystems, covering the TLS, IPsec, SSH, DNSSEC, and Kerberos protocols. After a successful installation or an in-place upgrade process, the system-wide cryptographic policy is automatically set to DEFAULT. The DEFAULT system-wide cryptographic policy level offers secure settings for current threat models. To view or change the current system-wide cryptographic policy, use the update-crypto-policies tool: For example, the following command switches the system-wide crypto policy level to FUTURE, which should withstand any near-term future attacks: You can also customize system-wide cryptographic policies. For details, see the Customizing system-wide cryptographic policies with subpolicies and Creating and setting a custom system-wide cryptographic policy sections. Additional resources Using system-wide cryptographic policies update-crypto-policies(8) man page on your system 9.3. Upgrading the system hardened to a security baseline To get a fully hardened system after a successful upgrade to RHEL 8, you can use automated remediation provided by the OpenSCAP suite. OpenSCAP remediations align your system with security baselines, such as PCI-DSS, OSPP, or ACSC Essential Eight. The configuration compliance recommendations differ among major versions of Red Hat Enterprise Linux due to the evolution of the security offering. When upgrading a hardened RHEL 7 system, the Leapp tool does not provide direct means to retain the full hardening. Depending on the changes in the component configuration, the system might diverge from the recommendations for RHEL 8 during the upgrade. Note You cannot use the same SCAP content for scanning RHEL 7 and RHEL 8. Update the management platforms if the compliance of the system is managed by tools such as Red Hat Satellite or Red Hat Insights. 
As an alternative to automated remediations, you can make the changes manually by following an OpenSCAP-generated report. For information about generating a compliance report, see Scanning the system for security compliance and vulnerabilities. Follow the procedure to automatically harden your system with the PCI-DSS profile. Important Automated remediations support RHEL systems in the default configuration. Because the system has been altered after the installation during the upgrade, running remediation might not make it fully compliant with the required security profile. You might need to fix some requirements manually. Prerequisites The scap-security-guide package is installed on your RHEL 8 system. Procedure Find the appropriate security compliance data stream .xml file: For additional information, see the Viewing compliance profiles section. Remediate the system according to the selected profile from the appropriate data stream: You can replace the pci-dss value in the --profile argument with the ID of the profile according to which you want to harden your system. For a full list of profiles supported in RHEL 8, see SCAP security profiles supported in RHEL. Warning If not used carefully, running the system evaluation with the Remediate option enabled might render the system non-functional. Red Hat does not provide any automated method to revert changes made by security-hardening remediations. Remediations are supported on RHEL systems in the default configuration. If your system has been altered after the installation, running remediation might not make it compliant with the required security profile. Restart your system: Verification Verify that the system is compliant with the profile, and save the results in an HTML file: Additional resources scap-security-guide(8) and oscap(8) man pages on your system Scanning the system for security compliance and vulnerabilities Red Hat Insights Security Policy documentation Red Hat Satellite Security Policy documentation
[ "ausearch -m AVC,USER_AVC -ts boot", "vi /etc/selinux/config", "This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX= enforcing SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted", "reboot", "getenforce Enforcing", "update-crypto-policies --show DEFAULT", "update-crypto-policies --set FUTURE Setting system policy to FUTURE", "ls /usr/share/xml/scap/ssg/content/ ssg-firefox-cpe-dictionary.xml ssg-rhel6-ocil.xml ssg-firefox-cpe-oval.xml ssg-rhel6-oval.xml ssg-rhel6-ds-1.2.xml ssg-rhel8-oval.xml ssg-rhel8-ds.xml ssg-rhel8-xccdf.xml", "oscap xccdf eval --profile pci-dss --remediate /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml", "reboot", "oscap xccdf eval --report pcidss_report.html --profile pci-dss /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/upgrading_from_rhel_7_to_rhel_8/applying-security-policies_upgrading-from-rhel-7-to-rhel-8
4.226. php
4.226. php 4.226.1. RHSA-2012:0019 - Moderate: php53 and php security update Updated php53 and php packages that fix two security issues are now available for Red Hat Enterprise Linux 5 and 6 respectively. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. PHP is an HTML-embedded scripting language commonly used with the Apache HTTP Server. Security Fixes CVE-2011-4885 It was found that the hashing routine used by PHP arrays was susceptible to predictable hash collisions. If an HTTP POST request to a PHP application contained many parameters whose names map to the same hash value, a large amount of CPU time would be consumed. This flaw has been mitigated by adding a new configuration directive, max_input_vars, that limits the maximum number of parameters processed per request. By default, max_input_vars is set to 1000. CVE-2011-4566 An integer overflow flaw was found in the PHP exif extension. On 32-bit systems, a specially-crafted image file could cause the PHP interpreter to crash or disclose portions of its memory when a PHP script tries to extract Exchangeable image file format (Exif) metadata from the image file. Red Hat would like to thank oCERT for reporting CVE-2011-4885. oCERT acknowledges Julian Walde and Alexander Klink as the original reporters of CVE-2011-4885. All php53 and php users should upgrade to these updated packages, which contain backported patches to resolve these issues. After installing the updated packages, the httpd daemon must be restarted for the update to take effect. 4.226.2. RHSA-2012:0093 - Critical: php security update Updated php packages that fix one security issue are now available for Red Hat Enterprise Linux 4, 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. PHP is an HTML-embedded scripting language commonly used with the Apache HTTP Server. Security Fix CVE-2012-0830 It was discovered that the fix for CVE-2011-4885 (released via RHSA-2012:0071, RHSA-2012:0033, and RHSA-2012:0019 for php packages in Red Hat Enterprise Linux 4, 5, and 6 respectively) introduced an uninitialized memory use flaw. A remote attacker could send a specially-crafted HTTP request to cause the PHP interpreter to crash or, possibly, execute arbitrary code. All php users should upgrade to these updated packages, which contain a backported patch to resolve this issue. After installing the updated packages, the httpd daemon must be restarted for the update to take effect. 4.226.3. RHSA-2012:0546 - Critical: php security update Updated php packages that fix one security issue are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. PHP is an HTML-embedded scripting language commonly used with the Apache HTTP Server. 
Security Fix CVE-2012-1823 A flaw was found in the way the php-cgi executable processed command line arguments when running in CGI mode. A remote attacker could send a specially-crafted request to a PHP script that would result in the query string being parsed by php-cgi as command line options and arguments. This could lead to the disclosure of the script's source code or arbitrary code execution with the privileges of the PHP interpreter. Red Hat is aware that a public exploit for this issue is available that allows remote code execution in affected PHP CGI configurations. This flaw does not affect the default configuration in Red Hat Enterprise Linux 5 and 6 using the PHP module for Apache httpd to handle PHP scripts. All php users should upgrade to these updated packages, which contain a backported patch to resolve this issue. After installing the updated packages, the httpd daemon must be restarted for the update to take effect. 4.226.4. RHSA-2013:1061 - Critical: php security update Updated php packages that fix one security issue are now available for Red Hat Enterprise Linux 5.3 Long Life, and Red Hat Enterprise Linux 5.6, 6.2 and 6.3 Extended Update Support. The Red Hat Security Response Team has rated this update as having critical security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. PHP is an HTML-embedded scripting language commonly used with the Apache HTTP Server. Security Fix CVE-2013-4113 A buffer overflow flaw was found in the way PHP parsed deeply nested XML documents. If a PHP application used the xml_parse_into_struct() function to parse untrusted XML content, an attacker able to supply specially-crafted XML could use this flaw to crash the application or, possibly, execute arbitrary code with the privileges of the user running the PHP interpreter. All php users should upgrade to these updated packages, which contain a backported patch to resolve this issue. After installing the updated packages, the httpd daemon must be restarted for the update to take effect.
null
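As an illustration of the max_input_vars mitigation for CVE-2011-4885 described above, the directive can be set in php.ini and httpd restarted; the value 1000 is the default mentioned in the advisory, while the /etc/php.ini path and the service command are typical RHEL 6 conventions rather than values from the original text.

# In /etc/php.ini, cap the number of request parameters PHP parses per request:
#   max_input_vars = 1000

# Restart the web server so the updated php packages and setting take effect.
service httpd restart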
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/php
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) Azure clusters. Note Only internal OpenShift Data Foundation clusters are supported on Microsoft Azure. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the appropriate deployment process based on your requirement: Deploy OpenShift Data Foundation on Microsoft Azure Deploy standalone Multicloud Object Gateway component
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_microsoft_azure/preface-azure
Red Hat Certified Cloud and Service Provider Certification for Red Hat Enterprise Linux for SAP Images Workflow Guide
Red Hat Certified Cloud and Service Provider Certification for Red Hat Enterprise Linux for SAP Images Workflow Guide Red Hat Certified Cloud and Service Provider Certification 2025 For Red Hat Enterprise Linux for SAP with HA and Update Services Cloud Images Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_certified_cloud_and_service_provider_certification_for_red_hat_enterprise_linux_for_sap_images_workflow_guide/index
Chapter 87. Openshift Deployment Configs
Chapter 87. Openshift Deployment Configs Since Camel 3.18 Both producer and consumer are supported The Openshift Deployment Configs component is one of the Kubernetes Components which provides a producer to execute Openshift Deployment Configs operations and a consumer to consume events related to Deployment Configs objects. 87.1. Dependencies When using openshift-deploymentconfigs with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 87.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 87.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 87.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 87.3. Component Options The Openshift Deployment Configs component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 87.4. Endpoint Options The Openshift Deployment Configs endpoint is configured using URI syntax: with the following path and query parameters: 87.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 87.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 87.5. Message Headers The Openshift Deployment Configs component supports 8 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesDeploymentsLabels (producer) Constant: KUBERNETES_DEPLOYMENTS_LABELS The deployment labels. Map CamelKubernetesDeploymentName (producer) Constant: KUBERNETES_DEPLOYMENT_NAME The deployment name. String CamelKubernetesDeploymentReplicas (producer) Constant: KUBERNETES_DEPLOYMENT_REPLICAS The desired instance count. Integer CamelKubernetesDeploymentConfigSpec (producer) Constant: KUBERNETES_DEPLOYMENT_CONFIG_SPEC The spec for a deployment config. DeploymentConfigSpec CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 87.6. Supported producer operation listDeploymentConfigs listDeploymentsConfigsByLabels getDeploymentConfig createDeploymentConfig updateDeploymentConfig deleteDeploymentConfig scaleDeploymentConfig 87.7. Openshift Deployment Configs Producer Examples listDeploymentConfigs: this operation list the deployments on a Openshift cluster. from("direct:list"). toF("openshift-deploymentconfigs:///?kubernetesClient=#kubernetesClient&operation=listDeploymentConfigs"). to("mock:result"); This operation returns a List of Deployment Configs from your cluster. listDeploymentConfigsByLabels: this operation list the deployment configs by labels on a Openshift cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_DEPLOYMENTS_LABELS, labels); } }); toF("openshift-deploymentconfigs:///?kubernetesClient=#kubernetesClient&operation=listDeploymentConfigsByLabels"). to("mock:result"); This operation returns a List of Deployment Configs from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 87.8. 
Openshift Deployment Configs Consumer Example fromF("openshift-deploymentconfigs://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new OpenshiftProcessor()).to("mock:result"); public class OpenshiftProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); DeploymentConfig dp = exchange.getIn().getBody(DeploymentConfig.class); log.info("Got event with configmap name: " + dp.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } } This consumer returns a list of events on the namespace default for the deployment config test. 87.9. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. 
This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
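As a small usage illustration (not part of the original reference tables), the auto-configuration options listed above are ordinary Spring Boot properties, so they can be set in application.properties or overridden on the command line when the application starts. The property names below are taken from the tables above; the application.properties path and the JAR name are placeholders.

# Write a couple of the listed properties into the Spring Boot configuration file
cat >> src/main/resources/application.properties <<'EOF'
# Defer producer creation until the first message is routed
camel.component.kubernetes-pods.lazy-start-producer=true
# Turn off auto configuration of a component that is not needed
camel.component.kubernetes-hpa.enabled=false
EOF

# The same properties can be overridden at startup without rebuilding the application
java -jar target/my-camel-app.jar --camel.component.kubernetes-deployments.bridge-error-handler=true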
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "openshift-deploymentconfigs:masterUrl", "from(\"direct:list\"). toF(\"openshift-deploymentconfigs:///?kubernetesClient=#kubernetesClient&operation=listDeploymentConfigs\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_DEPLOYMENTS_LABELS, labels); } }); toF(\"openshift-deploymentconfigs:///?kubernetesClient=#kubernetesClient&operation=listDeploymentConfigsByLabels\"). to(\"mock:result\");", "fromF(\"openshift-deploymentconfigs://%s?oauthToken=%s&namespace=default&resourceName=test\", host, authToken).process(new OpenshiftProcessor()).to(\"mock:result\"); public class OpenshiftProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); DeploymentConfig dp = exchange.getIn().getBody(DeploymentConfig.class); log.info(\"Got event with configmap name: \" + dp.getMetadata().getName() + \" and action \" + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-openshift-deploymentconfigs-component-starter
Chapter 6. Using iPXE to reduce provisioning times
Chapter 6. Using iPXE to reduce provisioning times iPXE is an open-source network-boot firmware. It provides a full PXE implementation enhanced with additional features, such as booting from an HTTP server. For more information about iPXE, see iPXE website . You can use iPXE if the following restrictions prevent you from using PXE: A network with unmanaged DHCP servers. A PXE service that is unreachable because of, for example, a firewall restriction. A TFTP UDP-based protocol that is unreliable because of, for example, a low-bandwidth network. 6.1. Prerequisites for using iPXE You can use iPXE to boot virtual machines in the following cases: Your virtual machines run on a hypervisor that uses iPXE as primary firmware. Your virtual machines are in BIOS mode. In this case, you can configure PXELinux to chainboot iPXE and boot by using the HTTP protocol. For booting virtual machines in UEFI mode by using HTTP, you can follow Section 5.5, "Creating hosts with UEFI HTTP boot provisioning" instead. Supportability Red Hat does not officially support iPXE in Red Hat Satellite. For more information, see Supported architectures and kickstart scenarios in Satellite 6 in the Red Hat Knowledgebase . Host requirements The MAC address of the provisioning interface matches the host configuration. The provisioning interface of the host has a valid DHCP reservation. The NIC is capable of PXE booting. For more information, see supported hardware on ipxe.org for a list of hardware drivers expected to work with an iPXE-based boot disk. The NIC is compatible with iPXE. 6.2. Configuring iPXE environment Configure an iPXE environment on all Capsules that you want to use for iPXE provisioning. Important In Red Hat Enterprise Linux, security-related features of iPXE are not supported and the iPXE binary is built without security features. For this reason, you can only use HTTP but not HTTPS. For more information, see Red Hat Enterprise Linux HTTPS support in iPXE . Prerequisites If you want to use Capsule Servers instead of your Satellite Server, ensure that you have configured your Capsule Servers accordingly. For more information, see Configuring Capsule for Host Registration and Provisioning in Installing Capsule Server . Procedure Enable the TFTP and HTTPboot services on your Capsule: Install the ipxe-bootimgs package on your Capsule: Copy iPXE firmware to the TFTP directory. Copy the iPXE firmware with the Linux kernel header: Copy the UNDI iPXE firmware: Correct the SELinux file contexts: Set the HTTP URL. If you want to use Satellite Server for booting, run the following command on Satellite Server: If you want to use Capsule Server for booting, run the following command on Capsule Server: 6.3. Booting virtual machines Some virtualization hypervisors use iPXE as primary firmware for PXE booting. If you use such a hypervisor, you can boot virtual machines without TFTP and PXELinux. Booting a virtual machine has the following workflow: Virtual machine starts. iPXE retrieves the network credentials, including an HTTP URL, by using DHCP. iPXE loads the iPXE bootstrap template from Capsule. iPXE loads the iPXE template with MAC as a URL parameter from Capsule. iPXE loads the kernel and initial RAM disk of the installer. Prerequisites Your hypervisor must support iPXE. The following virtualization hypervisors support iPXE: libvirt Red Hat Virtualization (deprecated) You have configured your iPXE environment. For more information, see Section 6.2, "Configuring iPXE environment" . 
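For convenience, the iPXE environment setup from Section 6.2 referenced in the prerequisite above is collected here into a single script. The commands are the same ones shown in the command listing for this chapter; satellite.example.com stands in for your own Satellite or Capsule host name.

# Enable the TFTP and HTTPboot features on the Capsule
satellite-installer --foreman-proxy-httpboot true --foreman-proxy-tftp true

# Install the iPXE firmware images and copy them into the TFTP root
satellite-maintain packages install ipxe-bootimgs
cp /usr/share/ipxe/ipxe.lkrn /var/lib/tftpboot/
cp /usr/share/ipxe/undionly.kpxe /var/lib/tftpboot/undionly-ipxe.0
restorecon -RvF /var/lib/tftpboot/

# Point DHCP clients at the iPXE bootstrap URL served by Satellite Server
satellite-installer --foreman-proxy-dhcp-ipxefilename "http://satellite.example.com/unattended/iPXE?bootstrap=1"
# ...or, when booting from a Capsule Server instead, enable the bootstrap flag there
# satellite-installer --foreman-proxy-dhcp-ipxe-bootstrap true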
Note You can use the original templates shipped in Satellite as described below. If you require modification to an original template, clone the template, edit the clone, and associate the clone instead of the original template. For more information, see Section 2.14, "Cloning provisioning templates" . Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates . Search for the Kickstart default iPXE template. Click the name of the template. Click the Association tab and select the operating systems that your host uses. Click the Locations tab and add the location where the host resides. Click the Organizations tab and add the organization that the host belongs to. Click Submit to save the changes. In the Satellite web UI, navigate to Hosts > Operating systems and select the operating system of your host. Click the Templates tab. From the iPXE template list, select the Kickstart default iPXE template. Click Submit to save the changes. In the Satellite web UI, navigate to Hosts > All Hosts . In the Hosts page, select the host that you want to use. Select the Operating System tab. Set PXE Loader to iPXE Embedded . Select the Templates tab. In Provisioning Templates , click Resolve and verify that the iPXE template resolves to the required template. Click Submit to save host settings. 6.4. Chainbooting iPXE from PXELinux You can set up iPXE to use a built-in driver for network communication ( ipxe.lkrn ) or Universal Network Device Interface (UNDI) ( undionly-ipxe.0 ). You can choose to load either file depending on the networking hardware capabilities and iPXE driver availability. UNDI is a minimalistic UDP/IP stack that implements TFTP client. However, UNDI cannot support other protocols like HTTP. To use HTTP with iPXE, use the iPXE build with built-in drivers ( ipxe.lkrn ). Chainbooting iPXE has the following workflow: Host powers on. PXE driver retrieves the network credentials by using DHCP. PXE driver retrieves the PXELinux firmware pxelinux.0 by using TFTP. PXELinux searches for the configuration file on the TFTP server. PXELinux chainloads iPXE ipxe.lkrn or undionly-ipxe.0 . iPXE retrieves the network credentials, including an HTTP URL, by using DHCP again. iPXE chainloads the iPXE template from your Templates Capsule. iPXE loads the kernel and initial RAM disk of the installer. Prerequisites You have configured your iPXE environment. For more information, see Section 6.2, "Configuring iPXE environment" . Note You can use the original templates shipped in Satellite as described below. If you require modification to an original template, clone the template, edit the clone, and associate the clone instead of the original template. For more information, see Section 2.14, "Cloning provisioning templates" . Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates . Search for the required PXELinux template: PXELinux chain iPXE to use ipxe.lkrn PXELinux chain iPXE UNDI to use undionly-ipxe.0 Click the name of the template you want to use. Click the Association tab and select the operating systems that your host uses. Click the Locations tab and add the location where the host resides. Click the Organizations tab and add the organization that the host belongs to. Click Submit to save the changes. On the Provisioning Templates page, search for the Kickstart default iPXE template. Click the name of the template. Click the Association tab and associate the template with the operating system that your host uses. 
Click the Locations tab and add the location where the host resides. Click the Organizations tab and add the organization that the host belongs to. Click Submit to save the changes. In the Satellite web UI, navigate to Hosts > Operating systems and select the operating system of your host. Click the Templates tab. From the PXELinux template list, select the template you want to use. From the iPXE template list, select the Kickstart default iPXE template. Click Submit to save the changes. In the Satellite web UI, navigate to Configure > Host Groups , and select the host group you want to configure. Select the Operating System tab. Select the Architecture and Operating system . Set the PXE Loader : Select PXELinux BIOS to chainboot iPXE ( ipxe.lkrn ) from PXELinux. Select iPXE Chain BIOS to load undionly-ipxe.0 directly.
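After either setup (direct iPXE boot or chainbooting from PXELinux), a quick optional check is to request the bootstrap endpoint configured earlier from a machine on the provisioning network. This check is not part of the official procedure, and the host name is a placeholder; a working endpoint returns an iPXE script rather than an HTTP error.

# Should print the beginning of an iPXE bootstrap script
curl -s "http://satellite.example.com/unattended/iPXE?bootstrap=1" | head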
[ "satellite-installer --foreman-proxy-httpboot true --foreman-proxy-tftp true", "satellite-maintain packages install ipxe-bootimgs", "cp /usr/share/ipxe/ipxe.lkrn /var/lib/tftpboot/", "cp /usr/share/ipxe/undionly.kpxe /var/lib/tftpboot/undionly-ipxe.0", "restorecon -RvF /var/lib/tftpboot/", "satellite-installer --foreman-proxy-dhcp-ipxefilename \"http:// satellite.example.com /unattended/iPXE?bootstrap=1\"", "satellite-installer --foreman-proxy-dhcp-ipxe-bootstrap true" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/provisioning_hosts/using-ipxe-to-reduce-provisioning-times_provisioning
Chapter 11. Accessing monitoring APIs by using the CLI
Chapter 11. Accessing monitoring APIs by using the CLI In OpenShift Container Platform 4.13, you can access web service APIs for some monitoring components from the command line interface (CLI). Important In certain situations, accessing API endpoints can degrade the performance and scalability of your cluster, especially if you use endpoints to retrieve, send, or query large amounts of metrics data. To avoid these issues, follow these recommendations: Avoid querying endpoints frequently. Limit queries to a maximum of one every 30 seconds. Do not try to retrieve all metrics data via the /federate endpoint for Prometheus. Query it only when you want to retrieve a limited, aggregated data set. For example, retrieving fewer than 1,000 samples for each request helps minimize the risk of performance degradation. 11.1. About accessing monitoring web service APIs You can directly access web service API endpoints from the command line for the following monitoring stack components: Prometheus Alertmanager Thanos Ruler Thanos Querier Note To access Thanos Ruler and Thanos Querier service APIs, the requesting account must have get permission on the namespaces resource, which can be granted by binding the cluster-monitoring-view cluster role to the account. When you access web service API endpoints for monitoring components, be aware of the following limitations: You can only use bearer token authentication to access API endpoints. You can only access endpoints in the /api path for a route. If you try to access an API endpoint in a web browser, an Application is not available error occurs. To access monitoring features in a web browser, use the OpenShift Container Platform web console to review monitoring dashboards. Additional resources Reviewing monitoring dashboards 11.2. Accessing a monitoring web service API The following example shows how to query the service API receivers for the Alertmanager service used in core platform monitoring. You can use a similar method to access the prometheus-k8s service for core platform Prometheus and the thanos-ruler service for Thanos Ruler. Prerequisites You are logged in to an account that is bound against the monitoring-alertmanager-edit role in the openshift-monitoring namespace. You are logged in to an account that has permission to get the Alertmanager API route. Note If your account does not have permission to get the Alertmanager API route, a cluster administrator can provide the URL for the route. Procedure Extract an authentication token by running the following command: USD TOKEN=USD(oc whoami -t) Extract the alertmanager-main API route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route alertmanager-main -ojsonpath='{.status.ingress[].host}') Query the service API receivers for Alertmanager by running the following command: USD curl -H "Authorization: Bearer USDTOKEN" -k "https://USDHOST/api/v2/receivers" 11.3. Querying metrics by using the federation endpoint for Prometheus You can use the federation endpoint for Prometheus to scrape platform and user-defined metrics from a network location outside the cluster. To do so, access the Prometheus /federate endpoint for the cluster via an OpenShift Container Platform route. Important A delay in retrieving metrics data occurs when you use federation. This delay can affect the accuracy and timeliness of the scraped metrics. 
Using the federation endpoint can also degrade the performance and scalability of your cluster, especially if you use the federation endpoint to retrieve large amounts of metrics data. To avoid these issues, follow these recommendations: Do not try to retrieve all metrics data via the federation endpoint for Prometheus. Query it only when you want to retrieve a limited, aggregated data set. For example, retrieving fewer than 1,000 samples for each request helps minimize the risk of performance degradation. Avoid frequent querying of the federation endpoint for Prometheus. Limit queries to a maximum of one every 30 seconds. If you need to forward large amounts of data outside the cluster, use remote write instead. For more information, see the Configuring remote write storage section. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-monitoring-view cluster role or have obtained a bearer token with get permission on the namespaces resource. Note You can only use bearer token authentication to access the Prometheus federation endpoint. You are logged in to an account that has permission to get the Prometheus federation route. Note If your account does not have permission to get the Prometheus federation route, a cluster administrator can provide the URL for the route. Procedure Retrieve the bearer token by running the following the command: USD TOKEN=USD(oc whoami -t) Get the Prometheus federation route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route prometheus-k8s-federate -ojsonpath='{.status.ingress[].host}') Query metrics from the /federate route. The following example command queries up metrics: USD curl -G -k -H "Authorization: Bearer USDTOKEN" https://USDHOST/federate --data-urlencode 'match[]=up' Example output # TYPE up untyped up{apiserver="kube-apiserver",endpoint="https",instance="10.0.143.148:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035322214 up{apiserver="kube-apiserver",endpoint="https",instance="10.0.148.166:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035338597 up{apiserver="kube-apiserver",endpoint="https",instance="10.0.173.16:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035343834 ... 11.4. Accessing metrics from outside the cluster for custom applications You can query Prometheus metrics from outside the cluster when monitoring your own services with user-defined projects. Access this data from outside the cluster by using the thanos-querier route. This access only supports using a bearer token for authentication. Prerequisites You have deployed your own service, following the "Enabling monitoring for user-defined projects" procedure. You are logged in to an account with the cluster-monitoring-view cluster role, which provides permission to access the Thanos Querier API. You are logged in to an account that has permission to get the Thanos Querier API route. Note If your account does not have permission to get the Thanos Querier API route, a cluster administrator can provide the URL for the route. 
Procedure Extract an authentication token to connect to Prometheus by running the following command: USD TOKEN=USD(oc whoami -t) Extract the thanos-querier API route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route thanos-querier -ojsonpath='{.status.ingress[].host}') Set the namespace to the namespace in which your service is running by using the following command: USD NAMESPACE=ns1 Query the metrics of your own services in the command line by running the following command: USD curl -H "Authorization: Bearer USDTOKEN" -k "https://USDHOST/api/v1/query?" --data-urlencode "query=up{namespace='USDNAMESPACE'}" The output shows the status for each application pod that Prometheus is scraping: The formatted example output { "status": "success", "data": { "resultType": "vector", "result": [ { "metric": { "__name__": "up", "endpoint": "web", "instance": "10.129.0.46:8080", "job": "prometheus-example-app", "namespace": "ns1", "pod": "prometheus-example-app-68d47c4fb6-jztp2", "service": "prometheus-example-app" }, "value": [ 1591881154.748, "1" ] } ], } } Note The formatted example output uses a filtering tool, such as jq , to provide the formatted indented JSON. See the jq Manual (jq documentation) for more information about using jq . The command requests an instant query endpoint of the Thanos Querier service, which evaluates selectors at one point in time. 11.5. Additional resources Enabling monitoring for user-defined projects Configuring remote write storage Managing metrics Managing alerts
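As a small supplement to the procedure in Section 11.4, the sketch below shows one way a cluster administrator might grant the required cluster-monitoring-view role and then pretty-print the query result with jq, which the note above mentions. The user name is a placeholder, jq must be installed separately, and the TOKEN, HOST, and NAMESPACE variables are assumed to have been set as in the procedure above.

# Grant the account permission to query the Thanos Querier API (run as a cluster administrator)
oc adm policy add-cluster-role-to-user cluster-monitoring-view example-user

# Re-run the query and format the JSON response for readability
curl -s -H "Authorization: Bearer $TOKEN" -k \
  "https://$HOST/api/v1/query?" --data-urlencode "query=up{namespace='$NAMESPACE'}" | jq .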
[ "TOKEN=USD(oc whoami -t)", "HOST=USD(oc -n openshift-monitoring get route alertmanager-main -ojsonpath='{.status.ingress[].host}')", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v2/receivers\"", "TOKEN=USD(oc whoami -t)", "HOST=USD(oc -n openshift-monitoring get route prometheus-k8s-federate -ojsonpath='{.status.ingress[].host}')", "curl -G -k -H \"Authorization: Bearer USDTOKEN\" https://USDHOST/federate --data-urlencode 'match[]=up'", "TYPE up untyped up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.143.148:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035322214 up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.148.166:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035338597 up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.173.16:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035343834", "TOKEN=USD(oc whoami -t)", "HOST=USD(oc -n openshift-monitoring get route thanos-querier -ojsonpath='{.status.ingress[].host}')", "NAMESPACE=ns1", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/query?\" --data-urlencode \"query=up{namespace='USDNAMESPACE'}\"", "{ \"status\": \"success\", \"data\": { \"resultType\": \"vector\", \"result\": [ { \"metric\": { \"__name__\": \"up\", \"endpoint\": \"web\", \"instance\": \"10.129.0.46:8080\", \"job\": \"prometheus-example-app\", \"namespace\": \"ns1\", \"pod\": \"prometheus-example-app-68d47c4fb6-jztp2\", \"service\": \"prometheus-example-app\" }, \"value\": [ 1591881154.748, \"1\" ] } ], } }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/monitoring/accessing-third-party-monitoring-apis
4.12. Prioritizing and Disabling SELinux Policy Modules
4.12. Prioritizing and Disabling SELinux Policy Modules The SELinux module storage in /etc/selinux/ allows using a priority on SELinux modules. Enter the following command as root to show two module directories with different priorities: While the default priority used by the semodule utility is 400, the priority used in the selinux-policy packages is 100, so most of the SELinux modules on the system are installed with priority 100. You can override an existing module with a modified module of the same name by installing it with a higher priority. When there are multiple modules with the same name and different priorities, only the module with the highest priority is used when the policy is built. Example 4.1. Using SELinux Policy Modules Priority Prepare a new module with a modified file context. Install the module with the semodule -i command and set the priority of the module to 400. The following example uses sandbox.pp. To return to the default module, enter the semodule -r command as root: Disabling a System Policy Module To disable a system policy module, enter the following command as root: Warning If you remove a system policy module using the semodule -r command, it is deleted from your system's storage and you cannot load it again. To avoid unnecessary reinstallations of the selinux-policy-targeted package for restoring all system policy modules, use the semodule -d command instead.
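The original text does not show how the modified sandbox.pp module is produced. The following is a minimal sketch of one way to build such a module from source files; the file names, the checkmodule and semodule_package invocations, and the assumption that the module source declares the same module name (sandbox) so that it overrides the system module are illustrative assumptions, not part of the documented procedure.

# Build a local replacement for the sandbox module from hypothetical source files.
# sandbox.te must declare "module sandbox <version>;" so that the new module overrides
# the system module of the same name; sandbox.fc carries the modified file contexts.
checkmodule -M -m -o sandbox.mod sandbox.te
semodule_package -o sandbox.pp -m sandbox.mod -f sandbox.fc
# Install at priority 400 so it takes precedence over the priority-100 system module
semodule -X 400 -i sandbox.pp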
[ "~]# ls /etc/selinux/targeted/active/modules 100 400 disabled", "~]# semodule -X 400 -i sandbox.pp ~]# semodule --list-modules=full | grep sandbox 400 sandbox pp 100 sandbox pp", "~]# semodule -X 400 -r sandbox libsemanage.semanage_direct_remove_key: sandbox module at priority 100 is now active.", "semodule -d MODULE_NAME" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/security-enhanced_linux-prioritizing_selinux_modules
D.2. Enabling FIPS in RHV hosts and the standalone Manager
D.2. Enabling FIPS in RHV hosts and the standalone Manager You can enable FIPS mode when installing a Red Hat Enterprise Linux (RHEL) host or Red Hat Virtualization Host (RHVH). For details, see Installing a RHEL 8 system with FIPS mode enabled in the guide Security hardening for Red Hat Enterprise Linux 8. Red Hat does not support switching a provisioned host or Manager machine to FIPS mode. Verification Verify that FIPS is enabled by entering the command fips-mode-setup --check on the host. The command should return FIPS mode is enabled : # fips-mode-setup --check FIPS mode is enabled.
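As an additional, optional check that is not part of the original procedure, the kernel's own FIPS flag can be inspected; a value of 1 indicates that the kernel is running in FIPS mode.

# Kernel-level FIPS flag; expected output is 1 when FIPS mode is enabled
cat /proc/sys/crypto/fips_enabled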
[ "fips-mode-setup --check FIPS mode is enabled." ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/enabling_fips_rhel_hosts_and_rhvm_appendix_fips
Chapter 5. Provisioning Concepts
Chapter 5. Provisioning Concepts An important feature of Red Hat Satellite is unattended provisioning of hosts. To achieve this, Red Hat Satellite uses DNS and DHCP infrastructures, PXE booting, TFTP, and Kickstart. Use this chapter to understand the working principle of these concepts. 5.1. PXE Booting Preboot execution environment (PXE) provides the ability to boot a system over a network. Instead of using local hard drives or a CD-ROM, PXE uses DHCP to provide host with standard information about the network, to discover a TFTP server, and to download a boot image. For more information about setting up a PXE server see the Red Hat Knowledgebase solution How to set-up/configure a PXE Server . 5.1.1. PXE Sequence The host boots the PXE image if no other bootable image is found. A NIC of the host sends a broadcast request to the DHCP server. The DHCP server receives the request and sends standard information about the network: IP address, subnet mask, gateway, DNS, the location of a TFTP server, and a boot image. The host obtains the boot loader image/pxelinux.0 and the configuration file pxelinux.cfg/00:MA:CA:AD:D from the TFTP server. The host configuration specifies the location of a kernel image, initrd and Kickstart. The host downloads the files and installs the image. For an example of using PXE Booting by Satellite Server, see Provisioning Workflow in the Provisioning Guide . 5.1.2. PXE Booting Requirements To provision machines using PXE booting, ensure that you meet the following requirements: Network requirements Optional: If the host and the DHCP server are separated by a router, configure the DHCP relay agent and point to the DHCP server. Client requirements Ensure that all the network-based firewalls are configured to allow clients on the subnet to access the Capsule. For more information, see Figure 2.1, "Satellite Topology with Isolated Capsule" . Ensure that your client has access to the DHCP and TFTP servers. Satellite requirements Ensure that both Satellite Server and Capsule have DNS configured and are able to resolve provisioned host names. Ensure that the UDP ports 67 and 68 are accessible by the client to enable the client to receive a DHCP offer with the boot options. Ensure that the UDP port 69 is accessible by the client so that the client can access the TFTP server on the Capsule. Ensure that the TCP port 80 is accessible by the client to allow the client to download files and Kickstart templates from the Capsule. Ensure that the host provisioning interface subnet has a DHCP Capsule set. Ensure that the host provisioning interface subnet has a TFTP Capsule set. Ensure that the host provisioning interface subnet has a Templates Capsule set. Ensure that DHCP with the correct subnet is enabled using the Satellite installer. Enable TFTP using the Satellite installer. 5.2. HTTP Booting You can use HTTP booting to boot systems over a network using HTTP. 5.2.1. HTTP Booting Requirements with managed DHCP To provision machines through HTTP booting ensure that you meet the following requirements: Client requirements For HTTP booting to work, ensure that your environment has the following client-side configurations: All the network-based firewalls are configured to allow clients on the subnet to access the Capsule. For more information, see Figure 2.1, "Satellite Topology with Isolated Capsule" . Your client has access to the DHCP and DNS servers. Your client has access to the HTTP UEFI Boot Capsule. 
Network requirements Optional: If the host and the DHCP server are separated by a router, configure the DHCP relay agent and point it to the DHCP server. Satellite requirements Although the TFTP protocol is not used for HTTP UEFI booting, Satellite uses the TFTP Capsule API to deploy the bootloader configuration. For HTTP booting to work, ensure that Satellite has the following configurations: Both Satellite Server and Capsule have DNS configured and are able to resolve provisioned host names. The UDP ports 67 and 68 are accessible by the client so that the client can send and receive a DHCP request and offer. Ensure that the TCP port 8000 is open for the client to download the bootloader and Kickstart templates from the Capsule. The TCP port 9090 is open for the client to download the bootloader from the Capsule using the HTTPS protocol. The subnet that functions as the host's provisioning interface has a DHCP Capsule, an HTTP Boot Capsule, a TFTP Capsule, and a Templates Capsule. The grub2-efi package is updated to the latest version. To update the grub2-efi package to the latest version and run the installer to copy the updated bootloader from /boot into the /var/lib/tftpboot directory, enter the following commands: 5.2.2. HTTP Booting Requirements with unmanaged DHCP To provision machines through HTTP booting without managed DHCP, ensure that you meet the following requirements: Client requirements The HTTP UEFI Boot URL must be set to one of the following: http://{smartproxy.example.com}:8000 https://{smartproxy.example.com}:9090 Ensure that your client has access to the DHCP and DNS servers. Ensure that your client has access to the HTTP UEFI Boot Capsule. Ensure that all the network-based firewalls are configured to allow clients on the subnet to access the Capsule. For more information, see Figure 2.1, "Satellite Topology with Isolated Capsule" . Network requirements An unmanaged DHCP server available for clients. An unmanaged DNS server available for clients. If DNS is not available, use IP addresses to configure clients. Satellite requirements Although the TFTP protocol is not used for HTTP UEFI booting, Satellite uses the TFTP Capsule API to deploy the bootloader configuration. Ensure that both Satellite Server and Capsule have DNS configured and are able to resolve provisioned host names. Ensure that the UDP ports 67 and 68 are accessible by the client so that the client can send and receive a DHCP request and offer. Ensure that the TCP port 8000 is open for the client to download the bootloader and Kickstart templates from the Capsule. Ensure that the TCP port 9090 is open for the client to download the bootloader from the Capsule using the HTTPS protocol. Ensure that the host provisioning interface subnet has an HTTP Boot Capsule set. Ensure that the host provisioning interface subnet has a TFTP Capsule set. Ensure that the host provisioning interface subnet has a Templates Capsule set. Update the grub2-efi package to the latest version and run the installer to copy the updated bootloader from the /boot directory into the /var/lib/tftpboot directory: 5.3. Kickstart You can use Kickstart to automate the installation process of a Red Hat Satellite or Capsule Server by creating a Kickstart file that contains all the information that is required for the installation. For more information about Kickstart, see Kickstart Installations in the Red Hat Enterprise Linux 7 Installation Guide . 5.3.1.
Workflow When you run a Red Hat Satellite Kickstart script, the following workflow occurs: It specifies the installation location of a Satellite Server or a Capsule Server. It installs the predefined packages. It installs Subscription Manager. It uses Activation Keys to subscribe the hosts to Red Hat Satellite. It installs Puppet and configures a puppet.conf file to point to the Red Hat Satellite or Capsule instance. It enables Puppet to run and request a certificate. It runs user-defined snippets.
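To make the workflow more concrete, the following is a minimal, illustrative excerpt of the kind of %post section such a Kickstart file typically contains. The organization name, activation key, package name, and log path are placeholders, and the real templates shipped with Satellite do considerably more (for example, writing puppet.conf and requesting a Puppet certificate), so treat this only as an orientation aid, not as the actual rendered template.

%post --log=/root/ks-post.log
# Register the new host to Satellite with an activation key (placeholder values)
subscription-manager register --org="Example_Org" --activationkey="example-key"

# Install the agent packages the workflow refers to; the exact package set depends on the template
yum install -y puppet

# The rendered Satellite template would also configure puppet.conf to point at the
# Satellite or Capsule instance and trigger a certificate request here.
%end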
[ "satellite-maintain packages update grub2-efi satellite-installer", "satellite-maintain packages update grub2-efi satellite-installer" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/satellite_overview_concepts_and_deployment_considerations/chap-architecture_guide-provisioning_concepts
function::qs_wait
function::qs_wait Name function::qs_wait - Function to record enqueue requests Synopsis Arguments qname the name of the queue requesting enqueue Description This function records that a new request was enqueued for the given queue name.
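For context (not part of the reference entry itself), qs_wait is normally used together with the other queue_stats helpers to measure queue behaviour. The sketch below assumes that the companion functions qs_run, qs_done, qsq_start, and qsq_print from the same tapset, and the kernel block-layer tracepoints used here as example events, are available on the system; it is an illustrative pairing, not a documented recipe.

# Track block-layer requests as a named queue and print summary statistics after 30 seconds
stap -e '
probe begin { qsq_start("blockq") }                            # reset statistics for the queue
probe kernel.trace("block_rq_insert") { qs_wait("blockq") }    # a request was enqueued
probe kernel.trace("block_rq_issue") { qs_run("blockq") }      # the request starts being serviced
probe kernel.trace("block_rq_complete") { qs_done("blockq") }  # the request finished
probe timer.s(30) { qsq_print("blockq"); exit() }
'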
[ "qs_wait(qname:string)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-qs-wait
2. Clients
2. Clients The Customer Portal API is, by nature, client-agnostic. It is expressed as a set of resource URLs which send and receive XML data. Some common clients and platforms include, but are not limited to, the following:
cURL [a] - Platform/Environment: command line (Linux, UNIX, Mac OS X, Microsoft Windows). Comments: easy and transparent way to test commands and do simple integration.
Apache HTTP Client [b] - Platform/Environment: Java. Comments: most common Java library with which to talk HTTP; offers little semantic value beyond simple HTTP; no intrinsic binding of XML.
RESTeasy Client [c] - Platform/Environment: Java. Comments: full Java model integration; no need to think about HTTP or XML; uses Apache HTTP Client underneath.
[a] http://curl.haxx.se/ [b] http://hc.apache.org/httpclient-3.x/ [c] http://docs.jboss.org/resteasy/docs/2.0.0.GA/userguide/html/RESTEasy_Client_Framework.html
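To illustrate the cURL row above, a request against one of the portal's resource URLs looks roughly like the following. The host, path, credentials, and use of basic authentication are placeholders and assumptions for illustration only; substitute the actual resource URL and authentication mechanism documented for the API.

# Retrieve a resource as XML (placeholder URL and credentials)
curl -u "portal-user:portal-password" \
     -H "Accept: application/xml" \
     "https://<customer-portal-api-host>/<resource>"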
null
https://docs.redhat.com/en/documentation/red_hat_customer_portal/1/html/customer_portal_integration_guide/clients
18.2.5. Saving the Settings
18.2.5. Saving the Settings Click OK to save the changes and enable or disable the firewall. If Enable firewall was selected, the options selected are translated to iptables commands and written to the /etc/sysconfig/iptables file. The iptables service is also started so that the firewall is activated immediately after saving the selected options. If Disable firewall was selected, the /etc/sysconfig/iptables file is removed and the iptables service is stopped immediately. The selected options are also written to the /etc/sysconfig/system-config-selinux file so that the settings can be restored the next time the application is started. Do not edit this file by hand. Even though the firewall is activated immediately, the iptables service is not configured to start automatically at boot time. Refer to Section 18.2.6, "Activating the IPTables Service" for more information.
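If you also want the rules applied automatically at boot time, commands along the following lines are typically used on this generation of Red Hat Enterprise Linux; treat them as a sketch and see the next section for the supported procedure:
# Save the current rules and start the iptables service in runlevels 3, 4, and 5 at boot
service iptables save
chkconfig --level 345 iptables on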
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s2-basic-firewall-securitylevel-commit
Chapter 8. Configuring the audit log policy
Chapter 8. Configuring the audit log policy You can control the amount of information that is logged to the API server audit logs by choosing the audit log policy profile to use. 8.1. About audit log policy profiles Audit log profiles define how to log requests that come to the OpenShift API server, the Kubernetes API server, and the OAuth API server. OpenShift Container Platform provides the following predefined audit policy profiles: Profile Description Default Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. This is the default policy. WriteRequestBodies In addition to logging metadata for all requests, logs request bodies for every write request to the API servers ( create , update , patch ). This profile has more resource overhead than the Default profile. [1] AllRequestBodies In addition to logging metadata for all requests, logs request bodies for every read and write request to the API servers ( get , list , create , update , patch ). This profile has the most resource overhead. [1] None No requests are logged; even OAuth access token requests and OAuth authorize token requests are not logged. Warning It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. Sensitive resources, such as Secret , Route , and OAuthClient objects, are never logged past the metadata level. By default, OpenShift Container Platform uses the Default audit log profile. You can use another audit policy profile that also logs request bodies, but be aware of the increased resource usage (CPU, memory, and I/O). 8.2. Configuring the audit log policy You can configure the audit log policy to use when logging requests that come to the API servers. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver cluster Update the spec.audit.profile field: apiVersion: config.openshift.io/v1 kind: APIServer metadata: ... spec: audit: profile: WriteRequestBodies 1 1 Set to Default , WriteRequestBodies , AllRequestBodies , or None . The default profile is Default . Warning It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. Save the file to apply the changes. Verification Verify that a new revision of the Kubernetes API server pods is rolled out. It can take several minutes for all nodes to update to the new revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 
3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 8.3. Configuring the audit log policy with custom rules You can configure an audit log policy that defines custom rules. You can specify multiple groups and define which profile to use for that group. These custom rules take precedence over the top-level profile field. The custom rules are evaluated from top to bottom, and the first that matches is applied. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver cluster Add the spec.audit.customRules field: apiVersion: config.openshift.io/v1 kind: APIServer metadata: ... spec: audit: customRules: 1 - group: system:authenticated:oauth profile: WriteRequestBodies - group: system:authenticated profile: AllRequestBodies profile: Default 2 1 Add one or more groups and specify the profile to use for that group. These custom rules take precedence over the top-level profile field. The custom rules are evaluated from top to bottom, and the first that matches is applied. 2 Set to Default , WriteRequestBodies , AllRequestBodies , or None . If you do not set this top-level audit.profile field, it defaults to the Default profile. Warning It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. Save the file to apply the changes. Verification Verify that a new revision of the Kubernetes API server pods is rolled out. It can take several minutes for all nodes to update to the new revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 8.4. Disabling audit logging You can disable audit logging for OpenShift Container Platform. When you disable audit logging, even OAuth access token requests and OAuth authorize token requests are not logged. Warning It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver cluster Set the spec.audit.profile field to None : apiVersion: config.openshift.io/v1 kind: APIServer metadata: ... spec: audit: profile: None Note You can also disable audit logging only for specific groups by specifying custom rules in the spec.audit.customRules field. 
Save the file to apply the changes. Verification Verify that a new revision of the Kubernetes API server pods is rolled out. It can take several minutes for all nodes to update to the new revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12
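To confirm that the selected profile is actually producing, or suppressing, audit events, you can inspect the audit logs on a control plane node; the following commands are a sketch based on the standard audit log locations, and the exact node name and path may differ in your cluster:
# List the Kubernetes API server audit log files, then tail one of them
oc adm node-logs --role=master --path=kube-apiserver/
oc adm node-logs <master_node_name> --path=kube-apiserver/audit.log | tail -n 5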
[ "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: WriteRequestBodies 1", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: customRules: 1 - group: system:authenticated:oauth profile: WriteRequestBodies - group: system:authenticated profile: AllRequestBodies profile: Default 2", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: None", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/security_and_compliance/audit-log-policy-config
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.6/making-open-source-more-inclusive
Chapter 33. Load balancing with MetalLB
Chapter 33. Load balancing with MetalLB 33.1. About MetalLB and the MetalLB Operator As a cluster administrator, you can add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add an external IP address for the service. The external IP address is added to the host network for your cluster. 33.1.1. When to use MetalLB Using MetalLB is valuable when you have a bare-metal cluster, or an infrastructure that is like bare metal, and you want fault-tolerant access to an application through an external IP address. You must configure your networking infrastructure to ensure that network traffic for the external IP address is routed from clients to the host network for the cluster. After deploying MetalLB with the MetalLB Operator, when you add a service of type LoadBalancer , MetalLB provides a platform-native load balancer. MetalLB operating in layer2 mode provides support for failover by utilizing a mechanism similar to IP failover. However, instead of relying on the virtual router redundancy protocol (VRRP) and keepalived, MetalLB leverages a gossip-based protocol to identify instances of node failure. When a failover is detected, another node assumes the role of the leader node, and a gratuitous ARP message is dispatched to broadcast this change. MetalLB operating in layer3 or border gateway protocol (BGP) mode delegates failure detection to the network. The BGP router or routers that the OpenShift Container Platform nodes have established a connection with will identify any node failure and terminate the routes to that node. Using MetalLB instead of IP failover is preferable for ensuring high availability of pods and services. 33.1.2. MetalLB Operator custom resources The MetalLB Operator monitors its own namespace for the following custom resources: MetalLB When you add a MetalLB custom resource to the cluster, the MetalLB Operator deploys MetalLB on the cluster. The Operator only supports a single instance of the custom resource. If the instance is deleted, the Operator removes MetalLB from the cluster. IPAddressPool MetalLB requires one or more pools of IP addresses that it can assign to a service when you add a service of type LoadBalancer . An IPAddressPool includes a list of IP addresses. The list can be a single IP address that is set using a range, such as 1.1.1.1-1.1.1.1, a range specified in CIDR notation, a range specified as a starting and ending address separated by a hyphen, or a combination of the three. An IPAddressPool requires a name. The documentation uses names like doc-example , doc-example-reserved , and doc-example-ipv6 . The MetalLB controller assigns IP addresses from a pool of addresses in an IPAddressPool . L2Advertisement and BGPAdvertisement custom resources enable the advertisement of a given IP from a given pool. You can assign IP addresses from an IPAddressPool to services and namespaces by using the spec.serviceAllocation specification in the IPAddressPool custom resource. Note A single IPAddressPool can be referenced by a L2 advertisement and a BGP advertisement. BGPPeer The BGP peer custom resource identifies the BGP router for MetalLB to communicate with, the AS number of the router, the AS number for MetalLB, and customizations for route advertisement. MetalLB advertises the routes for service load-balancer IP addresses to one or more BGP peers. BFDProfile The BFD profile custom resource configures Bidirectional Forwarding Detection (BFD) for a BGP peer. 
BFD provides faster path failure detection than BGP alone provides. L2Advertisement The L2Advertisement custom resource advertises an IP coming from an IPAddressPool using the L2 protocol. BGPAdvertisement The BGPAdvertisement custom resource advertises an IP coming from an IPAddressPool using the BGP protocol. After you add the MetalLB custom resource to the cluster and the Operator deploys MetalLB, the controller and speaker MetalLB software components begin running. MetalLB validates all relevant custom resources. 33.1.3. MetalLB software components When you install the MetalLB Operator, the metallb-operator-controller-manager deployment starts a pod. The pod is the implementation of the Operator. The pod monitors for changes to all the relevant resources. When the Operator starts an instance of MetalLB, it starts a controller deployment and a speaker daemon set. Note You can configure deployment specifications in the MetalLB custom resource to manage how controller and speaker pods deploy and run in your cluster. For more information about these deployment specifications, see the Additional resources section. controller The Operator starts the deployment and a single pod. When you add a service of type LoadBalancer , Kubernetes uses the controller to allocate an IP address from an address pool. In case of a service failure, verify you have the following entry in your controller pod logs: Example output "event":"ipAllocated","ip":"172.22.0.201","msg":"IP address assigned by controller speaker The Operator starts a daemon set for speaker pods. By default, a pod is started on each node in your cluster. You can limit the pods to specific nodes by specifying a node selector in the MetalLB custom resource when you start MetalLB. If the controller allocated the IP address to the service and service is still unavailable, read the speaker pod logs. If the speaker pod is unavailable, run the oc describe pod -n command. For layer 2 mode, after the controller allocates an IP address for the service, the speaker pods use an algorithm to determine which speaker pod on which node will announce the load balancer IP address. The algorithm involves hashing the node name and the load balancer IP address. For more information, see "MetalLB and external traffic policy". The speaker uses Address Resolution Protocol (ARP) to announce IPv4 addresses and Neighbor Discovery Protocol (NDP) to announce IPv6 addresses. For Border Gateway Protocol (BGP) mode, after the controller allocates an IP address for the service, each speaker pod advertises the load balancer IP address with its BGP peers. You can configure which nodes start BGP sessions with BGP peers. Requests for the load balancer IP address are routed to the node with the speaker that announces the IP address. After the node receives the packets, the service proxy routes the packets to an endpoint for the service. The endpoint can be on the same node in the optimal case, or it can be on another node. The service proxy chooses an endpoint each time a connection is established. 33.1.4. MetalLB and external traffic policy With layer 2 mode, one node in your cluster receives all the traffic for the service IP address. With BGP mode, a router on the host network opens a connection to one of the nodes in the cluster for a new client connection. How your cluster handles the traffic after it enters the node is affected by the external traffic policy. cluster This is the default value for spec.externalTrafficPolicy . 
With the cluster traffic policy, after the node receives the traffic, the service proxy distributes the traffic to all the pods in your service. This policy provides uniform traffic distribution across the pods, but it obscures the client IP address and it can appear to the application in your pods that the traffic originates from the node rather than the client. local With the local traffic policy, after the node receives the traffic, the service proxy only sends traffic to the pods on the same node. For example, if the speaker pod on node A announces the external service IP, then all traffic is sent to node A. After the traffic enters node A, the service proxy only sends traffic to pods for the service that are also on node A. Pods for the service that are on additional nodes do not receive any traffic from node A. Pods for the service on additional nodes act as replicas in case failover is needed. This policy does not affect the client IP address. Application pods can determine the client IP address from the incoming connections. Note The following information is important when configuring the external traffic policy in BGP mode. Although MetalLB advertises the load balancer IP address from all the eligible nodes, the number of nodes loadbalancing the service can be limited by the capacity of the router to establish equal-cost multipath (ECMP) routes. If the number of nodes advertising the IP is greater than the ECMP group limit of the router, the router will use less nodes than the ones advertising the IP. For example, if the external traffic policy is set to local and the router has an ECMP group limit set to 16 and the pods implementing a LoadBalancer service are deployed on 30 nodes, this would result in pods deployed on 14 nodes not receiving any traffic. In this situation, it would be preferable to set the external traffic policy for the service to cluster . 33.1.5. MetalLB concepts for layer 2 mode In layer 2 mode, the speaker pod on one node announces the external IP address for a service to the host network. From a network perspective, the node appears to have multiple IP addresses assigned to a network interface. Note In layer 2 mode, MetalLB relies on ARP and NDP. These protocols implement local address resolution within a specific subnet. In this context, the client must be able to reach the VIP assigned by MetalLB that exists on the same subnet as the nodes announcing the service in order for MetalLB to work. The speaker pod responds to ARP requests for IPv4 services and NDP requests for IPv6. In layer 2 mode, all traffic for a service IP address is routed through one node. After traffic enters the node, the service proxy for the CNI network provider distributes the traffic to all the pods for the service. Because all traffic for a service enters through a single node in layer 2 mode, in a strict sense, MetalLB does not implement a load balancer for layer 2. Rather, MetalLB implements a failover mechanism for layer 2 so that when a speaker pod becomes unavailable, a speaker pod on a different node can announce the service IP address. When a node becomes unavailable, failover is automatic. The speaker pods on the other nodes detect that a node is unavailable and a new speaker pod and node take ownership of the service IP address from the failed node. The preceding graphic shows the following concepts related to MetalLB: An application is available through a service that has a cluster IP on the 172.130.0.0/16 subnet. That IP address is accessible from inside the cluster. 
The service also has an external IP address that MetalLB assigned to the service, 192.168.100.200 . Nodes 1 and 3 have a pod for the application. The speaker daemon set runs a pod on each node. The MetalLB Operator starts these pods. Each speaker pod is a host-networked pod. The IP address for the pod is identical to the IP address for the node on the host network. The speaker pod on node 1 uses ARP to announce the external IP address for the service, 192.168.100.200 . The speaker pod that announces the external IP address must be on the same node as an endpoint for the service and the endpoint must be in the Ready condition. Client traffic is routed to the host network and connects to the 192.168.100.200 IP address. After traffic enters the node, the service proxy sends the traffic to the application pod on the same node or another node according to the external traffic policy that you set for the service. If the external traffic policy for the service is set to cluster , the node that advertises the 192.168.100.200 load balancer IP address is selected from the nodes where a speaker pod is running. Only that node can receive traffic for the service. If the external traffic policy for the service is set to local , the node that advertises the 192.168.100.200 load balancer IP address is selected from the nodes where a speaker pod is running and at least an endpoint of the service. Only that node can receive traffic for the service. In the preceding graphic, either node 1 or 3 would advertise 192.168.100.200 . If node 1 becomes unavailable, the external IP address fails over to another node. On another node that has an instance of the application pod and service endpoint, the speaker pod begins to announce the external IP address, 192.168.100.200 and the new node receives the client traffic. In the diagram, the only candidate is node 3. 33.1.6. MetalLB concepts for BGP mode In BGP mode, by default each speaker pod advertises the load balancer IP address for a service to each BGP peer. It is also possible to advertise the IPs coming from a given pool to a specific set of peers by adding an optional list of BGP peers. BGP peers are commonly network routers that are configured to use the BGP protocol. When a router receives traffic for the load balancer IP address, the router picks one of the nodes with a speaker pod that advertised the IP address. The router sends the traffic to that node. After traffic enters the node, the service proxy for the CNI network plugin distributes the traffic to all the pods for the service. The directly-connected router on the same layer 2 network segment as the cluster nodes can be configured as a BGP peer. If the directly-connected router is not configured as a BGP peer, you need to configure your network so that packets for load balancer IP addresses are routed between the BGP peers and the cluster nodes that run the speaker pods. Each time a router receives new traffic for the load balancer IP address, it creates a new connection to a node. Each router manufacturer has an implementation-specific algorithm for choosing which node to initiate the connection with. However, the algorithms commonly are designed to distribute traffic across the available nodes for the purpose of balancing the network load. If a node becomes unavailable, the router initiates a new connection with another node that has a speaker pod that advertises the load balancer IP address. Figure 33.1. 
MetalLB topology diagram for BGP mode The preceding graphic shows the following concepts related to MetalLB: An application is available through a service that has an IPv4 cluster IP on the 172.130.0.0/16 subnet. That IP address is accessible from inside the cluster. The service also has an external IP address that MetalLB assigned to the service, 203.0.113.200 . Nodes 2 and 3 have a pod for the application. The speaker daemon set runs a pod on each node. The MetalLB Operator starts these pods. You can configure MetalLB to specify which nodes run the speaker pods. Each speaker pod is a host-networked pod. The IP address for the pod is identical to the IP address for the node on the host network. Each speaker pod starts a BGP session with all BGP peers and advertises the load balancer IP addresses or aggregated routes to the BGP peers. The speaker pods advertise that they are part of Autonomous System 65010. The diagram shows a router, R1, as a BGP peer within the same Autonomous System. However, you can configure MetalLB to start BGP sessions with peers that belong to other Autonomous Systems. All the nodes with a speaker pod that advertises the load balancer IP address can receive traffic for the service. If the external traffic policy for the service is set to cluster , all the nodes where a speaker pod is running advertise the 203.0.113.200 load balancer IP address and all the nodes with a speaker pod can receive traffic for the service. The host prefix is advertised to the router peer only if the external traffic policy is set to cluster. If the external traffic policy for the service is set to local , then all the nodes where a speaker pod is running and at least an endpoint of the service is running can advertise the 203.0.113.200 load balancer IP address. Only those nodes can receive traffic for the service. In the preceding graphic, nodes 2 and 3 would advertise 203.0.113.200 . You can configure MetalLB to control which speaker pods start BGP sessions with specific BGP peers by specifying a node selector when you add a BGP peer custom resource. Any routers, such as R1, that are configured to use BGP can be set as BGP peers. Client traffic is routed to one of the nodes on the host network. After traffic enters the node, the service proxy sends the traffic to the application pod on the same node or another node according to the external traffic policy that you set for the service. If a node becomes unavailable, the router detects the failure and initiates a new connection with another node. You can configure MetalLB to use a Bidirectional Forwarding Detection (BFD) profile for BGP peers. BFD provides faster link failure detection so that routers can initiate new connections earlier than without BFD. 33.1.7. Limitations and restrictions 33.1.7.1. Infrastructure considerations for MetalLB MetalLB is primarily useful for on-premise, bare metal installations because these installations do not include a native load-balancer capability. In addition to bare metal installations, installations of OpenShift Container Platform on some infrastructures might not include a native load-balancer capability. For example, the following infrastructures can benefit from adding the MetalLB Operator: Bare metal VMware vSphere IBM Z and IBM(R) LinuxONE IBM Z and IBM(R) LinuxONE for Red Hat Enterprise Linux (RHEL) KVM IBM Power MetalLB Operator and MetalLB are supported with the OpenShift SDN and OVN-Kubernetes network providers. 33.1.7.2. Limitations for layer 2 mode 33.1.7.2.1. 
Single-node bottleneck MetalLB routes all traffic for a service through a single node, so the node can become a bottleneck and limit performance. Layer 2 mode limits the ingress bandwidth for your service to the bandwidth of a single node. This is a fundamental limitation of using ARP and NDP to direct traffic. 33.1.7.2.2. Slow failover performance Failover between nodes depends on cooperation from the clients. When a failover occurs, MetalLB sends gratuitous ARP packets to notify clients that the MAC address associated with the service IP has changed. Most client operating systems handle gratuitous ARP packets correctly and update their neighbor caches promptly. When clients update their caches quickly, failover completes within a few seconds. Clients typically fail over to a new node within 10 seconds. However, some client operating systems either do not handle gratuitous ARP packets at all or have outdated implementations that delay the cache update. Recent versions of common operating systems such as Windows, macOS, and Linux implement layer 2 failover correctly. Issues with slow failover are not expected except for older and less common client operating systems. To minimize the impact from a planned failover on outdated clients, keep the old node running for a few minutes after flipping leadership. The old node can continue to forward traffic for outdated clients until their caches refresh. During an unplanned failover, the service IPs are unreachable until the outdated clients refresh their cache entries. 33.1.7.2.3. Additional Network and MetalLB cannot use same network Using the same VLAN for both MetalLB and an additional network interface set up on a source pod might result in a connection failure. This occurs when both the MetalLB IP and the source pod reside on the same node. To avoid connection failures, place the MetalLB IP in a different subnet from the one where the source pod resides. This configuration ensures that traffic from the source pod will take the default gateway. Consequently, the traffic can effectively reach its destination by using the OVN overlay network, ensuring that the connection functions as intended. 33.1.7.3. Limitations for BGP mode 33.1.7.3.1. Node failure can break all active connections MetalLB shares a limitation that is common to BGP-based load balancing. When a BGP session terminates, such as when a node fails or when a speaker pod restarts, the session termination might result in resetting all active connections. End users can experience a Connection reset by peer message. The consequence of a terminated BGP session is implementation-specific for each router manufacturer. However, you can anticipate that a change in the number of speaker pods affects the number of BGP sessions and that active connections with BGP peers will break. To avoid or reduce the likelihood of a service interruption, you can specify a node selector when you add a BGP peer. By limiting the number of nodes that start BGP sessions, a fault on a node that does not have a BGP session has no effect on connections to the service. 33.1.7.3.2. Support for a single ASN and a single router ID only When you add a BGP peer custom resource, you specify the spec.myASN field to identify the Autonomous System Number (ASN) that MetalLB belongs to. OpenShift Container Platform uses an implementation of BGP with MetalLB that requires MetalLB to belong to a single ASN.
If you attempt to add a BGP peer and specify a different value for spec.myASN than an existing BGP peer custom resource, you receive an error. Similarly, when you add a BGP peer custom resource, the spec.routerID field is optional. If you specify a value for this field, you must specify the same value for all other BGP peer custom resources that you add. The limitation to support a single ASN and a single router ID is a difference from the community-supported implementation of MetalLB. 33.1.8. Additional resources Comparison: Fault tolerant access to external IP addresses Removing IP failover Deployment specifications for MetalLB 33.2. Installing the MetalLB Operator As a cluster administrator, you can add the MetalLB Operator so that the Operator can manage the lifecycle for an instance of MetalLB on your cluster. MetalLB and IP failover are incompatible. If you configured IP failover for your cluster, perform the steps to remove IP failover before you install the Operator. 33.2.1. Installing the MetalLB Operator from the OperatorHub using the web console As a cluster administrator, you can install the MetalLB Operator by using the OpenShift Container Platform web console. Prerequisites Log in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Type a keyword into the Filter by keyword box or scroll to find the Operator you want. For example, type metallb to find the MetalLB Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. On the Install Operator page, accept the defaults and click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-operators namespace and that its status is Succeeded . If the Operator is not installed successfully, check the status of the Operator and review the logs: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-operators project that are reporting issues. 33.2.2. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. You can use the OpenShift CLI ( oc ) to install the MetalLB Operator. It is recommended that when using the CLI you install the Operator in the metallb-system namespace. Prerequisites A cluster installed on bare-metal hardware. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges.
Procedure Create a namespace for the MetalLB Operator by entering the following command: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: metallb-system EOF Create an Operator group custom resource (CR) in the namespace: USD cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system EOF Confirm the Operator group is installed in the namespace: USD oc get operatorgroup -n metallb-system Example output NAME AGE metallb-operator 14m Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, metallb-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators 1 sourceNamespace: openshift-marketplace 1 You must specify the redhat-operators value. To create the Subscription CR, run the following command: USD oc create -f metallb-sub.yaml Optional: To ensure BGP and BFD metrics appear in Prometheus, you can label the namespace as in the following command: USD oc label ns metallb-system "openshift.io/cluster-monitoring=true" Verification The verification steps assume the MetalLB Operator is installed in the metallb-system namespace. Confirm the install plan is in the namespace: USD oc get installplan -n metallb-system Example output NAME CSV APPROVAL APPROVED install-wzg94 metallb-operator.4.13.0-nnnnnnnnnnnn Automatic true Note Installation of the Operator might take a few seconds. To verify that the Operator is installed, enter the following command: USD oc get clusterserviceversion -n metallb-system \ -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase metallb-operator.4.13.0-nnnnnnnnnnnn Succeeded 33.2.3. Starting MetalLB on your cluster After you install the Operator, you need to configure a single instance of a MetalLB custom resource. After you configure the custom resource, the Operator starts MetalLB on your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the MetalLB Operator. Procedure This procedure assumes the MetalLB Operator is installed in the metallb-system namespace. If you installed using the web console substitute openshift-operators for the namespace. Create a single instance of a MetalLB custom resource: USD cat << EOF | oc apply -f - apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system EOF Verification Confirm that the deployment for the MetalLB controller and the daemon set for the MetalLB speaker are running. Verify that the deployment for the controller is running: USD oc get deployment -n metallb-system controller Example output NAME READY UP-TO-DATE AVAILABLE AGE controller 1/1 1 1 11m Verify that the daemon set for the speaker is running: USD oc get daemonset -n metallb-system speaker Example output NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE speaker 6 6 6 6 6 kubernetes.io/os=linux 18m The example output indicates 6 speaker pods. The number of speaker pods in your cluster might differ from the example output. Make sure the output indicates one pod for each node in your cluster. 33.2.4. 
Deployment specifications for MetalLB When you start an instance of MetalLB using the MetalLB custom resource, you can configure deployment specifications in the MetalLB custom resource to manage how the controller or speaker pods deploy and run in your cluster. Use these deployment specifications to manage the following tasks: Select nodes for MetalLB pod deployment. Manage scheduling by using pod priority and pod affinity. Assign CPU limits for MetalLB pods. Assign a container RuntimeClass for MetalLB pods. Assign metadata for MetalLB pods. 33.2.4.1. Limit speaker pods to specific nodes By default, when you start MetalLB with the MetalLB Operator, the Operator starts an instance of a speaker pod on each node in the cluster. Only the nodes with a speaker pod can advertise a load balancer IP address. You can configure the MetalLB custom resource with a node selector to specify which nodes run the speaker pods. The most common reason to limit the speaker pods to specific nodes is to ensure that only nodes with network interfaces on specific networks advertise load balancer IP addresses. Only the nodes with a running speaker pod are advertised as destinations of the load balancer IP address. If you limit the speaker pods to specific nodes and specify local for the external traffic policy of a service, then you must ensure that the application pods for the service are deployed to the same nodes. Example configuration to limit speaker pods to worker nodes apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: <.> node-role.kubernetes.io/worker: "" speakerTolerations: <.> - key: "Example" operator: "Exists" effect: "NoExecute" <.> The example configuration specifies to assign the speaker pods to worker nodes, but you can specify labels that you assigned to nodes or any valid node selector. <.> In this example configuration, the pod that this toleration is attached to tolerates any taint that matches the key value and effect value using the operator . After you apply a manifest with the spec.nodeSelector field, you can check the number of pods that the Operator deployed with the oc get daemonset -n metallb-system speaker command. Similarly, you can display the nodes that match your labels with a command like oc get nodes -l node-role.kubernetes.io/worker= . You can optionally allow the node to control which speaker pods should, or should not, be scheduled on them by using affinity rules. You can also limit these pods by applying a list of tolerations. For more information about affinity rules, taints, and tolerations, see the additional resources. 33.2.4.2. Configuring pod priority and pod affinity in a MetalLB deployment You can optionally assign pod priority and pod affinity rules to controller and speaker pods by configuring the MetalLB custom resource. The pod priority indicates the relative importance of a pod on a node and schedules the pod based on this priority. Set a high priority on your controller or speaker pod to ensure scheduling priority over other pods on the node. Pod affinity manages relationships among pods. Assign pod affinity to the controller or speaker pods to control on what node the scheduler places the pod in the context of pod relationships. For example, you can use pod affinity rules to ensure that certain pods are located on the same node or nodes, which can help improve network communication and reduce latency between those components. Prerequisites You are logged in as a user with cluster-admin privileges. 
You have installed the MetalLB Operator. You have started the MetalLB Operator on your cluster. Procedure Create a PriorityClass custom resource, such as myPriorityClass.yaml , to configure the priority level. This example defines a PriorityClass named high-priority with a value of 1000000 . Pods that are assigned this priority class are considered higher priority during scheduling compared to pods with lower priority classes: apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority value: 1000000 Apply the PriorityClass custom resource configuration: USD oc apply -f myPriorityClass.yaml Create a MetalLB custom resource, such as MetalLBPodConfig.yaml , to specify the priorityClassName and podAffinity values: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: priorityClassName: high-priority 1 affinity: podAffinity: 2 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname speakerConfig: priorityClassName: high-priority affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname 1 Specifies the priority class for the MetalLB controller pods. In this case, it is set to high-priority . 2 Specifies that you are configuring pod affinity rules. These rules dictate how pods are scheduled in relation to other pods or nodes. This configuration instructs the scheduler to schedule pods that have the label app: metallb onto nodes that share the same hostname. This helps to co-locate MetalLB-related pods on the same nodes, potentially optimizing network communication, latency, and resource usage between these pods. Apply the MetalLB custom resource configuration: USD oc apply -f MetalLBPodConfig.yaml Verification To view the priority class that you assigned to pods in the metallb-system namespace, run the following command: USD oc get pods -n metallb-system -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName Example output NAME PRIORITY controller-584f5c8cd8-5zbvg high-priority metallb-operator-controller-manager-9c8d9985-szkqg <none> metallb-operator-webhook-server-c895594d4-shjgx <none> speaker-dddf7 high-priority To verify that the scheduler placed pods according to pod affinity rules, view the metadata for the pod's node or nodes by running the following command: USD oc get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n metallb-system 33.2.4.3. Configuring pod CPU limits in a MetalLB deployment You can optionally assign pod CPU limits to controller and speaker pods by configuring the MetalLB custom resource. Defining CPU limits for the controller or speaker pods helps you to manage compute resources on the node. This ensures all pods on the node have the necessary compute resources to manage workloads and cluster housekeeping. Prerequisites You are logged in as a user with cluster-admin privileges. You have installed the MetalLB Operator. 
Procedure Create a MetalLB custom resource file, such as CPULimits.yaml , to specify the cpu value for the controller and speaker pods: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: resources: limits: cpu: "200m" speakerConfig: resources: limits: cpu: "300m" Apply the MetalLB custom resource configuration: USD oc apply -f CPULimits.yaml Verification To view compute resources for a pod, run the following command, replacing <pod_name> with your target pod: USD oc describe pod <pod_name> 33.2.5. Additional resources Placing pods on specific nodes using node selectors Understanding taints and tolerations Understanding pod priority Understanding pod affinity 33.2.6. steps Configuring MetalLB address pools 33.3. Upgrading the MetalLB If you are currently running version 4.10 or an earlier version of the MetalLB Operator, please note that automatic updates to any version later than 4.10 do not work. Upgrading to a newer version from any version of the MetalLB Operator that is 4.11 or later is successful. For example, upgrading from version 4.12 to version 4.13 will occur smoothly. A summary of the upgrade procedure for the MetalLB Operator from 4.10 and earlier is as follows: Delete the installed MetalLB Operator version for example 4.10. Ensure that the namespace and the metallb custom resource are not removed. Using the CLI, install the MetalLB Operator 4.13 in the same namespace where the version of the MetalLB Operator was installed. Note This procedure does not apply to automatic z-stream updates of the MetalLB Operator, which follow the standard straightforward method. For detailed steps to upgrade the MetalLB Operator from 4.10 and earlier, see the guidance that follows. As a cluster administrator, start the upgrade process by deleting the MetalLB Operator by using the OpenShift CLI ( oc ) or the web console. 33.3.1. Deleting the MetalLB Operator from a cluster using the web console Cluster administrators can delete installed Operators from a selected namespace by using the web console. Prerequisites Access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Navigate to the Operators Installed Operators page. Search for the MetalLB Operator. Then, click on it. On the right side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu. An Uninstall Operator? dialog box is displayed. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates. Note This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs. 33.3.2. Deleting MetalLB Operator from a cluster using the CLI Cluster administrators can delete installed Operators from a selected namespace by using the CLI. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. oc command installed on workstation. 
Procedure Check the current version of the subscribed MetalLB Operator in the currentCSV field: USD oc get subscription metallb-operator -n metallb-system -o yaml | grep currentCSV Example output currentCSV: metallb-operator.4.10.0-202207051316 Delete the subscription: USD oc delete subscription metallb-operator -n metallb-system Example output subscription.operators.coreos.com "metallb-operator" deleted Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step: USD oc delete clusterserviceversion metallb-operator.4.10.0-202207051316 -n metallb-system Example output clusterserviceversion.operators.coreos.com "metallb-operator.4.10.0-202207051316" deleted 33.3.3. Editing the MetalLB Operator Operator group When upgrading from any MetalLB Operator version up to and including 4.10 to 4.11 and later, remove spec.targetNamespaces from the Operator group custom resource (CR). You must remove the spec regardless of whether you used the web console or the CLI to delete the MetalLB Operator. Note The MetalLB Operator version 4.11 or later only supports the AllNamespaces install mode, whereas 4.10 or earlier versions support OwnNamespace or SingleNamespace modes. Prerequisites You have access to an OpenShift Container Platform cluster with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure List the Operator groups in the metallb-system namespace by running the following command: USD oc get operatorgroup -n metallb-system Example output NAME AGE metallb-system-7jc66 85m Verify that the spec.targetNamespaces is present in the Operator group CR associated with the metallb-system namespace by running the following command: USD oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: "" creationTimestamp: "2023-10-25T09:42:49Z" generateName: metallb-system- generation: 1 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: "25027" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: targetNamespaces: - metallb-system upgradeStrategy: Default status: lastUpdated: "2023-10-25T09:42:49Z" namespaces: - metallb-system Edit the Operator group and remove the targetNamespaces and metallb-system entries present under the spec section by running the following command: USD oc edit operatorgroup metallb-system-7jc66 -n metallb-system Example output operatorgroup.operators.coreos.com/metallb-system-7jc66 edited Verify the spec.targetNamespaces is removed from the Operator group custom resource associated with the metallb-system namespace by running the following command: USD oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: "" creationTimestamp: "2023-10-25T09:42:49Z" generateName: metallb-system- generation: 2 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: "61658" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: upgradeStrategy: Default status: lastUpdated: "2023-10-25T14:31:30Z" namespaces: - "" 33.3.4. Upgrading the MetalLB Operator Prerequisites Access the cluster as a user with the cluster-admin role.
Procedure Verify that the metallb-system namespace still exists: USD oc get namespaces | grep metallb-system Example output metallb-system Active 31m Verify the metallb custom resource still exists: USD oc get metallb -n metallb-system Example output NAME AGE metallb 33m Follow the guidance in "Installing from OperatorHub using the CLI" to install the latest 4.13 version of the MetalLB Operator. Note When installing the latest 4.13 version of the MetalLB Operator, you must install the Operator to the same namespace it was previously installed to. Verify the upgraded version of the Operator is now the 4.13 version. USD oc get csv -n metallb-system Example output NAME DISPLAY VERSION REPLACES PHASE metallb-operator.4.13.0-202207051316 MetalLB Operator 4.13.0-202207051316 Succeeded 33.3.5. Additional resources Deleting Operators from a cluster Installing the MetalLB Operator 33.4. Configuring MetalLB address pools As a cluster administrator, you can add, modify, and delete address pools. The MetalLB Operator uses the address pool custom resources to set the IP addresses that MetalLB can assign to services. The examples assume the namespace is metallb-system . 33.4.1. About the IPAddressPool custom resource Note The address pool custom resource definition (CRD) and API documented in "Load balancing with MetalLB" in OpenShift Container Platform 4.10 can still be used in 4.13. However, the enhanced functionality associated with advertising an IP address from an IPAddressPool with layer 2 protocols, or the BGP protocol, is not supported when using the AddressPool CRD. The fields for the IPAddressPool custom resource are described in the following tables. Table 33.1. MetalLB IPAddressPool pool custom resource Field Type Description metadata.name string Specifies the name for the address pool. When you add a service, you can specify this pool name in the metallb.universe.tf/address-pool annotation to select an IP address from a specific pool. The names doc-example , silver , and gold are used throughout the documentation. metadata.namespace string Specifies the namespace for the address pool. Specify the same namespace that the MetalLB Operator uses. metadata.label string Optional: Specifies the key value pair assigned to the IPAddressPool . This can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement and L2Advertisement CRD to associate the IPAddressPool with the advertisement. spec.addresses string Specifies a list of IP addresses for MetalLB Operator to assign to services. You can specify multiple ranges in a single pool; they will all share the same settings. Specify each range in CIDR notation or as starting and ending IP addresses separated with a hyphen. spec.autoAssign boolean Optional: Specifies whether MetalLB automatically assigns IP addresses from this pool. Specify false if you want to explicitly request an IP address from this pool with the metallb.universe.tf/address-pool annotation. The default value is true . spec.avoidBuggyIPs boolean Optional: When enabled, this ensures that IP addresses ending in .0 and .255 are not allocated from the pool. The default value is false . Some older consumer network equipment mistakenly blocks IP addresses ending in .0 and .255. You can assign IP addresses from an IPAddressPool to services and namespaces by configuring the spec.serviceAllocation specification. Table 33.2.
MetalLB IPAddressPool custom resource spec.serviceAllocation subfields Field Type Description priority int Optional: Defines the priority between IP address pools when more than one IP address pool matches a service or namespace. A lower number indicates a higher priority. namespaces array (string) Optional: Specifies a list of namespaces that you can assign to IP addresses in an IP address pool. namespaceSelectors array (LabelSelector) Optional: Specifies namespace labels that you can assign to IP addresses from an IP address pool by using label selectors in a list format. serviceSelectors array (LabelSelector) Optional: Specifies service labels that you can assign to IP addresses from an address pool by using label selectors in a list format. 33.4.2. Configuring an address pool As a cluster administrator, you can add address pools to your cluster to control the IP addresses that MetalLB can assign to load-balancer services. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75 1 This label assigned to the IPAddressPool can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement CRD to associate the IPAddressPool with the advertisement. Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Verification View the address pool: USD oc describe -n metallb-system IPAddressPool doc-example Example output Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: ... Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none> Confirm that the address pool name, such as doc-example , and the IP address ranges appear in the output. 33.4.3. Example address pool configurations 33.4.3.1. Example: IPv4 and CIDR ranges You can specify a range of IP addresses in CIDR notation. You can combine CIDR notation with the notation that uses a hyphen to separate lower and upper bounds. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5 33.4.3.2. Example: Reserve IP addresses You can set the autoAssign field to false to prevent MetalLB from automatically assigning the IP addresses from the pool. When you add a service, you can request a specific IP address from the pool or you can specify the pool name in an annotation to request any IP address from the pool. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false 33.4.3.3. Example: IPv4 and IPv6 addresses You can add address pools that use IPv4 and IPv6. You can specify multiple ranges in the addresses list, just like several IPv4 examples. Whether the service is assigned a single IPv4 address, a single IPv6 address, or both is determined by how you add the service. The spec.ipFamilies and spec.ipFamilyPolicy fields control how IP addresses are assigned to the service. 
apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100 33.4.3.4. Example: Assign IP address pools to services or namespaces You can assign IP addresses from an IPAddressPool to services and namespaces that you specify. If you assign a service or namespace to more than one IP address pool, MetalLB uses an available IP address from the higher-priority IP address pool. If no IP addresses are available from the assigned IP address pools with a high priority, MetalLB uses available IP addresses from an IP address pool with lower priority or no priority. Note You can use the matchLabels label selector, the matchExpressions label selector, or both, for the namespaceSelectors and serviceSelectors specifications. This example demonstrates one label selector for each specification. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-service-allocation namespace: metallb-system spec: addresses: - 192.168.20.0/24 serviceAllocation: priority: 50 1 namespaces: 2 - namespace-a - namespace-b namespaceSelectors: 3 - matchLabels: zone: east serviceSelectors: 4 - matchExpressions: - key: security operator: In values: - S1 1 Assign a priority to the address pool. A lower number indicates a higher priority. 2 Assign one or more namespaces to the IP address pool in a list format. 3 Assign one or more namespace labels to the IP address pool by using label selectors in a list format. 4 Assign one or more service labels to the IP address pool by using label selectors in a list format. 33.4.4. steps Configuring MetalLB with an L2 advertisement and label Configuring MetalLB BGP peers Configuring services to use MetalLB 33.5. About advertising for the IP address pools You can configure MetalLB so that the IP address is advertised with layer 2 protocols, the BGP protocol, or both. With layer 2, MetalLB provides a fault-tolerant external IP address. With BGP, MetalLB provides fault-tolerance for the external IP address and load balancing. MetalLB supports advertising using L2 and BGP for the same set of IP addresses. MetalLB provides the flexibility to assign address pools to specific BGP peers effectively to a subset of nodes on the network. This allows for more complex configurations, for example facilitating the isolation of nodes or the segmentation of the network. 33.5.1. About the BGPAdvertisement custom resource The fields for the BGPAdvertisements object are defined in the following table: Table 33.3. BGPAdvertisements configuration Field Type Description metadata.name string Specifies the name for the BGP advertisement. metadata.namespace string Specifies the namespace for the BGP advertisement. Specify the same namespace that the MetalLB Operator uses. spec.aggregationLength integer Optional: Specifies the number of bits to include in a 32-bit CIDR mask. To aggregate the routes that the speaker advertises to BGP peers, the mask is applied to the routes for several service IP addresses and the speaker advertises the aggregated route. For example, with an aggregation length of 24 , the speaker can aggregate several 10.0.1.x/32 service IP addresses and advertise a single 10.0.1.0/24 route. spec.aggregationLengthV6 integer Optional: Specifies the number of bits to include in a 128-bit CIDR mask. 
For example, with an aggregation length of 124 , the speaker can aggregate several fc00:f853:0ccd:e799::x/128 service IP addresses and advertise a single fc00:f853:0ccd:e799::0/124 route. spec.communities string Optional: Specifies one or more BGP communities. Each community is specified as two 16-bit values separated by the colon character. Well-known communities must be specified as 16-bit values: NO_EXPORT : 65535:65281 NO_ADVERTISE : 65535:65282 NO_EXPORT_SUBCONFED : 65535:65283 Note You can also use community objects that are created along with the strings. spec.localPref integer Optional: Specifies the local preference for this advertisement. This BGP attribute applies to BGP sessions within the Autonomous System. spec.ipAddressPools string Optional: The list of IPAddressPools to advertise with this advertisement, selected by name. spec.ipAddressPoolSelectors string Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools . spec.nodeSelectors string Optional: NodeSelectors allows to limit the nodes to announce as hops for the load balancer IP. When empty, all the nodes are announced as hops. spec.peers string Optional: Use a list to specify the metadata.name values for each BGPPeer resource that receives advertisements for the MetalLB service IP address. The MetalLB service IP address is assigned from the IP address pool. By default, the MetalLB service IP address is advertised to all configured BGPPeer resources. Use this field to limit the advertisement to specific BGPpeer resources. 33.5.2. Configuring MetalLB with a BGP advertisement and a basic use case Configure MetalLB as follows so that the peer BGP routers receive one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. Because the localPref and communities fields are not specified, the routes are advertised with localPref set to zero and no BGP communities. 33.5.2.1. Example: Advertise a basic address pool configuration with BGP Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a BGP advertisement. Create a file, such as bgpadvertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic Apply the configuration: USD oc apply -f bgpadvertisement.yaml 33.5.3. Configuring MetalLB with a BGP advertisement and an advanced use case Configure MetalLB as follows so that MetalLB assigns IP addresses to load-balancer services in the ranges between 203.0.113.200 and 203.0.113.203 and between fc00:f853:ccd:e799::0 and fc00:f853:ccd:e799::f . 
To explain the two BGP advertisements, consider an instance when MetalLB assigns the IP address of 203.0.113.200 to a service. With that IP address as an example, the speaker advertises two routes to BGP peers: 203.0.113.200/32 , with localPref set to 100 and the community set to the numeric value of the NO_ADVERTISE community. This specification indicates to the peer routers that they can use this route but they should not propagate information about this route to BGP peers. 203.0.113.200/30 , aggregates the load-balancer IP addresses assigned by MetalLB into a single route. MetalLB advertises the aggregated route to BGP peers with the community attribute set to 8000:800 . BGP peers propagate the 203.0.113.200/30 route to other BGP peers. When traffic is routed to a node with a speaker, the 203.0.113.200/32 route is used to forward the traffic into the cluster and to a pod that is associated with the service. As you add more services and MetalLB assigns more load-balancer IP addresses from the pool, peer routers receive one local route, 203.0.113.20x/32 , for each service, as well as the 203.0.113.200/30 aggregate route. Each service that you add generates the /30 route, but MetalLB deduplicates the routes to one BGP advertisement before communicating with peer routers. 33.5.3.1. Example: Advertise an advanced address pool configuration with BGP Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a BGP advertisement. Create a file, such as bgpadvertisement1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement1.yaml Create a file, such as bgpadvertisement2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124 Apply the configuration: USD oc apply -f bgpadvertisement2.yaml 33.5.4. Advertising an IP address pool from a subset of nodes To advertise an IP address from an IP addresses pool, from a specific set of nodes only, use the .spec.nodeSelector specification in the BGPAdvertisement custom resource. This specification associates a pool of IP addresses with a set of nodes in the cluster. This is useful when you have nodes on different subnets in a cluster and you want to advertise an IP addresses from an address pool from a specific subnet, for example a public-facing subnet only. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Create an IP address pool by using a custom resource: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400 Control which nodes in the cluster the IP address from pool1 advertises from by defining the .spec.nodeSelector value in the BGPAdvertisement custom resource: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example spec: ipAddressPools: - pool1 nodeSelector: - matchLabels: kubernetes.io/hostname: NodeA - matchLabels: kubernetes.io/hostname: NodeB In this example, the IP address from pool1 advertises from NodeA and NodeB only. 33.5.5. About the L2Advertisement custom resource The fields for the l2Advertisements object are defined in the following table: Table 33.4. L2 advertisements configuration Field Type Description metadata.name string Specifies the name for the L2 advertisement. metadata.namespace string Specifies the namespace for the L2 advertisement. Specify the same namespace that the MetalLB Operator uses. spec.ipAddressPools string Optional: The list of IPAddressPools to advertise with this advertisement, selected by name. spec.ipAddressPoolSelectors string Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools . spec.nodeSelectors string Optional: NodeSelectors limits the nodes to announce as hops for the load balancer IP. When empty, all the nodes are announced as hops. Important Limiting the nodes to announce as hops is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . spec.interfaces string Optional: The list of interfaces that are used to announce the load balancer IP. 33.5.6. Configuring MetalLB with an L2 advertisement Configure MetalLB as follows so that the IPAddressPool is advertised with the L2 protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a L2 advertisement. Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 Apply the configuration: USD oc apply -f l2advertisement.yaml 33.5.7. 
Configuring MetalLB with an L2 advertisement and label The ipAddressPoolSelectors field in the BGPAdvertisement and L2Advertisement custom resource definitions is used to associate the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. This example shows how to configure MetalLB so that the IPAddressPool is advertised with the L2 protocol by configuring the ipAddressPoolSelectors field. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create an L2 advertisement that advertises the IP by using ipAddressPoolSelectors . Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east Apply the configuration: USD oc apply -f l2advertisement.yaml 33.5.8. Configuring MetalLB with an L2 advertisement for selected interfaces By default, the IP addresses from the IP address pool that is assigned to the service are advertised from all the network interfaces. The interfaces field in the L2Advertisement custom resource definition is used to restrict the network interfaces that advertise the IP address pool. This example shows how to configure MetalLB so that the IP address pool is advertised only from the network interfaces listed in the interfaces field of all nodes. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , and enter the configuration details like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false Apply the configuration for the IP address pool like the following example: USD oc apply -f ipaddresspool.yaml Create an L2 advertisement that advertises the IP with an interfaces selector. Create a YAML file, such as l2advertisement.yaml , and enter the configuration details like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 interfaces: - interfaceA - interfaceB Apply the configuration for the advertisement like the following example: USD oc apply -f l2advertisement.yaml Important The interface selector does not affect how MetalLB chooses the node to announce a given IP by using L2. The chosen node does not announce the service if the node does not have the selected interface. 33.5.9. Additional resources Configuring a community alias . 33.6. Configuring MetalLB BGP peers As a cluster administrator, you can add, modify, and delete Border Gateway Protocol (BGP) peers. The MetalLB Operator uses the BGP peer custom resources to identify which peers the MetalLB speaker pods contact to start BGP sessions. The peers receive the route advertisements for the load-balancer IP addresses that MetalLB assigns to services.
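Before adding or modifying peers, it can be useful to review the BGPPeer resources that already exist. The following commands are a sketch of that check; they assume the MetalLB Operator is installed in the default metallb-system namespace, and <bgp_peer_name> is a placeholder.
USD oc get bgppeers -n metallb-system
USD oc describe bgppeer <bgp_peer_name> -n metallb-system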
33.6.1. About the BGP peer custom resource The fields for the BGP peer custom resource are described in the following table. Table 33.5. MetalLB BGP peer custom resource Field Type Description metadata.name string Specifies the name for the BGP peer custom resource. metadata.namespace string Specifies the namespace for the BGP peer custom resource. spec.myASN integer Specifies the Autonomous System number for the local end of the BGP session. Specify the same value in all BGP peer custom resources that you add. The range is 0 to 4294967295 . spec.peerASN integer Specifies the Autonomous System number for the remote end of the BGP session. The range is 0 to 4294967295 . spec.peerAddress string Specifies the IP address of the peer to contact for establishing the BGP session. spec.sourceAddress string Optional: Specifies the IP address to use when establishing the BGP session. The value must be an IPv4 address. spec.peerPort integer Optional: Specifies the network port of the peer to contact for establishing the BGP session. The range is 0 to 16384 . spec.holdTime string Optional: Specifies the duration for the hold time to propose to the BGP peer. The minimum value is 3 seconds ( 3s ). The common units are seconds and minutes, such as 3s , 1m , and 5m30s . To detect path failures more quickly, also configure BFD. spec.keepaliveTime string Optional: Specifies the maximum interval between sending keep-alive messages to the BGP peer. If you specify this field, you must also specify a value for the holdTime field. The specified value must be less than the value for the holdTime field. spec.routerID string Optional: Specifies the router ID to advertise to the BGP peer. If you specify this field, you must specify the same value in every BGP peer custom resource that you add. spec.password string Optional: Specifies the MD5 password to send to the peer for routers that enforce TCP MD5 authenticated BGP sessions. spec.passwordSecret string Optional: Specifies name of the authentication secret for the BGP Peer. The secret must live in the metallb namespace and be of type basic-auth. spec.bfdProfile string Optional: Specifies the name of a BFD profile. spec.nodeSelectors object[] Optional: Specifies a selector, using match expressions and match labels, to control which nodes can connect to the BGP peer. spec.ebgpMultiHop boolean Optional: Specifies that the BGP peer is multiple network hops away. If the BGP peer is not directly connected to the same network, the speaker cannot establish a BGP session unless this field is set to true . This field applies to external BGP . External BGP is the term that is used to describe when a BGP peer belongs to a different Autonomous System. Note The passwordSecret field is mutually exclusive with the password field, and contains a reference to a secret containing the password to use. Setting both fields results in a failure of the parsing. 33.6.2. Configuring a BGP peer As a cluster administrator, you can add a BGP peer custom resource to exchange routing information with network routers and advertise the IP addresses for services. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Configure MetalLB with a BGP advertisement. 
Procedure Create a file, such as bgppeer.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer.yaml 33.6.3. Configure a specific set of BGP peers for a given address pool This procedure illustrates how to: Configure a set of address pools ( pool1 and pool2 ). Configure a set of BGP peers ( peer1 and peer2 ). Configure BGP advertisement to assign pool1 to peer1 and pool2 to peer2 . Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create address pool pool1 . Create a file, such as ipaddresspool1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400 Apply the configuration for the IP address pool pool1 : USD oc apply -f ipaddresspool1.yaml Create address pool pool2 . Create a file, such as ipaddresspool2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400 Apply the configuration for the IP address pool pool2 : USD oc apply -f ipaddresspool2.yaml Create BGP peer1 . Create a file, such as bgppeer1.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer1.yaml Create BGP peer2 . Create a file, such as bgppeer2.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer2: USD oc apply -f bgppeer2.yaml Create BGP advertisement 1. Create a file, such as bgpadvertisement1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement1.yaml Create BGP advertisement 2. Create a file, such as bgpadvertisement2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement2.yaml 33.6.4. Example BGP peer configurations 33.6.4.1. Example: Limit which nodes connect to a BGP peer You can specify the node selectors field to control which nodes can connect to a BGP peer. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com] 33.6.4.2. 
Example: Specify a BFD profile for a BGP peer You can specify a BFD profile to associate with BGP peers. BFD compliments BGP by providing more rapid detection of communication failures between peers than BGP alone. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: "10s" bfdProfile: doc-example-bfd-profile-full Note Deleting the bidirectional forwarding detection (BFD) profile and removing the bfdProfile added to the border gateway protocol (BGP) peer resource does not disable the BFD. Instead, the BGP peer starts using the default BFD profile. To disable BFD from a BGP peer resource, delete the BGP peer configuration and recreate it without a BFD profile. For more information, see BZ#2050824 . 33.6.4.3. Example: Specify BGP peers for dual-stack networking To support dual-stack networking, add one BGP peer custom resource for IPv4 and one BGP peer custom resource for IPv6. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500 33.6.5. steps Configuring services to use MetalLB 33.7. Configuring community alias As a cluster administrator, you can configure a community alias and use it across different advertisements. 33.7.1. About the community custom resource The community custom resource is a collection of aliases for communities. Users can define named aliases to be used when advertising ipAddressPools using the BGPAdvertisement . The fields for the community custom resource are described in the following table. Note The community CRD applies only to BGPAdvertisement. Table 33.6. MetalLB community custom resource Field Type Description metadata.name string Specifies the name for the community . metadata.namespace string Specifies the namespace for the community . Specify the same namespace that the MetalLB Operator uses. spec.communities string Specifies a list of BGP community aliases that can be used in BGPAdvertisements. A community alias consists of a pair of name (alias) and value (number:number). Link the BGPAdvertisement to a community alias by referring to the alias name in its spec.communities field. Table 33.7. CommunityAlias Field Type Description name string The name of the alias for the community . value string The BGP community value corresponding to the given name. 33.7.2. Configuring MetalLB with a BGP advertisement and community alias Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol and the community alias set to the numeric value of the NO_ADVERTISE community. In the following example, the peer BGP router doc-example-peer-community receives one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. A community alias is configured with the NO_ADVERTISE community. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. 
Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a community alias named community1 . apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE value: '65535:65282' Create a BGP peer named doc-example-bgp-peer . Create a file, such as bgppeer.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer.yaml Create a BGP advertisement with the community alias. Create a file, such as bgpadvertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - NO_ADVERTISE 1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-peer 1 Specify the CommunityAlias.name here and not the community custom resource (CR) name. Apply the configuration: USD oc apply -f bgpadvertisement.yaml 33.8. Configuring MetalLB BFD profiles As a cluster administrator, you can add, modify, and delete Bidirectional Forwarding Detection (BFD) profiles. The MetalLB Operator uses the BFD profile custom resources to identify which BGP sessions use BFD to provide faster path failure detection than BGP alone provides. 33.8.1. About the BFD profile custom resource The fields for the BFD profile custom resource are described in the following table. Table 33.8. BFD profile custom resource Field Type Description metadata.name string Specifies the name for the BFD profile custom resource. metadata.namespace string Specifies the namespace for the BFD profile custom resource. spec.detectMultiplier integer Specifies the detection multiplier to determine packet loss. The remote transmission interval is multiplied by this value to determine the connection loss detection timer. For example, when the local system has the detect multiplier set to 3 and the remote system has the transmission interval set to 300 , the local system detects failures only after 900 ms without receiving packets. The range is 2 to 255 . The default value is 3 . spec.echoMode boolean Specifies the echo transmission mode. If you are not using distributed BFD, echo transmission mode works only when the peer is also FRR. The default value is false and echo transmission mode is disabled. When echo transmission mode is enabled, consider increasing the transmission interval of control packets to reduce bandwidth usage. For example, consider increasing the transmit interval to 2000 ms. spec.echoInterval integer Specifies the minimum transmission interval, less jitter, that this system uses to send and receive echo packets. The range is 10 to 60000 . The default value is 50 ms. spec.minimumTtl integer Specifies the minimum expected TTL for an incoming control packet. This field applies to multi-hop sessions only. The purpose of setting a minimum TTL is to make the packet validation requirements more stringent and avoid receiving control packets from other sessions. 
The default value is 254 and indicates that the system expects only one hop between this system and the peer. spec.passiveMode boolean Specifies whether a session is marked as active or passive. A passive session does not attempt to start the connection. Instead, a passive session waits for control packets from a peer before it begins to reply. Marking a session as passive is useful when you have a router that acts as the central node of a star network and you want to avoid sending control packets that you do not need the system to send. The default value is false and marks the session as active. spec.receiveInterval integer Specifies the minimum interval that this system is capable of receiving control packets. The range is 10 to 60000 . The default value is 300 ms. spec.transmitInterval integer Specifies the minimum transmission interval, less jitter, that this system uses to send control packets. The range is 10 to 60000 . The default value is 300 ms. 33.8.2. Configuring a BFD profile As a cluster administrator, you can add a BFD profile and configure a BGP peer to use the profile. BFD provides faster path failure detection than BGP alone. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file, such as bfdprofile.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254 Apply the configuration for the BFD profile: USD oc apply -f bfdprofile.yaml 33.8.3. steps Configure a BGP peer to use the BFD profile. 33.9. Configuring services to use MetalLB As a cluster administrator, when you add a service of type LoadBalancer , you can control how MetalLB assigns an IP address. 33.9.1. Request a specific IP address Like some other load-balancer implementations, MetalLB accepts the spec.loadBalancerIP field in the service specification. If the requested IP address is within a range from any address pool, MetalLB assigns the requested IP address. If the requested IP address is not within any range, MetalLB reports a warning. Example service YAML for a specific IP address apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address> If MetalLB cannot assign the requested IP address, the EXTERNAL-IP for the service reports <pending> and running oc describe service <service_name> includes an event like the following example. Example event when MetalLB cannot assign a requested IP address ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for "default/invalid-request": "4.3.2.1" is not allowed in config 33.9.2. Request an IP address from a specific pool To assign an IP address from a specific range, but you are not concerned with the specific IP address, then you can use the metallb.universe.tf/address-pool annotation to request an IP address from the specified address pool. 
Example service YAML for an IP address from a specific pool apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer If the address pool that you specify for <address_pool_name> does not exist, MetalLB attempts to assign an IP address from any pool that permits automatic assignment. 33.9.3. Accept any IP address By default, address pools are configured to permit automatic assignment. MetalLB assigns an IP address from these address pools. To accept any IP address from any pool that is configured for automatic assignment, no special annotation or configuration is required. Example service YAML for accepting any IP address apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer 33.9.4. Share a specific IP address By default, services do not share IP addresses. However, if you need to colocate services on a single IP address, you can enable selective IP sharing by adding the metallb.universe.tf/allow-shared-ip annotation to the services. apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: "web-server-svc" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: "web-server-svc" 5 spec: ports: - name: https port: 443 6 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 7 type: LoadBalancer loadBalancerIP: 172.31.249.7 8 1 5 Specify the same value for the metallb.universe.tf/allow-shared-ip annotation. This value is referred to as the sharing key . 2 6 Specify different port numbers for the services. 3 7 Specify identical pod selectors if you must specify externalTrafficPolicy: local so the services send traffic to the same set of pods. If you use the cluster external traffic policy, then the pod selectors do not need to be identical. 4 8 Optional: If you specify the three preceding items, MetalLB might colocate the services on the same IP address. To ensure that services share an IP address, specify the IP address to share. By default, Kubernetes does not allow multiprotocol load balancer services. This limitation would normally make it impossible to run a service like DNS that needs to listen on both TCP and UDP. To work around this limitation of Kubernetes with MetalLB, create two services: For one service, specify TCP and for the second service, specify UDP. In both services, specify the same pod selector. Specify the same sharing key and spec.loadBalancerIP value to colocate the TCP and UDP services on the same IP address. 33.9.5. Configuring a service with MetalLB You can configure a load-balancing service to use an external IP address from an address pool. Prerequisites Install the OpenShift CLI ( oc ). Install the MetalLB Operator and start MetalLB. Configure at least one address pool. Configure your network to route traffic from the clients to the host network for the cluster. Procedure Create a <service_name>.yaml file. In the file, ensure that the spec.type field is set to LoadBalancer . 
Refer to the examples for information about how to request the external IP address that MetalLB assigns to the service. Create the service: USD oc apply -f <service_name>.yaml Example output service/<service_name> created Verification Describe the service: USD oc describe service <service_name> Example output <.> The annotation is present if you request an IP address from a specific pool. <.> The service type must indicate LoadBalancer . <.> The load-balancer ingress field indicates the external IP address if the service is assigned correctly. <.> The events field indicates the node name that is assigned to announce the external IP address. If you experience an error, the events field indicates the reason for the error. 33.10. MetalLB logging, troubleshooting, and support If you need to troubleshoot MetalLB configuration, see the following sections for commonly used commands. 33.10.1. Setting the MetalLB logging levels MetalLB uses FRRouting (FRR) in a container, and the default logging level of info generates a lot of logging. You can control the verbosity of the generated logs by setting the logLevel , as illustrated in this example. Gain a deeper insight into MetalLB by setting the logLevel to debug as follows: Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a file, such as setdebugloglevel.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: "" Apply the configuration: USD oc replace -f setdebugloglevel.yaml Note Use oc replace because the metallb custom resource already exists and you are only changing the log level. Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s Note Speaker and controller pods are recreated to ensure the updated logging level is applied. The logging level is modified for all the components of MetalLB. View the speaker logs: USD oc logs -n metallb-system speaker-7m4qw -c speaker Example output View the FRR logs: USD oc logs -n metallb-system speaker-7m4qw -c frr Example output 33.10.1.1. FRRouting (FRR) log levels The following table describes the FRR logging levels. Table 33.9. Log levels Log level Description all Supplies all logging information for all logging levels. debug Information that is diagnostically helpful to people. Set to debug to give detailed troubleshooting information. info Provides information that always should be logged but under normal circumstances does not require user intervention. This is the default logging level. warn Anything that can potentially cause inconsistent MetalLB behavior. Usually MetalLB automatically recovers from this type of error. error Any error that is fatal to the functioning of MetalLB . These errors usually require administrator intervention to fix. none Turn off all logging. 33.10.2. Troubleshooting BGP issues The BGP implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker pods. As a cluster administrator, if you need to troubleshoot BGP configuration issues, you need to run commands in the FRR container. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ).
Procedure Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m ... Display the running configuration for FRR: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show running-config" Example output <.> The router bgp section indicates the ASN for MetalLB. <.> Confirm that a neighbor <ip-address> remote-as <peer-ASN> line exists for each BGP peer custom resource that you added. <.> If you configured BFD, confirm that the BFD profile is associated with the correct BGP peer and that the BFD profile appears in the command output. <.> Confirm that the network <ip-address-range> lines match the IP address ranges that you specified in address pool custom resources that you added. Display the BGP summary: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp summary" Example output 1 1 3 Confirm that the output includes a line for each BGP peer custom resource that you added. 2 4 2 4 Output that shows 0 messages received and messages sent indicates a BGP peer that does not have a BGP session. Check network connectivity and the BGP configuration of the BGP peer. Display the BGP peers that received an address pool: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp ipv4 unicast 203.0.113.200/30" Replace ipv4 with ipv6 to display the BGP peers that received an IPv6 address pool. Replace 203.0.113.200/30 with an IPv4 or IPv6 IP address range from an address pool. Example output <.> Confirm that the output includes an IP address for a BGP peer. 33.10.3. Troubleshooting BFD issues The Bidirectional Forwarding Detection (BFD) implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker pods. The BFD implementation relies on BFD peers also being configured as BGP peers with an established BGP session. As a cluster administrator, if you need to troubleshoot BFD configuration issues, you need to run commands in the FRR container. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m ... Display the BFD peers: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bfd peers brief" Example output <.> Confirm that the PeerAddress column includes each BFD peer. If the output does not list a BFD peer IP address that you expected the output to include, troubleshoot BGP connectivity with the peer. If the status field indicates down , check for connectivity on the links and equipment between the node and the peer. You can determine the node name for the speaker pod with a command like oc get pods -n metallb-system speaker-66bth -o jsonpath='{.spec.nodeName}' . 33.10.4. MetalLB metrics for BGP and BFD OpenShift Container Platform captures the following Prometheus metrics for MetalLB that relate to BGP peers and BFD profiles. metallb_bfd_control_packet_input counts the number of BFD control packets received from each BFD peer. metallb_bfd_control_packet_output counts the number of BFD control packets sent to each BFD peer. metallb_bfd_echo_packet_input counts the number of BFD echo packets received from each BFD peer. 
metallb_bfd_echo_packet_output counts the number of BFD echo packets sent to each BFD peer. metallb_bfd_session_down_events counts the number of times the BFD session with a peer entered the down state. metallb_bfd_session_up indicates the connection state with a BFD peer. 1 indicates the session is up and 0 indicates the session is down . metallb_bfd_session_up_events counts the number of times the BFD session with a peer entered the up state. metallb_bfd_zebra_notifications counts the number of BFD Zebra notifications for each BFD peer. metallb_bgp_announced_prefixes_total counts the number of load balancer IP address prefixes that are advertised to BGP peers. The terms prefix and aggregated route have the same meaning. metallb_bgp_session_up indicates the connection state with a BGP peer. 1 indicates the session is up and 0 indicates the session is down . metallb_bgp_updates_total counts the number of BGP update messages that were sent to a BGP peer. Additional resources See Querying metrics for information about using the monitoring dashboard. 33.10.5. About collecting MetalLB data You can use the oc adm must-gather CLI command to collect information about your cluster, your MetalLB configuration, and the MetalLB Operator. The following features and objects are associated with MetalLB and the MetalLB Operator: The namespace and child objects that the MetalLB Operator is deployed in All MetalLB Operator custom resource definitions (CRDs) The oc adm must-gather CLI command collects the following information from FRRouting (FRR) that Red Hat uses to implement BGP and BFD: /etc/frr/frr.conf /etc/frr/frr.log /etc/frr/daemons configuration file /etc/frr/vtysh.conf The log and configuration files in the preceding list are collected from the frr container in each speaker pod. In addition to the log and configuration files, the oc adm must-gather CLI command collects the output from the following vtysh commands: show running-config show bgp ipv4 show bgp ipv6 show bgp neighbor show bfd peer No additional configuration is required when you run the oc adm must-gather CLI command. Additional resources Gathering data about your cluster
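As a sketch of how the collection is typically run, the following command writes the gathered data to a local directory; the --dest-dir path is an arbitrary example value, not a value from this documentation.
USD oc adm must-gather --dest-dir=/tmp/metallb-must-gather
The FRR configuration and log files listed in this section are then available under the destination directory after the command completes.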
[ "\"event\":\"ipAllocated\",\"ip\":\"172.22.0.201\",\"msg\":\"IP address assigned by controller", "cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: metallb-system EOF", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system EOF", "oc get operatorgroup -n metallb-system", "NAME AGE metallb-operator 14m", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators 1 sourceNamespace: openshift-marketplace", "oc create -f metallb-sub.yaml", "oc label ns metallb-system \"openshift.io/cluster-monitoring=true\"", "oc get installplan -n metallb-system", "NAME CSV APPROVAL APPROVED install-wzg94 metallb-operator.4.13.0-nnnnnnnnnnnn Automatic true", "oc get clusterserviceversion -n metallb-system -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase metallb-operator.4.13.0-nnnnnnnnnnnn Succeeded", "cat << EOF | oc apply -f - apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system EOF", "oc get deployment -n metallb-system controller", "NAME READY UP-TO-DATE AVAILABLE AGE controller 1/1 1 1 11m", "oc get daemonset -n metallb-system speaker", "NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE speaker 6 6 6 6 6 kubernetes.io/os=linux 18m", "apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: <.> node-role.kubernetes.io/worker: \"\" speakerTolerations: <.> - key: \"Example\" operator: \"Exists\" effect: \"NoExecute\"", "apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority value: 1000000", "oc apply -f myPriorityClass.yaml", "apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: priorityClassName: high-priority 1 affinity: podAffinity: 2 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname speakerConfig: priorityClassName: high-priority affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname", "oc apply -f MetalLBPodConfig.yaml", "oc get pods -n metallb-system -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName", "NAME PRIORITY controller-584f5c8cd8-5zbvg high-priority metallb-operator-controller-manager-9c8d9985-szkqg <none> metallb-operator-webhook-server-c895594d4-shjgx <none> speaker-dddf7 high-priority", "oc get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n metallb-system", "apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: resources: limits: cpu: \"200m\" speakerConfig: resources: limits: cpu: \"300m\"", "oc apply -f CPULimits.yaml", "oc describe pod <pod_name>", "oc get subscription metallb-operator -n metallb-system -o yaml | grep currentCSV", "currentCSV: metallb-operator.4.10.0-202207051316", "oc delete subscription metallb-operator -n metallb-system", "subscription.operators.coreos.com \"metallb-operator\" deleted", "oc delete clusterserviceversion metallb-operator.4.10.0-202207051316 -n metallb-system", "clusterserviceversion.operators.coreos.com \"metallb-operator.4.10.0-202207051316\" deleted", "oc get operatorgroup -n 
metallb-system", "NAME AGE metallb-system-7jc66 85m", "oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: \"\" creationTimestamp: \"2023-10-25T09:42:49Z\" generateName: metallb-system- generation: 1 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: \"25027\" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: targetNamespaces: - metallb-system upgradeStrategy: Default status: lastUpdated: \"2023-10-25T09:42:49Z\" namespaces: - metallb-system", "oc edit n metallb-system", "operatorgroup.operators.coreos.com/metallb-system-7jc66 edited", "oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: \"\" creationTimestamp: \"2023-10-25T09:42:49Z\" generateName: metallb-system- generation: 2 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: \"61658\" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: upgradeStrategy: Default status: lastUpdated: \"2023-10-25T14:31:30Z\" namespaces: - \"\"", "oc get namespaces | grep metallb-system", "metallb-system Active 31m", "oc get metallb -n metallb-system", "NAME AGE metallb 33m", "oc get csv -n metallb-system", "NAME DISPLAY VERSION REPLACES PHASE metallb-operator.4.13.0-202207051316 MetalLB Operator 4.13.0-202207051316 Succeeded", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75", "oc apply -f ipaddresspool.yaml", "oc describe -n metallb-system IPAddressPool doc-example", "Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none>", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-service-allocation namespace: metallb-system spec: addresses: - 192.168.20.0/24 serviceAllocation: priority: 50 1 namespaces: 2 - namespace-a - namespace-b namespaceSelectors: 3 - matchLabels: zone: east serviceSelectors: 4 - matchExpressions: - key: security operator: In values: - S1", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic", "oc apply -f bgpadvertisement.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false", "oc apply -f 
ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100", "oc apply -f bgpadvertisement1.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124", "oc apply -f bgpadvertisement2.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example spec: ipAddressPools: - pool1 nodeSelector: - matchLabels: kubernetes.io/hostname: NodeA - matchLabels: kubernetes.io/hostname: NodeB", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2", "oc apply -f l2advertisement.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east", "oc apply -f l2advertisement.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 interfaces: - interfaceA - interfaceB", "oc apply -f l2advertisement.yaml", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10", "oc apply -f bgppeer.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400", "oc apply -f ipaddresspool1.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400", "oc apply -f ipaddresspool2.yaml", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10", "oc apply -f bgppeer1.yaml", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10", "oc apply -f bgppeer2.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100", "oc apply -f bgpadvertisement1.yaml", "apiVersion: 
metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100", "oc apply -f bgpadvertisement2.yaml", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com]", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: \"10s\" bfdProfile: doc-example-bfd-profile-full", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE value: '65535:65282'", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10", "oc apply -f bgppeer.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - NO_ADVERTISE 1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-peer", "oc apply -f bgpadvertisement.yaml", "apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254", "oc apply -f bfdprofile.yaml", "apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address>", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for \"default/invalid-request\": \"4.3.2.1\" is not allowed in config", "apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer", "apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer", "apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer 
loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" 5 spec: ports: - name: https port: 443 6 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 7 type: LoadBalancer loadBalancerIP: 172.31.249.7 8", "oc apply -f <service_name>.yaml", "service/<service_name> created", "oc describe service <service_name>", "Name: <service_name> Namespace: default Labels: <none> Annotations: metallb.universe.tf/address-pool: doc-example <.> Selector: app=service_name Type: LoadBalancer <.> IP Family Policy: SingleStack IP Families: IPv4 IP: 10.105.237.254 IPs: 10.105.237.254 LoadBalancer Ingress: 192.168.100.5 <.> Port: <unset> 80/TCP TargetPort: 8080/TCP NodePort: <unset> 30550/TCP Endpoints: 10.244.0.50:8080 Session Affinity: None External Traffic Policy: Cluster Events: <.> Type Reason Age From Message ---- ------ ---- ---- ------- Normal nodeAssigned 32m (x2 over 32m) metallb-speaker announcing from node \"<node_name>\"", "apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc replace -f setdebugloglevel.yaml", "oc get -n metallb-system pods -l component=speaker", "NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s", "oc logs -n metallb-system speaker-7m4qw -c speaker", "{\"branch\":\"main\",\"caller\":\"main.go:92\",\"commit\":\"3d052535\",\"goversion\":\"gc / go1.17.1 / amd64\",\"level\":\"info\",\"msg\":\"MetalLB speaker starting (commit 3d052535, branch main)\",\"ts\":\"2022-05-17T09:55:05Z\",\"version\":\"\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} I0517 09:55:06.515686 95 request.go:665] Waited for 1.026500832s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/operators.coreos.com/v1alpha1?timeout=32s {\"Starting Manager\":\"(MISSING)\",\"caller\":\"k8s.go:389\",\"level\":\"info\",\"ts\":\"2022-05-17T09:55:08Z\"} {\"caller\":\"speakerlist.go:310\",\"level\":\"info\",\"msg\":\"node event - forcing sync\",\"node addr\":\"10.0.128.4\",\"node event\":\"NodeJoin\",\"node name\":\"ci-ln-qb8t3mb-72292-7s7rh-worker-a-vvznj\",\"ts\":\"2022-05-17T09:55:08Z\"} 
{\"caller\":\"service_controller.go:113\",\"controller\":\"ServiceReconciler\",\"enqueueing\":\"openshift-kube-controller-manager-operator/metrics\",\"epslice\":\"{\\\"metadata\\\":{\\\"name\\\":\\\"metrics-xtsxr\\\",\\\"generateName\\\":\\\"metrics-\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"uid\\\":\\\"ac6766d7-8504-492c-9d1e-4ae8897990ad\\\",\\\"resourceVersion\\\":\\\"9041\\\",\\\"generation\\\":4,\\\"creationTimestamp\\\":\\\"2022-05-17T07:16:53Z\\\",\\\"labels\\\":{\\\"app\\\":\\\"kube-controller-manager-operator\\\",\\\"endpointslice.kubernetes.io/managed-by\\\":\\\"endpointslice-controller.k8s.io\\\",\\\"kubernetes.io/service-name\\\":\\\"metrics\\\"},\\\"annotations\\\":{\\\"endpoints.kubernetes.io/last-change-trigger-time\\\":\\\"2022-05-17T07:21:34Z\\\"},\\\"ownerReferences\\\":[{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"name\\\":\\\"metrics\\\",\\\"uid\\\":\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\",\\\"controller\\\":true,\\\"blockOwnerDeletion\\\":true}],\\\"managedFields\\\":[{\\\"manager\\\":\\\"kube-controller-manager\\\",\\\"operation\\\":\\\"Update\\\",\\\"apiVersion\\\":\\\"discovery.k8s.io/v1\\\",\\\"time\\\":\\\"2022-05-17T07:20:02Z\\\",\\\"fieldsType\\\":\\\"FieldsV1\\\",\\\"fieldsV1\\\":{\\\"f:addressType\\\":{},\\\"f:endpoints\\\":{},\\\"f:metadata\\\":{\\\"f:annotations\\\":{\\\".\\\":{},\\\"f:endpoints.kubernetes.io/last-change-trigger-time\\\":{}},\\\"f:generateName\\\":{},\\\"f:labels\\\":{\\\".\\\":{},\\\"f:app\\\":{},\\\"f:endpointslice.kubernetes.io/managed-by\\\":{},\\\"f:kubernetes.io/service-name\\\":{}},\\\"f:ownerReferences\\\":{\\\".\\\":{},\\\"k:{\\\\\\\"uid\\\\\\\":\\\\\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\\\\\"}\\\":{}}},\\\"f:ports\\\":{}}}]},\\\"addressType\\\":\\\"IPv4\\\",\\\"endpoints\\\":[{\\\"addresses\\\":[\\\"10.129.0.7\\\"],\\\"conditions\\\":{\\\"ready\\\":true,\\\"serving\\\":true,\\\"terminating\\\":false},\\\"targetRef\\\":{\\\"kind\\\":\\\"Pod\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"name\\\":\\\"kube-controller-manager-operator-6b98b89ddd-8d4nf\\\",\\\"uid\\\":\\\"dd5139b8-e41c-4946-a31b-1a629314e844\\\",\\\"resourceVersion\\\":\\\"9038\\\"},\\\"nodeName\\\":\\\"ci-ln-qb8t3mb-72292-7s7rh-master-0\\\",\\\"zone\\\":\\\"us-central1-a\\\"}],\\\"ports\\\":[{\\\"name\\\":\\\"https\\\",\\\"protocol\\\":\\\"TCP\\\",\\\"port\\\":8443}]}\",\"level\":\"debug\",\"ts\":\"2022-05-17T09:55:08Z\"}", "oc logs -n metallb-system speaker-7m4qw -c frr", "Started watchfrr 2022/05/17 09:55:05 ZEBRA: client 16 says hello and bids fair to announce only bgp routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 31 says hello and bids fair to announce only vnc routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 38 says hello and bids fair to announce only static routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 43 says hello and bids fair to announce only bfd routes vrf=0 2022/05/17 09:57:25.089 BGP: Creating Default VRF, AS 64500 2022/05/17 09:57:25.090 BGP: dup addr detect enable max_moves 5 time 180 freeze disable freeze_time 0 2022/05/17 09:57:25.090 BGP: bgp_get: Registering BGP instance (null) to zebra 2022/05/17 09:57:25.090 BGP: Registering VRF 0 2022/05/17 09:57:25.091 BGP: Rx Router Id update VRF 0 Id 10.131.0.1/32 2022/05/17 09:57:25.091 BGP: RID change : vrf VRF default(0), RTR ID 10.131.0.1 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF br0 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ens4 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr 
10.0.128.4/32 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr fe80::c9d:84da:4d86:5618/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF lo 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ovs-system 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF tun0 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr 10.131.0.1/23 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr fe80::40f1:d1ff:feb6:5322/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2da49fed 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2da49fed addr fe80::24bd:d1ff:fec1:d88/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2fa08c8c 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2fa08c8c addr fe80::6870:ff:fe96:efc8/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth41e356b7 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth41e356b7 addr fe80::48ff:37ff:fede:eb4b/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth1295c6e2 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth1295c6e2 addr fe80::b827:a2ff:feed:637/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth9733c6dc 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth9733c6dc addr fe80::3cf4:15ff:fe11:e541/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth336680ea 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth336680ea addr fe80::94b1:8bff:fe7e:488c/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vetha0a907b7 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vetha0a907b7 addr fe80::3855:a6ff:fe73:46c3/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf35a4398 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf35a4398 addr fe80::40ef:2fff:fe57:4c4d/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf831b7f4 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf831b7f4 addr fe80::f0d9:89ff:fe7c:1d32/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vxlan_sys_4789 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vxlan_sys_4789 addr fe80::80c1:82ff:fe4b:f078/64 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Timer (start timer expire). 
2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] BGP_Start (Idle->Connect), fd -1 2022/05/17 09:57:26.094 BGP: Allocated bnc 10.0.0.1/32(0)(VRF default) peer 0x7f807f7631a0 2022/05/17 09:57:26.094 BGP: sendmsg_zebra_rnh: sending cmd ZEBRA_NEXTHOP_REGISTER for 10.0.0.1/32 (vrf VRF default) 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Waiting for NHT 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Connect established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Idle to Connect 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] TCP_connection_open_failed (Connect->Active), fd -1 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Active established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Connect to Active 2022/05/17 09:57:26.094 ZEBRA: rnh_register msg from client bgp: hdr->length=8, type=nexthop vrf=0 2022/05/17 09:57:26.094 ZEBRA: 0: Add RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: Evaluate RNH, type Nexthop (force) 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: NH has become unresolved 2022/05/17 09:57:26.094 ZEBRA: 0: Client bgp registers for RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 BGP: VRF default(0): Rcvd NH update 10.0.0.1/32(0) - metric 0/0 #nhops 0/0 flags 0x6 2022/05/17 09:57:26.094 BGP: NH update for 10.0.0.1/32(0)(VRF default) - flags 0x6 chgflags 0x0 - evaluate paths 2022/05/17 09:57:26.094 BGP: evaluate_paths: Updating peer (10.0.0.1(VRF default)) status with NHT 2022/05/17 09:57:30.081 ZEBRA: Event driven route-map update triggered 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-out 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-in 2022/05/17 09:57:31.104 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.104 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring 2022/05/17 09:57:31.105 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.105 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring", "oc get -n metallb-system pods -l component=speaker", "NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show running-config\"", "Building configuration Current configuration: ! frr version 7.5.1_git frr defaults traditional hostname some-hostname log file /etc/frr/frr.log informational log timestamp precision 3 service integrated-vtysh-config ! router bgp 64500 1 bgp router-id 10.0.1.2 no bgp ebgp-requires-policy no bgp default ipv4-unicast no bgp network import-check neighbor 10.0.2.3 remote-as 64500 2 neighbor 10.0.2.3 bfd profile doc-example-bfd-profile-full 3 neighbor 10.0.2.3 timers 5 15 neighbor 10.0.2.4 remote-as 64500 4 neighbor 10.0.2.4 bfd profile doc-example-bfd-profile-full 5 neighbor 10.0.2.4 timers 5 15 ! address-family ipv4 unicast network 203.0.113.200/30 6 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! address-family ipv6 unicast network fc00:f853:ccd:e799::/124 7 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! route-map 10.0.2.3-in deny 20 ! route-map 10.0.2.4-in deny 20 ! ip nht resolve-via-default ! ipv6 nht resolve-via-default ! 
line vty ! bfd profile doc-example-bfd-profile-full 8 transmit-interval 35 receive-interval 35 passive-mode echo-mode echo-interval 35 minimum-ttl 10 ! ! end", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp summary\"", "IPv4 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 0 1 1 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 2 Total number of neighbors 2 IPv6 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 NoNeg 3 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 4 Total number of neighbors 2", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp ipv4 unicast 203.0.113.200/30\"", "BGP routing table entry for 203.0.113.200/30 Paths: (1 available, best #1, table default) Advertised to non peer-group peers: 10.0.2.3 <.> Local 0.0.0.0 from 0.0.0.0 (10.0.1.2) Origin IGP, metric 0, weight 32768, valid, sourced, local, best (First path received) Last update: Mon Jan 10 19:49:07 2022", "oc get -n metallb-system pods -l component=speaker", "NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bfd peers brief\"", "Session count: 2 SessionId LocalAddress PeerAddress Status ========= ============ =========== ====== 3909139637 10.0.1.2 10.0.2.3 up <.>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/load-balancing-with-metallb
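The vtysh and BFD status commands above are run against one speaker pod at a time. The following is a minimal shell sketch, not part of the official procedure, for collecting the same BGP summary from every speaker pod in one pass; the namespace, the component=speaker label, and the frr container name follow the commands shown above.

# Print the FRR BGP summary from each MetalLB speaker pod (read-only).
for pod in $(oc get pods -n metallb-system -l component=speaker -o name); do
  echo "== ${pod} =="
  oc exec -n metallb-system "${pod}" -c frr -- vtysh -c "show bgp summary"
done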
Chapter 1. User Access Configuration Guide for Role-based Access Control (RBAC)
Chapter 1. User Access Configuration Guide for Role-based Access Control (RBAC) The User Access feature is an implementation of role-based access control (RBAC) that controls user access to various services hosted on the Red Hat Hybrid Cloud Console . You configure the User Access feature to grant user access to services hosted on the Hybrid Cloud Console. 1.1. User Access and the Software as a Service (SaaS) access model Red Hat customer accounts might have hundreds of authenticated users, yet not all users need the same level of access to the SaaS services available on the Red Hat Hybrid Cloud Console . With the User Access features, an Organization Administrator can manage user access to services hosted on the Red Hat Hybrid Cloud Console . Note User Access does not manage OpenShift Cluster Manager permissions. For OpenShift Cluster Manager, all users in the organization can view information, but only an Organization Administrator and cluster owners can perform actions on clusters. See Configuring access to clusters in OpenShift Cluster Manager in the Openshift Cluster Manager documentation for details. 1.2. Who can use User Access To initially view and manage User Access on the Red Hat Hybrid Cloud Console , you must be an Organization Administrator. This is because User Access requires user management capabilities that are designated from the Red Hat Customer Portal at Customer Portal . Those capabilities belong solely to the Organization Administrator. The User Access administrator role is a special role that the Organization Administrator can assign. This role allows users who are not Organization Administrator users to manage User Access on the Red Hat Hybrid Cloud Console . 1.3. How to use User Access The User Access feature is based on managing roles rather than by individually assigning permissions to specific users. In User Access, each role has a specific set of permissions. For example, a role might allow read permission for an application. Another role might allow write permission for an application. You create groups that contain roles and, by extension, the permissions assigned to each role. You assign users to groups. This means each user in a group is assigned the permissions of the roles in that group. By creating different groups and adding or removing roles for that group, you control the permissions allowed for that group. When you add one or more users to a group, those users can perform all actions that are allowed for that group. Red Hat provides two default access groups for User Access: Default admin access group. The Default admin access group is limited to Organization Administrator users in your organization. You cannot change or modify the roles in the Default admin access group. Default access group. The Default access group contains all authenticated users in your organization. These users automatically inherit a selection of predefined roles. Note You can make changes to the Default access group. However, when you do so, its name changes to Custom default access group. Red Hat provides a set of predefined roles. Depending on the application, the predefined roles for each supported application might have different permissions that are tailored to the application. 1.3.1. The Default admin access group The Default admin access group is provided by Red Hat on the Red Hat Hybrid Cloud Console . It contains a set of roles that are assigned to all users who have an Organization Administrator role on your system. 
The roles in this group are predefined in the Red Hat Hybrid Cloud Console . The roles in the Default admin access group cannot be added to or modified. Because this group is provided by Red Hat, it is automatically updated when Red Hat assigns roles to the Default admin access group. The benefit of the Default admin access group is that it allows roles to be assigned automatically to Organization Administrators. See Predefined User Access roles , for the roles included in the Default admin access group. 1.3.2. The Default access group The Default access group is provided by Red Hat on the Red Hat Hybrid Cloud Console . It contains a set of roles that are predefined in the Red Hat Hybrid Cloud Console . The Default access group includes all authenticated users in your organization. The Default access group is automatically updated when Default access group roles are added in the Red Hat Hybrid Cloud Console . Note The Default access group contains a subset of all predefined roles. For more information, see section Predefined User Access roles , for the roles included in the Default admin access group. As an Organization Administrator, you can add roles to and remove roles from the Default access group. When you do so, its name changes to Custom default access group. The changes you make to this group affect all authenticated users in your organization. 1.3.3. The Custom default access group When you manually modify the Default access group, its name changes to Custom default access , which indicates it was modified. Moreover, it is no longer automatically updated from the Red Hat Hybrid Cloud Console . From that point forward, an Organization Administrator is responsible for all updates and changes to the Custom default access group. The group is no longer managed or updated by the Red Hat Hybrid Cloud Console . Important You cannot delete the Default access group or Custom default access group. You can restore the Default access group, which removes the Custom default access group and any changes you made. See Restoring the Default access group . 1.3.4. The User Access groups, roles, and permissions User Access uses the following categories to determine the level of user access that an Organization Administrator can grant to the supported Red Hat Hybrid Cloud Console services. The access provided to any authorized user depends on the group that the user belongs to and the roles assigned to that group. Group : A collection of users belonging to an account which provides the mapping of roles to users. An Organization Administrator can use groups to assign one or more roles to a group and to include one or more users in a group. You can create a group with no roles and no users. Roles : A set of permissions that provide access to a given service, such as Insights. The permissions to perform certain operations are assigned to specific roles. Roles are assigned to groups. For example, you might have a read role and a write role for a service. Adding both roles to a group grants all members of that group read and write permissions to that service. Permissions : A discrete action that can be requested of a service. Permissions are assigned to roles. An Organization Administrator adds or deletes roles and users to groups. The group can be a new group created by an Organization Administrator or the group can be an existing group. 
By creating a group that has one or more specific roles and then adding users to that group, you control how that group and its members interact with the Red Hat Hybrid Cloud Console services. When you add users to a group, they become members of that group. A group member inherits the roles of all other groups they belong to. The user interface lists users in the Members tab. 1.3.5. Additive access User access on the Red Hat Hybrid Cloud Console uses an additive model, which means that there are no deny roles. In other words, actions are only permitted. To control access, assign the appropriate roles with the desired permissions to groups, then add users to those groups. The access permitted to any individual user is a sum of all roles assigned to all groups to which that user belongs. 1.3.6. Access structure The following points are a summary of the user access structure for User Access: Group : A user can be a member of one or many groups. Role : A role can be added to one or many groups. Permissions : One or more permissions can be assigned to a role. In its initial default configuration, all User Access account users inherit the roles that are provided in the Default access group. Note Any user added to a group must be an authenticated user for the organization account on the Red Hat Hybrid Cloud Console .
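Because access is additive, a quick way to reason about what a user can do is to list the groups in the organization and the roles attached to each group. The following shell sketch is illustrative only and is not part of the official procedure: the Hybrid Cloud Console RBAC endpoint paths, the bearer-token handling, and the group UUID placeholder are assumptions that may differ in your environment.

# List the group names visible to the account (assumed endpoint path; adjust as needed).
TOKEN=<api_token>   # hypothetical placeholder for a valid Hybrid Cloud Console API token
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://console.redhat.com/api/rbac/v1/groups/" | jq -r '.data[].name'

# List the roles attached to one group; the union of roles across all of a user's groups
# is that user's effective access.
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://console.redhat.com/api/rbac/v1/groups/<group_uuid>/roles/" | jq -r '.data[].name'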
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/user_access_configuration_guide_for_role-based_access_control_rbac_with_fedramp/assembly-insights-rbac-intro_user-access-configuration
Chapter 7. Uninstalling OpenShift Data Foundation
Chapter 7. Uninstalling OpenShift Data Foundation 7.1. Uninstalling OpenShift Data Foundation in Internal-attached devices mode Use the steps in this section to uninstall OpenShift Data Foundation. Uninstall Annotations Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster: uninstall.ocs.openshift.io/cleanup-policy: delete uninstall.ocs.openshift.io/mode: graceful The following table provides information on the different values that can used with these annotations: Table 7.1. uninstall.ocs.openshift.io uninstall annotations descriptions Annotation Value Default Behavior cleanup-policy delete Yes Rook cleans up the physical drives and the DataDirHostPath cleanup-policy retain No Rook does not clean up the physical drives and the DataDirHostPath mode graceful Yes Rook and NooBaa pauses the uninstall process until the administrator/user removes the Persistent Volume Claims (PVCs) and Object Bucket Claims (OBCs) mode forced No Rook and NooBaa proceeds with uninstall even if the PVCs/OBCs provisioned using Rook and NooBaa exist respectively Edit the value of the annotation to change the cleanup policy or the uninstall mode. Expected output for both commands: Prerequisites Ensure that the OpenShift Data Foundation cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. In case the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Data Foundation. Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Data Foundation. If any custom resources (such as custom storage classes, cephblockpools) were created by the admin, they must be deleted by the admin after removing the resources which consumed them. Procedure Delete the volume snapshots that are using OpenShift Data Foundation. List the volume snapshots from all the namespaces. From the output of the command, identify and delete the volume snapshots that are using OpenShift Data Foundation. <VOLUME-SNAPSHOT-NAME> Is the name of the volume snapshot <NAMESPACE> Is the project namespace Delete PVCs and OBCs that are using OpenShift Data Foundation. In the default uninstall mode (graceful), the uninstaller waits till all the PVCs and OBCs that use OpenShift Data Foundation are deleted. If you want to delete the Storage Cluster without deleting the PVCs, you can set the uninstall mode annotation to forced and skip this step. Doing so results in orphan PVCs and OBCs in the system. Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Data Foundation. See Removing monitoring stack from OpenShift Data Foundation Delete OpenShift Container Platform Registry PVCs using OpenShift Data Foundation. Removing OpenShift Container Platform registry from OpenShift Data Foundation Delete OpenShift Container Platform logging PVCs using OpenShift Data Foundation. Removing the cluster logging operator from OpenShift Data Foundation Delete the other PVCs and OBCs provisioned using OpenShift Data Foundation. Given below is a sample script to identify the PVCs and OBCs provisioned using OpenShift Data Foundation. The script ignores the PVCs that are used internally by OpenShift Data Foundation. Note Omit RGW_PROVISIONER for cloud platforms. Delete the OBCs. 
<obc-name> Is the name of the OBC <project-name> Is the name of the project Delete the PVCs. <pvc-name> Is the name of the PVC <project-name> Is the name of the project Note Ensure that you have removed any custom backing stores, bucket classes, etc., created in the cluster. Delete the Storage System object and wait for the removal of the associated resources. Check the cleanup pods if the uninstall.ocs.openshift.io/cleanup-policy was set to delete (default) and ensure that their status is Completed . Example output: Confirm that the directory /var/lib/rook is now empty. This directory is empty only if the uninstall.ocs.openshift.io/cleanup-policy annotation was set to delete (default). If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSDs on all the OpenShift Data Foundation nodes. Create a debug pod and chroot to the host on the storage node. <node-name> Is the name of the node Get Device names and make note of the OpenShift Data Foundation devices. Example output: Remove the mapped device. Important If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find PID of the process which was stuck. Terminate the process using kill command. <PID> Is the process ID Verify that the device name is removed. Delete the namespace and wait till the deletion is complete. You need to switch to another project if openshift-storage is the active project. For example: The project is deleted if the following command returns a NotFound error. Note While uninstalling OpenShift Data Foundation, if namespace is not deleted completely and remains in Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated. Delete local storage operator configurations if you have deployed OpenShift Data Foundation using local storage devices. See Removing local storage operator configurations . Unlabel the storage nodes. Remove the OpenShift Data Foundation taint if the nodes were tainted. Confirm all the Persistent volumes (PVs) provisioned using OpenShift Data Foundation are deleted. If there is any PV left in the Released state, delete it. <pv-name> Is the name of the PV Remove the CustomResourceDefinitions . To ensure that OpenShift Data Foundation is uninstalled completely, on the OpenShift Container Platform Web Console, Click Storage . Verify that OpenShift Data Foundation no longer appears under Storage. 7.1.1. Removing local storage operator configurations Use the instructions in this section only if you have deployed OpenShift Data Foundation using local storage devices. Note For OpenShift Data Foundation deployments only using localvolume resources, go directly to step 8. Procedure Identify the LocalVolumeSet and the corresponding StorageClassName being used by OpenShift Data Foundation. Set the variable SC to the StorageClass providing the LocalVolumeSet . List and note the devices to be cleaned up later. Inorder to list the device ids of the disks, follow the procedure mentioned here, See Find the available storage devices . Example output: Delete the LocalVolumeSet . Delete the local storage PVs for the given StorageClassName . Delete the StorageClassName . Delete the symlinks created by the LocalVolumeSet . Delete LocalVolumeDiscovery . Remove the LocalVolume resources (if any). 
Use the following steps to remove the LocalVolume resources that were used to provision PVs in the current or OpenShift Data Foundation version. Also, ensure that these resources are not being used by other tenants on the cluster. For each of the local volumes, do the following: Identify the LocalVolume and the corresponding StorageClassName being used by OpenShift Data Foundation. Set the variable LV to the name of the LocalVolume and variable SC to the name of the StorageClass For example: List and note the devices to be cleaned up later. Example output: Delete the local volume resource. Delete the remaining PVs and StorageClasses if they exist. Clean up the artifacts from the storage nodes for that resource. Example output: Wipe the disks for each of the local volumesets or local volumes listed in step 1 and 8 respectively so that they can be reused. List the storage nodes. Example output: Obtain the node console and execute chroot /host command when the prompt appears. Store the disk paths in the DISKS variable within quotes. For the list of disk paths, see step 3 and step 8.c for local volumeset and local volume respectively. Example output: Run sgdisk --zap-all on all the disks. Example output: Exit the shell and repeat for the other nodes. Delete the openshift-local-storage namespace and wait till the deletion is complete. You will need to switch to another project if the openshift-local-storage namespace is the active project. For example: The project is deleted if the following command returns a NotFound error. 7.2. Removing monitoring stack from OpenShift Data Foundation Use this section to clean up the monitoring stack from OpenShift Data Foundation. The Persistent Volume Claims (PVCs) that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace. Prerequisites PVCs are configured to use the OpenShift Container Platform monitoring stack. For more information, see configuring monitoring stack . Procedure List the pods and PVCs that are currently running in the openshift-monitoring namespace. Example output: Edit the monitoring configmap . Remove any config sections that reference the OpenShift Data Foundation storage classes as shown in the following example and save it. Before editing After editing In this example, alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Data Foundation PVCs. Delete the relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes. <pvc-name> Is the name of the PVC 7.3. Removing OpenShift Container Platform registry from OpenShift Data Foundation Use this section to clean up the OpenShift Container Platform registry from OpenShift Data Foundation. If you want to configure an alternative storage, see Image registry . The Persistent Volume Claims (PVCs) that are created as a part of configuring the OpenShift Container Platform registry are in the openshift-image-registry namespace. Prerequisites The image registry must have been configured to use an OpenShift Data Foundation PVC. Procedure Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section. Before editing After editing In this example, the PVC is called registry-cephfs-rwx-pvc , which is now safe to delete. Delete the PVC. <pvc-name> Is the name of the PVC 7.4. Removing the cluster logging operator from OpenShift Data Foundation Use this section to clean up the cluster logging operator from OpenShift Data Foundation. 
The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace. Prerequisites The cluster logging instance should have been configured to use the OpenShift Data Foundation PVCs. Procedure Remove the ClusterLogging instance in the namespace. The PVCs in the openshift-logging namespace are now safe to delete. Delete the PVCs. <pvc-name> Is the name of the PVC
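Related to the uninstall procedure in section 7.1, an optional sanity check before deleting the openshift-storage namespace is to confirm that no PVCs or PVs anywhere in the cluster still reference OpenShift Data Foundation storage classes. The sketch below is not part of the official steps; the ocs-storagecluster prefix matches the default storage class names and is an assumption, so adjust it if you created custom storage classes.

# Look for leftover claims or volumes that still use ODF storage classes (read-only check).
oc get pvc --all-namespaces | grep ocs-storagecluster || echo "no ODF-backed PVCs found"
oc get pv | grep ocs-storagecluster || echo "no ODF-provisioned PVs found"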
[ "oc -n openshift-storage annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/cleanup-policy=\"retain\" --overwrite", "oc -n openshift-storage annotate storagecluster ocs-storagecluster uninstall.ocs.openshift.io/mode=\"forced\" --overwrite", "storagecluster.ocs.openshift.io/ocs-storagecluster annotated", "oc get volumesnapshot --all-namespaces", "oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>", "#!/bin/bash RBD_PROVISIONER=\"openshift-storage.rbd.csi.ceph.com\" CEPHFS_PROVISIONER=\"openshift-storage.cephfs.csi.ceph.com\" NOOBAA_PROVISIONER=\"openshift-storage.noobaa.io/obc\" RGW_PROVISIONER=\"openshift-storage.ceph.rook.io/bucket\" NOOBAA_DB_PVC=\"noobaa-db\" NOOBAA_BACKINGSTORE_PVC=\"noobaa-default-backing-store-noobaa-pvc\" Find all the OCS StorageClasses OCS_STORAGECLASSES=USD(oc get storageclasses | grep -e \"USDRBD_PROVISIONER\" -e \"USDCEPHFS_PROVISIONER\" -e \"USDNOOBAA_PROVISIONER\" -e \"USDRGW_PROVISIONER\" | awk '{print USD1}') List PVCs in each of the StorageClasses for SC in USDOCS_STORAGECLASSES do echo \"======================================================================\" echo \"USDSC StorageClass PVCs and OBCs\" echo \"======================================================================\" oc get pvc --all-namespaces --no-headers 2>/dev/null | grep USDSC | grep -v -e \"USDNOOBAA_DB_PVC\" -e \"USDNOOBAA_BACKINGSTORE_PVC\" oc get obc --all-namespaces --no-headers 2>/dev/null | grep USDSC echo done", "oc delete obc <obc-name> -n <project-name>", "oc delete pvc <pvc-name> -n <project-name>", "oc delete -n openshift-storage storagesystem --all --wait=true", "oc get pods -n openshift-storage | grep -i cleanup", "NAME READY STATUS RESTARTS AGE cluster-cleanup-job-<xx> 0/1 Completed 0 8m35s cluster-cleanup-job-<yy> 0/1 Completed 0 8m35s cluster-cleanup-job-<zz> 0/1 Completed 0 8m35s", "for i in USD(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/USD{i} -- chroot /host ls -l /var/lib/rook; done", "oc debug node/ <node-name>", "chroot /host", "dmsetup ls", "ocs-deviceset-0-data-0-57snx-block-dmcrypt (253:1)", "cryptsetup luksClose --debug --verbose ocs-deviceset-0-data-0-57snx-block-dmcrypt", "ps -ef | grep crypt", "kill -9 <PID>", "dmsetup ls", "oc project default", "oc delete project openshift-storage --wait=true --timeout=5m", "oc get project openshift-storage", "oc label nodes --all cluster.ocs.openshift.io/openshift-storage-", "oc label nodes --all topology.rook.io/rack-", "oc adm taint nodes --all node.ocs.openshift.io/storage-", "oc get pv", "oc delete pv <pv-name>", "oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io --wait=true --timeout=5m", "oc get localvolumesets.local.storage.openshift.io -n openshift-local-storage", "export SC=\"<StorageClassName>\"", "/dev/disk/by-id/scsi-360050763808104bc28000000000000eb /dev/disk/by-id/scsi-360050763808104bc28000000000000ef /dev/disk/by-id/scsi-360050763808104bc28000000000000f3", "oc delete localvolumesets.local.storage.openshift.io <name-of-volumeset> -n openshift-local-storage", "oc 
get pv | grep USDSC | awk '{print USD1}'| xargs oc delete pv", "oc delete sc USDSC", "[[ ! -z USDSC ]] && for i in USD(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/USD{i} -- chroot /host rm -rfv /mnt/local-storage/USD{SC}/; done", "oc delete localvolumediscovery.local.storage.openshift.io/auto-discover-devices -n openshift-local-storage", "oc get localvolume.local.storage.openshift.io -n openshift-local-storage", "LV=local-block SC=localblock", "oc get localvolume -n openshift-local-storage USDLV -o jsonpath='{ .spec.storageClassDevices[].devicePaths[] }{\"\\n\"}'", "/dev/sdb /dev/sdc /dev/sdd /dev/sde", "oc delete localvolume -n openshift-local-storage --wait=true USDLV", "oc delete pv -l storage.openshift.com/local-volume-owner-name=USD{LV} --wait --timeout=5m oc delete storageclass USDSC --wait --timeout=5m", "[[ ! -z USDSC ]] && for i in USD(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o jsonpath='{ .items[*].metadata.name }'); do oc debug node/USD{i} -- chroot /host rm -rfv /mnt/local-storage/USD{SC}/; done", "Starting pod/node-xxx-debug To use host binaries, run `chroot /host` removed '/mnt/local-storage/localblock/nvme2n1' removed directory '/mnt/local-storage/localblock' Removing debug pod Starting pod/node-yyy-debug To use host binaries, run `chroot /host` removed '/mnt/local-storage/localblock/nvme2n1' removed directory '/mnt/local-storage/localblock' Removing debug pod Starting pod/node-zzz-debug To use host binaries, run `chroot /host` removed '/mnt/local-storage/localblock/nvme2n1' removed directory '/mnt/local-storage/localblock' Removing debug pod", "get nodes -l cluster.ocs.openshift.io/openshift-storage=", "NAME STATUS ROLES AGE VERSION node-xxx Ready worker 4h45m v1.18.3+6c42de8 node-yyy Ready worker 4h46m v1.18.3+6c42de8 node-zzz Ready worker 4h45m v1.18.3+6c42de8", "oc debug node/node-xxx Starting pod/node-xxx-debug ... To use host binaries, run `chroot /host` Pod IP: w.x.y.z If you don't see a command prompt, try pressing enter. sh-4.2# chroot /host", "sh-4.4# DISKS=\"/dev/disk/by-id/scsi-360050763808104bc28000000000000eb /dev/disk/by-id/scsi-360050763808104bc28000000000000ef /dev/disk/by-id/scsi-360050763808104bc28000000000000f3 \" or sh-4.2# DISKS=\"/dev/sdb /dev/sdc /dev/sdd /dev/sde \".", "sh-4.4# for disk in USDDISKS; do sgdisk --zap-all USDdisk;done", "Creating new GPT entries. GPT data structures destroyed! You may now partition the disk using fdisk or other utilities. Creating new GPT entries. GPT data structures destroyed! You may now partition the disk using fdisk or other utilities. Creating new GPT entries. GPT data structures destroyed! You may now partition the disk using fdisk or other utilities. Creating new GPT entries. GPT data structures destroyed! 
You may now partition the disk using fdisk or other utilities.", "sh-4.4# exit exit sh-4.2# exit exit Removing debug pod", "oc project default oc delete project openshift-local-storage --wait=true --timeout=5m", "oc get project openshift-local-storage", "oc get pod,pvc -n openshift-monitoring", "NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Running 0 8d pod/alertmanager-main-1 3/3 Running 0 8d pod/alertmanager-main-2 3/3 Running 0 8d pod/cluster-monitoring- operator-84457656d-pkrxm 1/1 Running 0 8d pod/grafana-79ccf6689f-2ll28 2/2 Running 0 8d pod/kube-state-metrics- 7d86fb966-rvd9w 3/3 Running 0 8d pod/node-exporter-25894 2/2 Running 0 8d pod/node-exporter-4dsd7 2/2 Running 0 8d pod/node-exporter-6p4zc 2/2 Running 0 8d pod/node-exporter-jbjvg 2/2 Running 0 8d pod/node-exporter-jj4t5 2/2 Running 0 6d18h pod/node-exporter-k856s 2/2 Running 0 6d18h pod/node-exporter-rf8gn 2/2 Running 0 8d pod/node-exporter-rmb5m 2/2 Running 0 6d18h pod/node-exporter-zj7kx 2/2 Running 0 8d pod/openshift-state-metrics- 59dbd4f654-4clng 3/3 Running 0 8d pod/prometheus-adapter- 5df5865596-k8dzn 1/1 Running 0 7d23h pod/prometheus-adapter- 5df5865596-n2gj9 1/1 Running 0 7d23h pod/prometheus-k8s-0 6/6 Running 1 8d pod/prometheus-k8s-1 6/6 Running 1 8d pod/prometheus-operator- 55cfb858c9-c4zd9 1/1 Running 0 6d21h pod/telemeter-client- 78fc8fc97d-2rgfp 3/3 Running 0 8d NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0 Bound pvc-0d519c4f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1 Bound pvc-0d5a9825-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2 Bound pvc-0d6413dc-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0 Bound pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1 Bound pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-storagecluster-ceph-rbd 8d", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", ". . . apiVersion: v1 data: config.yaml: | alertmanagerMain: volumeClaimTemplate: metadata: name: my-alertmanager-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-storagecluster-ceph-rbd prometheusK8s: volumeClaimTemplate: metadata: name: my-prometheus-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-storagecluster-ceph-rbd kind: ConfigMap metadata: creationTimestamp: \"2019-12-02T07:47:29Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"22110\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: fd6d988b-14d7-11ea-84ff-066035b9efa8 . . .", ". . . apiVersion: v1 data: config.yaml: | kind: ConfigMap metadata: creationTimestamp: \"2019-11-21T13:07:05Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"404352\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: d12c796a-0c5f-11ea-9832-063cd735b81c . . .", "oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m", "oc edit configs.imageregistry.operator.openshift.io", ". . . storage: pvc: claim: registry-cephfs-rwx-pvc . . .", ". . . storage: emptyDir: {} . . 
.", "oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m", "oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m", "oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_ibm_z/uninstalling_openshift_data_foundation
function::ipmib_tcp_local_port
function::ipmib_tcp_local_port Name function::ipmib_tcp_local_port - Get the local TCP port Synopsis Arguments skb pointer to a struct sk_buff SourceIsLocal flag to indicate whether the operation is local Description Returns the local TCP port from skb.
[ "ipmib_tcp_local_port:long(skb:long,SourceIsLocal:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ipmib-tcp-local-port
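A minimal usage sketch for ipmib_tcp_local_port, run as a SystemTap one-liner from the shell. The tcp_v4_rcv probe point and the SourceIsLocal value of 0 (the source of a received packet is the remote host) are assumptions for illustration; kernel debuginfo must be installed for the probe to resolve, and details vary by kernel version.

# Print the local TCP port of received IPv4 TCP segments for ten seconds, then exit.
stap -e 'probe kernel.function("tcp_v4_rcv") {
  printf("local tcp port: %d\n", ipmib_tcp_local_port($skb, 0))
}
probe timer.s(10) { exit() }'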
Chapter 3. Dynamically provisioned OpenShift Data Foundation deployed on Microsoft Azure
Chapter 3. Dynamically provisioned OpenShift Data Foundation deployed on Microsoft Azure 3.1. Replacing operational or failed storage devices on Azure installer-provisioned infrastructure When you need to replace a device in a dynamically created storage cluster on an Azure installer-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: Replacing operational nodes on Azure installer-provisioned infrastructure. Replacing failed nodes on Azure installer-provisioned infrastructures.
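Before following either node-replacement procedure, it can help to identify which nodes and OSD pods currently back the storage cluster. This is a read-only sketch, not part of the official steps; the node label is the one used for storage nodes elsewhere in the OpenShift Data Foundation documentation.

# List the ODF storage nodes and the OSD pods running on them.
oc get nodes -l cluster.ocs.openshift.io/openshift-storage= -o wide
oc get pods -n openshift-storage -o wide | grep rook-ceph-osd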
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/replacing_devices/dynamically_provisioned_openshift_data_foundation_deployed_on_microsoft_azure
Chapter 8. Installing a private cluster on Azure
Chapter 8. Installing a private cluster on Azure In OpenShift Container Platform version 4.15, you can install a private cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 8.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 8.2.1. Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending how your network connects to the private VNET, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 8.2.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 8.2.2. 
User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. Private cluster with network address translation You can use Azure VNET network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions. When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with Azure Firewall You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation. When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with a proxy configuration You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy. When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure's internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints. Private cluster with no internet access You can install a private network that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following: An OpenShift image registry mirror that allows for pulling container images Access to Azure APIs With these requirements available, you can use user-defined routing to create private clusters with no public endpoints. 8.3. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.15, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. 
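Before modifying install-config.yaml, it can help to confirm that the existing VNet and its two subnets are visible to the account that will run the installer. The following Azure CLI sketch is illustrative only: the resource group, VNet, and subnet names are placeholders, and the flags reflect typical Azure CLI usage, so verify them against your CLI version.

# Show the address space of the existing VNet and list its subnets with their CIDRs.
az network vnet show --resource-group <vnet_resource_group> --name <vnet_name> --query "addressSpace"
az network vnet subnet list --resource-group <vnet_resource_group> --vnet-name <vnet_name> \
  --query "[].{name:name, cidr:addressPrefix}" --output table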
By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 8.3.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 8.3.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. 
You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 8.1. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x If you are using Azure Firewall to restrict the internet access, then you can configure Azure Firewall to allow the Azure APIs . A network security group rule is not needed. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 8.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If you configure an external NTP time server, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 8.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources About the OpenShift SDN network plugin Configuring your firewall 8.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. 
You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 8.3.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 8.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
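For example, on an installation host that must avoid the ed25519 algorithm, you could generate an ECDSA key instead; the file name shown here is only a suggested location and can be any path that you prefer:
USD ssh-keygen -t ecdsa -N '' -f ~/.ssh/id_ecdsa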
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 8.7. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. 
Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for Azure 8.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.4. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 8.7.2. 
Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 8.1. Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 8.7.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 8.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 8.7.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 
2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 8.7.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 8.7.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 publish: Internal 24 1 10 14 21 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. 
The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 15 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 16 If you use an existing VNet, specify the name of the resource group that contains it. 17 If you use an existing VNet, specify its name. 18 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 19 If you use an existing VNet, specify the name of the subnet to host the compute machines. 20 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 8.7.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 8.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. 
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 8.9. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 8.9.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 8.9.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 8.9.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions: Example 8.3. 
Required Azure permissions Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 8.9.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. 
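As an optional sanity check that is not part of the documented procedure, you can confirm which subscription and tenant the Azure CLI is currently logged in to before running the ccoctl commands; the field names used here come from the standard az account show output:
USD az account show --query '{name:name, subscriptionId:id, tenantId:tenantId}' --output table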
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 8.9.2.3. 
Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 8.10. Optional: Preparing a private Microsoft Azure cluster for a private image registry By installing a private image registry on a private Microsoft Azure cluster, you can create private storage endpoints. Private storage endpoints disable public facing endpoints to the registry's storage account, adding an extra layer of security to your OpenShift Container Platform deployment. Important Do not install a private image registry on Microsoft Azure Red Hat OpenShift (ARO), because the endpoint can put your Microsoft Azure Red Hat OpenShift cluster in an unrecoverable state. Use the following guide to prepare your private Microsoft Azure cluster for installation with a private image registry. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI (oc). You have prepared an install-config.yaml that includes the following information: The publish field is set to Internal You have set the permissions for creating a private storage endpoint. For more information, see "Azure permissions for installer-provisioned infrastructure". 
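As a quick, optional check before you generate manifests, you can confirm that the publish field in your installation configuration is set to Internal ; grep is used here only as an illustration and is not part of the documented procedure:
USD grep 'publish:' <installation_directory>/install-config.yaml
Example output publish: Internal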
Procedure If you have not previously created installation manifest files, do so by running the following command: USD ./openshift-install create manifests --dir <installation_directory> This command displays the following messages: Example output INFO Consuming Install Config from target directory INFO Manifests created in: <installation_directory>/manifests and <installation_directory>/openshift Create an image registry configuration object and pass in the networkResourceGroupName , subnetName , and vnetName provided by Microsoft Azure. For example: USD touch imageregistry-config.yaml apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: managementState: "Managed" replicas: 2 rolloutStrategy: RollingUpdate storage: azure: networkAccess: internal: networkResourceGroupName: <vnet_resource_group> 1 subnetName: <subnet_name> 2 vnetName: <vnet_name> 3 type: Internal 1 Optional. If you have an existing VNet and subnet setup, replace <vnet_resource_group> with the resource group name that contains the existing virtual network (VNet). 2 Optional. If you have an existing VNet and subnet setup, replace <subnet_name> with the name of the existing compute subnet within the specified resource group. 3 Optional. If you have an existing VNet and subnet setup, replace <vnet_name> with the name of the existing virtual network (VNet) in the specified resource group. Note The imageregistry-config.yaml file is consumed during the installation process. If you want to keep a copy, back it up before installation. Move the imageregistry-config.yaml file to the <installation_directory/manifests> folder by running the following command: USD mv imageregistry-config.yaml <installation_directory/manifests/> Next steps After you have moved the imageregistry-config.yaml file to the <installation_directory/manifests> folder and set the required permissions, proceed to "Deploying the cluster". Additional resources For the list of permissions needed to create a private storage endpoint, see Required Azure permissions for installer-provisioned infrastructure . 8.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. 
Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. If the file was not previously detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.12. 
Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 8.14. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4", "controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23 publish: Internal 24", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "az login", "ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 
--dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7", "ls <path_to_ccoctl_output_dir>/manifests", "azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create manifests --dir <installation_directory>", "INFO Consuming Install Config from target directory INFO Manifests created in: <installation_directory>/manifests and <installation_directory>/openshift", "touch imageregistry-config.yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: managementState: \"Managed\" replicas: 2 rolloutStrategy: RollingUpdate storage: azure: networkAccess: internal: networkResourceGroupName: <vnet_resource_group> 1 subnetName: <subnet_name> 2 vnetName: <vnet_name> 3 type: Internal", "mv imageregistry-config.yaml <installation_directory/manifests/>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_azure/installing-azure-private
Chapter 13. Managing Kerberos principal aliases for users, hosts, and services
Chapter 13. Managing Kerberos principal aliases for users, hosts, and services When you create a new user, host, or service, a Kerberos principal in the following format is automatically added: user_name@REALM host/ host_name@REALM service_name/host_name@REALM Administrators can enable users, hosts, or services to authenticate against Kerberos applications using an alias. This is beneficial in the following scenarios: The user name changed and the user wants to log in using both the previous and new user names. The user needs to log in using the email address even if the IdM Kerberos realm differs from the email domain. Note that if you rename a user, the object keeps the aliases and the canonical principal name. 13.1. Adding a Kerberos principal alias You can associate alias names with existing Kerberos principals in an Identity Management (IdM) environment. This enhances security and simplifies authentication processes within the IdM domain. Procedure To add the alias name useralias to the account user , enter: To add an alias to a host or service, use the ipa host-add-principal or ipa service-add-principal command respectively instead. If you use an alias name to authenticate, use the -C option with the kinit command: 13.2. Removing a Kerberos principal alias You can remove alias names associated with Kerberos principals in their Identity Management (IdM) environment. Procedure To remove the alias useralias from the account user , enter: To remove an alias from a host or service, use the ipa host-remove-principal or ipa service-remove-principal command respectively instead. Note that you cannot remove the canonical principal name: 13.3. Adding a Kerberos enterprise principal alias You can associate enterprise principal alias names with existing Kerberos enterprise principals in an Identity Management (IdM) environment. Enterprise principal aliases can use any domain suffix except for user principal name (UPN) suffixes, NetBIOS names, or domain names of trusted Active Directory forest domains. Note When adding or removing enterprise principal aliases, escape the @ symbol using two backslashes (\\). Otherwise, the shell interprets the @ symbol as part of the Kerberos realm name and leads to the following error: Procedure To add the enterprise principal alias user@example.com to the user account: To add an enterprise alias to a host or service, use the ipa host-add-principal or ipa service-add-principal command respectively instead. If you use an enterprise principal name to authenticate, use the -E option with the kinit command: 13.4. Removing a Kerberos enterprise principal alias You can remove enterprise principal alias names associated with Kerberos enterprise principals in their Identity Management (IdM) environment. Note When adding or removing enterprise principal aliases, escape the @ symbol using two backslashes (\\). Otherwise, the shell interprets the @ symbol as part of the Kerberos realm name and leads to the following error: Procedure To remove the enterprise principal alias user@example.com from the account user , enter: To remove an alias from a host or service, use the ipa host-remove-principal or ipa service-remove-principal command respectively instead.
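The add, authenticate, and remove steps above can be strung together into one short walk-through. This is an illustrative sketch only: the user jsmith, the alias jsmith.former, and the example.com suffix are assumptions, and an enrolled IdM client with an administrator ticket is required:

ipa user-add-principal jsmith jsmith.former        # plain alias, for example the user's former login name
kinit -C jsmith.former                             # -C enables canonicalization, so the alias resolves to jsmith
klist                                              # the issued ticket names the canonical principal jsmith@REALM
ipa user-add-principal jsmith jsmith\\@example.com # enterprise alias; note the two backslashes before @
kinit -E jsmith@example.com                        # -E authenticates with the enterprise principal name
ipa user-remove-principal jsmith jsmith.former     # clean up the plain alias when it is no longer needed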
[ "ipa user-add-principal <user> <useralias> -------------------------------- Added new aliases to user \"user\" -------------------------------- User login: user Principal alias: [email protected], [email protected]", "kinit -C <useralias> Password for <user>@IDM.EXAMPLE.COM:", "ipa user-remove-principal <user> <useralias> -------------------------------- Removed aliases from user \"user\" -------------------------------- User login: user Principal alias: [email protected]", "ipa user-show <user> User login: user Principal name: [email protected] ipa user-remove-principal user user ipa: ERROR: invalid 'krbprincipalname': at least one value equal to the canonical principal name must be present", "ipa: ERROR: The realm for the principal does not match the realm for this IPA server", "ipa user-add-principal <user> <user\\\\@example.com> -------------------------------- Added new aliases to user \"user\" -------------------------------- User login: user Principal alias: [email protected], user\\@[email protected]", "kinit -E <[email protected]> Password for user\\@[email protected]:", "ipa: ERROR: The realm for the principal does not match the realm for this IPA server", "ipa user-remove-principal <user> <user\\\\@example.com> -------------------------------- Removed aliases from user \"user\" -------------------------------- User login: user Principal alias: [email protected]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-kerberos-principal-aliases-for-users-hosts-and-services_managing-users-groups-hosts
Chapter 8. Configuring the audit log policy
Chapter 8. Configuring the audit log policy You can control the amount of information that is logged to the API server audit logs by choosing the audit log policy profile to use. 8.1. About audit log policy profiles Audit log profiles define how to log requests that come to the OpenShift API server, the Kubernetes API server, and the OAuth API server. OpenShift Container Platform provides the following predefined audit policy profiles: Profile Description Default Logs only metadata for read and write requests; does not log request bodies except for OAuth access token creation (login) requests. This is the default policy. WriteRequestBodies In addition to logging metadata for all requests, logs request bodies for every write request to the API servers ( create , update , patch ). This profile has more resource overhead than the Default profile. [1] AllRequestBodies In addition to logging metadata for all requests, logs request bodies for every read and write request to the API servers ( get , list , create , update , patch ). This profile has the most resource overhead. [1] Sensitive resources, such as Secret , Route , and OAuthClient objects, are never logged past the metadata level. OAuth tokens are not logged at all if your cluster was upgraded from OpenShift Container Platform 4.5, because their object names might contain secret information. By default, OpenShift Container Platform uses the Default audit log profile. You can use another audit policy profile that also logs request bodies, but be aware of the increased resource usage (CPU, memory, and I/O). 8.2. Configuring the audit log policy You can configure the audit log policy to use when logging requests that come to the API servers. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver cluster Update the spec.audit.profile field: apiVersion: config.openshift.io/v1 kind: APIServer metadata: ... spec: audit: profile: WriteRequestBodies 1 1 Set to Default , WriteRequestBodies , or AllRequestBodies . The default profile is Default . Save the file to apply the changes. Verify that a new revision of the Kubernetes API server pods has rolled out. This will take several minutes. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following, this means that the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12
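The same change can also be applied non-interactively, which is convenient in automation. This is a sketch rather than part of the documented procedure; it uses oc patch with a merge patch against the same spec.audit.profile field, followed by the roll-out check shown above:

oc patch apiserver cluster --type=merge -p '{"spec":{"audit":{"profile":"WriteRequestBodies"}}}'
oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

Substitute Default or AllRequestBodies for WriteRequestBodies as needed; the roll-out takes several minutes, as with the oc edit method.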
[ "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: WriteRequestBodies 1", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/security_and_compliance/audit-log-policy-config
Chapter 5. Deprecated Functionality
Chapter 5. Deprecated Functionality virtio-win component, BZ# 1001981 The VirtIO SCSI driver has been removed from the virtio-win package and is no longer supported on Microsoft Windows Server 2003 platform. qemu-kvm component The qemu-guest-agent-win32 package is no longer shipped as part of the qemu-kvm package. The Windows guest agent is now delivered in the Supplementary channel together with other Windows components, for example, virtio-win drivers. fence-agents component Prior to Red Hat Enterprise Linux 6.5 release, the Red Hat Enterprise Linux High Availability Add-On was considered fully supported on certain VMware ESXi/vCenter versions in combination with the fence_scsi fence agent. Due to limitations in these VMware platforms in the area of SCSI-3 persistent reservations, the fence_scsi fencing agent is no longer supported on any version of the Red Hat Enterprise Linux High Availability Add-On in VMware virtual machines, except when using iSCSI-based storage. See the Virtualization Support Matrix for High Availability for full details on supported combinations: https://access.redhat.com/site/articles/29440 Users using fence_scsi on an affected combination can contact Red Hat Global Support Services for assistance in evaluating alternative configurations or for additional information. systemtap component The systemtap-grapher package has been removed from Red Hat Enterprise Linux 6. For more information, see https://access.redhat.com/solutions/757983 . matahari component The Matahari agent framework ( matahari-* ) packages have been removed from Red Hat Enterprise Linux 6. Focus for remote systems management has shifted towards the use of the CIM infrastructure. This infrastructure relies on an already existing standard which provides a greater degree of interoperability for all users. distribution component The following packages have been deprecated and are subjected to removal in a future release of Red Hat Enterprise Linux 6. These packages will not be updated in the Red Hat Enterprise Linux 6 repositories and customers who do not use the MRG-Messaging product are advised to uninstall them from their system. mingw-gcc mingw-boost mingw32-qpid-cpp python-qmf python-qpid qpid-cpp qpid-qmf qpid-tests qpid-tools ruby-qpid saslwrapper Red Hat MRG-Messaging customers will continue to receive updated functionality as part of their regular updates to the product. fence-virt component The libvirt-qpid is no longer part of the fence-virt package. openscap component The openscap-perl subpackage has been removed from openscap .
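For systems that do not run MRG Messaging, a short shell sequence can confirm whether any of the deprecated messaging packages listed above are present and remove them. This is an illustrative sketch; the package list is a subset of the one above, and the yum transaction summary should be reviewed before confirming the removal:

rpm -q qpid-cpp qpid-qmf qpid-tests qpid-tools python-qpid python-qmf ruby-qpid saslwrapper      # report which of the packages are installed
yum remove qpid-cpp qpid-qmf qpid-tests qpid-tools python-qpid python-qmf ruby-qpid saslwrapper  # remove the ones that are installed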
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/deprecated_functionality
3.2. Volume Groups
3.2. Volume Groups Physical volumes are combined into volume groups (VGs). This creates a pool of disk space out of which logical volumes can be allocated. Within a volume group, the disk space available for allocation is divided into units of a fixed size called extents. An extent is the smallest unit of space that can be allocated. Within a physical volume, extents are referred to as physical extents. A logical volume is allocated into logical extents of the same size as the physical extents. The extent size is thus the same for all logical volumes in the volume group. The volume group maps the logical extents to physical extents.
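A short example makes the extent concept concrete: the -s option of vgcreate sets the physical extent size for the whole volume group, and vgdisplay reports it as the PE Size. This is an illustrative sketch; the device names, the myvg and mylv names, and the 32 MB extent size are assumptions:

pvcreate /dev/sdb1 /dev/sdc1                 # initialize two physical volumes
vgcreate -s 32M myvg /dev/sdb1 /dev/sdc1     # pool them into one volume group with 32 MB extents
vgdisplay myvg | grep "PE Size"              # the extent size applies to every logical volume in myvg
lvcreate -L 1G -n mylv myvg                  # a 1 GB logical volume is carved out as 32 logical extents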
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/volume_group_overview
Chapter 3. Compiling and Building
Chapter 3. Compiling and Building Red Hat Enterprise Linux 6 includes many packages used for software development, including tools for compiling and building source code. This chapter discusses several of these packages and tools used to compile source code. 3.1. GNU Compiler Collection (GCC) The GNU Compiler Collection (GCC) is a set of tools for compiling a variety of programming languages (including C, C++, ObjectiveC, ObjectiveC++, Fortran, and Ada) into highly optimized machine code. These tools include various compilers (like gcc and g++ ), run-time libraries (like libgcc , libstdc++ , libgfortran , and libgomp ), and miscellaneous other utilities. 3.1.1. Language Compatibility Application Binary Interfaces specified by the GNU C, C++, Fortran and Java Compiler include: Calling conventions. These specify how arguments are passed to functions and how results are returned from functions. Register usage conventions. These specify how processor registers are allocated and used. Object file formats. These specify the representation of binary object code. Size, layout, and alignment of data types. These specify how data is laid out in memory. Interfaces provided by the runtime environment. Where the documented semantics do not change from one version to another they must be kept available and use the same name at all times. The default system C compiler included with Red Hat Enterprise Linux 6 is largely compatible with the C99 ABI standard. Deviations from the C99 standard in GCC 4.4 are tracked online . In addition to the C ABI, the Application Binary Interface for the GNU C++ Compiler specifies the binary interfaces required to support the C++ language, such as: Name mangling and demangling Creation and propagation of exceptions Formatting of run-time type information Constructors and destructors Layout, alignment, and padding of classes and derived classes Virtual function implementation details, such as the layout and alignment of virtual tables The default system C++ compiler included with Red Hat Enterprise Linux 6 conforms to the C++ ABI defined by the Itanium C++ ABI (1.86) . Although every effort has been made to keep each version of GCC compatible with releases, some incompatibilities do exist. ABI incompatibilities between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 5 The following is a list of known incompatibilities between the Red Hat Enterprise Linux 6 and 5 toolchains. Passing/returning structs with flexible array members by value changed in some cases on Intel 64 and AMD64. Passing/returning of unions with long double members by value changed in some cases on Intel 64 and AMD64. Passing/returning structs with complex float member by value changed in some cases on Intel 64 and AMD64. Passing of 256-bit vectors on x86, Intel 64 and AMD64 platforms changed when -mavx is used. There have been multiple changes in passing of _Decimal{32,64,128} types and aggregates containing those by value on several targets. Packing of packed char bitfields changed in some cases. ABI incompatibilities between Red Hat Enterprise Linux 5 and Red Hat Enterprise Linux 4 The following is a list of known incompatibilities between the Red Hat Enterprise Linux 5 and 4 toolchains. There have been changes in the library interface specified by the C++ ABI for thread-safe initialization of function-scope static variables. On Intel 64 and AMD64, the medium model for building applications where data segment exceeds 4GB, was redesigned to match the latest ABI draft at the time. 
The ABI change results in incompatibility among medium model objects. The compiler flag -Wabi can be used to get diagnostics indicating where these constructs appear in source code, though it will not catch every single case. This flag is especially useful for C++ code to warn whenever the compiler generates code that is known to be incompatible with the vendor-neutral C++ ABI. Excluding the incompatibilities listed above, the GCC C and C++ language ABIs are mostly ABI compatible. The vast majority of source code will not encounter any of the known issues, and can be considered compatible. Compatible ABIs allow the objects created by compiling source code to be portable to other systems. In particular, for Red Hat Enterprise Linux, this allows for upward compatibility. Upward compatibility is defined as the ability to link shared libraries and objects, created using a version of the compilers in a particular Red Hat Enterprise Linux release, with no problems. This includes new objects compiled on subsequent Red Hat Enterprise Linux releases. The C ABI is considered to be stable, and has been so since at least Red Hat Enterprise Linux 3 (again, barring any incompatibilities mentioned in the above lists). Libraries built on Red Hat Enterprise Linux 3 and later can be linked to objects created on a subsequent environment (Red Hat Enterprise Linux 4, Red Hat Enterprise Linux 5, and Red Hat Enterprise Linux 6). The C++ ABI is considered to be stable, but less stable than the C ABI, and only as of Red Hat Enterprise Linux 4 (corresponding to GCC version 3.4 and above.). As with C, this is only an upward compatibility. Libraries built on Red Hat Enterprise Linux 4 and above can be linked to objects created on a subsequent environment (Red Hat Enterprise Linux 5, and Red Hat Enterprise Linux 6). To force GCC to generate code compatible with the C++ ABI in Red Hat Enterprise Linux releases prior to Red Hat Enterprise Linux 4, some developers have used the -fabi-version=1 option. This practice is not recommended. Objects created this way are indistinguishable from objects conforming to the current stable ABI, and can be linked (incorrectly) amongst the different ABIs, especially when using new compilers to generate code to be linked with old libraries that were built with tools prior to Red Hat Enterprise Linux 4. Warning The above incompatibilities make it incredibly difficult to maintain ABI shared library sanity between releases, especially when developing custom libraries with multiple dependencies outside of the core libraries. Therefore, if shared libraries are developed, it is highly recommend that a new version is built for each Red Hat Enterprise Linux release. 3.1.2. Object Compatibility and Interoperability Two items that are important are the changes and enhancements in the underlying tools used by the compiler, and the compatibility between the different versions of a language's compiler. Changes and new features in tools like ld (distributed as part of the binutils package) or in the dynamic loader ( ld.so , distributed as part of the glibc package) can subtly change the object files that the compiler produces. These changes mean that object files moving to the current release of Red Hat Enterprise Linux from releases may lose functionality, behave differently at runtime, or otherwise interoperate in a diminished capacity. Known problem areas include: ld --build-id In Red Hat Enterprise Linux 6 this is passed to ld by default, whereas Red Hat Enterprise Linux 5 ld doesn't recognize it. 
as .cfi_sections support In Red Hat Enterprise Linux 6 this directive allows .debug_frame , .eh_frame or both to be omitted from .cfi* directives. In Red Hat Enterprise Linux 5 only .eh_frame is omitted. as , ld , ld.so , and gdb STB_GNU_UNIQUE and %gnu_unique_symbol support In Red Hat Enterprise Linux 6 more debug information is generated and stored in object files. This information relies on new features detailed in the DWARF standard, and also on new extensions not yet standardized. In Red Hat Enterprise Linux 5, tools like as , ld , gdb , objdump , and readelf may not be prepared for this new information and may fail to interoperate with objects created with the newer tools. In addition, Red Hat Enterprise Linux 5 produced object files do not support these new features; these object files may be handled by Red Hat Enterprise Linux 6 tools in a sub-optimal manner. An outgrowth of this enhanced debug information is that the debuginfo packages that ship with system libraries allow you to do useful source level debugging into system libraries if they are installed. See Section 4.2, "Installing Debuginfo Packages" for more information on debuginfo packages. Object file changes, such as the ones listed above, may interfere with the portable use of prelink . 3.1.3. Running GCC To compile using GCC tools, first install the binutils and gcc packages. Doing so will also install several dependencies. In brief, the tools work via the gcc command. This is the main driver for the compiler. It can be used from the command line to pre-process or compile a source file, link object files and libraries, or perform a combination thereof. By default, gcc takes care of the details and links in the provided libgcc library. Conversely, using GCC tools from the command line interface consumes less system resources. This also allows finer-grained control over compilers; GCC's command line tools can even be used outside of the graphical mode (runlevel 5). 3.1.3.1. Simple C Usage Basic compilation of a C language program using GCC is easy. Start with the following simple program: Example 3.1. hello.c The following procedure illustrates the compilation process for C in its most basic form. Procedure 3.1. Compiling a 'Hello World' C Program Compile Example 3.1, "hello.c" into an executable with: Ensure that the resulting binary hello is in the same directory as hello.c . Run the hello binary, that is, ./hello . 3.1.3.2. Simple C++ Usage Basic compilation of a C++ language program using GCC is similar. Start with the following simple program: Example 3.2. hello.cc The following procedure illustrates the compilation process for C++ in its most basic form. Procedure 3.2. Compiling a 'Hello World' C++ Program Compile Example 3.2, "hello.cc" into an executable with: Ensure that the resulting binary hello is in the same directory as hello.cc . Run the hello binary, that is, ./hello . 3.1.3.3. Simple Multi-File Usage To use basic compilation involving multiple files or object files, start with the following two source files: Example 3.3. one.c Example 3.4. two.c The following procedure illustrates a simple, multi-file compilation process in its most basic form. Procedure 3.3. Compiling a Program with Multiple Source Files Compile Example 3.3, "one.c" into an executable with: Ensure that the resulting binary one.o is in the same directory as one.c . Compile Example 3.4, "two.c" into an executable with: Ensure that the resulting binary two.o is in the same directory as two.c . 
Compile the two object files one.o and two.o into a single executable with: Ensure that the resulting binary hello is in the same directory as one.o and two.o . Run the hello binary, that is, ./hello . 3.1.3.4. Recommended Optimization Options Different projects require different optimization options. There is no one-size-fits-all approach when it comes to optimization, but here are a few guidelines to keep in mind. Instruction selection and tuning It is very important to choose the correct architecture for instruction scheduling. By default GCC produces code optimized for the most common processors, but if the CPU on which your code will run is known, the corresponding -mtune= option to optimize the instruction scheduling, and -march= option to optimize the instruction selection should be used. The option -mtune= optimizes instruction scheduling to fit your architecture by tuning everything except the ABI and the available instruction set. This option will not choose particular instructions, but instead will tune your program in such a way that executing on a particular architecture will be optimized. For example, if an Intel Core2 CPU will predominantly be used, choose -mtune=core2 . If the wrong choice is made, the program will still run, but not optimally on the given architecture. The architecture on which the program will most likely run should always be chosen. The option -march= optimizes instruction selection. As such, it is important to choose correctly as choosing incorrectly will cause your program to fail. This option selects the instruction set used when generating code. For example, if the program will be run on an AMD K8 core based CPU, choose -march=k8 . Specifying the architecture with this option will imply -mtune= . The -mtune= and -march= commands should only be used for tuning and selecting instructions within a given architecture, not to generate code for a different architecture (also known as cross-compiling). For example, this is not to be used to generate PowerPC code from an Intel 64 and AMD64 platform. For a complete list of the available options for both -march= and -mtune= , see the GCC documentation available here: GCC 4.4.4 Manual: Hardware Models and Configurations General purpose optimization flags The compiler flag -O2 is a good middle of the road option to generate fast code. It produces the best optimized code when the resulting code size is not large. Use this when unsure what would best suit. When code size is not an issue, -O3 is preferable. This option produces code that is slightly larger but runs faster because of a more frequent inline of functions. This is ideal for floating point intensive code. The other general purpose optimization flag is -Os . This flag also optimizes for size, and produces faster code in situations where a smaller footprint will increase code locality, thereby reducing cache misses. Use -frecord-gcc-switches when compiling objects. This records the options used to build objects into objects themselves. After an object is built, it determines which set of options were used to build it. The set of options are then recorded in a section called .GCC.command.line within the object and can be examined with the following: It is very important to test and try different options with a representative data set. Often, different modules or objects can be compiled with different optimization flags in order to produce optimal results. 
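A brief example ties these options together. It is an illustrative sketch: the source file name is an assumption, and -march=k8 stands in for whatever CPU the program will actually run on (which, as noted above, also implies the matching -mtune= setting):

gcc -O2 -march=k8 -frecord-gcc-switches source.c -o prog   # optimize for speed and target an AMD K8 core
readelf --string-dump=.GCC.command.line prog               # confirm afterwards which switches built the object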
See Section 3.1.3.5, "Using Profile Feedback to Tune Optimization Heuristics" for additional optimization tuning. 3.1.3.5. Using Profile Feedback to Tune Optimization Heuristics During the transformation of a typical set of source code into an executable, tens of hundreds of choices must be made about the importance of speed in one part of code over another, or code size as opposed to code speed. By default, these choices are made by the compiler using reasonable heuristics, tuned over time to produce the optimum runtime performance. However, GCC also has a way to teach the compiler to optimize executables for a specific machine in a specific production environment. This feature is called profile feedback. Profile feedback is used to tune optimizations such as: Inlining Branch prediction Instruction scheduling Inter-procedural constant propagation Determining hot or cold functions Profile feedback compiles a program first to generate a program that is run and analyzed and then a second time to optimize with the gathered data. Procedure 3.4. Using Profile Feedback The application must be instrumented to produce profiling information by compiling it with -fprofile-generate . Run the application to accumulate and save the profiling information. Recompile the application with -fprofile-use . Step three will use the profile information gathered in step two to tune the compiler's heuristics while optimizing the code into a final executable. Procedure 3.5. Compiling a Program with Profiling Feedback Compile source.c to include profiling instrumentation: gcc source.c -fprofile-generate -O2 -o executable Run executable to gather profiling information: ./executable Recompile and optimize source.c with profiling information gathered in step two: gcc source.c -fprofile-use -O2 -o executable Multiple data collection runs, as seen in step two, will accumulate data into the profiling file instead of replacing it. This allows the executable in step two to be run multiple times with additional representative data in order to collect even more information. The executable must run with representative levels of both the machine being used and a respective data set large enough for the input required. This ensures optimal results are achieved. By default, GCC will generate the profile data into the directory where step one was performed. To generate this information elsewhere, compile with -fprofile-dir=DIR where DIR is the preferred output directory. Warning The format of the compiler feedback data file changes between compiler versions. It is imperative that the program compilation is repeated with each version of the compiler. 3.1.3.6. Using 32-bit compilers on a 64-bit host On a 64-bit host, GCC will build executables that can only run on 64-bit hosts. However, GCC can be used to build executables that will run both on 64-bit hosts and on 32-bit hosts. To build 32-bit binaries on a 64-bit host, first install 32-bit versions of any supporting libraries the executable may require. This must at least include supporting libraries for glibc and libgcc , and libstdc++ if the program is a C++ program. On Intel 64 and AMD64, this can be done with: yum install glibc-devel.i686 libgcc.i686 libstdc++-devel.i686 There may be cases where it is useful to install additional 32-bit libraries that a program may require.
For example, if a program uses the db4-devel libraries to build, the 32-bit version of these libraries can be installed with: yum install db4-devel.i686 Note The .i686 suffix on the x86 platform (as opposed to x86-64 ) specifies a 32-bit version of the given package. For PowerPC architectures, the suffix is ppc (as opposed to ppc64 ). After the 32-bit libraries have been installed, the -m32 option can be passed to the compiler and linker to produce 32-bit executables. Provided the supporting 32-bit libraries are installed on the 64-bit system, this executable will be able to run on both 32-bit systems and 64-bit systems. Procedure 3.6. Compiling a 32-bit Program on a 64-bit Host On a 64-bit system, compile hello.c into a 64-bit executable with: gcc hello.c -o hello64 Ensure that the resulting executable is a 64-bit binary: The command file on a 64-bit executable will include ELF 64-bit in its output, and ldd will list /lib64/libc.so.6 as the main C library linked. On a 64-bit system, compile hello.c into a 32-bit executable with: gcc -m32 hello.c -o hello32 Ensure that the resulting executable is a 32-bit binary: The command file on a 32-bit executable will include ELF 32-bit in its output, and ldd will list /lib/libc.so.6 as the main C library linked. If you have not installed the 32-bit supporting libraries you will get an error similar to this for C code: A similar error would be triggered on C++ code: These errors indicate that the supporting 32-bit libraries have not been properly installed as explained at the beginning of this section. It is also important to note that building with -m32 will not adapt or convert a program to resolve any issues arising from 32/64-bit incompatibilities. For tips on writing portable code and converting from 32-bits to 64-bits, see the paper entitled Porting to 64-bit GNU/Linux Systems in the Proceedings of the 2003 GCC Developers Summit . 3.1.4. GCC Documentation For more information about GCC compilers, see the man pages for cpp , gcc , g++ , gcj , and gfortran . The following online user manuals are also available: GCC 4.4.4 Manual GCC 4.4.4 GNU Fortran Manual GCC 4.4.4 GCJ Manual GCC 4.4.4 CPP Manual GCC 4.4.4 GNAT Reference Manual GCC 4.4.4 GNAT User's Guide GCC 4.4.4 GNU OpenMP Manual The main site for the development of GCC is gcc.gnu.org .
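As a compact recap of the profile-feedback workflow in Section 3.1.3.5, the three steps can be run back to back from the shell. This sketch uses illustrative file names, and the redirected training inputs are assumptions standing in for whatever representative data the program consumes:

gcc source.c -fprofile-generate -O2 -o executable   # step one: instrumented build
./executable < training-input-1.dat                 # step two: representative training runs accumulate profile data
./executable < training-input-2.dat
gcc source.c -fprofile-use -O2 -o executable        # step three: rebuild using the collected profile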
[ "#include <stdio.h> int main() { printf (\"Hello world!\\n\"); return 0; }", "~]USD gcc hello.c -o hello", "#include <iostream> using namespace std; int main() { cout << \"Hello World!\" << endl; return 0; }", "~]USD g++ hello.cc -o hello", "#include <stdio.h> void hello() { printf(\"Hello world!\\n\"); }", "extern void hello(); int main() { hello(); return 0; }", "~]USD gcc -c one.c -o one.o", "~]USD gcc -c two.c -o two.o", "~]USD gcc one.o two.o -o hello", "gcc -frecord-gcc-switches -O3 -Wall hello.c -o hello readelf --string-dump=.GCC.command.line hello String dump of section '.GCC.command.line': [ 0] hello.c [ 8] -mtune=generic [ 17] -O3 [ 1b] -Wall [ 21] -frecord-gcc-switches", "file hello64 hello64: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped USD ldd hello64 linux-vdso.so.1 => (0x00007fff242dd000) libc.so.6 => /lib64/libc.so.6 (0x00007f0721514000) /lib64/ld-linux-x86-64.so.2 (0x00007f0721893000)", "file hello32 hello32: ELF 32-bit LSB executable, Intel 80386, version 1 (GNU/Linux), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped USD ldd hello32 linux-gate.so.1 => (0x007eb000) libc.so.6 => /lib/libc.so.6 (0x00b13000) /lib/ld-linux.so.2 (0x00cd7000)", "gcc -m32 hello32.c -o hello32 /usr/bin/ld: crt1.o: No such file: No such file or directory collect2: ld returned 1 exit status", "g++ -m32 hello32.cc -o hello32-c++ In file included from /usr/include/features.h:385, from /usr/lib/gcc/x86_64-redhat-linux/4.4.4/../../../../include/c++/4.4.4/x86_64-redhat-linux/32/bits/os_defines.h:39, from /usr/lib/gcc/x86_64-redhat-linux/4.4.4/../../../../include/c++/4.4.4/x86_64-redhat-linux/32/bits/c++config.h:243, from /usr/lib/gcc/x86_64-redhat-linux/4.4.4/../../../../include/c++/4.4.4/iostream:39, from hello32.cc:1: /usr/include/gnu/stubs.h:7:27: error: gnu/stubs-32.h: No such file or directory" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/compilers
Chapter 8. Reviewing monitoring dashboards
Chapter 8. Reviewing monitoring dashboards OpenShift Dedicated provides a set of monitoring dashboards that help you understand the state of cluster components and user-defined workloads. 8.1. Monitoring dashboards in the Administrator perspective Use the Administrator perspective to access dashboards for the core OpenShift Dedicated components, including the following items: API performance etcd Kubernetes compute resources Kubernetes network resources Prometheus USE method dashboards relating to cluster and node performance Node performance metrics Figure 8.1. Example dashboard in the Administrator perspective 8.2. Monitoring dashboards in the Developer perspective In the Developer perspective, you can access only the Kubernetes compute resources dashboards: Figure 8.2. Example dashboard in the Developer perspective 8.3. Reviewing monitoring dashboards as a cluster administrator In the Administrator perspective, you can view dashboards relating to core OpenShift Dedicated cluster components. Prerequisites You have access to the cluster as a user with the dedicated-admin role. Procedure In the Administrator perspective of the OpenShift Dedicated web console, go to Observe Dashboards . Choose a dashboard in the Dashboard list. Some dashboards, such as etcd and Prometheus dashboards, produce additional sub-menus when selected. Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by clicking Custom time range in the Time Range list. Input or select the From and To dates and times. Click Save to save the custom time range. Optional: Select a Refresh Interval . Hover over each of the graphs within a dashboard to display detailed information about specific items. 8.4. Reviewing monitoring dashboards as a developer In the Developer perspective, you can view dashboards relating to a selected project. Note In the Developer perspective, you can view dashboards for only one project at a time. Prerequisites You have access to the cluster as a developer or as a user. You have view permissions for the project that you are viewing the dashboard for. Procedure In the Developer perspective in the OpenShift Dedicated web console, click Observe and go to the Dashboards tab. Select a project from the Project: drop-down list. Select a dashboard from the Dashboard drop-down list to see the filtered metrics. Note All dashboards produce additional sub-menus when selected, except Kubernetes / Compute Resources / Namespace (Pods) . Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by clicking Custom time range in the Time Range list. Input or select the From and To dates and times. Click Save to save the custom time range. Optional: Select a Refresh Interval . Hover over each of the graphs within a dashboard to display detailed information about specific items.
null
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/monitoring/reviewing-monitoring-dashboards
Chapter 18. Performing latency tests for platform verification
Chapter 18. Performing latency tests for platform verification You can use the Cloud-native Network Functions (CNF) tests image to run latency tests on a CNF-enabled OpenShift Container Platform cluster, where all the components required for running CNF workloads are installed. Run the latency tests to validate node tuning for your workload. The cnf-tests container image is available at registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 . 18.1. Prerequisites for running latency tests Your cluster must meet the following requirements before you can run the latency tests: You have applied all the required CNF configurations. This includes the PerformanceProfile cluster and other configuration according to the reference design specifications (RDS) or your specific requirements. You have logged in to registry.redhat.io with your Customer Portal credentials by using the podman login command. Additional resources Scheduling a workload onto a worker with real-time capabilities 18.2. Measuring latency The cnf-tests image uses three tools to measure the latency of the system: hwlatdetect cyclictest oslat Each tool has a specific use. Use the tools in sequence to achieve reliable test results. hwlatdetect Measures the baseline that the bare-metal hardware can achieve. Before proceeding with the latency test, ensure that the latency reported by hwlatdetect meets the required threshold because you cannot fix hardware latency spikes by operating system tuning. cyclictest Verifies the real-time kernel scheduler latency after hwlatdetect passes validation. The cyclictest tool schedules a repeated timer and measures the difference between the desired and the actual trigger times. The difference can uncover basic issues with the tuning caused by interrupts or process priorities. The tool must run on a real-time kernel. oslat Behaves similarly to a CPU-intensive DPDK application and measures all the interruptions and disruptions to the busy loop that simulates CPU heavy data processing. The tests introduce the following environment variables: Table 18.1. Latency test environment variables Environment variables Description LATENCY_TEST_DELAY Specifies the amount of time in seconds after which the test starts running. You can use the variable to allow the CPU manager reconcile loop to update the default CPU pool. The default value is 0. LATENCY_TEST_CPUS Specifies the number of CPUs that the pod running the latency tests uses. If you do not set the variable, the default configuration includes all isolated CPUs. LATENCY_TEST_RUNTIME Specifies the amount of time in seconds that the latency test must run. The default value is 300 seconds. Note To prevent the Ginkgo 2.0 test suite from timing out before the latency tests complete, set the -ginkgo.timeout flag to a value greater than LATENCY_TEST_RUNTIME + 2 minutes. If you also set a LATENCY_TEST_DELAY value then you must set -ginkgo.timeout to a value greater than LATENCY_TEST_RUNTIME + LATENCY_TEST_DELAY + 2 minutes. The default timeout value for the Ginkgo 2.0 test suite is 1 hour. HWLATDETECT_MAXIMUM_LATENCY Specifies the maximum acceptable hardware latency in microseconds for the workload and operating system. If you do not set the value of HWLATDETECT_MAXIMUM_LATENCY or MAXIMUM_LATENCY , the tool compares the default expected threshold (20ms) and the actual maximum latency in the tool itself. Then, the test fails or succeeds accordingly. 
CYCLICTEST_MAXIMUM_LATENCY Specifies the maximum latency in microseconds that all threads expect before waking up during the cyclictest run. If you do not set the value of CYCLICTEST_MAXIMUM_LATENCY or MAXIMUM_LATENCY , the tool skips the comparison of the expected and the actual maximum latency. OSLAT_MAXIMUM_LATENCY Specifies the maximum acceptable latency in microseconds for the oslat test results. If you do not set the value of OSLAT_MAXIMUM_LATENCY or MAXIMUM_LATENCY , the tool skips the comparison of the expected and the actual maximum latency. MAXIMUM_LATENCY Unified variable that specifies the maximum acceptable latency in microseconds. Applicable for all available latency tools. Note Variables that are specific to a latency tool take precedence over unified variables. For example, if OSLAT_MAXIMUM_LATENCY is set to 30 microseconds and MAXIMUM_LATENCY is set to 10 microseconds, the oslat test will run with maximum acceptable latency of 30 microseconds. 18.3. Running the latency tests Run the cluster latency tests to validate node tuning for your Cloud-native Network Functions (CNF) workload. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. Depending on your local operating system and SELinux configuration, you might also experience issues running these commands from your home directory. To make the podman commands work, run the commands from a folder that is not your home/<username> directory, and append :Z to the volumes creation. For example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. This procedure runs the three individual tests hwlatdetect , cyclictest , and oslat . For details on these individual tests, see their individual sections. Procedure Open a shell prompt in the directory containing the kubeconfig file. You provide the test image with a kubeconfig file in current directory and its related USDKUBECONFIG environment variable, mounted through a volume. This allows the running container to use the kubeconfig file from inside the container. Note In the following command, your local kubeconfig is mounted to kubeconfig/kubeconfig in the cnf-tests container, which allows access to the cluster. To run the latency tests, run the following command, substituting variable values as appropriate: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUNTIME=600\ -e MAXIMUM_LATENCY=20 \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh \ --ginkgo.v --ginkgo.timeout="24h" The LATENCY_TEST_RUNTIME is shown in seconds, in this case 600 seconds (10 minutes). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 ms). If the results exceed the latency threshold, the test fails. Optional: Append --ginkgo.dry-run flag to run the latency tests in dry-run mode. This is useful for checking what commands the tests run. Optional: Append --ginkgo.v flag to run the tests with increased verbosity. Optional: Append --ginkgo.timeout="24h" flag to ensure the Ginkgo 2.0 test suite does not timeout before the latency tests complete. Important During testing shorter time periods, as shown, can be used to run the tests. However, for final verification and valid results, the test should run for at least 12 hours (43200 seconds). 18.3.1. Running hwlatdetect The hwlatdetect tool is available in the rt-kernel package with a regular subscription of Red Hat Enterprise Linux (RHEL) 9.x. 
Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. Depending on your local operating system and SELinux configuration, you might also experience issues running these commands from your home directory. To make the podman commands work, run the commands from a folder that is not your home/<username> directory, and append :Z to the volumes creation. For example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have reviewed the prerequisites for running latency tests. Procedure To run the hwlatdetect tests, run the following command, substituting variable values as appropriate: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 \ /usr/bin/test-run.sh --ginkgo.focus="hwlatdetect" --ginkgo.v --ginkgo.timeout="24h" The hwlatdetect test runs for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 ms). If the results exceed the latency threshold, the test fails. Important During testing shorter time periods, as shown, can be used to run the tests. However, for final verification and valid results, the test should run for at least 12 hours (43200 seconds). Example failure output running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=hwlatdetect I0908 15:25:20.023712 27 request.go:601] Waited for 1.046586367s due to client-side throttling, not priority and fairness, request: GET:https://api.hlxcl6.lab.eng.tlv2.redhat.com:6443/apis/imageregistry.operator.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662650718 Will run 1 of 3 specs [...] 
• Failure [283.574 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the hwlatdetect image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:228 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:236 Log file created at: 2022/09/08 15:25:27 Running on machine: hwlatdetect-b6n4n Binary: Built with gc go1.17.12 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0908 15:25:27.160620 1 node.go:39] Environment information: /proc/cmdline: BOOT_IMAGE=(hd1,gpt3)/ostree/rhcos-c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/vmlinuz-4.18.0-372.19.1.el8_6.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/0 ip=dhcp root=UUID=5f80c283-f6e6-4a27-9b47-a287157483b2 rw rootflags=prjquota boot=UUID=773bf59a-bafd-48fc-9a87-f62252d739d3 skew_tick=1 nohz=on rcu_nocbs=0-3 tuned.non_isolcpus=0000ffff,ffffffff,fffffff0 systemd.cpu_affinity=4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79 intel_iommu=on iommu=pt isolcpus=managed_irq,0-3 nohz_full=0-3 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off skew_tick=1 rcutree.kthread_prio=11 + + I0908 15:25:27.160830 1 node.go:46] Environment information: kernel version 4.18.0-372.19.1.el8_6.x86_64 I0908 15:25:27.160857 1 main.go:50] running the hwlatdetect command with arguments [/usr/bin/hwlatdetect --threshold 1 --hardlimit 1 --duration 100 --window 10000000us --width 950000us] F0908 15:27:10.603523 1 main.go:53] failed to run hwlatdetect command; out: hwlatdetect: test duration 100 seconds detector: tracer parameters: Latency threshold: 1us 1 Sample window: 10000000us Sample width: 950000us Non-sampling period: 9050000us Output File: None Starting test test finished Max Latency: 326us 2 Samples recorded: 5 Samples exceeding threshold: 5 ts: 1662650739.017274507, inner:6, outer:6 ts: 1662650749.257272414, inner:14, outer:326 ts: 1662650779.977272835, inner:314, outer:12 ts: 1662650800.457272384, inner:3, outer:9 ts: 1662650810.697273520, inner:3, outer:2 [...] JUnit report was created: /junit.xml/cnftests-junit.xml Summarizing 1 Failure: [Fail] [performance] Latency Test with the hwlatdetect image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:476 Ran 1 of 194 Specs in 365.797 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (366.08s) FAIL 1 You can configure the latency threshold by using the MAXIMUM_LATENCY or the HWLATDETECT_MAXIMUM_LATENCY environment variables. 2 The maximum latency value measured during the test. Example hwlatdetect test results You can capture the following types of results: Rough results that are gathered after each run to create a history of impact on any changes made throughout the test. The combined set of the rough tests with the best results and configuration settings. 
Example of good results hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0 The hwlatdetect tool only provides output if the sample exceeds the specified threshold. Example of bad results hwlatdetect: test duration 3600 seconds detector: tracer parameters:Latency threshold: 10usSample window: 1000000us Sample width: 950000usNon-sampling period: 50000usOutput File: None Starting tests:1610542421.275784439, inner:78, outer:81 ts: 1610542444.330561619, inner:27, outer:28 ts: 1610542445.332549975, inner:39, outer:38 ts: 1610542541.568546097, inner:47, outer:32 ts: 1610542590.681548531, inner:13, outer:17 ts: 1610543033.818801482, inner:29, outer:30 ts: 1610543080.938801990, inner:90, outer:76 ts: 1610543129.065549639, inner:28, outer:39 ts: 1610543474.859552115, inner:28, outer:35 ts: 1610543523.973856571, inner:52, outer:49 ts: 1610543572.089799738, inner:27, outer:30 ts: 1610543573.091550771, inner:34, outer:28 ts: 1610543574.093555202, inner:116, outer:63 The output of hwlatdetect shows that multiple samples exceed the threshold. However, the same output can indicate different results based on the following factors: The duration of the test The number of CPU cores The host firmware settings Warning Before proceeding with the latency test, ensure that the latency reported by hwlatdetect meets the required threshold. Fixing latencies introduced by hardware might require you to contact the system vendor support. Not all latency spikes are hardware related. Ensure that you tune the host firmware to meet your workload requirements. For more information, see Setting firmware parameters for system tuning . 18.3.2. Running cyclictest The cyclictest tool measures the real-time kernel scheduler latency on the specified CPUs. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. Depending on your local operating system and SELinux configuration, you might also experience issues running these commands from your home directory. To make the podman commands work, run the commands from a folder that is not your home/<username> directory, and append :Z to the volumes creation. For example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have reviewed the prerequisites for running latency tests. Procedure To perform the cyclictest , run the following command, substituting variable values as appropriate: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 \ /usr/bin/test-run.sh --ginkgo.focus="cyclictest" --ginkgo.v --ginkgo.timeout="24h" The command runs the cyclictest tool for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (in this example, 20 ms). Latency spikes of 20 ms and above are generally not acceptable for telco RAN workloads. If the results exceed the latency threshold, the test fails. Important During testing shorter time periods, as shown, can be used to run the tests. However, for final verification and valid results, the test should run for at least 12 hours (43200 seconds). 
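For the final verification run referred to in the note above, the same invocation only needs the longer runtime. This is an illustrative sketch; the CPU count and latency threshold are the same example values used earlier, and LATENCY_TEST_RUNTIME is raised to 43200 seconds (12 hours), which still fits within the 24-hour Ginkgo timeout:

podman run -v $(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \
  -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=43200 -e MAXIMUM_LATENCY=20 \
  registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 \
  /usr/bin/test-run.sh --ginkgo.focus="cyclictest" --ginkgo.v --ginkgo.timeout="24h"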
Example failure output running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=cyclictest I0908 13:01:59.193776 27 request.go:601] Waited for 1.046228824s due to client-side throttling, not priority and fairness, request: GET:https://api.compute-1.example.com:6443/apis/packages.operators.coreos.com/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662642118 Will run 1 of 3 specs [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the cyclictest image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:220 Ran 1 of 194 Specs in 161.151 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (161.48s) FAIL Example cyclictest results The same output can indicate different results for different workloads. For example, spikes up to 18ms are acceptable for 4G DU workloads, but not for 5G DU workloads. Example of good results running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m # Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 579506 535967 418614 573648 532870 529897 489306 558076 582350 585188 583793 223781 532480 569130 472250 576043 More histogram entries ... # Total: 000600000 000600000 000600000 000599999 000599999 000599999 000599998 000599998 000599998 000599997 000599997 000599996 000599996 000599995 000599995 000599995 # Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Max Latencies: 00005 00005 00004 00005 00004 00004 00005 00005 00006 00005 00004 00005 00004 00004 00005 00004 # Histogram Overflows: 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 # Histogram Overflow at cycle number: # Thread 0: # Thread 1: # Thread 2: # Thread 3: # Thread 4: # Thread 5: # Thread 6: # Thread 7: # Thread 8: # Thread 9: # Thread 10: # Thread 11: # Thread 12: # Thread 13: # Thread 14: # Thread 15: Example of bad results running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m # Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 564632 579686 354911 563036 492543 521983 515884 378266 592621 463547 482764 591976 590409 588145 589556 353518 More histogram entries ... 
# Total: 000599999 000599999 000599999 000599997 000599997 000599998 000599998 000599997 000599997 000599996 000599995 000599996 000599995 000599995 000599995 000599993 # Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 # Max Latencies: 00493 00387 00271 00619 00541 00513 00009 00389 00252 00215 00539 00498 00363 00204 00068 00520 # Histogram Overflows: 00001 00001 00001 00002 00002 00001 00000 00001 00001 00001 00002 00001 00001 00001 00001 00002 # Histogram Overflow at cycle number: # Thread 0: 155922 # Thread 1: 110064 # Thread 2: 110064 # Thread 3: 110063 155921 # Thread 4: 110063 155921 # Thread 5: 155920 # Thread 6: # Thread 7: 110062 # Thread 8: 110062 # Thread 9: 155919 # Thread 10: 110061 155919 # Thread 11: 155918 # Thread 12: 155918 # Thread 13: 110060 # Thread 14: 110060 # Thread 15: 110059 155917 18.3.3. Running oslat The oslat test simulates a CPU-intensive DPDK application and measures all the interruptions and disruptions to test how the cluster handles CPU heavy data processing. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. Depending on your local operating system and SELinux configuration, you might also experience issues running these commands from your home directory. To make the podman commands work, run the commands from a folder that is not your home/<username> directory, and append :Z to the volumes creation. For example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have reviewed the prerequisites for running latency tests. Procedure To perform the oslat test, run the following command, substituting variable values as appropriate: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 \ /usr/bin/test-run.sh --ginkgo.focus="oslat" --ginkgo.v --ginkgo.timeout="24h" LATENCY_TEST_CPUS specifies the number of CPUs to test with the oslat command. The command runs the oslat tool for 10 minutes (600 seconds). The test runs successfully when the maximum observed latency is lower than MAXIMUM_LATENCY (20 ms). If the results exceed the latency threshold, the test fails. Important During testing shorter time periods, as shown, can be used to run the tests. However, for final verification and valid results, the test should run for at least 12 hours (43200 seconds). Example failure output running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=oslat I0908 12:51:55.999393 27 request.go:601] Waited for 1.044848101s due to client-side throttling, not priority and fairness, request: GET:https://compute-1.example.com:6443/apis/machineconfiguration.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662641514 Will run 1 of 3 specs [...] 
• Failure [77.833 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the oslat image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:128 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:153 The current latency 304 is bigger than the expected one 1 : 1 [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the oslat image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:177 Ran 1 of 194 Specs in 161.091 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (161.42s) FAIL 1 In this example, the measured latency is outside the maximum allowed value. 18.4. Generating a latency test failure report Use the following procedures to generate a JUnit latency test output and test failure report. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Create a test failure report with information about the cluster state and resources for troubleshooting by passing the --report parameter with the path to where the report is dumped: USD podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/reportdest:<report_folder_path> \ -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 \ /usr/bin/test-run.sh --report <report_folder_path> --ginkgo.v where: <report_folder_path> Is the path to the folder where the report is generated. 18.5. Generating a JUnit latency test report Use the following procedures to generate a JUnit latency test output and test failure report. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Create a JUnit-compliant XML report by passing the --junit parameter together with the path to where the report is dumped: Note You must create the junit folder before running this command. USD podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/junit:/junit \ -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 \ /usr/bin/test-run.sh --ginkgo.junit-report junit/<file-name>.xml --ginkgo.v where: junit Is the folder where the junit report is stored. 18.6. Running latency tests on a single-node OpenShift cluster You can run latency tests on single-node OpenShift clusters. Note When executing podman commands as a non-root or non-privileged user, mounting paths can fail with permission denied errors. To make the podman command work, append :Z to the volumes creation; for example, -v USD(pwd)/:/kubeconfig:Z . This allows podman to do the proper SELinux relabeling. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have applied a cluster performance profile by using the Node Tuning Operator. 
Procedure To run the latency tests on a single-node OpenShift cluster, run the following command: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUNTIME=<time_in_seconds> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 \ /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout="24h" Note The default runtime for each test is 300 seconds. For valid latency test results, run the tests for at least 12 hours by updating the LATENCY_TEST_RUNTIME variable. To run the buckets latency validation step, you must specify a maximum latency. For details on maximum latency variables, see the table in the "Measuring latency" section. After running the test suite, all the dangling resources are cleaned up. 18.7. Running latency tests in a disconnected cluster The CNF tests image can run tests in a disconnected cluster that is not able to reach external registries. This requires two steps: Mirroring the cnf-tests image to the custom disconnected registry. Instructing the tests to consume the images from the custom disconnected registry. Mirroring the images to a custom registry accessible from the cluster A mirror executable is shipped in the image to provide the input required by oc to mirror the test image to a local registry. Run this command from an intermediate machine that has access to the cluster and registry.redhat.io : USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 \ /usr/bin/mirror -registry <disconnected_registry> | oc image mirror -f - where: <disconnected_registry> Is the disconnected mirror registry you have configured, for example, my.local.registry:5000/ . When you have mirrored the cnf-tests image into the disconnected registry, you must override the original registry used to fetch the images when running the tests, for example: podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e IMAGE_REGISTRY="<disconnected_registry>" \ -e CNF_TESTS_IMAGE="cnf-tests-rhel8:v4.16" \ -e LATENCY_TEST_RUNTIME=<time_in_seconds> \ <disconnected_registry>/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout="24h" Configuring the tests to consume images from a custom registry You can run the latency tests using a custom test image and image registry using CNF_TESTS_IMAGE and IMAGE_REGISTRY variables. To configure the latency tests to use a custom test image and image registry, run the following command: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e IMAGE_REGISTRY="<custom_image_registry>" \ -e CNF_TESTS_IMAGE="<custom_cnf-tests_image>" \ -e LATENCY_TEST_RUNTIME=<time_in_seconds> \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout="24h" where: <custom_image_registry> is the custom image registry, for example, custom.registry:5000/ . <custom_cnf-tests_image> is the custom cnf-tests image, for example, custom-cnf-tests-image:latest . Mirroring images to the cluster OpenShift image registry OpenShift Container Platform provides a built-in container image registry, which runs as a standard workload on the cluster. 
Procedure Gain external access to the registry by exposing it with a route: USD oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge Fetch the registry endpoint by running the following command: USD REGISTRY=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}') Create a namespace for exposing the images: USD oc create ns cnftests Make the image stream available to all the namespaces used for tests. This is required to allow the tests namespaces to fetch the images from the cnf-tests image stream. Run the following commands: USD oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests USD oc policy add-role-to-user system:image-puller system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests Retrieve the docker secret name and auth token by running the following commands: USD SECRET=USD(oc -n cnftests get secret | grep builder-docker | awk {'print USD1'}) USD TOKEN=USD(oc -n cnftests get secret USDSECRET -o jsonpath="{.data['\.dockercfg']}" | base64 --decode | jq '.["image-registry.openshift-image-registry.svc:5000"].auth') Create a dockerauth.json file, for example: USD echo "{\"auths\": { \"USDREGISTRY\": { \"auth\": USDTOKEN } }}" > dockerauth.json Do the image mirroring: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 \ /usr/bin/mirror -registry USDREGISTRY/cnftests | oc image mirror --insecure=true \ -a=USD(pwd)/dockerauth.json -f - Run the tests: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ -e LATENCY_TEST_RUNTIME=<time_in_seconds> \ -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests cnf-tests-local:latest /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout="24h" Mirroring a different set of test images You can optionally change the default upstream images that are mirrored for the latency tests. Procedure The mirror command tries to mirror the upstream images by default. This can be overridden by passing a file with the following format to the image: [ { "registry": "public.registry.io:5000", "image": "imageforcnftests:4.16" } ] Pass the file to the mirror command, for example saving it locally as images.json . With the following command, the local path is mounted in /kubeconfig inside the container and that can be passed to the mirror command. USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/mirror \ --registry "my.local.registry:5000/" --images "/kubeconfig/images.json" \ | oc image mirror -f - 18.8. Troubleshooting errors with the cnf-tests container To run latency tests, the cluster must be accessible from within the cnf-tests container. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Verify that the cluster is accessible from inside the cnf-tests container by running the following command: USD podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig \ registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 \ oc get nodes If this command does not work, an error related to DNS resolution, MTU size, or firewall access might be occurring.
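As a first check, it can help to confirm from the host that the API server named in the mounted kubeconfig resolves and responds; a minimal sketch, where the API URL shown is only an example:

# Print the API server URL from the kubeconfig that is mounted into the container
oc --kubeconfig=$(pwd)/kubeconfig config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Confirm from the host that the endpoint resolves and accepts connections
curl -k https://api.compute-1.example.com:6443/version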
[ "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.focus=\"hwlatdetect\" --ginkgo.v --ginkgo.timeout=\"24h\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=hwlatdetect I0908 15:25:20.023712 27 request.go:601] Waited for 1.046586367s due to client-side throttling, not priority and fairness, request: GET:https://api.hlxcl6.lab.eng.tlv2.redhat.com:6443/apis/imageregistry.operator.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662650718 Will run 1 of 3 specs [...] • Failure [283.574 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the hwlatdetect image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:228 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:236 Log file created at: 2022/09/08 15:25:27 Running on machine: hwlatdetect-b6n4n Binary: Built with gc go1.17.12 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0908 15:25:27.160620 1 node.go:39] Environment information: /proc/cmdline: BOOT_IMAGE=(hd1,gpt3)/ostree/rhcos-c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/vmlinuz-4.18.0-372.19.1.el8_6.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=metal ostree=/ostree/boot.1/rhcos/c6491e1eedf6c1f12ef7b95e14ee720bf48359750ac900b7863c625769ef5fb9/0 ip=dhcp root=UUID=5f80c283-f6e6-4a27-9b47-a287157483b2 rw rootflags=prjquota boot=UUID=773bf59a-bafd-48fc-9a87-f62252d739d3 skew_tick=1 nohz=on rcu_nocbs=0-3 tuned.non_isolcpus=0000ffff,ffffffff,fffffff0 systemd.cpu_affinity=4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79 intel_iommu=on iommu=pt isolcpus=managed_irq,0-3 nohz_full=0-3 tsc=nowatchdog nosoftlockup nmi_watchdog=0 mce=off skew_tick=1 rcutree.kthread_prio=11 + + I0908 15:25:27.160830 1 node.go:46] Environment information: kernel version 4.18.0-372.19.1.el8_6.x86_64 I0908 15:25:27.160857 1 main.go:50] running the hwlatdetect command with arguments [/usr/bin/hwlatdetect --threshold 1 --hardlimit 1 --duration 100 --window 10000000us --width 950000us] F0908 15:27:10.603523 1 main.go:53] failed to run hwlatdetect command; out: hwlatdetect: test duration 100 seconds detector: tracer parameters: Latency threshold: 1us 1 Sample window: 10000000us Sample width: 950000us Non-sampling period: 9050000us Output File: None Starting test test finished Max Latency: 326us 2 Samples recorded: 5 Samples exceeding threshold: 5 ts: 1662650739.017274507, inner:6, outer:6 ts: 1662650749.257272414, inner:14, outer:326 ts: 1662650779.977272835, inner:314, outer:12 ts: 1662650800.457272384, inner:3, outer:9 ts: 1662650810.697273520, inner:3, outer:2 [...] 
JUnit report was created: /junit.xml/cnftests-junit.xml Summarizing 1 Failure: [Fail] [performance] Latency Test with the hwlatdetect image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:476 Ran 1 of 194 Specs in 365.797 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (366.08s) FAIL", "hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0", "hwlatdetect: test duration 3600 seconds detector: tracer parameters:Latency threshold: 10usSample window: 1000000us Sample width: 950000usNon-sampling period: 50000usOutput File: None Starting tests:1610542421.275784439, inner:78, outer:81 ts: 1610542444.330561619, inner:27, outer:28 ts: 1610542445.332549975, inner:39, outer:38 ts: 1610542541.568546097, inner:47, outer:32 ts: 1610542590.681548531, inner:13, outer:17 ts: 1610543033.818801482, inner:29, outer:30 ts: 1610543080.938801990, inner:90, outer:76 ts: 1610543129.065549639, inner:28, outer:39 ts: 1610543474.859552115, inner:28, outer:35 ts: 1610543523.973856571, inner:52, outer:49 ts: 1610543572.089799738, inner:27, outer:30 ts: 1610543573.091550771, inner:34, outer:28 ts: 1610543574.093555202, inner:116, outer:63", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.focus=\"cyclictest\" --ginkgo.v --ginkgo.timeout=\"24h\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=cyclictest I0908 13:01:59.193776 27 request.go:601] Waited for 1.046228824s due to client-side throttling, not priority and fairness, request: GET:https://api.compute-1.example.com:6443/apis/packages.operators.coreos.com/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662642118 Will run 1 of 3 specs [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the cyclictest image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:220 Ran 1 of 194 Specs in 161.151 seconds FAIL! 
-- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (161.48s) FAIL", "running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 579506 535967 418614 573648 532870 529897 489306 558076 582350 585188 583793 223781 532480 569130 472250 576043 More histogram entries Total: 000600000 000600000 000600000 000599999 000599999 000599999 000599998 000599998 000599998 000599997 000599997 000599996 000599996 000599995 000599995 000599995 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00005 00005 00004 00005 00004 00004 00005 00005 00006 00005 00004 00005 00004 00004 00005 00004 Histogram Overflows: 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 Histogram Overflow at cycle number: Thread 0: Thread 1: Thread 2: Thread 3: Thread 4: Thread 5: Thread 6: Thread 7: Thread 8: Thread 9: Thread 10: Thread 11: Thread 12: Thread 13: Thread 14: Thread 15:", "running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 564632 579686 354911 563036 492543 521983 515884 378266 592621 463547 482764 591976 590409 588145 589556 353518 More histogram entries Total: 000599999 000599999 000599999 000599997 000599997 000599998 000599998 000599997 000599997 000599996 000599995 000599996 000599995 000599995 000599995 000599993 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00493 00387 00271 00619 00541 00513 00009 00389 00252 00215 00539 00498 00363 00204 00068 00520 Histogram Overflows: 00001 00001 00001 00002 00002 00001 00000 00001 00001 00001 00002 00001 00001 00001 00001 00002 Histogram Overflow at cycle number: Thread 0: 155922 Thread 1: 110064 Thread 2: 110064 Thread 3: 110063 155921 Thread 4: 110063 155921 Thread 5: 155920 Thread 6: Thread 7: 110062 Thread 8: 110062 Thread 9: 155919 Thread 10: 110061 155919 Thread 11: 155918 Thread 12: 155918 Thread 13: 110060 Thread 14: 110060 Thread 15: 110059 155917", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.focus=\"oslat\" --ginkgo.v --ginkgo.timeout=\"24h\"", "running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=oslat I0908 12:51:55.999393 27 request.go:601] Waited for 1.044848101s due to client-side throttling, not priority and fairness, request: GET:https://compute-1.example.com:6443/apis/machineconfiguration.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1662641514 Will 
run 1 of 3 specs [...] • Failure [77.833 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:62 with the oslat image /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:128 should succeed [It] /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:153 The current latency 304 is bigger than the expected one 1 : 1 [...] Summarizing 1 Failure: [Fail] [performance] Latency Test with the oslat image [It] should succeed /remote-source/app/vendor/github.com/openshift/cluster-node-tuning-operator/test/e2e/performanceprofile/functests/4_latency/latency.go:177 Ran 1 of 194 Specs in 161.091 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 2 Skipped --- FAIL: TestTest (161.42s) FAIL", "podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/reportdest:<report_folder_path> -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --report <report_folder_path> --ginkgo.v", "podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/junit:/junit -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.junit-report junit/<file-name>.xml --ginkgo.v", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUNTIME=<time_in_seconds> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/mirror -registry <disconnected_registry> | oc image mirror -f -", "run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY=\"<disconnected_registry>\" -e CNF_TESTS_IMAGE=\"cnf-tests-rhel8:v4.16\" -e LATENCY_TEST_RUNTIME=<time_in_seconds> <disconnected_registry>/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY=\"<custom_image_registry>\" -e CNF_TESTS_IMAGE=\"<custom_cnf-tests_image>\" -e LATENCY_TEST_RUNTIME=<time_in_seconds> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "REGISTRY=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "oc create ns cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests", "oc policy add-role-to-user system:image-puller system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests", "SECRET=USD(oc -n cnftests get secret | grep builder-docker | awk {'print USD1'}", "TOKEN=USD(oc -n cnftests get secret USDSECRET -o jsonpath=\"{.data['\\.dockercfg']}\" | base64 --decode | jq '.[\"image-registry.openshift-image-registry.svc:5000\"].auth')", "echo \"{\\\"auths\\\": { \\\"USDREGISTRY\\\": { \\\"auth\\\": USDTOKEN } }}\" > dockerauth.json", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:4.16 /usr/bin/mirror -registry USDREGISTRY/cnftests | oc image mirror 
--insecure=true -a=USD(pwd)/dockerauth.json -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUNTIME=<time_in_seconds> -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests cnf-tests-local:latest /usr/bin/test-run.sh --ginkgo.v --ginkgo.timeout=\"24h\"", "[ { \"registry\": \"public.registry.io:5000\", \"image\": \"imageforcnftests:4.16\" } ]", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 /usr/bin/mirror --registry \"my.local.registry:5000/\" --images \"/kubeconfig/images.json\" | oc image mirror -f -", "podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.16 get nodes" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/scalability_and_performance/cnf-performing-platform-verification-latency-tests
5.4. Uploading QCOW2 image to OpenStack
5.4. Uploading QCOW2 image to OpenStack Image Builder can generate images suitable for uploading to OpenStack cloud deployments, and starting instances there. This section describes the steps to upload a QCOW2 image to OpenStack. Prerequisites You must have an OpenStack-specific image created by Image Builder. Use the openstack output type in the CLI or OpenStack Image (.qcow2) in the GUI when creating the image. Warning Image Builder also offers a generic QCOW2 image type output format as qcow2 or QEMU QCOW2 Image (.qcow2). Do not mistake it for the OpenStack image type, which is also in the QCOW2 format but contains further changes specific to OpenStack. Procedure 1. Upload the image to OpenStack and start an instance from it. Use the Images interface to do this: Figure 5.5. Virtualization type 2. Start an instance with that image: Figure 5.6. Virtualization type 3. You can run the instance using any mechanism (CLI or OpenStack web UI) from the snapshot. Use your private key via SSH to access the resulting instance. Log in as cloud-user.
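If you prefer the command line over the web UI for steps 1 and 2, a minimal sketch using the standard OpenStack client might look like the following; the image file name, flavor, network, and key pair names are placeholders:

# Upload the Image Builder QCOW2 output as a new image
openstack image create --disk-format qcow2 --container-format bare \
  --file composer-openstack.qcow2 my-rhel-image

# Start an instance from the uploaded image
openstack server create --image my-rhel-image --flavor m1.small \
  --network my-network --key-name my-keypair my-instance

# Once the instance is ACTIVE, log in as cloud-user with your private key
ssh -i ~/.ssh/id_rsa cloud-user@<instance_ip>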
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/image_builder_guide/sect-documentation-image_builder-chapter5-section_4
Chapter 19. Executing rules
Chapter 19. Executing rules After you identify example rules or create your own rules in Business Central, you can build and deploy the associated project and execute rules locally or on KIE Server to test the rules. Prerequisites Business Central and KIE Server are installed and running. For installation options, see Planning a Red Hat Process Automation Manager installation . Procedure In Business Central, go to Menu Design Projects and click the project name. In the upper-right corner of the project Assets page, click Deploy to build the project and deploy it to KIE Server. If the build fails, address any problems described in the Alerts panel at the bottom of the screen. For more information about project deployment options, see Packaging and deploying an Red Hat Process Automation Manager project . Note If the rule assets in your project are not built from an executable rule model by default, verify that the following dependency is in the pom.xml file of your project and rebuild the project: <dependency> <groupId>org.drools</groupId> <artifactId>drools-model-compiler</artifactId> <version>USD{rhpam.version}</version> </dependency> This dependency is required for rule assets in Red Hat Process Automation Manager to be built from executable rule models by default. This dependency is included as part of the Red Hat Process Automation Manager core packaging, but depending on your Red Hat Process Automation Manager upgrade history, you may need to manually add this dependency to enable the executable rule model behavior. For more information about executable rule models, see Packaging and deploying an Red Hat Process Automation Manager project . Create a Maven or Java project outside of Business Central, if not created already, that you can use for executing rules locally or that you can use as a client application for executing rules on KIE Server. The project must contain a pom.xml file and any other required components for executing the project resources. For example test projects, see "Other methods for creating and executing DRL rules" . Open the pom.xml file of your test project or client application and add the following dependencies, if not added already: kie-ci : Enables your client application to load Business Central project data locally using ReleaseId kie-server-client : Enables your client application to interact remotely with assets on KIE Server slf4j : (Optional) Enables your client application to use Simple Logging Facade for Java (SLF4J) to return debug logging information after you interact with KIE Server Example dependencies for Red Hat Process Automation Manager 7.13 in a client application pom.xml file: <!-- For local execution --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>7.67.0.Final-redhat-00024</version> </dependency> <!-- For remote execution on KIE Server --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>7.67.0.Final-redhat-00024</version> </dependency> <!-- For debug logging (optional) --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-simple</artifactId> <version>1.7.25</version> </dependency> For available versions of these artifacts, search the group ID and artifact ID in the Nexus Repository Manager online. Note Instead of specifying a Red Hat Process Automation Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. 
The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between Red Hat Process Automation Manager and the Maven library version? . Ensure that the dependencies for artifacts containing model classes are defined in the client application pom.xml file exactly as they appear in the pom.xml file of the deployed project. If dependencies for model classes differ between the client application and your projects, execution errors can occur. To access the project pom.xml file in Business Central, select any existing asset in the project and then in the Project Explorer menu on the left side of the screen, click the Customize View gear icon and select Repository View pom.xml . For example, the following Person class dependency appears in both the client and deployed project pom.xml files: <dependency> <groupId>com.sample</groupId> <artifactId>Person</artifactId> <version>1.0.0</version> </dependency> If you added the slf4j dependency to the client application pom.xml file for debug logging, create a simplelogger.properties file on the relevant classpath (for example, in src/main/resources/META-INF in Maven) with the following content: org.slf4j.simpleLogger.defaultLogLevel=debug In your client application, create a .java main class containing the necessary imports and a main() method to load the KIE base, insert facts, and execute the rules. For example, a Person object in a project contains getter and setter methods to set and retrieve the first name, last name, hourly rate, and the wage of a person. 
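A minimal sketch of such a Person class is shown below. The package and field types are assumptions inferred from the com.sample.Person import and the setter calls used in the following examples; the class in your own project may differ:

package com.sample;

import java.io.Serializable;

// Minimal sketch of the Person fact model used by the Wage rule below.
public class Person implements Serializable {

    private static final long serialVersionUID = 1L;

    private String firstName;
    private String lastName;
    private int hourlyRate;
    private int wage;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    public int getHourlyRate() { return hourlyRate; }
    public void setHourlyRate(int hourlyRate) { this.hourlyRate = hourlyRate; }

    public int getWage() { return wage; }
    public void setWage(int wage) { this.wage = wage; }
}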
The following Wage rule in a project calculates the wage and hourly rate values and displays a message based on the result: package com.sample; import com.sample.Person; dialect "java" rule "Wage" when Person(hourlyRate * wage > 100) Person(name : firstName, surname : lastName) then System.out.println("Hello" + " " + name + " " + surname + "!"); System.out.println("You are rich!"); end To test this rule locally outside of KIE Server (if needed), configure the .java class to import KIE services, a KIE container, and a KIE session, and then use the main() method to fire all rules against a defined fact model: Executing rules locally import org.kie.api.KieServices; import org.kie.api.builder.ReleaseId; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; import org.drools.compiler.kproject.ReleaseIdImpl; public class RulesTest { public static final void main(String[] args) { try { // Identify the project in the local repository: ReleaseId rid = new ReleaseIdImpl("com.myspace", "MyProject", "1.0.0"); // Load the KIE base: KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.newKieContainer(rid); KieSession kSession = kContainer.newKieSession(); // Set up the fact model: Person p = new Person(); p.setWage(12); p.setFirstName("Tom"); p.setLastName("Summers"); p.setHourlyRate(10); // Insert the person into the session: kSession.insert(p); // Fire all rules: kSession.fireAllRules(); kSession.dispose(); } catch (Throwable t) { t.printStackTrace(); } } } To test this rule on KIE Server, configure the .java class with the imports and rule execution information similarly to the local example, and additionally specify KIE services configuration and KIE services client details: Executing rules on KIE Server package com.sample; import java.util.ArrayList; import java.util.HashSet; import java.util.List; import java.util.Set; import org.kie.api.command.BatchExecutionCommand; import org.kie.api.command.Command; import org.kie.api.KieServices; import org.kie.api.runtime.ExecutionResults; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.api.model.ServiceResponse; import org.kie.server.client.KieServicesClient; import org.kie.server.client.KieServicesConfiguration; import org.kie.server.client.KieServicesFactory; import org.kie.server.client.RuleServicesClient; import com.sample.Person; public class RulesTest { private static final String containerName = "testProject"; private static final String sessionName = "myStatelessSession"; public static final void main(String[] args) { try { // Define KIE services configuration and client: Set<Class<?>> allClasses = new HashSet<Class<?>>(); allClasses.add(Person.class); String serverUrl = "http://USDHOST:USDPORT/kie-server/services/rest/server"; String username = "USDUSERNAME"; String password = "USDPASSWORD"; KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(serverUrl, username, password); config.setMarshallingFormat(MarshallingFormat.JAXB); config.addExtraClasses(allClasses); KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(config); // Set up the fact model: Person p = new Person(); p.setWage(12); p.setFirstName("Tom"); p.setLastName("Summers"); p.setHourlyRate(10); // Insert Person into the session: KieCommands kieCommands = KieServices.Factory.get().getCommands(); List<Command> commandList = new ArrayList<Command>(); commandList.add(kieCommands.newInsert(p, 
"personReturnId")); // Fire all rules: commandList.add(kieCommands.newFireAllRules("numberOfFiredRules")); BatchExecutionCommand batch = kieCommands.newBatchExecution(commandList, sessionName); // Use rule services client to send request: RuleServicesClient ruleClient = kieServicesClient.getServicesClient(RuleServicesClient.class); ServiceResponse<ExecutionResults> executeResponse = ruleClient.executeCommandsWithResults(containerName, batch); System.out.println("number of fired rules:" + executeResponse.getResult().getValue("numberOfFiredRules")); } catch (Throwable t) { t.printStackTrace(); } } } Run the configured .java class from your project directory. You can run the file in your development platform (such as Red Hat CodeReady Studio) or in the command line. Example Maven execution (within project directory): Example Java execution (within project directory) Review the rule execution status in the command line and in the server log. If any rules do not execute as expected, review the configured rules in the project and the main class configuration to validate the data provided.
[ "<dependency> <groupId>org.drools</groupId> <artifactId>drools-model-compiler</artifactId> <version>USD{rhpam.version}</version> </dependency>", "<!-- For local execution --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>7.67.0.Final-redhat-00024</version> </dependency> <!-- For remote execution on KIE Server --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>7.67.0.Final-redhat-00024</version> </dependency> <!-- For debug logging (optional) --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-simple</artifactId> <version>1.7.25</version> </dependency>", "<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>", "<dependency> <groupId>com.sample</groupId> <artifactId>Person</artifactId> <version>1.0.0</version> </dependency>", "org.slf4j.simpleLogger.defaultLogLevel=debug", "package com.sample; import com.sample.Person; dialect \"java\" rule \"Wage\" when Person(hourlyRate * wage > 100) Person(name : firstName, surname : lastName) then System.out.println(\"Hello\" + \" \" + name + \" \" + surname + \"!\"); System.out.println(\"You are rich!\"); end", "import org.kie.api.KieServices; import org.kie.api.builder.ReleaseId; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; import org.drools.compiler.kproject.ReleaseIdImpl; public class RulesTest { public static final void main(String[] args) { try { // Identify the project in the local repository: ReleaseId rid = new ReleaseIdImpl(\"com.myspace\", \"MyProject\", \"1.0.0\"); // Load the KIE base: KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.newKieContainer(rid); KieSession kSession = kContainer.newKieSession(); // Set up the fact model: Person p = new Person(); p.setWage(12); p.setFirstName(\"Tom\"); p.setLastName(\"Summers\"); p.setHourlyRate(10); // Insert the person into the session: kSession.insert(p); // Fire all rules: kSession.fireAllRules(); kSession.dispose(); } catch (Throwable t) { t.printStackTrace(); } } }", "package com.sample; import java.util.ArrayList; import java.util.HashSet; import java.util.List; import java.util.Set; import org.kie.api.command.BatchExecutionCommand; import org.kie.api.command.Command; import org.kie.api.KieServices; import org.kie.api.runtime.ExecutionResults; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.api.model.ServiceResponse; import org.kie.server.client.KieServicesClient; import org.kie.server.client.KieServicesConfiguration; import org.kie.server.client.KieServicesFactory; import org.kie.server.client.RuleServicesClient; import com.sample.Person; public class RulesTest { private static final String containerName = \"testProject\"; private static final String sessionName = \"myStatelessSession\"; public static final void main(String[] args) { try { // Define KIE services configuration and client: Set<Class<?>> allClasses = new HashSet<Class<?>>(); allClasses.add(Person.class); String serverUrl = \"http://USDHOST:USDPORT/kie-server/services/rest/server\"; String username = \"USDUSERNAME\"; String password = \"USDPASSWORD\"; KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(serverUrl, username, password); config.setMarshallingFormat(MarshallingFormat.JAXB); 
config.addExtraClasses(allClasses); KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(config); // Set up the fact model: Person p = new Person(); p.setWage(12); p.setFirstName(\"Tom\"); p.setLastName(\"Summers\"); p.setHourlyRate(10); // Insert Person into the session: KieCommands kieCommands = KieServices.Factory.get().getCommands(); List<Command> commandList = new ArrayList<Command>(); commandList.add(kieCommands.newInsert(p, \"personReturnId\")); // Fire all rules: commandList.add(kieCommands.newFireAllRules(\"numberOfFiredRules\")); BatchExecutionCommand batch = kieCommands.newBatchExecution(commandList, sessionName); // Use rule services client to send request: RuleServicesClient ruleClient = kieServicesClient.getServicesClient(RuleServicesClient.class); ServiceResponse<ExecutionResults> executeResponse = ruleClient.executeCommandsWithResults(containerName, batch); System.out.println(\"number of fired rules:\" + executeResponse.getResult().getValue(\"numberOfFiredRules\")); } catch (Throwable t) { t.printStackTrace(); } } }", "mvn clean install exec:java -Dexec.mainClass=\"com.sample.app.RulesTest\"", "javac -classpath \"./USDDEPENDENCIES/*:.\" RulesTest.java java -classpath \"./USDDEPENDENCIES/*:.\" RulesTest" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/assets-executing-proc_drl-rules
Chapter 7. Supported integration products
Chapter 7. Supported integration products AMQ Streams 2.1 supports integration with the following Red Hat products. Red Hat Single Sign-On Provides OAuth 2.0 authentication and OAuth 2.0 authorization. For information on the functionality these products can introduce to your AMQ Streams deployment, refer to the product documentation. Additional resources Red Hat Single Sign-On Supported Configurations
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/release_notes_for_amq_streams_2.1_on_rhel/supported-config-str
Chapter 2. Enhancements
Chapter 2. Enhancements The enhancements added in this release are outlined below. 2.1. Kafka enhancements For an overview of the enhancements introduced with: Kafka 2.6.2, refer to the Kafka 2.6.2 Release Notes (applies only to AMQ Streams 1.6.4) Kafka 2.6.1, refer to the Kafka 2.6.1 Release Notes (applies only to AMQ Streams 1.6.4) Kafka 2.6.0, refer to the Kafka 2.6.0 Release Notes 2.2. Kafka Bridge enhancements This release includes the following enhancements to the Kafka Bridge component of AMQ Streams. Retrieve partitions and metadata The Kafka Bridge now supports the following operations: Retrieve a list of partitions for a given topic: GET /topics/{topicname}/partitions Retrieve metadata for a given partition, such as the partition ID, the leader broker, and the number of replicas: GET /topics/{topicname}/partitions/{partitionid} See the Kafka Bridge API reference . Support for Kafka message headers Messages sent using the Kafka Bridge can now include Kafka message headers. In a POST request to the /topics endpoint, you can optionally specify headers in the message payload, which is contained in the request body. Message header values must be in binary format and encoded as Base64. Example request with Kafka message header curl -X POST \ http://localhost:8080/topics/my-topic \ -H 'content-type: application/vnd.kafka.json.v2+json' \ -d '{ "records": [ { "key": "my-key", "value": "sales-lead-0001" "partition": 2 "headers": [ { "key": "key1", "value": "QXBhY2hlIEthZmthIGlzIHRoZSBib21iIQ==" } ] }, ] }' See Requests to the Kafka Bridge 2.3. OAuth 2.0 authentication and authorization This release includes the following enhancements to OAuth 2.0 token-based authentication and authorization. Session re-authentication OAuth 2.0 authentication in AMQ Streams now supports session re-authentication for Kafka brokers. This defines the maximum duration of an authenticated OAuth 2.0 session between a Kafka client and a Kafka broker. Session re-authentication is supported for both types of token validation: fast local JWT and introspection endpoint. To configure session re-authentication, use the new maxSecondsWithoutReauthentication option in the OAuth 2.0 configuration for Kafka brokers. For a specific listener, maxSecondsWithoutReauthentication allows you to: Enable session re-authentication Set the maximum duration, in seconds, of an authenticated session between a Kafka client and a Kafka broker Example configuration for session re-authentication after 1 hour apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: listeners: #... - name: tls port: 9093 type: internal tls: true authentication: type: oauth maxSecondsWithoutReauthentication: 3600 # ... An authenticated session is closed if it exceeds the configured maxSecondsWithoutReauthentication , or if the access token expiry time is reached. Then, the client must log in to the authorization server again, obtain a new access token, and then re-authenticate to the Kafka broker. This will establish a new authenticated session over the existing connection. When re-authentication is required, any operation that is attempted by the client (apart from re-authentication) will cause the broker to terminate the connection. See: Session re-authentication for Kafka brokers and Configuring OAuth 2.0 support for Kafka brokers . JWKS keys refresh interval When configuring Kafka brokers to use fast local JWT token validation, you can now set the jwksMinRefreshPauseSeconds option in the external listener configuration. 
This defines the minimum interval between attempts by the broker to refresh JSON Web Key Set (JWKS) public keys issued by the authorization server. With this release, the Kafka broker will attempt to refresh JWKS keys immediately, without waiting for the regular refresh schedule, if it detects an unknown signing key. Example configuration for a 2-minute pause between attempts to refresh JWKS keys listeners: #... - name: external2 port: 9095 type: loadbalancer tls: false authentication: type: oauth validIssuerUri: < https://<auth-server-address>/auth/realms/external > jwksEndpointUri: < https://<auth-server-address>/auth/realms/external/protocol/openid-connect/certs > userNameClaim: preferred_username tlsTrustedCertificates: - secretName: oauth-server-cert certificate: ca.crt disableTlsHostnameVerification: true jwksExpirySeconds: 360 jwksRefreshSeconds: 300 jwksMinRefreshPauseSeconds: 120 enableECDSA: "true" The refresh schedule for JWKS keys is set in the jwksRefreshSeconds option. When an unknown signing key is encountered, a JWKS keys refresh is scheduled outside of the refresh schedule. The refresh will not start until the time since the last refresh reaches the interval specified in jwksMinRefreshPauseSeconds . jwksMinRefreshPauseSeconds has a default value of 1 . See Configuring OAuth 2.0 support for Kafka brokers . Refreshing grants from Red Hat Single Sign-On New configuration options have been added for OAuth 2.0 token-based authorization through Red Hat Single Sign-On. When configuring Kafka brokers, you can now define the following options related to refreshing grants from Red Hat SSO Authorization Services: grantsRefreshPeriodSeconds : The time between two consecutive grants refresh runs. The default value is 60 . If set to 0 or less, refreshing of grants is disabled. grantsRefreshPoolSize : The number of threads that can fetch grants for the active session in parallel. The default value is 5 . See Using OAuth 2.0 token-based authorization and Configuring OAuth 2.0 authorization support . Detection of permission changes in Red Hat Single Sign-On With this release, the keycloak (Red Hat SSO) authorization regularly checks for changes in permissions for the active sessions. User changes and permissions management changes are now detected in real time. 2.4. Metrics for Kafka Bridge and Cruise Control You can now add metrics configuration to Kafka Bridge and Cruise Control. Example metrics files for Kafka Bridge and Cruise Control are provided with AMQ Streams, including: Custom resource YAML files with metrics configuration Grafana dashboard JSON files With the metrics configuration deployed, and Prometheus and Grafana set up, you can use the example Grafana dashboards for monitoring. Example metrics files provided with AMQ Streams Table 2.1. Example custom resources with metrics configuration Component Custom resource Example YAML file Kafka and ZooKeeper Kafka kafka-metrics.yaml Kafka Connect KafkaConnect and KafkaConnectS2I kafka-connect-metrics.yaml Kafka MirrorMaker 2.0 KafkaMirrorMaker2 kafka-mirror-maker-2-metrics.yaml Kafka Bridge KafkaBridge kafka-bridge-metrics.yaml Cruise Control Kafka kafka-cruise-control-metrics.yaml See Introducing Metrics to Kafka . Note The Prometheus server is not supported as part of the AMQ Streams distribution. However, the Prometheus endpoint and JMX exporter used to expose the metrics are supported. 2.5. 
Dynamic updates for logging changes With this release, changing the logging levels, both inline and external, of most custom resources no longer triggers rolling updates to the Kafka cluster. Logging changes are now applied dynamically (without a restart). This enhancement applies to the following resources: Kafka clusters Kafka Connect and Kafka Connect S2I Kafka Mirror Maker 2.0 Kafka Bridge It does not apply to Mirror Maker or Cruise Control. If you use external logging via a ConfigMap, a rolling update is still triggered when you change a logging appender. For example: log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout See External logging and the Deployment configuration chapter of the Using AMQ Streams on OpenShift guide. 2.6. PodMonitors used for metrics scraping The way that Prometheus metrics are scraped from pods (for Kafka, ZooKeeper, Kafka Connect, and others) has changed in this release. Metrics are now scraped from pods by PodMonitors only, defined in strimzi-pod-monitor.yaml . Previously, this was performed by ServiceMonitors and PodMonitors . ServiceMonitors have been removed from AMQ Streams in this release. You need to upgrade your monitoring stack to use PodMonitors as described in Upgrading your monitoring stack to use PodMonitors , below. As a result of this change, the following elements have been removed from services related to Kafka and ZooKeeper: The tcp-prometheus monitoring port (port 9404) Prometheus annotations This change applies to the following services: cluster-name-zookeeper-client cluster-name-kafka-brokers To add a Prometheus annotation, you should now use the template property in the relevant AMQ Streams custom resource, as described in Customizing OpenShift resources . Upgrading your monitoring stack to use PodMonitors To avoid an interruption to the monitoring of your Kafka cluster, perform the following steps before upgrading to AMQ Streams 1.6. Using the new AMQ Streams 1.6 installation artifacts, apply the strimzi-pod-monitor.yaml file to your AMQ Streams 1.5 cluster: oc apply -f examples/metrics/prometheus-install/strimzi-pod-monitor.yaml Delete the existing ServiceMonitor resources from your AMQ Streams 1.5 cluster. Delete the Secret named additional-scrape-configs . Create a new Secret , also named additional-scrape-configs , from the prometheus-additional.yaml file provided in the AMQ Streams 1.6 installation artifacts. Check that the Prometheus targets for the Prometheus user interface are up and running again. Proceed with the upgrade to AMQ Streams 1.6, starting with Upgrading the Cluster Operator . After completing the upgrade to AMQ Streams 1.6, you can load the example Grafana dashboards for AMQ Streams 1.6. See Introducing Metrics to Kafka . 2.7. Generic listener configuration A GenericKafkaListener schema is introduced in this release. The schema is for the configuration of Kafka listeners in a Kafka resource, and replaces the KafkaListeners schema, which is deprecated. With the GenericKafkaListener schema, you can configure as many listeners as required, as long as their names and ports are unique. The listeners configuration is defined as an array, but the deprecated format is also supported. See GenericKafkaListener schema reference Updating listeners to the new configuration The KafkaListeners schema uses sub-properties for plain , tls and external listeners, with fixed ports for each. 
After a Kafka upgrade, you can convert listeners configured using the KafkaListeners schema into the format of the GenericKafkaListener schema. For example, if you are currently using the following configuration in your Kafka configuration: Old listener configuration listeners: plain: # ... tls: # ... external: type: loadbalancer # ... Convert the listeners into the new format using: New listener configuration listeners: #... - name: plain port: 9092 type: internal tls: false 1 - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: EXTERNAL-LISTENER-TYPE 2 tls: true 1 The TLS property is now required for all listeners. 2 Options: ingress , loadbalancer , nodeport , route . Make sure to use the exact names and port numbers shown. For any additional configuration or overrides properties used with the old format, you need to update them to the new format. Changes introduced to the listener configuration : overrides is merged with the configuration section dnsAnnotations has been renamed annotations preferredAddressType has been renamed preferredNodePortAddressType address has been renamed alternativenames loadBalancerSourceRanges and externalTrafficPolicy move to the listener configuration from the now deprecated template All listeners now support configuring the advertised hostname and port. For example, this configuration: Old additional listener configuration listeners: external: type: loadbalancer authentication: type: tls overrides: bootstrap: dnsAnnotations: #... Changes to: New additional listener configuration listeners: #... - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: annotations: #... Important The name and port numbers shown in the new listener configuration must be used for backwards compatibility. Using any other values will cause renaming of the Kafka listeners and Kubernetes services. 2.8. MirrorMaker 2.0 topic renaming update The MirrorMaker 2.0 architecture supports bidirectional replication by automatically renaming remote topics to represent the source cluster. The name of the originating cluster is prepended to the name of the topic. Optionally, you can now override automatic renaming by adding IdentityReplicationPolicy to the source connector configuration. With this configuration applied, topics retain their original names. apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: #... mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" sourceConnector: config: replication.factor: 1 offset-syncs.topic.replication.factor: 1 sync.topic.acls.enabled: "false" replication.policy.separator: "" replication.policy.class: "io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy" 1 #... 1 Adds a policy that overrides the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name. The override is useful, for example, in an active/passive cluster configuration where you want to make backups or migrate data to another cluster. In either situation, you might not want automatic renaming of remote topics. See Kafka MirrorMaker 2.0 configuration 2.9. Support for hostAliases It is now possible to configure hostAliases when customizing a deployment of Kubernetes templates and pods. Example hostAliases configuration apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect #... spec: # ...
template: pod: hostAliases: - ip: "192.168.1.86" hostnames: - "my-host-1" - "my-host-2" #... If a list of hosts and IPs is specified, they are injected into the /etc/hosts file of the pod. This is especially useful for Kafka Connect or MirrorMaker when a connection outside of the cluster is also requested by users. See PodTemplate schema reference 2.10. Reconciled resource metric A new operator metric provides information about the status of a specified resource, that is, whether or not it was reconciled successfully. Reconciled resource metric definition 2.11. Secret metadata for KafkaUser You can now use template properties for the Secret created by the User Operator. Using KafkaUserTemplate , you can use labels and annotations to configure metadata that defines how the Secret is generated for the KafkaUser resource. An example showing the KafkaUserTemplate apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls template: secret: metadata: labels: label1: value1 annotations: anno1: value1 # ... See KafkaUserTemplate schema reference 2.12. Additional tools in container images The following tools have been added to the AMQ Streams container images: jstack jcmd jmap netstat ( net-tools ) lsof 2.13. Removal of Kafka Exporter service The Kafka Exporter service has been removed from AMQ Streams. This service is no longer required because Prometheus now scrapes the Kafka Exporter metrics directly from the Kafka Exporter pods through the PodMonitor declaration. See Introducing Metrics to Kafka . 2.14. Deprecation of ZooKeeper option in Kafka administrative tools The --zookeeper option was deprecated in the following Kafka administrative tools: bin/kafka-configs.sh bin/kafka-leader-election.sh bin/kafka-topics.sh When using these tools, you should now use the --bootstrap-server option to specify the Kafka broker to connect to. For example: kubectl exec BROKER-POD -c kafka -it -- \ /bin/kafka-topics.sh --bootstrap-server localhost:9092 --list Although the --zookeeper option still works, it will be removed from all the administrative tools in a future Kafka release. This is part of ongoing work in the Apache Kafka project to remove Kafka's dependency on ZooKeeper.
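As a further illustration of the same switch away from --zookeeper, the sketch below runs kafka-configs.sh against a broker with the --bootstrap-server option; the pod name BROKER-POD and the topic my-topic are placeholders, not values taken from this guide.

kubectl exec BROKER-POD -c kafka -it -- \
  /bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic --describe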
[ "GET /topics/{topicname}/partitions", "GET /topics/{topicname}/partitions/{partitionid}", "curl -X POST http://localhost:8080/topics/my-topic -H 'content-type: application/vnd.kafka.json.v2+json' -d '{ \"records\": [ { \"key\": \"my-key\", \"value\": \"sales-lead-0001\" \"partition\": 2 \"headers\": [ { \"key\": \"key1\", \"value\": \"QXBhY2hlIEthZmthIGlzIHRoZSBib21iIQ==\" } ] }, ] }'", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: listeners: # - name: tls port: 9093 type: internal tls: true authentication: type: oauth maxSecondsWithoutReauthentication: 3600 #", "listeners: # - name: external2 port: 9095 type: loadbalancer tls: false authentication: type: oauth validIssuerUri: < https://<auth-server-address>/auth/realms/external > jwksEndpointUri: < https://<auth-server-address>/auth/realms/external/protocol/openid-connect/certs > userNameClaim: preferred_username tlsTrustedCertificates: - secretName: oauth-server-cert certificate: ca.crt disableTlsHostnameVerification: true jwksExpirySeconds: 360 jwksRefreshSeconds: 300 jwksMinRefreshPauseSeconds: 120 enableECDSA: \"true\"", "metrics ├── grafana-dashboards │ ├── strimzi-cruise-control.json │ ├── strimzi-kafka-bridge.json │ ├── strimzi-kafka-connect.json │ ├── strimzi-kafka-exporter.json │ ├── strimzi-kafka-mirror-maker-2.json │ ├── strimzi-kafka.json │ ├── strimzi-operators.json │ └── strimzi-zookeeper.json ├── grafana-install │ └── grafana.yaml ├── prometheus-additional-properties │ └── prometheus-additional.yaml ├── prometheus-alertmanager-config │ └── alert-manager-config.yaml ├── prometheus-install │ ├── alert-manager.yaml │ ├── prometheus-rules.yaml │ ├── prometheus.yaml │ ├── strimzi-pod-monitor.yaml ├── kafka-bridge-metrics.yaml ├── kafka-connect-metrics.yaml ├── kafka-cruise-control-metrics.yaml ├── kafka-metrics.yaml └── kafka-mirror-maker-2-metrics.yaml", "log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout", "apply -f examples/metrics/prometheus-install/strimzi-pod-monitor.yaml", "listeners: plain: # tls: # external: type: loadbalancer #", "listeners: # - name: plain port: 9092 type: internal tls: false 1 - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: EXTERNAL-LISTENER-TYPE 2 tls: true", "listeners: external: type: loadbalancer authentication: type: tls overrides: bootstrap: dnsAnnotations: #", "listeners: # - name: external port: 9094 type:loadbalancer tls: true authentication: type: tls configuration: bootstrap: annotations: #", "apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: config: replication.factor: 1 offset-syncs.topic.replication.factor: 1 sync.topic.acls.enabled: \"false\" replication.policy.separator: \"\" replication.policy.class: \"io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy\" 1 #", "apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect # spec: # template: pod: hostAliases: - ip: \"192.168.1.86\" hostnames: - \"my-host-1\" - \"my-host-2\" #", "strimzi_resource_state{kind=\"Kafka\", name=\"my-cluster\", resource-namespace=\"my-kafka-namespace\"}", "apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls template: secret: metadata: labels: label1: value1 annotations: anno1: value1 #", "exec BROKER-POD -c kafka -it -- /bin/kafka-topics.sh 
--bootstrap-server localhost:9092 --list" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_amq_streams_1.6_on_openshift/enhancements-str
11.5.2. Network/Netmask Directives Format
11.5.2. Network/Netmask Directives Format You can also use the network/netmask directives format for route-interface files. The following is a template for the network/netmask format, with instructions following afterwards: ADDRESS0= 10.10.10.0 is the network address of the remote network or host to be reached. NETMASK0= 255.255.255.0 is the netmask for the network address defined with ADDRESS0= 10.10.10.0 . GATEWAY0= 192.168.1.1 is the default gateway, or an IP address that can be used to reach ADDRESS0= 10.10.10.0 . The following is an example of a route-interface file using the network/netmask directives format. The default gateway is 192.168.0.1 but a leased line or WAN connection is available at 192.168.0.10 . The two static routes are for reaching the 10.10.10.0/24 and 172.16.1.0/24 networks: Subsequent static routes must be numbered sequentially, and must not skip any values. For example, ADDRESS0 , ADDRESS1 , ADDRESS2 , and so on.
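A minimal sketch of putting the format into practice on a Red Hat Enterprise Linux 6 host, assuming the interface is eth0; the file name route-eth0 and the addresses simply mirror the example above.

cat > /etc/sysconfig/network-scripts/route-eth0 <<'EOF'
ADDRESS0=10.10.10.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.0.10
ADDRESS1=172.16.1.10
NETMASK1=255.255.255.0
GATEWAY1=192.168.0.10
EOF
# re-read the interface configuration so the static routes are installed
ifdown eth0 && ifup eth0
# verify that the routes are present
ip route show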
[ "ADDRESS0=10.10.10.0 NETMASK0=255.255.255.0 GATEWAY0=192.168.1.1", "ADDRESS0=10.10.10.0 NETMASK0=255.255.255.0 GATEWAY0=192.168.0.10 ADDRESS1=172.16.1.10 NETMASK1=255.255.255.0 GATEWAY1=192.168.0.10" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-networkscripts-static-routes-network-netmask-directives
Making Open Source More Inclusive
Making Open Source More Inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see " our CTO Chris Wright's message " .
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/making-open-source-more-inclusive
4.11. Displaying Extended GFS Information and Statistics
4.11. Displaying Extended GFS Information and Statistics You can use the gfs_tool command to gather a variety of details about GFS. This section describes typical use of the gfs_tool command for displaying statistics, space usage, and extended status. Usage Displaying Statistics The counters flag displays statistics about a file system. If -c is used, the gfs_tool command continues to run, displaying statistics once per second. Displaying Space Usage The df flag displays a space-usage summary of a given file system. The information is more detailed than a standard df . Displaying Extended Status The stat flag displays extended status information about a file. MountPoint Specifies the file system to which the action applies. File Specifies the file from which to get information. The gfs_tool command provides additional action flags (options) not listed in this section. For more information about other gfs_tool flags, refer to the gfs_tool man page. Examples This example reports extended file system usage about file system /gfs . This example reports extended file status about file /gfs/datafile .
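For completeness, a counters invocation is sketched below against the same /gfs mount point used in the examples; /gfs is only an example mount point, and the continuous form mentioned above (the -c flag) is not shown here because its exact placement should be confirmed in the gfs_tool man page.

# one-shot snapshot of file system statistics
gfs_tool counters /gfs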
[ "gfs_tool counters MountPoint", "gfs_tool df MountPoint", "gfs_tool stat File", "gfs_tool df /gfs", "gfs_tool stat /gfs/datafile" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s1-manage-displaystats
function::egid
function::egid Name function::egid - Returns the effective gid of a target process Synopsis Arguments None Description This function returns the effective gid of a target process
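A minimal sketch of calling the function from a probe, assuming the syscall.open probe point is available on the system; it prints the effective gid of each process that opens a file.

stap -e 'probe syscall.open { printf("%s (pid %d) egid=%d\n", execname(), pid(), egid()) }'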
[ "egid:long()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-egid
Chapter 9. Monitoring the Network Observability Operator
Chapter 9. Monitoring the Network Observability Operator You can use the web console to monitor alerts related to the health of the Network Observability Operator. 9.1. Health dashboards Metrics about health and resource usage of the Network Observability Operator are located in the Observe Dashboards page in the web console. You can view metrics about the health of the Operator in the following categories: Flows per second Sampling Errors last minute Dropped flows per second Flowlogs-pipeline statistics Flowlogs-pipeline statistics views eBPF agent statistics views Operator statistics Resource usage 9.2. Health alerts A health alert banner that directs you to the dashboard can appear on the Network Traffic and Home pages if an alert is triggered. Alerts are generated in the following cases: The NetObservLokiError alert occurs if the flowlogs-pipeline workload is dropping flows because of Loki errors, such as if the Loki ingestion rate limit has been reached. The NetObservNoFlows alert occurs if no flows are ingested for a certain amount of time. The NetObservFlowsDropped alert occurs if the Network Observability eBPF agent hashmap table is full, and the eBPF agent processes flows with degraded performance, or when the capacity limiter is triggered. 9.3. Viewing health information You can access metrics about health and resource usage of the Network Observability Operator from the Dashboards page in the web console. Prerequisites You have the Network Observability Operator installed. You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. Procedure From the Administrator perspective in the web console, navigate to Observe Dashboards . From the Dashboards dropdown, select Netobserv/Health . View the metrics about the health of the Operator that are displayed on the page. 9.3.1. Disabling health alerts You can opt out of health alerting by editing the FlowCollector resource: In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. Add spec.processor.metrics.disableAlerts to disable health alerts, as in the following YAML sample: apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: metrics: disableAlerts: [NetObservLokiError, NetObservNoFlows] 1 1 You can specify one or a list with both types of alerts to disable. 9.4. Creating Loki rate limit alerts for the NetObserv dashboard You can create custom alerting rules for the Netobserv dashboard metrics to trigger alerts when Loki rate limits have been reached. Prerequisites You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. You have the Network Observability Operator installed. Procedure Create a YAML file by clicking the import icon, + . Add an alerting rule configuration to the YAML file. In the YAML sample that follows, an alert is created for when Loki rate limits have been reached: apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: loki-alerts namespace: openshift-monitoring spec: groups: - name: LokiRateLimitAlerts rules: - alert: LokiTenantRateLimit annotations: message: |- {{ USDlabels.job }} {{ USDlabels.route }} is experiencing 429 errors. summary: "At any number of requests are responded with the rate limit error code."
expr: sum(irate(loki_request_duration_seconds_count{status_code="429"}[1m])) by (job, namespace, route) / sum(irate(loki_request_duration_seconds_count[1m])) by (job, namespace, route) * 100 > 0 for: 10s labels: severity: warning Click Create to apply the configuration file to the cluster. 9.5. Using the eBPF agent alert An alert, NetObservAgentFlowsDropped , is triggered when the Network Observability eBPF agent hashmap table is full or when the capacity limiter is triggered. If you see this alert, consider increasing the cacheMaxFlows in the FlowCollector , as shown in the following example. Note Increasing the cacheMaxFlows might increase the memory usage of the eBPF agent. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the Network Observability Operator , select Flow Collector . Select cluster , and then select the YAML tab. Increase the spec.agent.ebpf.cacheMaxFlows value, as shown in the following YAML sample: 1 Increase the cacheMaxFlows value from its value at the time of the NetObservAgentFlowsDropped alert. Additional resources For more information about creating alerts that you can see on the dashboard, see Creating alerting rules for user-defined projects .
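If you prefer the CLI to the console for the cacheMaxFlows change, a sketch using oc patch is shown below; it assumes the FlowCollector resource is named cluster, as in the examples above, and that a JSON merge patch is acceptable in your environment.

oc patch flowcollector cluster --type merge \
  -p '{"spec":{"agent":{"ebpf":{"cacheMaxFlows":200000}}}}'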
[ "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: metrics: disableAlerts: [NetObservLokiError, NetObservNoFlows] 1", "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: loki-alerts namespace: openshift-monitoring spec: groups: - name: LokiRateLimitAlerts rules: - alert: LokiTenantRateLimit annotations: message: |- {{ USDlabels.job }} {{ USDlabels.route }} is experiencing 429 errors. summary: \"At any number of requests are responded with the rate limit error code.\" expr: sum(irate(loki_request_duration_seconds_count{status_code=\"429\"}[1m])) by (job, namespace, route) / sum(irate(loki_request_duration_seconds_count[1m])) by (job, namespace, route) * 100 > 0 for: 10s labels: severity: warning", "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: cacheMaxFlows: 200000 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_observability/network-observability-operator-monitoring
Chapter 7. Adding annotation to the pre-existing backingstores
Chapter 7. Adding annotation to the pre-existing backingstores Adding the correct annotation to the pre-existing backingstores allows the backingstores backed by object gateways (RGWs) to report their actual and free size. The Multicloud Object Gateway (MCG) can retrieve and use this information. This flow is only relevant if RGW is present and in use on the cluster. RGW is used by default only in on-premise platforms such as vSphere. Note If you added the annotations to pre-existing backingstores after upgrading to OpenShift Data Foundation version 4.8, then you do not need to add them after upgrading to 4.9. All backingstores created in version 4.8 and above will already have this annotation by default. Procedure Log in to the OpenShift Container Platform Web Console. Click Home Search . Search for BackingStore in Resources and click on it. Beside the S3-compatible BackingStore, click Action Menu (...) Edit annotations . Add rgw for KEY . Click Save .
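The console steps above are the documented path; as a sketch only, the same annotation could be added from the CLI as shown below, assuming the backingstore lives in the default openshift-storage namespace and that an empty value for the rgw key is acceptable.

oc annotate backingstore <backingstore-name> rgw='' -n openshift-storage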
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/upgrading_to_openshift_data_foundation/adding-annotation-to-the-pre-existing-backingstores_rhodf
function::task_time_string
function::task_time_string Name function::task_time_string - Human readable string of task time usage Synopsis Arguments None Description Returns a human readable string showing the user and system time the current task has used up to now. For example " usr: 0m12.908s, sys: 1m6.851s " .
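A minimal sketch of using the function, assuming the syscall.exit_group probe point is available; it prints the accumulated user and system time of each process as it exits.

stap -e 'probe syscall.exit_group { printf("%s: %s\n", execname(), task_time_string()) }'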
[ "task_time_string:string()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-task-time-string
Chapter 3. Integrating with PagerDuty
Chapter 3. Integrating with PagerDuty If you are using PagerDuty , you can forward alerts from Red Hat Advanced Cluster Security for Kubernetes to PagerDuty. The following steps represent a high-level workflow for integrating Red Hat Advanced Cluster Security for Kubernetes with PagerDuty: Add a new API service in PagerDuty and get the integration key. Use the integration key to set up notifications in Red Hat Advanced Cluster Security for Kubernetes. Identify the policies you want to send notifications for, and update the notification settings for those policies. 3.1. Configuring PagerDuty Start integrating with PagerDuty by creating a new service and by getting the integration key. Procedure Go to Configuration Services . Select Add Services . Under General Settings , specify a Name and Description . Under Integration Setting , click Use our API Directly with Events v2 API selected for the Integration Type drop-down menu. Under Incident Settings , select an Escalation Policy , and configure notification settings and incident timeouts. Accept default settings for Incident Behavior and Alert Grouping , or configure them as required. Click Add Service . From the Service Details page, make note of the Integration Key . 3.2. Configuring Red Hat Advanced Cluster Security for Kubernetes Create a new integration in Red Hat Advanced Cluster Security for Kubernetes by using the integration key. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select PagerDuty . Click New Integration ( add icon). Enter a name for Integration Name . Enter the integration key in the PagerDuty integration key field. Click Test to validate that the integration with PagerDuty is working. Click Create to create the configuration. 3.3. Configuring policy notifications Enable alert notifications for system policies. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Select one or more policies for which you want to send alerts. Under Bulk actions , select Enable notification . In the Enable notification window, select the PagerDuty notifier. Note If you have not configured any other integrations, the system displays a message that no notifiers are configured. Click Enable . Note Red Hat Advanced Cluster Security for Kubernetes sends notifications on an opt-in basis. To receive notifications, you must first assign a notifier to the policy. Notifications are only sent once for a given alert. If you have assigned a notifier to a policy, you will not receive a notification unless a violation generates a new alert. Red Hat Advanced Cluster Security for Kubernetes creates a new alert for the following scenarios: A policy violation occurs for the first time in a deployment. A runtime-phase policy violation occurs in a deployment after you resolved the runtime alert for a policy in that deployment.
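Outside of the RHACS Test button, you can also exercise the integration key directly against the PagerDuty Events v2 API; the sketch below is a manual check only (the routing key placeholder and payload values are not taken from this guide) that should open a test incident on the service.

curl -s -X POST https://events.pagerduty.com/v2/enqueue \
  -H 'Content-Type: application/json' \
  -d '{
        "routing_key": "<integration-key>",
        "event_action": "trigger",
        "payload": {
          "summary": "Test alert for the RHACS integration",
          "source": "rhacs-test",
          "severity": "info"
        }
      }'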
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/integrating/integrate-with-pagerduty
Appendix E. Audit Events
Appendix E. Audit Events This Appendix provides individual audit events and their parameter description and format. Every audit event in the log is accompanied by the following information: The Java identifier of the thread. For example: The time stamp the event occurred at. For example: The log source (14 is SIGNED_AUDIT): The current log level (6 is Security-related events; see the Log Levels (Message Categories) section in the Red Hat Certificate System Planning, Installation, and Deployment Guide). For example: The information about the log event (which is log event specific; see Section E.1, "Audit Event Descriptions" for information about each field in a particular log event). For example: E.1. Audit Event Descriptions The following lists the audit events provided in Certificate System:
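Because every event carries the AuditEvent and Outcome fields shown above, the signed audit log can be filtered with ordinary text tools; the sketch below assumes the default log location for a CA instance, so adjust the instance name and path for your deployment.

# list failed operations recorded in the signed audit log
grep "Outcome=Failure" /var/log/pki/<instance_name>/ca/signedAudit/ca_audit
# list processed certificate requests
grep "AuditEvent=CERT_REQUEST_PROCESSED" /var/log/pki/<instance_name>/ca/signedAudit/ca_audit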
[ "0.localhost-startStop-1", "[21/Jan/2019:17:53:00 IST]", "[14]", "[6]", "[AuditEvent=AUDIT_LOG_STARTUP][SubjectID=USDSystemUSD][Outcome=Success] audit function startup", "####################### SIGNED AUDIT EVENTS ############################# # Common fields: # - Outcome: \"Success\" or \"Failure\" # - SubjectID: The UID of the user responsible for the operation # \"USDSystemUSD\" or \"SYSTEM\" if system-initiated operation (e.g. log signing). # ######################################################################### # Required Audit Events # # Event: ACCESS_SESSION_ESTABLISH with [Outcome=Failure] # Description: This event is used when access session failed to establish. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - ClientIP: Client IP address. # - ServerIP: Server IP address. # - SubjectID: Client certificate subject DN. # - Outcome: Failure # - Info: Failure reason. # LOGGING_SIGNED_AUDIT_ACCESS_SESSION_ESTABLISH_FAILURE= <type=ACCESS_SESSION_ESTABLISH>:[AuditEvent=ACCESS_SESSION_ESTABLISH]{0} access session establish failure # # Event: ACCESS_SESSION_ESTABLISH with [Outcome=Success] # Description: This event is used when access session was established successfully. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - ClientIP: Client IP address. # - ServerIP: Server IP address. # - SubjectID: Client certificate subject DN. # - Outcome: Success # LOGGING_SIGNED_AUDIT_ACCESS_SESSION_ESTABLISH_SUCCESS= <type=ACCESS_SESSION_ESTABLISH>:[AuditEvent=ACCESS_SESSION_ESTABLISH]{0} access session establish success # # Event: ACCESS_SESSION_TERMINATED # Description: This event is used when access session was terminated. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - ClientIP: Client IP address. # - ServerIP: Server IP address. # - SubjectID: Client certificate subject DN. # - Info: The TLS Alert received from NSS # - Outcome: Success # - Info: The TLS Alert received from NSS # LOGGING_SIGNED_AUDIT_ACCESS_SESSION_TERMINATED= <type=ACCESS_SESSION_TERMINATED>:[AuditEvent=ACCESS_SESSION_TERMINATED]{0} access session terminated # # Event: AUDIT_LOG_SIGNING # Description: This event is used when a signature on the audit log is generated (same as \"flush\" time). # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - SubjectID: Predefined to be \"USDSystemUSD\" because this operation # associates with no user. # - Outcome: Success # - sig: The base-64 encoded signature of the buffer just flushed. # LOGGING_SIGNED_AUDIT_AUDIT_LOG_SIGNING_3=[AuditEvent=AUDIT_LOG_SIGNING][SubjectID={0}][Outcome={1}] signature of audit buffer just flushed: sig: {2} # # Event: AUDIT_LOG_STARTUP # Description: This event is used at audit function startup. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - SubjectID: USDSystemUSD # - Outcome: # LOGGING_SIGNED_AUDIT_AUDIT_LOG_STARTUP_2=<type=AUDIT_LOG_STARTUP>:[AuditEvent=AUDIT_LOG_STARTUP][SubjectID={0}][Outcome={1}] audit function startup # # Event: AUTH with [Outcome=Failure] # Description: This event is used when authentication fails. # In case of TLS-client auth, only webserver env can pick up the TLS violation. # CS authMgr can pick up certificate mismatch, so this event is used. 
# Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - SubjectID: # - Outcome: Failure # (obviously, if authentication failed, you won't have a valid SubjectID, so # in this case, SubjectID should be USDUnidentifiedUSD) # - AuthMgr: The authentication manager instance name that did # this authentication. # - AttemptedCred: The credential attempted and failed. # LOGGING_SIGNED_AUDIT_AUTH_FAIL=<type=AUTH>:[AuditEvent=AUTH]{0} authentication failure # # Event: AUTH with [Outcome=Success] # Description: This event is used when authentication succeeded. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - SubjectID: id of user who has been authenticated # - Outcome: Success # - AuthMgr: The authentication manager instance name that did # this authentication. # LOGGING_SIGNED_AUDIT_AUTH_SUCCESS=<type=AUTH>:[AuditEvent=AUTH]{0} authentication success # # Event: AUTHZ with [Outcome=Failure] # Description: This event is used when authorization has failed. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - SubjectID: id of user who has failed to be authorized for an action # - Outcome: Failure # - aclResource: The ACL resource ID as defined in ACL resource list. # - Op: One of the operations as defined with the ACL statement # e.g. \"read\" for an ACL statement containing \"(read,write)\". # - Info: # LOGGING_SIGNED_AUDIT_AUTHZ_FAIL=<type=AUTHZ>:[AuditEvent=AUTHZ]{0} authorization failure # # Event: AUTHZ with [Outcome=Success] # Description: This event is used when authorization is successful. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - SubjectID: id of user who has been authorized for an action # - Outcome: Success # - aclResource: The ACL resource ID as defined in ACL resource list. # - Op: One of the operations as defined with the ACL statement # e.g. \"read\" for an ACL statement containing \"(read,write)\". # LOGGING_SIGNED_AUDIT_AUTHZ_SUCCESS=<type=AUTHZ>:[AuditEvent=AUTHZ]{0} authorization success # # Event: CERT_PROFILE_APPROVAL # Description: This event is used when an agent approves/disapproves a certificate profile set by the # administrator for automatic approval. # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: id of the CA agent who approved the certificate enrollment profile # - Outcome: # - ProfileID: One of the profiles defined by the administrator # and to be approved by an agent. # - Op: \"approve\" or \"disapprove\". # LOGGING_SIGNED_AUDIT_CERT_PROFILE_APPROVAL_4=<type=CERT_PROFILE_APPROVAL>:[AuditEvent=CERT_PROFILE_APPROVAL][SubjectID={0}][Outcome={1}][ProfileID={2}][Op={3}] certificate profile approval # # Event: CERT_REQUEST_PROCESSED # Description: This event is used when certificate request has just been through the approval process. # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: The UID of the agent who approves, rejects, or cancels # the certificate request. # - Outcome: # - ReqID: The request ID. # - InfoName: \"certificate\" (in case of approval), \"rejectReason\" # (in case of reject), or \"cancelReason\" (in case of cancel) # - InfoValue: The certificate (in case of success), a reject reason in # text, or a cancel reason in text. 
# - CertSerialNum: # LOGGING_SIGNED_AUDIT_CERT_REQUEST_PROCESSED=<type=CERT_REQUEST_PROCESSED>:[AuditEvent=CERT_REQUEST_PROCESSED]{0} certificate request processed # # Event: CERT_SIGNING_INFO # Description: This event indicates which key is used to sign certificates. # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: USDSystemUSD # - Outcome: Success # - SKI: Subject Key Identifier of the certificate signing certificate # - AuthorityID: (applicable only to lightweight CA) # LOGGING_SIGNED_AUDIT_CERT_SIGNING_INFO=<type=CERT_SIGNING_INFO>:[AuditEvent=CERT_SIGNING_INFO]{0} certificate signing info # # Event: CERT_STATUS_CHANGE_REQUEST # Description: This event is used when a certificate status change request (e.g. revocation) # is made (before approval process). # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: id of uer who performed the action # - Outcome: # - ReqID: The request ID. # - CertSerialNum: The serial number (in hex) of the certificate to be revoked. # - RequestType: \"revoke\", \"on-hold\", \"off-hold\" # LOGGING_SIGNED_AUDIT_CERT_STATUS_CHANGE_REQUEST=<type=CERT_STATUS_CHANGE_REQUEST>:[AuditEvent=CERT_STATUS_CHANGE_REQUEST]{0} certificate revocation/unrevocation request made # # Event: CERT_STATUS_CHANGE_REQUEST_PROCESSED # Description: This event is used when certificate status is changed (revoked, expired, on-hold, # off-hold). # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: The UID of the agent that processed the request. # - Outcome: # - ReqID: The request ID. # - RequestType: \"revoke\", \"on-hold\", \"off-hold\" # - Approval: \"complete\", \"rejected\", or \"canceled\" # (note that \"complete\" means \"approved\") # - CertSerialNum: The serial number (in hex). # - RevokeReasonNum: One of the following number: # reason number reason # -------------------------------------- # 0 Unspecified # 1 Key compromised # 2 CA key compromised (should not be used) # 3 Affiliation changed # 4 Certificate superceded # 5 Cessation of operation # 6 Certificate is on-hold # - Info: # LOGGING_SIGNED_AUDIT_CERT_STATUS_CHANGE_REQUEST_PROCESSED=<type=CERT_STATUS_CHANGE_REQUEST_PROCESSED>:[AuditEvent=CERT_STATUS_CHANGE_REQUEST_PROCESSED]{0} certificate status change request processed # # Event: CLIENT_ACCESS_SESSION_ESTABLISH with [Outcome=Failure] # Description: This event is when access session failed to establish when Certificate System acts as client. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - ClientHost: Client hostname. # - ServerHost: Server hostname. # - ServerPort: Server port. # - SubjectID: SYSTEM # - Outcome: Failure # - Info: # LOGGING_SIGNED_AUDIT_CLIENT_ACCESS_SESSION_ESTABLISH_FAILURE= <type=CLIENT_ACCESS_SESSION_ESTABLISH>:[AuditEvent=CLIENT_ACCESS_SESSION_ESTABLISH]{0} access session failed to establish when Certificate System acts as client # # Event: CLIENT_ACCESS_SESSION_ESTABLISH with [Outcome=Success] # Description: This event is used when access session was established successfully when # Certificate System acts as client. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - ClientHost: Client hostname. # - ServerHost: Server hostname. # - ServerPort: Server port. 
# - SubjectID: SYSTEM # - Outcome: Success # LOGGING_SIGNED_AUDIT_CLIENT_ACCESS_SESSION_ESTABLISH_SUCCESS= <type=CLIENT_ACCESS_SESSION_ESTABLISH>:[AuditEvent=CLIENT_ACCESS_SESSION_ESTABLISH]{0} access session establish successfully when Certificate System acts as client # # Event: CLIENT_ACCESS_SESSION_TERMINATED # Description: This event is used when access session was terminated when Certificate System acts as client. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - ClientHost: Client hostname. # - ServerHost: Server hostname. # - ServerPort: Server port. # - SubjectID: SYSTEM # - Outcome: Success # - Info: The TLS Alert received from NSS # LOGGING_SIGNED_AUDIT_CLIENT_ACCESS_SESSION_TERMINATED= <type=CLIENT_ACCESS_SESSION_TERMINATED>:[AuditEvent=CLIENT_ACCESS_SESSION_TERMINATED]{0} access session terminated when Certificate System acts as client # # Event: CMC_REQUEST_RECEIVED # Description: This event is used when a CMC request is received. # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: The UID of user that triggered this event. # If CMC requests is signed by an agent, SubjectID should # be that of the agent. # In case of an unsigned request, it would bear USDUnidentifiedUSD. # - Outcome: # - CMCRequest: Base64 encoding of the CMC request received # LOGGING_SIGNED_AUDIT_CMC_REQUEST_RECEIVED_3=<type=CMC_REQUEST_RECEIVED>:[AuditEvent=CMC_REQUEST_RECEIVED][SubjectID={0}][Outcome={1}][CMCRequest={2}] CMC request received # # Event: CMC_RESPONSE_SENT # Description: This event is used when a CMC response is sent. # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: The UID of user that triggered this event. # - Outcome: # - CMCResponse: Base64 encoding of the CMC response sent # LOGGING_SIGNED_AUDIT_CMC_RESPONSE_SENT_3=<type=CMC_RESPONSE_SENT>:[AuditEvent=CMC_RESPONSE_SENT][SubjectID={0}][Outcome={1}][CMCResponse={2}] CMC response sent # # Event: CMC_SIGNED_REQUEST_SIG_VERIFY # Description: This event is used when agent signed CMC certificate requests or revocation requests # are submitted and signature is verified. # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: the user who signed the CMC request (success case) # - Outcome: # - ReqType: The request type (enrollment, or revocation). # - CertSubject: The certificate subject name of the certificate request. # - SignerInfo: A unique String representation for the signer. # LOGGING_SIGNED_AUDIT_CMC_SIGNED_REQUEST_SIG_VERIFY=<type=CMC_SIGNED_REQUEST_SIG_VERIFY>:[AuditEvent=CMC_SIGNED_REQUEST_SIG_VERIFY]{0} agent signed CMC request signature verification # # Event: CMC_USER_SIGNED_REQUEST_SIG_VERIFY # Description: This event is used when CMC (user-signed or self-signed) certificate requests or revocation requests # are submitted and signature is verified. # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: the user who signed the CMC request (success case) # - Outcome: # - ReqType: The request type (enrollment, or revocation). # - CertSubject: The certificate subject name of the certificate request. # - CMCSignerInfo: A unique String representation for the CMC request signer. 
# - info: # LOGGING_SIGNED_AUDIT_CMC_USER_SIGNED_REQUEST_SIG_VERIFY_FAILURE=<type=CMC_USER_SIGNED_REQUEST_SIG_VERIFY>:[AuditEvent=CMC_USER_SIGNED_REQUEST_SIG_VERIFY]{0} User signed CMC request signature verification failure LOGGING_SIGNED_AUDIT_CMC_USER_SIGNED_REQUEST_SIG_VERIFY_SUCCESS=<type=CMC_USER_SIGNED_REQUEST_SIG_VERIFY>:[AuditEvent=CMC_USER_SIGNED_REQUEST_SIG_VERIFY]{0} User signed CMC request signature verification success # # Event: CONFIG_ACL # Description: This event is used when configuring ACL information. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - SubjectID: id of administrator who performed the action # - Outcome: # - ParamNameValPairs: A name-value pair # (where name and value are separated by the delimiter ;;) # separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_ACL_3=<type=CONFIG_ACL>:[AuditEvent=CONFIG_ACL][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] ACL configuration parameter(s) change # # Event: CONFIG_AUTH # Description: This event is used when configuring authentication. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - SubjectID: id of administrator who performed the action # - Outcome: # - ParamNameValPairs: A name-value pair # (where name and value are separated by the delimiter ;;) # separated by + (if more than one name-value pair) of config params changed. # --- Password MUST NOT be logged --- # LOGGING_SIGNED_AUDIT_CONFIG_AUTH_3=<type=CONFIG_AUTH>:[AuditEvent=CONFIG_AUTH][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] authentication configuration parameter(s) change # # Event: CONFIG_CERT_PROFILE # Description: This event is used when configuring certificate profile # (general settings and certificate profile). # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: id of administrator who performed the action # - Outcome: # - ParamNameValPairs: A name-value pair # (where name and value are separated by the delimiter ;;) # separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_CERT_PROFILE_3=<type=CONFIG_CERT_PROFILE>:[AuditEvent=CONFIG_CERT_PROFILE][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] certificate profile configuration parameter(s) change # # Event: CONFIG_CRL_PROFILE # Description: This event is used when configuring CRL profile # (extensions, frequency, CRL format). # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: id of administrator who performed the action # - Outcome: # - ParamNameValPairs: A name-value pair # (where name and value are separated by the delimiter ;;) # separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_CRL_PROFILE_3=<type=CONFIG_CRL_PROFILE>:[AuditEvent=CONFIG_CRL_PROFILE][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] CRL profile configuration parameter(s) change # # Event: CONFIG_DRM # Description: This event is used when configuring KRA. # This includes key recovery scheme, change of any secret component. # Applicable subsystems: KRA # Enabled by default: Yes # Fields: # - SubjectID: id of administrator who performed the action # - Outcome: # - ParamNameValPairs A name-value pair # (where name and value are separated by the delimiter ;;) # separated by + (if more than one name-value pair) of config params changed. 
# --- secret component (password) MUST NOT be logged --- # LOGGING_SIGNED_AUDIT_CONFIG_DRM_3=<type=CONFIG_DRM>:[AuditEvent=CONFIG_DRM][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] DRM configuration parameter(s) change # # Event: CONFIG_OCSP_PROFILE # Description: This event is used when configuring OCSP profile # (everything under Online Certificate Status Manager). # Applicable subsystems: OCSP # Enabled by default: Yes # Fields: # - SubjectID: id of administrator who performed the action # - Outcome: # - ParamNameValPairs: A name-value pair # (where name and value are separated by the delimiter ;;) # separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_OCSP_PROFILE_3=<type=CONFIG_OCSP_PROFILE>:[AuditEvent=CONFIG_OCSP_PROFILE][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] OCSP profile configuration parameter(s) change # # Event: CONFIG_ROLE # Description: This event is used when configuring role information. # This includes anything under users/groups, add/remove/edit a role, etc. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - SubjectID: id of administrator who performed the action # - Outcome: # - ParamNameValPairs: A name-value pair # (where name and value are separated by the delimiter ;;) # separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_ROLE=<type=CONFIG_ROLE>:[AuditEvent=CONFIG_ROLE]{0} role configuration parameter(s) change # # Event: CONFIG_SERIAL_NUMBER # Description: This event is used when configuring serial number ranges # (when requesting a serial number range when cloning, for example). # Applicable subsystems: CA, KRA # Enabled by default: Yes # Fields: # - SubjectID: id of administrator who performed the action # - Outcome: # - ParamNameValPairs: A name-value pair # (where name and value are separated by the delimiter ;;) # separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_SERIAL_NUMBER_1=<type=CONFIG_SERIAL_NUMBER>:[AuditEvent=CONFIG_SERIAL_NUMBER][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] serial number range update # # Event: CONFIG_SIGNED_AUDIT # Description: This event is used when configuring signedAudit. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - SubjectID: id of administrator who performed the action # - Outcome: # - ParamNameValPairs: A name-value pair # (where name and value are separated by the delimiter ;;) # separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_SIGNED_AUDIT=<type=CONFIG_SIGNED_AUDIT>:[AuditEvent=CONFIG_SIGNED_AUDIT]{0} signed audit configuration parameter(s) change # # Event: CONFIG_TRUSTED_PUBLIC_KEY # Description: This event is used when: # 1. \"Manage Certificate\" is used to edit the trustness of certificates # and deletion of certificates # 2. \"Certificate Setup Wizard\" is used to import CA certificates into the # certificate database (Although CrossCertificatePairs are stored # within internaldb, audit them as well) # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - SubjectID: ID of administrator who performed this configuration # - Outcome: # - ParamNameValPairs: A name-value pair # (where name and value are separated by the delimiter ;;) # separated by + (if more than one name-value pair) of config params changed. 
# LOGGING_SIGNED_AUDIT_CONFIG_TRUSTED_PUBLIC_KEY=<type=CONFIG_TRUSTED_PUBLIC_KEY>:[AuditEvent=CONFIG_TRUSTED_PUBLIC_KEY]{0} certificate database configuration # # Event: CRL_SIGNING_INFO # Description: This event indicates which key is used to sign CRLs. # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: USDSystemUSD # - Outcome: # - SKI: Subject Key Identifier of the CRL signing certificate # LOGGING_SIGNED_AUDIT_CRL_SIGNING_INFO=<type=CRL_SIGNING_INFO>:[AuditEvent=CRL_SIGNING_INFO]{0} CRL signing info # # Event: DELTA_CRL_GENERATION # Description: This event is used when delta CRL generation is complete. # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: USDUnidentifiedUSD # - Outcome: \"Success\" when delta CRL is generated successfully, \"Failure\" otherwise. # - CRLnum: The CRL number that identifies the CRL # - Info: # - FailureReason: # LOGGING_SIGNED_AUDIT_DELTA_CRL_GENERATION=<type=DELTA_CRL_GENERATION>:[AuditEvent=DELTA_CRL_GENERATION]{0} Delta CRL generation # # Event: FULL_CRL_GENERATION # Description: This event is used when full CRL generation is complete. # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: USDSystemUSD # - Outcome: \"Success\" when full CRL is generated successfully, \"Failure\" otherwise. # - CRLnum: The CRL number that identifies the CRL # - Info: # - FailureReason: # LOGGING_SIGNED_AUDIT_FULL_CRL_GENERATION=<type=FULL_CRL_GENERATION>:[AuditEvent=FULL_CRL_GENERATION]{0} Full CRL generation # # Event: PROFILE_CERT_REQUEST # Description: This event is used when a profile certificate request is made (before approval process). # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: The UID of user that triggered this event. # If CMC enrollment requests signed by an agent, SubjectID should # be that of the agent. # - Outcome: # - CertSubject: The certificate subject name of the certificate request. # - ReqID: The certificate request ID. # - ProfileID: One of the certificate profiles defined by the # administrator. # LOGGING_SIGNED_AUDIT_PROFILE_CERT_REQUEST_5=<type=PROFILE_CERT_REQUEST>:[AuditEvent=PROFILE_CERT_REQUEST][SubjectID={0}][Outcome={1}][ReqID={2}][ProfileID={3}][CertSubject={4}] certificate request made with certificate profiles # # Event: PROOF_OF_POSSESSION # Description: This event is used for proof of possession during certificate enrollment processing. # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: id that represents the authenticated user # - Outcome: # - Info: some information on when/how it occurred # LOGGING_SIGNED_AUDIT_PROOF_OF_POSSESSION_3=<type=PROOF_OF_POSSESSION>:[AuditEvent=PROOF_OF_POSSESSION][SubjectID={0}][Outcome={1}][Info={2}] proof of possession # # Event: OCSP_ADD_CA_REQUEST_PROCESSED # Description: This event is used when an add CA request to the OCSP Responder is processed. # Applicable subsystems: OCSP # Enabled by default: Yes # Fields: # - SubjectID: OCSP administrator user id # - Outcome: \"Success\" when CA is added successfully, \"Failure\" otherwise. # - CASubjectDN: The subject DN of the leaf CA cert in the chain. # LOGGING_SIGNED_AUDIT_OCSP_ADD_CA_REQUEST_PROCESSED=<type=OCSP_ADD_CA_REQUEST_PROCESSED>:[AuditEvent=OCSP_ADD_CA_REQUEST_PROCESSED]{0} Add CA for OCSP Responder # # Event: OCSP_GENERATION # Description: This event is used when an OCSP response generated is complete. 
# Applicable subsystems: CA, OCSP # Enabled by default: Yes # Fields: # - SubjectID: USDNonRoleUserUSD # - Outcome: \"Success\" when OCSP response is generated successfully, \"Failure\" otherwise. # - FailureReason: # LOGGING_SIGNED_AUDIT_OCSP_GENERATION=<type=OCSP_GENERATION>:[AuditEvent=OCSP_GENERATION]{0} OCSP response generation # # Event: OCSP_REMOVE_CA_REQUEST_PROCESSED with [Outcome=Failure] # Description: This event is used when a remove CA request to the OCSP Responder is processed and failed. # Applicable subsystems: OCSP # Enabled by default: Yes # Fields: # - SubjectID: OCSP administrator user id # - Outcome: Failure # - CASubjectDN: The subject DN of the leaf CA certificate in the chain. # LOGGING_SIGNED_AUDIT_OCSP_REMOVE_CA_REQUEST_PROCESSED_FAILURE=<type=OCSP_REMOVE_CA_REQUEST_PROCESSED>:[AuditEvent=OCSP_REMOVE_CA_REQUEST_PROCESSED]{0} Remove CA for OCSP Responder has failed # # Event: OCSP_REMOVE_CA_REQUEST_PROCESSED with [Outcome=Success] # Description: This event is used when a remove CA request to the OCSP Responder is processed successfully. # Applicable subsystems: OCSP # Enabled by default: Yes # Fields: # - SubjectID: OCSP administrator user id # - Outcome: \"Success\" when CA is removed successfully, \"Failure\" otherwise. # - CASubjectDN: The subject DN of the leaf CA certificate in the chain. # LOGGING_SIGNED_AUDIT_OCSP_REMOVE_CA_REQUEST_PROCESSED_SUCCESS=<type=OCSP_REMOVE_CA_REQUEST_PROCESSED>:[AuditEvent=OCSP_REMOVE_CA_REQUEST_PROCESSED]{0} Remove CA for OCSP Responder is successful # # Event: OCSP_SIGNING_INFO # Description: This event indicates which key is used to sign OCSP responses. # Applicable subsystems: CA, OCSP # Enabled by default: Yes # Fields: # - SubjectID: USDSystemUSD # - Outcome: # - SKI: Subject Key Identifier of the OCSP signing certificate # - AuthorityID: (applicable only to lightweight CA) # LOGGING_SIGNED_AUDIT_OCSP_SIGNING_INFO=<type=OCSP_SIGNING_INFO>:[AuditEvent=OCSP_SIGNING_INFO]{0} OCSP signing info # # Event: ROLE_ASSUME # Description: This event is used when a user assumes a role. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - SubjectID: # - Outcome: # - Role: One of the valid roles: # \"Administrators\", \"Certificate Manager Agents\", or \"Auditors\". # Note that customized role names can be used once configured. # LOGGING_SIGNED_AUDIT_ROLE_ASSUME=<type=ROLE_ASSUME>:[AuditEvent=ROLE_ASSUME]{0} assume privileged role # # Event: SECURITY_DOMAIN_UPDATE # Description: This event is used when updating contents of security domain # (add/remove a subsystem). # Applicable subsystems: CA # Enabled by default: Yes # Fields: # - SubjectID: CA administrator user ID # - Outcome: # - ParamNameValPairs: A name-value pair # (where name and value are separated by the delimiter ;;) # separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_SECURITY_DOMAIN_UPDATE_1=<type=SECURITY_DOMAIN_UPDATE>:[AuditEvent=SECURITY_DOMAIN_UPDATE][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] security domain update # # Event: SELFTESTS_EXECUTION # Description: This event is used when self tests are run. # Applicable subsystems: CA, KRA, OCSP, TKS, TPS # Enabled by default: Yes # Fields: # - SubjectID: USDSystemUSD # - Outcome: # LOGGING_SIGNED_AUDIT_SELFTESTS_EXECUTION_2=<type=SELFTESTS_EXECUTION>:[AuditEvent=SELFTESTS_EXECUTION][SubjectID={0}][Outcome={1}] self tests execution (see selftests.log for details)" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/audit_events
3.4. Service Operations and States
3.4. Service Operations and States The following operations apply to both services and virtual machines, except for the migrate operation, which only works with virtual machines. 3.4.1. Service Operations The service operations are available commands a user may call to apply one of five available actions, defined in the following list. enable - start the service, optionally on a preferred target and optionally according to failover domain rules. In absence of either, the local host where clusvcadm is run will start the service. If the original start fails, the service behaves as though a relocate operation was requested (see below). If the operation succeeds, the service is placed in the started state. disable - stop the service and place into the disabled state. This is the only permissible operation when a service is in the failed state. relocate - move the service to another node. Optionally, the administrator may specify a preferred node to receive the service, but the inability for the service to run on that host (for example, if the service fails to start or the host is offline) does not prevent relocation, and another node is chosen. RGManager attempts to start the service on every permissible node in the cluster. If no permissible target node in the cluster successfully starts the service, the relocation fails and the service is attempted to be restarted on the original owner. If the original owner cannot restart the service, the service is placed in the stopped state. stop - stop the service and place into the stopped state. migrate - migrate the virtual machine to another node. The administrator must specify a target node. Depending on the failure, a failure to migrate may result with the virtual machine in the failed state or in the started state on the original owner. 3.4.1.1. The freeze Operation RGManager can freeze services. Doing so allows users to upgrade RGManager, CMAN, or any other software on the system while minimizing down-time of RGManager-managed services. It also allows maintenance of parts of RGManager services. For example, if you have a database and a web server in a single RGManager service, you may freeze the RGManager service, stop the database, perform maintenance, restart the database, and unfreeze the service. 3.4.1.1.1. Service Behaviors when Frozen status checks are disabled start operations are disabled stop operations are disabled Failover will not occur (even if you power off the service owner) Important Failure to follow these guidelines may result in resources being allocated on multiple hosts. You must not stop all instances of RGManager when a service is frozen unless you plan to reboot the hosts prior to restarting RGManager. You must not unfreeze a service until the reported owner of the service rejoins the cluster and restarts RGManager. 3.4.2. Service States The following list defines the states of services managed by RGManager. disabled - The service will remain in the disabled state until either an administrator re-enables the service or the cluster loses quorum (at which point, the autostart parameter is evaluated). An administrator may enable the service from this state. failed - The service is presumed dead. This state occurs whenever a resource's stop operation fails. An administrator must verify that there are no allocated resources (mounted file systems, and so on) prior to issuing a disable request. The only action which can take place from this state is disable. 
stopped - When in the stopped state, the service will be evaluated for starting after the service or node transition. This is a very temporary measure. An administrator may disable or enable the service from this state. recovering - The cluster is trying to recover the service. An administrator may disable the service to prevent recovery if desired. started - If a service status check fails, recover it according to the service recovery policy. If the host running the service fails, recover it following failover domain and exclusive service rules. An administrator may relocate, stop, disable, and (with virtual machines) migrate the service from this state. Note Other states, such as starting and stopping are special transitional states of the started state.
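The operations described above map onto clusvcadm command-line flags; the sketch below assumes a service named service:web and node names of the form nodeN.example.com, and is an illustration rather than a substitute for the clusvcadm(8) man page.

clusvcadm -e service:web -m node2.example.com   # enable on a preferred member
clusvcadm -r service:web -m node3.example.com   # relocate to another node
clusvcadm -Z service:web                        # freeze
clusvcadm -U service:web                        # unfreeze
clusvcadm -d service:web                        # disable (the only action allowed from the failed state)
clustat                                         # check the resulting service state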
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/high_availability_add-on_overview/s1-rgmanager-opstates
Chapter 2. Preparing for a minor update
Chapter 2. Preparing for a minor update You must follow some preparation steps on the undercloud and overcloud before you begin the process to update Red Hat OpenStack Platform 16.0 to the latest minor release. 2.1. Locking the environment to a Red Hat Enterprise Linux release Red Hat OpenStack Platform 16.0 is supported on Red Hat Enterprise Linux 8.1. Prior to performing the update, lock the undercloud and overcloud repositories to the Red Hat Enterprise Linux 8.1 release to avoid upgrading the operating system to a newer minor release. Procedure Log into the undercloud as the stack user. Source the stackrc file: Create a static inventory file of your overcloud: If you use an overcloud name different to the default overcloud name of overcloud , set the name of your overcloud with the --plan option. Create a playbook that contains a task to lock the operating system version to Red Hat Enterprise Linux 8.1 on all nodes: Run the set_release.yaml playbook: Note To manually lock a node to a version, log in to the node and run the subscription-manager release command: 2.2. Changing to Extended Update Support (EUS) repositories Your Red Hat OpenStack Platform subscription includes repositories for Red Hat Enterprise Linux 8.1 Extended Update Support (EUS). The EUS repositories include the latest security patches and bug fixes for Red Hat Enterprise Linux 8.1. Switch to the following repositories before performing a minor version update. Standard Repository EUS Repository rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-baseos-eus-rpms rhel-8-for-x86_64-appstream-rpms rhel-8-for-x86_64-appstream-eus-rpms rhel-8-for-x86_64-highavailability-rpms rhel-8-for-x86_64-highavailability-eus-rpms Procedure Log into the undercloud as the stack user. Source the stackrc file: Edit your overcloud subscription management environment file, which is the file that contains the RhsmVars parameter. The default name for this file is usually rhsm.yml . Check the rhsm_repos parameter in your subscription management configuration. If this parameter does not include the EUS repositories, change the relevant repositories to the EUS versions: Save the overcloud subscription management environment file. Create a static inventory file of your overcloud: If you use an overcloud name different to the default overcloud name of overcloud , set the name of your overcloud with the --plan option. Create a playbook that contains a task to set the repositories to Red Hat Enterprise Linux 8.1 EUS on all nodes: Run the change_eus.yaml playbook:
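After running the playbooks, you can spot-check the result with Ansible ad hoc commands against the same inventory; the sketch below reuses the inventory file and group names from this procedure.

# confirm the release lock on every node
ansible -i ~/inventory.yaml overcloud,undercloud -b -m command -a "subscription-manager release --show"
# confirm the EUS repositories are enabled
ansible -i ~/inventory.yaml overcloud,undercloud -b -m command -a "subscription-manager repos --list-enabled"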
[ "source ~/stackrc", "tripleo-ansible-inventory --ansible_ssh_user heat-admin --static-yaml-inventory ~/inventory.yaml", "cat > ~/set_release.yaml <<'EOF' - hosts: overcloud,undercloud gather_facts: false tasks: - name: set release to 8.1 command: subscription-manager release --set=8.1 become: true EOF", "ansible-playbook -i ~/inventory.yaml -f 25 ~/set_release.yaml", "sudo subscription-manager release --set=8.1", "source ~/stackrc", "parameter_defaults: RhsmVars: rhsm_repos: - rhel-8-for-x86_64-baseos-eus-rpms - rhel-8-for-x86_64-appstream-eus-rpms - rhel-8-for-x86_64-highavailability-eus-rpms - ansible-2.9-for-rhel-8-x86_64-rpms - advanced-virt-for-rhel-8-x86_64-rpms - openstack-beta-for-rhel-8-x86_64-rpms - rhceph-4-osd-for-rhel-8-x86_64-rpms - rhceph-4-mon-for-rhel-8-x86_64-rpms - rhceph-4-tools-for-rhel-8-x86_64-rpms - fast-datapath-for-rhel-8-x86_64-rpms", "tripleo-ansible-inventory --ansible_ssh_user heat-admin --static-yaml-inventory ~/inventory.yaml", "cat > ~/change_eus.yaml <<'EOF' - hosts: overcloud,undercloud gather_facts: false tasks: - name: change to eus repos command: subscription-manager repos --disable=rhel-8-for-x86_64-baseos-rpms --disable=rhel-8-for-x86_64-appstream-rpms --disable=rhel-8-for-x86_64-highavailability-rpms --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhel-8-for-x86_64-highavailability-eus-rpms become: true EOF", "ansible-playbook -i ~/inventory.yaml -f 25 ~/change_eus.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/keeping_red_hat_openstack_platform_updated/preparing-for-a-minor-update
5.315. subversion and neon
5.315. subversion and neon 5.315.1. RHEA-2012:0896 - subversion and neon bug fix and enhancement update Updated subversion and neon packages that fix several bugs and add an enhancement are now available. Subversion (SVN) is a concurrent version control system which enables one or more users to collaborate in developing and maintaining a hierarchy of files and directories while keeping a history of all changes. Neon is an HTTP library and WebDAV client library used by Subversion. Bug Fixes BZ# 749494 The "svn" command unnecessarily required access to the parent directory during certain types of merge operations, which could be denied by the authorization policy on the server. The SVN client has been fixed to not require such access. BZ# 751321 When the "AuthzForceUsernameCase lower" directive was configured in the /etc/httpd/conf.d/subversion.conf file, the "mod_authz_svn" module could crash with a segmentation fault. With this update, segmentation faults no longer occur when using the "AuthzForceUsernameCase" directive. BZ# 798636 Due to a bug in the neon HTTP library, the Server Name Indication (SNI) support was disabled on an SVN client. This update upgrades the neon library, and SNI now works as expected. Enhancement BZ# 711904 , BZ# 720790 This update adds an init script for the "svnserve" daemon. Users are advised to upgrade to these updated subversion and neon packages, which resolve these issues and add this enhancement.
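Because the enhancement in BZ#711904 and BZ#720790 adds an init script for the svnserve daemon, it can be managed with the standard Red Hat Enterprise Linux 6 service tools. The following commands are a hedged sketch; run them as root and verify the script name shipped by your subversion package before relying on it.
service svnserve start        # start the svnserve daemon using the new init script
service svnserve status       # check that the daemon is running
chkconfig svnserve on         # start svnserve automatically at boot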
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/subversion_and_neon
Chapter 5. Viewing logs for a resource
Chapter 5. Viewing logs for a resource You can view the logs for various resources, such as builds, deployments, and pods, by using the OpenShift CLI (oc) and the web console. Note Resource logs are a default feature that provides limited log viewing capability. To enhance your log retrieving and viewing experience, it is recommended that you install OpenShift Logging. OpenShift Logging aggregates all the logs from your OpenShift Container Platform cluster, such as node system audit logs, application container logs, and infrastructure logs, into a dedicated log store. You can then query, discover, and visualize your log data through the Kibana interface. Resource logs do not access the OpenShift Logging log store. 5.1. Viewing resource logs You can view the log for various resources in the OpenShift CLI (oc) and web console. Logs read from the tail, or end, of the log. Prerequisites Access to the OpenShift CLI (oc). Procedure (UI) In the OpenShift Container Platform console, navigate to Workloads → Pods or navigate to the pod through the resource you want to investigate. Note Some resources, such as builds, do not have pods to query directly. In such instances, you can locate the Logs link on the Details page for the resource. Select a project from the drop-down menu. Click the name of the pod you want to investigate. Click Logs. Procedure (CLI) View the log for a specific pod: $ oc logs -f <pod_name> -c <container_name> where: -f Optional: Specifies that the output follows what is being written into the logs. <pod_name> Specifies the name of the pod. <container_name> Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name. For example: $ oc logs ruby-58cd97df55-mww7r $ oc logs -f ruby-57f7f4855b-znl92 -c ruby The contents of log files are printed out. View the log for a specific resource: $ oc logs <object_type>/<resource_name> 1 1 Specifies the resource type and name. For example: $ oc logs deployment/ruby The contents of log files are printed out.
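Beyond the basic commands above, a few additional oc logs variations are often useful. This is a hedged sketch: the pod and build configuration names are hypothetical, and you can check oc logs --help for the options supported by your client version.
$ oc logs --previous ruby-58cd97df55-mww7r      # logs from the previous (crashed) container instance
$ oc logs --tail=50 ruby-58cd97df55-mww7r       # only the last 50 lines of the log
$ oc logs -f bc/ruby-build                      # follow the logs of the latest build for a build configuration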
[ "oc logs -f <pod_name> -c <container_name>", "oc logs ruby-58cd97df55-mww7r", "oc logs -f ruby-57f7f4855b-znl92 -c ruby", "oc logs <object_type>/<resource_name> 1", "oc logs deployment/ruby" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/vewing-resource-logs
14.7.9. Detaching a Node Device
14.7.9. Detaching a Node Device The virsh nodedev-detach command detaches the nodedev from the host so it can be safely used by guests via <hostdev> passthrough. This action can be reversed with the nodedev-reattach command, but it is done automatically for managed services. This command also accepts nodedev-dettach . Note that different drivers expect the device to be bound to different dummy devices. Using the --driver option allows you to specify the desired back-end driver.
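The following sequence is a minimal sketch of how these commands are typically combined. The PCI device name pci_0000_00_19_0 and the driver value are hypothetical; replace them with a name reported by nodedev-list and a back-end driver available on your host.
virsh nodedev-list --tree                              # list host devices to find the node device name
virsh nodedev-dumpxml pci_0000_00_19_0                 # inspect the device before detaching it
virsh nodedev-detach pci_0000_00_19_0                  # detach the device so a guest can use it via <hostdev>
virsh nodedev-detach pci_0000_00_19_0 --driver vfio    # optionally select a specific back-end driver
virsh nodedev-reattach pci_0000_00_19_0                # return the device to the host when no longer needed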
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-numa_node_management-detaching_a_node_device
Chapter 11. Log files reference
Chapter 11. Log files reference Directory Server records events to log files that are essential for solving existing problems and predicting potential problems, which might result in failure or poor performance. With log files you can achieve the following goals: Troubleshoot problems. Monitor the server activity. Analyze the directory activity. To monitor the directory effectively, you must understand the structure and content of the log files. You do not find an exhaustive list of log messages in the chapter. Presented information serves as a good starting point to solve common problems and understand the records in the access, error, audit, audit fail, and secure logs. Directory Server instances store logs in the /var/log/dirsrv/ slapd-instance_name directory. 11.1. Access log reference The Directory Server access log contains detailed information about client connections to the directory. A connection is a sequence of requests from the same client with the following structure: A connection record which provides the connection index and the IP address of the client A bind record A bind result record A sequence of operation request and operation result pairs of records, or individual records in the case of connection, closed, and abandon records An unbind record A closed record Access log record example: Almost all records appear in pairs: a service request record, SRCH in the example, followed by a RESULT record. Connection, closed, and abandon records appear individually. The access logs have several levels of logging that you can configure using the nsslapd-accesslog-level attribute. 11.1.1. Access logging levels Different levels of access logging record different kinds of operations that Directory Server performs. The access log has the following log levels: No access logging ( 0 ). Logging for internal access operations ( 4 ). Logging for connections, operations, and results ( 256 ). The default level. Logging for access to an entry and referrals ( 512 ). Use the nsslapd-accesslog-level attribute to configure the access log level. The attribute values are additive: if you set a log level value of 260, it includes levels 256 and 4. Additional resources Configuring log levels Description of the lapd-accesslog-level attribute 11.1.2. Default access log content By default, Directory Server has the 256 logging level that records access to an entry and contains information presented further. Connection number (conn) Directory Server lists every external LDAP request with an incremental connection number, conn=13 in the example. Connection numbers start at conn=0 immediately after the server startup. Directory Server does not record internal LDAP requests by default. To enable logging of internal access operations, use the nsslapd-accesslog-level configuration attribute. File descriptor (fd) Every connection from an external LDAP client to Directory Server requires a file descriptor or socket descriptor from the operating system, in this case fd=608 . The fd=608 value indicates that an external LDAP client used the file descriptor number 608 out of the total pool of available file descriptors. Slot number (slot) The slot number, slot=608 in the example, is a legacy part of the access log that has the same meaning as file descriptor. Ignore this part of the access log. Operation number (opt) To process an LDAP request, Directory Server performs a series of operations. 
For a connection, all operation request and operation result pairs have incremental operation numbers beginning with op=0 to identify different operations. In the example: op=0 for the bind operation request and the result op=1 for the LDAP search request and the result op=2 for the abandon operation op=3 for the unbind operation the LDAP client sends and the result Method type (method) The method number, method=128 in the example, indicates which LDAPv3 bind method the client used. The method type can have one of the three possible values: 0 for authentication 128 for a simple bind with a user password sasl for a SASL bind that uses an external authentication mechanism Version number (version) The version number indicates the LDAP version number that the LDAP client used to communicate with the LDAP server. The LDAP version number can be either LDAPv2 or LDAPv3. In the example, it uses version=3 . Error number (err) The error number provides the LDAP result code that returns performed LDAP operation. The LDAP error number 0 means that the operation was successful. The example has op=0 . Tag number (tag) The tag number indicates the type of a returned result for an operation. Directory Server uses a BER tags from the LDAP protocol. The example has tag=97 . The following table provides commonly used tags: Tag Description tag=97 The result from a client bind operation. tag=100 The actual entry that Directory Server searched for. It is not a result tag, and the access log does not contain such a tag. tag=101 The result from a search operation. tag=103 The result from a modify operation. tag=105 The result from an add operation. tag=107 The result from a delete operation. tag=109 The result from a moddn (renaming) operation. tag=111 The result from a compare operation. tag=115 Search reference when the entry that the operation searches for holds a referral to the required entry. It is not a result tag, and the access log does not contain such a tag. tag=120 The result from an extended operation. tag=121 The result from an intermediate operation. Number of entries (nentries) The nentries record shows the number of entries that a search operation found matching the LDAP client request. In the example, nentries=0 , Directory Server did not find any matching entries. Elapsed time (etime) The etime record shows the elapsed time or the amount of time (in seconds) that Directory Server spent to perform the LDAP operation. In the example, Directory Server spent 0.000158680 seconds to perform the operation. An etime value of 0 means that the operation actually took 0 nanoseconds to perform. LDAP request type The LDAP request type indicates what type of an LDAP request LDAP client issued. Possible values are: SRCH for a search operation MOD for a modify operation DEL for a delete operation ADD for an add operation MODDN for a moddn (renaming) operation EXT for an extended operation ABANDON for an abandon operation SORT serialno if the LDAP request results in sorting the entries In the example, the number enclosed in parentheses specifies that the LDAP request sorted one candidate entry. LDAP response type Directory Server can issue three LDAP response types: RESULT means a result to the client LDAP request. ENTRY means an entry Directory Server returns in response to a search operation. REFERRAL means that the Directory Server sends the LDAP request to another server. 
The RESULT message contains the following performance-related records: wtime The amount of time the operation was waiting in the work queue before a worker thread picked up the operation optime The amount of time it took for the actual operation to perform the task etime The time between when Directory Server receives the request and when the server sends the result back to the client. Note The wtime and optime values provide useful information about how the server handles the load and processes operations. Because Directory Server requires some time to gathers these statistics, the sum of the wtime and optime values are slightly greater than the etime value. Search indicators (note) Directory Server provides additional information on searches in the note message of log entries. For example: Directory Server supports the following search indicators: Search indicator Description notes=P Paged search indicator. LDAP clients with limited resources can control the rate at which an LDAP server returns the results of a search operation. When the performed search used the LDAP control extension for simple paging of search results, Directory Server logs the notes=P paged search indicator. This indicator is informational and no further actions are required. For more details on paged search indicator, see RFC 2696 specification . notes=A Unindexed search indicator. Directory Server logs notes=A when all candidate attributes in the filter were unindexed and a full table scan was required. This can exceed the value set in the nsslapd-lookthroughlimit attribute. notes=U Unindexed search indicator. Directory Server logs notes=U in the following situations: At least one of the search terms is unindexed. A search operation exceeds the limit set in the nsslapd-idlistscanlimit attribute. notes=F Unknown attribute indicator. Directory Server logs notes=F when a search filter contains an unknown attribute. notes=M MFA plug-in binds indicator. Directory Server logs notes=M when you configured the two-factor authentication for user accounts by using a pre-bind authentication plug-in, such as the MFA plug-in. The note records can have combinations of values: notes=P,A and notes=U,P . When attributes are not indexed, Directory Server must search them directly in the database. This procedure is more resource-intensive than searching the index file. Unindexed searches occur in the following scenarios: The search operation exceeds the number of searched entries set in the nsslapd-idlistscanlimit attribute even when using the index file. For details about the nsslapd-idlistscanlimit attribute, see nsslapd-idlistscanlimit description No index file exists. The index file was not configured in the way required by the search. To optimize future searches, add frequently searched unindexed attributes to the index. Note An unindexed search indicator is often accompanied by a large etime value, because unindexed searches are generally more time consuming. MFA plug-in binds When you configure the two-factor authentication for user accounts by using a pre-bind authentication plug-in, such as the MFA plug-in, the access log records the notes=M note message to the file: Note For the access log to record the notes=M note messages, the pre-bind authentication plug-in must set the flag by using the SLAPI API if a bind was part of this plug-in. VLV-related entries (VLV) When a search involves virtual list views (VLVs), Directory Server logs appropriate entries to the access log file. 
Similar to the other entries, VLV-specific records show the request and response information together: In the example, the request information is 0:5:0210 and has the format beforeCount:afterCount:index:contentCount . The response information is 10:5397 (0) and has the format targetPosition:contentCount (resultCode) . If the client uses a position-by-value VLV request, the request information format is beforeCount: afterCount: value . Search scope (scope) The scope entry defines the scope for a performed search operation and can have one of the following values: 0 for a base search 1 for a one-level search 2 for a subtree search Extended operation OID (oid) The oid record provides the object identifier (OID) of the performed extended operation. Below is an example of access log records with the extended operation OIDs: Directory Server supports the following list of LDAPv3 extended operations and their OIDs: Extended operation name Description OID Directory Server Start Replication Request A replication initiator requests a replication session. 2.16.840.1.113730.3.5.3 Directory Server Replication Response A replication responder answers in the response to a Start Replication Request extended operation or an End Replication Request extended operation. 2.16.840.1.113730.3.5.4 Directory Server End Replication Request A replication initiator terminates the replication session. 2.16.840.1.113730.3.5.5 Directory Server Replication Entry Request Carries an entry with the state information ( csn and UniqueIdentifier ) and is used to perform a replica initialization. 2.16.840.1.113730.3.5.6 Directory Server Bulk Import Start A client requests a bulk import together with the imported suffix using the Bulk Import Start operation, and Directory Server indicates that the bulk import may begin. 2.16.840.1.113730.3.5.7 Directory Server Bulk Import Finished A client ends a bulk import using the Bulk Import Finished operation, and Directory Server acknowledges the bulk import ending. 2.16.840.1.113730.3.5.8 Change sequence number (csn) The csn message, such as csn=3b4c8cfb000000030000 , indicates that Directory Server received an update identified by its 'csn' and processed it. Abandon message (ABANDON) The abandon message indicates that a client or Directory Server terminates an operation. Below is an example of log records that contain an abandon message: The nentries=0 value indicates the number of entries Directory Server sent before the operation was terminated, etime=0.0000113980 value indicates how much time (in seconds) had elapsed, and targetop=2 corresponds to the operation number that Directory Server initiated earlier ( opt=2 ). If Directory Server does not find what operation to abandon, a log record contains a targetop=NOTFOUND message: The example message means that Directory Server has completed the operation earlier or it is an unknown operation. Message ID (msgid) An LDAP SDK client generates the message ID, such as msgid=2 , which is also an LDAP operation identifier. The msgid value may differ from the opt value; however, it identifies the same operation. Directory Server records the msgid with an ABANDON operation and tells the user which client operation was abandoned: Note The Directory Server operation number opt starts counting at 0 for a connection. In the majority of LDAP SDK/client implementations, the message ID number msgid starts counting at 1. This explains why the msgid is frequently equal to the Directory Server opt plus 1. 
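Putting some of the access log fields described above to work, the following grep commands are a hedged sketch of common triage tasks. The path follows the default /var/log/dirsrv/slapd-instance_name layout and should be adjusted for your deployment.
grep -E "notes=(A|U)" /var/log/dirsrv/slapd-instance_name/access               # find unindexed searches
grep "ABANDON" /var/log/dirsrv/slapd-instance_name/access                      # find operations terminated by clients or the server
grep "RESULT" /var/log/dirsrv/slapd-instance_name/access | grep -v "err=0"     # list operations that did not return success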
SASL multi-stage bind logging Directory Server logs each stage of the bind process. The error codes for SASL connections are really return codes: The example record indicates that the SASL bind is currently in progress ( SASL bind in progress ) and has the return code of err=14 . This means that the connection is still open. Directory Server logs SASL bind information together with the LDAP version number ( version=3 ) and used SASL mechanism ( mech=DIGEST-MD5 ). Note Because SASL authentication requires multiple steps, Directory Server logs the authenticated DN (the DN used for access control decisions) in the bind RESULT line when Directory Server completes the binding process. This shows what entry was mapped to the SASL bind request: 11.1.3. Non-default access log content When you set non-default log levels or apply specific log configurations, Directory Server starts to record additional information to the access log file. Internal operation records When you enable logging for internal operations ( 4 ), Directory Server starts to log internal operations initiated by Directory Server or a client. Server-initiated internal operations If a client deletes an entry, the server runs several internal operations, such as locating the entry and updating groups in which the user was a member. The following example shows the server-initiated internal operation logs format: The example record has conn=Internal that is followed by (0) and op=0(0)(nesting_level) . Operation ID and internal operation ID are always 0 . For the non-nested log records the nesting level is 0 . Client-initiated internal operation Client-initiated internal operation logs have a search base, scope, filter, and requested search attributes in addition to the details of the performed search. The following example shows the format of the log records: The example record has the conn record that is set to the client connection ID and followed by the string (Internal) . The op record contains the operation ID, followed by (internal_operation_ID)(nesting_level) . The internal operation ID can vary. For the non-nested log entries the nesting level is 0 . Internal operations with plug-in logging enabled If the nsslapd-plugin-logging parameter is set to on and you enabled internal operations logging (4), Directory Server additionally logs internal operations of plug-ins. For example, if you delete the uid=user,dc=example,dc=com entry, and the Referential Integrity plug-in automatically deletes this entry from the example group, the server logs the following: Access to an entry and referrals When you enable logging for the access to an entry and referrals ( 512 ), Directory Server has the following records in the access log file: The example has the logging level 768 ( 512 + 256 ) and shows six entries and one referral that a search request returns in response. Options description The options=persistent message indicates that Directory Server performs a persistent search. You can use persistent searches for monitoring purposes and configure returning changes to given configurations when changes occur. The following example shows the 512 and 4 log levels that contain options description. Statistics per a search operation When you set the nsslapd-statlog-level attribute to 1 , the access log starts to collect metrics, such as number of index lookups and overall duration of an index lookup, for each search operation. 
The example of the log records shows that during the search with filter (cn=user_*) , Directory Server performed the following number of database lookups: 0 for referrals 24 for er_ key 25 for the ser key 25 for the use key 24 for the ^us key 11.1.4. Common connection codes Directory Server adds a connection code to the closed log message with additional information related to the connection closure. Connection Code Description A1 The client aborts the connection. B1 A corrupt BER tag is encountered. Directory Server logs B1 connection code to the access log when it receives corrupted BER tags that were sent over the wire. A BER tags can be corrupted due to physical layer network problems or bad LDAP client operations, such as an LDAP client cancels the operation before receives all request results. B2 The BER tag is longer than the nsslapd-maxbersize attribute value. B3 A corrupt BER tag is encountered. B4 The server failed to send response back to the client. P2 A closed or corrupt connection is detected. T1 The client does not receive a result after the idle period that you can set in the nsslapd-idletimeout attribute. T2 The server closed connection to a stalled LDAP client after a period of time you set in the nsslapd-ioblocktimeout . T3 The server closed the connection because the specified time limit for a paged result search has been exceeded. U1 The server closes the connection after the client sends an unbind request. The server always closes a connection when it receives an unbind request. Additional resources Description of the nsslapd-idletimeout attribute Description of the nsslapd-maxbersize attribute Description of the nsslapd-ioblocktimeout attribute 11.2. Error log reference The Directory Server error log records messages of Directory Server transactions and operations. The error log contains not only error messages for failed operations, but also general information about the Directory Server processes and LDAP tasks, such as server startup messages, logins and searches of the directory, and connection information. 11.2.1. Error logging levels The error log can record different details of the Directory Server operations, including different types of information depending on the enabled logging level. You can set the logging level by using the nsslapd-errorlog-level configuration attribute of the cn=config entry. The default logging level is 16384 . This level includes critical error messages and standard logged messages, such as LDAP results codes and startup messages. Error logging levels are additive. To enable both replication logging ( 8192 ) and plug-in logging ( 65536 ), set the nsslapd-errorlog-level attribute to 73728 ( 8192 + 65536 ). Note Enabling high levels of debug logging can significantly decrease the server performance. Therefore, enable high debug logging levels, such as replication ( 8192 ), only for troubleshooting. Table 11.1. Error log levels Setting Console name Description 1 Trace function calls Logs a message when the server enters and exits a function. 2 Packeting handlings Logs debug information for packets the server processes. 4 Heavy trace output Logs when the server enters and exits a function, with additional debugging messages. 8 Connection management Logs the current connection status, including the connection methods used for a SASL bind. 16 Packets sent and received Prints the numbers of packets the server sends and receives. 32 Search filter processing Logs all functions a search operation calls. 
64 Config file processing Prints every .conf configuration files the server used, line by line, when the server starts. By default, Directory Server processes only the slapd-collations.conf file. 128 Access control list processing Provides detailed access control list processing information. 2048 Log entry parsing Logs schema parsing debugging information. 4096 Housekeeping Logs debug information for housekeeping threads. 8192 Replication Logs detailed information about every replication-related operation, including updates and errors, which is important for debugging replication problems. 16384 Default Logs critical errors and other messages that Directory Server always writes to the error log, such as server startup messages. The error log contains these messages regardless of the log level setting. 32768 Entry cache Logs debug information for the database entry cache. 65536 Plug-in Writes an entry to the log file when a server plug-in calls the slapi-log-error() function. You can use the plug-in logging level for server plug-in debugging. 262144 Access control summary Summarizes information about access to the server, contains less details than the 128 level. Use the 262144 value when you need a summary of access control processing. Use the 128 value for very detailed processing messages. 524288 Backend database Logs debug information for handling databases associated with suffixes. 1048576 Password policy Logs debug information about password policy decisions. Additional resources The nsslapd-errorlog-level attribute description 11.2.2. Default error log content Either a server or a plug-in can write entries to the error log: When a server writes logs, it uses the following format: An example of the error log a server generates: When a plug-in writes logs, it uses the following format: An example of the error log a plug-in generates: Error log entries contain the following information: Log message Description Time stamp The time stamp format can differ depending on your local settings. By default, the high-resolution time stamps are enabled and measured in nanoseconds. Severity level The severity level can have the following values: EMERG when the server fails to start. ALERT when the server is in a critical state, and you must take possible actions. CRIT when a severe error appears. ERR when a general error appears. WARNING for a warning message that is not necessarily an error. NOTICE when a normal but significant condition occurs. For example, Directory Server logs a notice message for the expected behavior. INFO for informational messages, such as startup, shutdown, import, export, backup, and restore. DEBUG for debug-level messages. Verbose logging levels, such as Trace function calls ( 1 ), Access control list processing ( 128 ), and Replication ( 8192 ) use DEBUG messages by default. Plug-in name The plug-in name appears only if a plug-in writes the message to the error log. Function name Functions that the operation or the plug-in call. Message The output that the operation or plug-in returns. The message contains additional information, such as LDAP error codes and connection information. You can use the severity levels to filter your log entries. For example, to display only log entries with the ERR severity, run: Additional resources Error logging levels 11.2.3. Non-default error log content Different logging levels return different details, including types of server operations. The following are the most frequently used error logging levels that are not enabled by default. 
Remember that you can combine logging levels. Replication (8192) The replication logging is one of the most important diagnostic levels to implement. The replication ( 8192 ) level records all operations related to replication and Windows synchronization, including processing modifications on a supplier and writing them to the changelog, sending updates, and changing replication agreements. When Directory Server prepares or sends a replication update, the error log identifies if it is a replication or synchronization agreement. The log also identifies the consumer host and port and the current replication task. The replication level log has the following format: The following is the example of the replication ( 8192 ) level log, where {replicageneration} means that Directory Server sends the new information and 4949df6e000000010000 is the change sequence number (CSN) of the replicated entry: The following is the example of the complete process of sending a single entry to a consumer, from adding the entry to the changelog to releasing the consumer after replication is complete. Plug-in ( 65536 ) The plug-in ( 65536 ) level records the name of a plug-in and all functions the plug-in calls. The plug-in level log has the following format: The returned information can contain hundreds of lines because Directory Server processes every step. The precise recorded information depends on the plug-in itself. In the following example, the ACL Plug-in includes a connection and operation number: Config file processing (64) The configuration file processing log level goes through each .conf file the server uses and prints every line when the server starts up. You can use the 64 log level to debug any problems with files outside of the server normal configuration. By default, only the slapd-collations.conf file, which contains configurations for international language sets, is available. Example of the config file processing (64) level: Access control list processing ( 128 ) and Access control summary ( 262144 ) Both of the ACI logging levels record information that other log levels do not include and contain a connection number ( conn ) and an operation number ( op ). The access control list processing ( 128 ) shows the series of functions called in the course of the bind and any other operations. The access control summary ( 262144 ) records the name of the plug-in, the bind DN of the user, the performed or attempted operation, and the applied ACI. Example of the access control summary ( 262144 ) level: Other logging levels Many other logging levels have the output format that is similar to the plug-in logging level. The only difference is in recorded internal operations. Logging levels, such as Heavy trace output ( 4 ), access control list processing ( 128 ), schema parsing ( 2048 ), and housekeeping ( 4096 ) levels, record the called functions when Directory Server performs different operations. In addition, the error log writes why Directory Server calls these functions for specified operations. 11.3. Audit log reference The audit log records changes made to each database and to the server configuration. This log type is not enabled by default. If you enable audit logging, Directory Server records only successful operations to the audit log file. However, you can record failing operations to a separate file if you enable audit fail logging. Unlike the error and access log, the audit log does not record access to the server instance, so searches against the database are not logged. 
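Because audit logging is disabled by default, you may need to switch it on before the records described in this section appear. The following dsconf commands are a hedged sketch; the attribute names come from the standard cn=config schema, but verify them against your version's configuration reference, and replace the server URL with your own.
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-auditlog-logging-enabled=on      # enable the audit log
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-auditfaillog-logging-enabled=on  # enable the audit fail log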
The format of the audit log differs from the access and error log formats. Directory Server records operations in the audit log as LDIF statements: For more details about the LDIF files and formats, see LDAP Data Interchange Format The audit log example: Additional resources Configuring log files 11.4. Audit fail log reference If you enable audit fail logging, Directory Server starts to record only failing changes made to the server instance to the audit fail log file. The audit fail log has the same format as the audit log, looks like LDIF statements, and is not enabled by default. Additional resources Configuring log files 11.5. Security log reference The security log records a variety of security events, including the following: Authentication events Authorization issues DoS and TCP attacks Directory Server stores the security log in the /var/log/dirsrv/slapd-instance_name/ directory along with other log files. The security log does not rotate quickly and consumes fewer disk resources than the access log, which contains all the information but requires expensive parsing to extract the security data. The security log is in JSON format and enables other tooling to do the complex parsing of the log. You cannot change the log format or set a log level for the security log. The security log example: The log example shows that two binds to the server were successful, two binds failed, and one event is a TCP error. In addition, when you configure the two-factor authentication for user accounts by using a pre-bind authentication plug-in, the security log records the bind method, for example: Note that for the security log to record such messages, the pre-bind authentication plug-in must set the flag if a bind was part of this plug-in by using the SLAPI API. Additional resources Configuring log files 11.6.
LDAP result codes Directory Server uses the following LDAP result codes the log files: Decimal values Hex values Constants 0 0x00 LDAP_SUCCESS 1 0x01 LDAP_OPERATIONS_ERROR 2 0x02 LDAP_PROTOCOL_ERROR 3 0x03 LDAP_TIMELIMIT_EXCEEDED 4 0x04 LDAP_SIZELIMIT_EXCEEDED 5 0x05 LDAP_COMPARE_FALSE 6 0x06 LDAP_COMPARE_TRUE 7 0x07 LDAP_AUTH_METHOD_NOT_SUPPORTED LDAP_STRONG_AUTH_NOT_SUPPORTED 8 0x08 LDAP_STRONGER_AUTH_REQUIRED LDAP_STRONG_AUTH_REQUIRED 9 0x09 LDAP_PARTIAL_RESULTS 10 0x0a LDAP_REFERRAL (LDAPv3) 11 0x0b LDAP_ADMINLIMIT_EXCEEDED 12 0x0c LDAP_UNAVAILABLE_CRITICAL_EXTENSION 13 0x0d LDAP_CONFIDENTIALITY_REQUIRED 14 0x0e LDAP_SASL_BIND_IN_PROGRESS 16 0x10 LDAP_NO_SUCH_ATTRIBUTE 17 0x11 LDAP_UNDEFINED_TYPE 18 0x12 LDAP_INAPPROPRIATE_MATCHING 19 0x13 LDAP_CONSTRAINT_VIOLATION 20 0x14 LDAP_TYPE_OR_VALUE_EXISTS 21 0x15 LDAP_INVALID_SYNTAX 32 0x20 LDAP_NO_SUCH_OBJECT 33 0x21 LDAP_ALIAS_PROBLEM 34 0x22 LDAP_INVALID_DN_SYNTAX 35 0x23 LDAP_IS_LEAF (not used in LDAPv3) 36 0x24 LDAP_ALIAS_DEREF_PROBLEM 48 0x30 LDAP_INAPPROPRIATE_AUTH 49 0x31 LDAP_INVALID_CREDENTIALS 50 0x32 LDAP_INSUFFICIENT_ACCESS 51 0x33 LDAP_BUSY 52 0x34 LDAP_UNAVAILABLE 53 0x35 LDAP_UNWILLING_TO_PERFORM 54 0x36 LDAP_LOOP_DETECT 60 0x3c LDAP_SORT_CONTROL_MISSING 61 0x3d LDAP_INDEX_RANGE_ERROR 64 0x40 LDAP_NAMING_VIOLATION 65 0x41 LDAP_OBJECT_CLASS_VIOLATION 66 0x42 LDAP_NOT_ALLOWED_ON_NONLEAF 67 0x43 LDAP_NOT_ALLOWED_ON_RDN 68 0x44 LDAP_ALREADY_EXISTS 69 0x45 LDAP_NO_OBJECT_CLASS_MODS 70 0x46 LDAP_RESULTS_TOO_LARGE (reserved for CLDAP) 71 0x47 LDAP_AFFECTS_MULTIPLE_DSAS 76 0x4C LDAP_VIRTUAL_LIST_VIEW_ERROR 80 0x50 LDAP_OTHER 81 0x51 LDAP_SERVER_DOWN 82 0x52 LDAP_LOCAL_ERROR 83 0x53 LDAP_ENCODING_ERROR 84 0x54 LDAP_DECODING_ERROR 85 0x55 LDAP_TIMEOUT 86 0x56 LDAP_AUTH_UNKNOWN 87 0x57 LDAP_FILTER_ERROR 88 0x58 LDAP_USER_CANCELLED 89 0x59 LDAP_PARAM_ERROR 90 0x5A LDAP_NO_MEMORY 91 0x5B LDAP_CONNECT_ERROR 92 0x5C LDAP_NOT_SUPPORTED 93 0x5D LDAP_CONTROL_NOT_FOUND 94 0x5E LDAP_MORE_RESULTS_TO_RETURN 95 0x5F LDAP_MORE_RESULTS_TO_RETURN 96 0x60 LDAP_CLIENT_LOOP 97 0x61 LDAP_REFERRAL_LIMIT_EXCEEDED 118 0x76 LDAP_CANCELLED
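As a practical illustration of the additive log levels discussed in this chapter, the commands below raise the error log to replication plus plug-in logging (8192 + 65536 = 73728) and set the access log to the default level plus internal operations (256 + 4 = 260). This is a hedged sketch using dsconf against a hypothetical server URL; remember to restore the defaults after troubleshooting, because verbose logging reduces performance.
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-errorlog-level=73728   # replication + plug-in debugging
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-accesslog-level=260    # default access logging + internal operations
# after troubleshooting, restore the defaults
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-errorlog-level=16384
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-accesslog-level=256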
[ "[time_stamp] conn=1 op=73 SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(&(objectClass=top)(objectClass=ldapsubentry)(objectClass=passwordpolicy))\" attrs=\"distinguishedName\" [time_stamp] conn=1 op=73 RESULT err=0 tag=101 nentries=24 wtime=0.000078414 optime=0.001614101 etime=0.001690742", "[time_stamp] conn=13 fd=608 slot=608 connection from 172.17.0.2 to 172.17.0.2", "[time_stamp] conn=11 fd=608 slot=608 connection from 172.17.0.2 to 172.17.0.2", "[time_stamp] conn=11 fd=608 slot=608 connection from 172.17.0.2 to 172.17.0.2.", "[time_stamp] conn=14 op=0 BIND dn=\"cn=Directory Manager\" method=128 version=3 [time_stamp] conn=14 op=0 RESULT err=0 tag=97 nentries=0 wtime=0.000076581 optime=0.000082736 etime=0.000158680 [time_stamp] conn=14 op=1 SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(uid=bjensen)\" [time_stamp] conn=14 op=2 ABANDON targetop=2 msgid=3 nentries=0 etime=0.0000113702 [time_stamp] conn=14 op=3 UNBIND [time_stamp] conn=14 op=3 fd=634 closed - U1", "[time_stamp] conn=11 op=0 BIND dn=\"cn=Directory Manager\" method=128 version=3", "[time_stamp] conn=11 op=0 BIND dn=\"cn=Directory Manager\" method=128 version=3", "[time_stamp] conn=2 op=0 RESULT err=0 tag=97 nentries=0 wtime=0.000076581 optime=0.000082736 etime=0.000158680", "[time_stamp] conn=11 op=0 RESULT err=0 tag=97 nentries=0 wtime=0.000076581 optime=0.000082736 etime=0.000158680", "[time_stamp] conn=11 op=0 RESULT err=0 tag=97 nentries=0 wtime=0.000076581 optime=0.000082736 etime=0.000158680", "[time_stamp] conn=11 op=1 RESULT err=0 tag=101 nentries=1 wtime=0.000076581 optime=0.000082736 etime=0.000158680 notes=U", "[time_stamp] conn=114 op=68 SORT serialno (1)", "[time_stamp] conn=11 op=1 RESULT err=0 tag=101 nentries=1 wtime=0.000076581 optime=0.000082736 etime=0.000158680 notes=U", "[time_stamp] conn=1 op=0 BIND dn=\"uid=jdoe,ou=people,dc=example,dc=com\" method=128 version=3 [time_stamp] conn=1 op=0 RESULT err=0 tag=97 nentries=0 wtime=0.000111632 optime=0.006612223 etime=0.006722325 notes=M details=\"Multi-factor Authentication\" dn=\"uid=jdoe,ou=people,dc=example,dc=com\"", "[time_stamp] conn=67 op=8530 VLV 0:5:0210 10:5397 (0)", "[time_stamp] conn=13 op=1 EXT oid=\"2.16.840.1.113730.3.5.3\" [time_stamp] conn=15 op=3 EXT oid=\"2.16.840.1.113730.3.5.5\"", "[time_stamp] conn=12 op=1 SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(uid=bjensen)\" [time_stamp] conn=12 op=2 ABANDON targetop=2 msgid=3 nentries=0 etime=0.0000113980", "[time_stamp] conn=12 op=2 ABANDON targetop=NOTFOUND msgid=2", "[time_stamp] conn=12 op=2 ABANDON targetop=NOTFOUND msgid=2", "[time_stamp] conn=16 op=0 BIND dn=\"\" method=sasl version=3 mech=DIGEST-MD5 [time_stamp] conn=16 op=0 RESULT err=14 tag=97 nentries=0 wtime=0.000076581 optime=0.000082736 etime=0.000158680, SASL bind in progress", "[time_stamp] conn=14 op=1 RESULT err=0 tag=97 nentries=0 wtime=0.000076581 optime=0.000082736 etime=0.000158680 dn=\"uid=jdoe,dc=example,dc=com\"", "[time_stamp] conn=Internal ( 0 ) op=0( 0 )( 0 ) MOD dn=\"cn=uniqueid generator,cn=config\" [time_stamp] conn=Internal ( 0 ) op=0( 0 )( 0 ) RESULT err=0 tag=48 nentries=0 wtime=0.0003979676 optime=0.0003989250 etime=0.0007968796", "[time_stamp] conn=5 (Internal) op=15(1)(0) SRCH base=\"cn=config,cn=userroot,cn=ldbm database,cn=plugins,cn=config\" scope=1 filter=\"objectclass=vlvsearch\" attrs=ALL [time_stamp] conn=5 (Internal) op=15(1)(0) RESULT err=0 tag=48 nentries=0 wtime=0.0000143989 optime=0.0000151450 etime=0.0000295419 [time_stamp] conn=5 (Internal) op=15(2)(0) SRCH 
base=\"cn=config,cn=example,cn=ldbm database,cn=plugins,cn=config\" scope=1 filter=\"objectclass=vlvsearch\" attrs=ALL [time_stamp] conn=5 (Internal) op=15(2)(0) RESULT err=0", "[time_stamp] conn=2 op=37 DEL dn=\"uid=user,dc=example,dc=com\" [time_stamp] conn=2 (Internal) op=37(1) SRCH base=\"uid=user,dc=example,dc=com\" scope=0 filter=\"(|(objectclass=*)(objectclass=ldapsubentry))\" attrs=ALL [time_stamp] conn=2 (Internal) op=37(1) RESULT err=0 tag=48 nentries=1 wtime=0.0000062569 optime=0.0000067203 etime=0.0000129148 [time_stamp] conn=2 (Internal) op=37(2) SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(member=uid=user,dc=example,dc=com)\" attrs=\"member\" [time_stamp] conn=2 (Internal) op=37(2) RESULT err=0 tag=48 nentries=0 wtime=0.0000058002 optime=0.0000065198 etime=0.0000123162 [time_stamp] conn=2 (Internal) op=37(3) SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(uniquemember=uid=user,dc=example,dc=com)\" attrs=\"uniquemember\" [time_stamp] conn=2 (Internal) op=37(3) RESULT err=0 tag=48 nentries=1 wtime=0.0000062123 optime=0.0000066022 etime=0.0000128104 [time_stamp] conn=2 (Internal) op=37(4) MOD dn=\"cn=example,dc=example,dc=com\" [time_stamp] conn=2 (Internal) op=37(5) SRCH base=\"cn=example,dc=example,dc=com\" scope=0 filter=\"(|(objectclass=\\*)(objectclass=ldapsubentry))\" attrs=ALL [time_stamp] conn=2 (Internal) op=37(5) RESULT err=0 tag=48 nentries=1 wtime=0.0000061994 optime=0.0000068742 etime=0.0000130685 [time_stamp] conn=2 (Internal) op=37(4) RESULT err=0 tag=48 nentries=0 wtime=0.0002600573 optime=0.0002617786 etime=0.0005217545 [time_stamp] conn=2 (Internal) op=37(6) SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(owner=uid=user,dc=example,dc=com)\" attrs=\"owner\" [time_stamp] conn=2 (Internal) op=37(6) RESULT err=0 tag=48 nentries=0 wtime=0.000061678 optime=0.000076107 etime=0.0000137656 [time_stamp] conn=2 (Internal) op=37(7) SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(seeAlso=uid=user,dc=example,dc=com)\" attrs=\"seeAlso\" [time_stamp] conn=2 (Internal) op=37(7) RESULT err=0 tag=48 nentries=0 wtime=0.0000031789 optime=0.0000035354 etime=0.0000066978 [time_stamp] conn=2 (Internal) op=37(8) SRCH base=\"o=example\" scope=2 filter=\"(member=uid=user,dc=example,dc=com)\" attrs=\"member\" [time_stamp] conn=2 (Internal) op=37(8) RESULT err=0 tag=48 nentries=0 wtime=0.0000030987 optime=0.0000032456 etime=0.0000063316 [time_stamp] conn=2 (Internal) op=37(9) SRCH base=\"o=example\" scope=2 filter=\"(uniquemember=uid=user,dc=example,dc=com)\" attrs=\"uniquemember\" [time_stamp] conn=2 (Internal) op=37(9) RESULT err=0 tag=48 nentries=0 wtime=0.0000021958 optime=0.0000026676 etime=0.0000048634 [time_stamp] conn=2 (Internal) op=37(10) SRCH base=\"o=example\" scope=2 filter=\"(owner=uid=user,dc=example,dc=com)\" attrs=\"owner\" [time_stamp] conn=2 (Internal) op=37(10) RESULT err=0 tag=48 nentries=0 wtime=0.0000022109 optime=0.00000268003 etime=00000048854 [time_stamp] conn=2 (Internal) op=37(11) SRCH base=\"o=example\" scope=2 filter=\"(seeAlso=uid=user,dc=example,dc=com)\" attrs=\"seeAlso\" [time_stamp] conn=2 (Internal) op=37(11) RESULT err=0 tag=48 nentries=0 wtime=0.0000021786 optime=0.0000024867 etime=0.0000046522 [time_stamp] conn=2 op=37 RESULT err=0 tag=107 nentries=0 wtime=0.005147365 optime=0.005150798 etime=0.0010297858", "[time_stamp] conn=306 fd=60 slot=60 connection from 127.0.0.1 to 127.0.0.1 [time_stamp] conn=306 op=0 SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(description=*)\" attrs=ALL [time_stamp] conn=306 op=0 ENTRY dn=\"ou=Special 
[time_stamp] conn=306 op=0 ENTRY dn=\"cn=Accounting Managers,ou=groups,dc=example,dc=com\" [time_stamp] conn=306 op=0 ENTRY dn=\"cn=HR Managers,ou=groups,dc=example,dc=com\" [time_stamp] conn=306 op=0 ENTRY dn=\"cn=QA Managers,ou=groups,dc=example,dc=com\" [time_stamp] conn=306 op=0 ENTRY dn=\"cn=PD Managers,ou=groups,dc=example,dc=com\" [time_stamp] conn=306 op=0 ENTRY dn=\"ou=Red Hat Servers,dc=example,dc=com\" [time_stamp0] conn=306 op=0 REFERRAL", "[time_stamps] conn=1 (Internal) op=2(1)(0) SRCH base=\"cn=\\22dc=example,dc=com\\22,cn=mapping tree,cn=config\"scope=0 filter=\"objectclass=nsMappingTree\"attrs=\"nsslapd-referral\" options=persistent", "[time_stamps] conn=1 op=73 SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(cn=user_*)\" attrs=ALL [time_stamps] conn=1 op=73 STAT read index: attribute=objectclass key(eq)=referral --> count 0 [time_stamps] conn=1 op=73 STAT read index: attribute=cn key(sub)=er_ --> count 24 [time_stamps] conn=1 op=73 STAT read index: attribute=cn key(sub)=ser --> count 25 [time_stamps] conn=1 op=73 STAT read index: attribute=cn key(sub)=use --> count 25 [time_stamps] conn=1 op=73 STAT read index: attribute=cn key(sub)=^us --> count 24 [time_stamps] conn=1 op=73 STAT read index: duration 0.000010276 [time_stamps] conn=1 op=73 RESULT err=0 tag=101 nentries=24 wtime=0.00007841", "[time_stamp] - <severity_level> - <function_name> - <message>", "[time_stamp] - NOTICE - bdb_start_autotune - found 7110616k physical memory", "[time_stamp] - <severity_level> - <plug-in_name> - <function_name> - <message>", "[time_stamp] - ERR - NSMMReplicationPlugin - multimaster_extop_StartNSDS50ReplicationRequest - conn=19 op=3 repl=\"o=example.com\": Excessive clock skew from supplier RUV", "grep ERR /var/log/dirsrv/slapd-instance_name/errors [time_stamp] - ERR - no_diskspace - No enough space left on device (/var/lib/dirsrv/slapd-instance_name/db) (40009728 bytes); at least 145819238 bytes space is needed for db region files [time_stamp] - ERR - ldbm_back_start - Failed to init database, err=28 No space left on device [time_stamp] - ERR - plugin_dependency_startall - Failed to start database plugin ldbm database", "[time_stamp] NSMMReplicationPlugin - agmt=\"name\" (consumer_host:consumer_port): current_task", "[time_stamp] NSMMReplicationPlugin - agmt=\"cn=example2_agreement\" (alt:13864): {replicageneration} 4949df6e000000010000", "[time_stamp] - DEBUG - _csngen_adjust_local_time - gen state before 592c103d0000:1496059964:0:1 [time_stamp] - DEBUG - _csngen_adjust_local_time - gen state after 592c10e20000:1496060129:0:1 [time_stamp] - DEBUG - NSMMReplicationPlugin - ruv_add_csn_inprogress - Successfully inserted csn 592c10e2000000020000 into pending list [time_stamp] - DEBUG - NSMMReplicationPlugin - changelog program - _cl5GetDBFileByReplicaName - found DB object 0x558ddfe1f720 for database /var/lib/dirsrv/slapd-supplier_2/changelogdb/d3de3e8d-446611e7-a89886da-6a37442d_592c0e0b000000010000.db [time_stamp] - DEBUG - NSMMReplicationPlugin - changelog program - cl5WriteOperationTxn - Successfully written entry with csn (592c10e2000000020000) [time_stamp] - DEBUG - NSMMReplicationPlugin - changelog program - _cl5GetDBFileByReplicaName - found DB object 0x558ddfe1f720 for database /var/lib/dirsrv/slapd-supplier_2/changelogdb/d3de3e8d-446611e7-a89886da-6a37442d_592c0e0b000000010000.db [time_stamp] - DEBUG - NSMMReplicationPlugin - csnplCommitALL: committing all csns for csn 592c10e2000000020000 [time_stamp] - DEBUG - NSMMReplicationPlugin - csnplCommitALL: processing data csn 
592c10e2000000020000 [time_stamp] - DEBUG - NSMMReplicationPlugin - ruv_update_ruv - Successfully committed csn 592c10e2000000020000 [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_run - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): State: wait_for_changes -> wait_for_changes [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_run - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): State: wait_for_changes -> ready_to_acquire_replica [time_stamp] - DEBUG - NSMMReplicationPlugin - conn_connect - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - Trying non-secure slapi_ldap_init_ext [time_stamp] - DEBUG - NSMMReplicationPlugin - conn_connect - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - binddn = cn=replrepl,cn=config, passwd = {AES-TUhNR0NTcUdTSWIzRFFFRkRUQm1NRVVHQ1NxR1NJYjNEUUVGRERBNEJDUmlZVFUzTnpRMk55MDBaR1ZtTXpobQ0KTWkxaE9XTTRPREpoTlMwME1EaGpabVUxWmdBQ0FRSUNBU0F3Q2dZSUtvWklodmNOQWdjd0hRWUpZSVpJQVdVRA0KQkFFcUJCRGhwMnNLcEZ2ZWE2RzEwWG10OU41Tg==}+36owaI7oTmvWhxRzUqX5w== [time_stamp] - DEBUG - NSMMReplicationPlugin - conn_cancel_linger - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - No linger to cancel on the connection [time_stamp] - DEBUG - _csngen_adjust_local_time - gen state before 592c10e20001:1496060129:0:1 [time_stamp] - DEBUG - _csngen_adjust_local_time - gen state after 592c10e30000:1496060130:0:1 [time_stamp] - DEBUG - NSMMReplicationPlugin - acquire_replica - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): Replica was successfully acquired. [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_run - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): State: ready_to_acquire_replica -> sending_updates [time_stamp] - DEBUG - csngen_adjust_time - gen state before 592c10e30001:1496060130:0:1 [time_stamp] - DEBUG - NSMMReplicationPlugin - changelog program - _cl5GetDBFile - found DB object 0x558ddfe1f720 for database /var/lib/dirsrv/slapd-supplier_2/changelogdb/d3de3e8d-446611e7-a89886da-6a37442d_592c0e0b000000010000.db [time_stamp] - DEBUG - NSMMReplicationPlugin - changelog program - _cl5PositionCursorForReplay - (agmt=\"cn=meTo_localhost:39001\" (localhost:39001)): Consumer RUV: [time_stamp] - DEBUG - NSMMReplicationPlugin - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): {replicageneration} 592c0e0b000000010000 [time_stamp] - DEBUG - NSMMReplicationPlugin - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): {replica 1 ldap://localhost:39001} 592c0e17000000010000 592c0e1a000100010000 00000000 [time_stamp] - DEBUG - NSMMReplicationPlugin - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): {replica 2 ldap://localhost:39002} 592c103c000000020000 592c103c000000020000 00000000 [time_stamp] - DEBUG - NSMMReplicationPlugin - changelog program - _cl5PositionCursorForReplay - (agmt=\"cn=meTo_localhost:39001\" (localhost:39001)): Supplier RUV: [time_stamp] - DEBUG - NSMMReplicationPlugin - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): {replicageneration} 592c0e0b000000010000 [time_stamp] - DEBUG - NSMMReplicationPlugin - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): {replica 2 ldap://localhost:39002} 592c103c000000020000 592c10e2000000020000 592c10e1 [time_stamp] - DEBUG - NSMMReplicationPlugin - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): {replica 1 ldap://localhost:39001} 592c0e1a000100010000 592c0e1a000100010000 00000000 [time_stamp] - DEBUG - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - clcache_get_buffer - found thread private buffer cache 0x558ddf870f00 [time_stamp] - DEBUG - 
agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - clcache_get_buffer - _pool is 0x558ddfe294d0 _pool->pl_busy_lists is 0x558ddfab84c0 _pool->pl_busy_lists->bl_buffers is 0x558ddf870f00 [time_stamp] - DEBUG - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - clcache_initial_anchorcsn - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - (cscb 0 - state 0) - csnPrevMax () csnMax (592c10e2000000020000) csnBuf (592c103c000000020000) csnConsumerMax (592c103c000000020000) [time_stamp] - DEBUG - clcache_initial_anchorcsn - anchor is now: 592c103c000000020000 [time_stamp] - DEBUG - NSMMReplicationPlugin - changelog program - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): CSN 592c103c000000020000 found, position set for replay [time_stamp] - DEBUG - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - clcache_get_next_change - load=1 rec=1 csn=592c10e2000000020000 [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Starting [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Read result for message_id 0 [time_stamp] - DEBUG - NSMMReplicationPlugin - replay_update - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): Sending add operation (dn=\"cn=user,ou=People,dc=example,dc=com\" csn=592c10e2000000020000) [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Read result for message_id 0 [time_stamp] - DEBUG - NSMMReplicationPlugin - replay_update - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): Consumer successfully sent operation with csn 592c10e2000000020000 [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Read result for message_id 0 [time_stamp] - DEBUG - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - clcache_adjust_anchorcsn - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - (cscb 0 - state 1) - csnPrevMax (592c10e2000000020000) csnMax (592c10e2000000020000) csnBuf (592c10e2000000020000) csnConsumerMax (592c10e2000000020000) [time_stamp] - DEBUG - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - clcache_load_buffer - rc=-30988 [time_stamp] - DEBUG - NSMMReplicationPlugin - send_updates - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): No more updates to send (cl5GetNextOperationToReplay) [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_waitfor_async_results - 0 5 [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Read result for message_id 0 [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Read result for message_id 0 [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Read result for message_id 5 [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Result 1, 0, 0, 5, (null) [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain - Read result for message_id 5 [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_waitfor_async_results - 5 5 [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_result_threadmain exiting [time_stamp] - DEBUG - agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - clcache_return_buffer - session end: state=5 load=1 sent=1 skipped=0 skipped_new_rid=0 skipped_csn_gt_cons_maxcsn=0 skipped_up_to_date=0 skipped_csn_gt_ruv=0 skipped_csn_covered=0 [time_stamp] - DEBUG - NSMMReplicationPlugin - consumer_connection_extension_acquire_exclusive_access - conn=4 op=3 Acquired consumer connection extension [time_stamp] - DEBUG - NSMMReplicationPlugin - multimaster_extop_StartNSDS50ReplicationRequest - conn=4 
op=3 repl=\"dc=example,dc=com\": Begin incremental protocol [time_stamp] - DEBUG - csngen_adjust_time - gen state before 592c10e30001:1496060130:0:1 [time_stamp] - DEBUG - csngen_adjust_time - gen state after 592c10e40001:1496060130:1:1 [time_stamp] - DEBUG - NSMMReplicationPlugin - replica_get_exclusive_access - conn=4 op=3 repl=\"dc=example,dc=com\": Acquired replica [time_stamp] - DEBUG - NSMMReplicationPlugin - release_replica - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): Successfully released consumer [time_stamp] - DEBUG - NSMMReplicationPlugin - conn_start_linger -agmt=\"cn=meTo_localhost:39001\" (localhost:39001) - Beginning linger on the connection [time_stamp] - DEBUG - NSMMReplicationPlugin - repl5_inc_run - agmt=\"cn=meTo_localhost:39001\" (localhost:39001): State: sending_updates -> wait_for_changes [time_stamp] - DEBUG - NSMMReplicationPlugin - multimaster_extop_StartNSDS50ReplicationRequest - conn=4 op=3 repl=\"dc=example,dc=com\": StartNSDS90ReplicationRequest: response=0 rc=0 [time_stamp] - DEBUG - NSMMReplicationPlugin - consumer_connection_extension_relinquish_exclusive_access - conn=4 op=3 Relinquishing consumer connection extension [time_stamp] - DEBUG - NSMMReplicationPlugin - consumer_connection_extension_acquire_exclusive_access - conn=4 op=4 Acquired consumer connection extension [time_stamp] - DEBUG - NSMMReplicationPlugin - replica_relinquish_exclusive_access - conn=4 op=4 repl=\"dc=example,dc=com\": Released replica held by locking_purl=conn=4 id=3 [time_stamp] - DEBUG - NSMMReplicationPlugin - consumer_connection_extension_relinquish_exclusive_access - conn=4 op=4 Relinquishing consumer connection extension", "[time_stamp] plug-in_name - message [time_stamp] - function - message", "[time_stamp] - DEBUG - NSACLPlugin - acl_access_allowed - conn=15 op=1 (main): Allow search on entry(cn=replication,cn=config): root user", "[time_stamp] - DEBUG - collation_read_config - Reading config file /etc/dirsrv/slapd-supplier_1/slapd-collations.conf [time_stamp] - DEBUG - collation-plugin - collation_read_config - line 16: collation \"\" \"\" \"\" 1 3 2.16.840.1.113730.3.3.2.0.1 default [time_stamp] - DEBUG - collation-plugin - collation_read_config - line 17: collation ar \"\" \"\" 1 3 2.16.840.1.113730.3.3.2.1.1 ar [time_stamp] - DEBUG - collation-plugin - collation_read_config - line 18: collation be \"\" \"\" 1 3 2.16.840.1.113730.3.3.2.2.1 be be-BY", "[time_stamp] - DEBUG - NSACLPlugin - acllist_init_scan - Failed to find root for base: cn=features,cn=config [time_stamp] - DEBUG - NSACLPlugin - acllist_init_scan - Failed to find root for base: cn=config [time_stamp] - DEBUG - NSACLPlugin - acl_access_allowed - # # conn=6 op=1 binddn=\"cn=user,ou=people,dc=example,dc=com\" [time_stamp] - DEBUG - NSACLPlugin - RESOURCE INFO STARTS [time_stamp] - DEBUG - NSACLPlugin - Client DN: cn=user,ou=people,dc=example,dc=com [time_stamp] - DEBUG - NSACLPlugin - resource type:256(search target_DN ) [time_stamp] - DEBUG - NSACLPlugin - Slapi_Entry DN: cn=features,cn=config [time_stamp] - DEBUG - NSACLPlugin - ATTR: objectClass [time_stamp] - DEBUG - NSACLPlugin - rights:search [time_stamp] - DEBUG - NSACLPlugin - RESOURCE INFO ENDS [time_stamp] - DEBUG - NSACLPlugin - acl__scan_for_acis - Num of ALLOW Handles:0, DENY handles:0 [time_stamp] - DEBUG - NSACLPlugin - print_access_control_summary - conn=6 op=1 (main): Deny search on entry(cn=features,cn=config).attr(objectClass) to cn=user,ou=people,dc=example,dc=com: no aci matched the resource", "timestamp: date dn: modified_entry 
changetype: action action:attribute attribute:new_value - replace: modifiersname modifiersname: dn - replace: modifytimestamp modifytimestamp: date -", "... modifying an entry time: 20200108181429 dn: uid=scarter,ou=people,dc=example,dc=com changetype: modify replace: userPassword userPassword: {SSHA}8EcJhJoIgBgY/E5j8JiVoj6W3BLyj9Za/rCPOw== - replace: modifiersname modifiersname: cn=Directory Manager - replace: modifytimestamp modifytimestamp: 20200108231429Z - ... sending a replication update time: 20200109131811 dn: cn=example2,cn=replica,cn=\"dc=example,dc=com\",cn=mapping tree,cn=config changetype: modify replace: nsds5BeginReplicaRefresh nsds5BeginReplicaRefresh: start - replace: modifiersname modifiersname: cn=Directory Manager - replace: modifytimestamp modifytimestamp: 20200109181810Z -", "{ \"date\": \"[time_stamp] \", \"utc_time\": \"1684155510.154562500\", \"event\": \"BIND_SUCCESS\", \"dn\": \"cn=directory manager\", \"bind_method\": \"LDAPI\", \"root_dn\": true, \"client_ip\": \"local\", \"server_ip\": \"\\/run\\/slapd- instance_name .socket\", \"ldap_version\": 3, \"conn_id\": 1, \"op_id\": 0, \"msg\": \"\" } { \"date\": \"[time_stamp] \", \"utc_time\": \"1684155510.163790695\", \"event\": \"BIND_SUCCESS\", \"dn\": \"cn=directory manager\", \"bind_method\": \"LDAPI\", \"root_dn\": true, \"client_ip\": \"local\", \"server_ip\": \"\\/run\\/slapd- instance_name .socket\", \"ldap_version\": 3, \"conn_id\": 2, \"op_id\": 0, \"msg\": \"\" } {'date': '[time_stamp]', 'utc_time': '168485945', 'event': 'BIND_FAILED', 'dn': 'uid=mark,ou=people,dc=example,dc=com', 'bind_method': 'SIMPLE', 'root_dn': 'false', 'client_ip': '127.0.0.1', 'server_ip': '127.0.0.1', 'conn_id': '2', 'op_id': '1', 'msg': 'INVALID_PASSWORD'} {'date': '[time_stamp]', 'utc_time': '168499999', 'event': 'BIND_FAILED', 'dn': 'uid=mike,ou=people,dc=example,dc=com', 'bind_method': 'SIMPLE', 'root_dn': 'false', 'client_ip': '127.0.0.1', 'server_ip': '127.0.0.1', 'conn_id': '7', 'op_id': '1', 'msg': 'NO_SUCH_ENTRY'} {\"date\": \"[time_stamp]\", \"utc_time\": 1657907429, \"event\": \"TCP_ERROR\", \"client_ip\": \"::1\", \"server_ip\": \"::1\", \"ldap_version\": 3, \"conn_id\": 1, \"msg\": \"Bad Ber Tag or uncleanly closed connection - B1\"}", "{ \"date\": \"[time_stamp] \", \"utc_time\": \"1709327649.232748932\", \"event\": \"BIND_SUCCESS\", \"dn\": \"uid=djoe,ou=people,dc=example,dc=com\", \"bind_method\": \"SIMPLE\\/MFA\" , \"root_dn\": false, \"client_ip\": \"::1\", \"server_ip\": \"::1\", \"ldap_version\": 3, \"conn_id\": 1, \"op_id\": 0, \"msg\": \"\" }" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuration_and_schema_reference/assembly_log-files-reference_config-schema-reference-title
Chapter 8. Access resources
Chapter 8. Access resources Automation controller uses a primary key to access individual resource objects. You can access automation controller resources by using resource-specific, human-readable identifiers through the named URL feature. The following example shows the named URL path where you can access a resource object without an auxiliary query string: /api/v2/hosts/host_name++inv_name++org_name/ 8.1. Configuration settings There are two named-URL-related configuration settings available under /api/v2/settings/named-url/: NAMED_URL_FORMATS and NAMED_URL_GRAPH_NODES . NAMED_URL_FORMATS is a read-only key-value pair list of all available named URL identifier formats. The following shows an example NAMED_URL_FORMATS : "NAMED_URL_FORMATS": { "organizations": "<name>", "teams": "<name>++<organization.name>", "credential_types": "<name>+<kind>", "credentials": "<name>++<credential_type.name>+<credential_type.kind>++<organization.name>", "notification_templates": "<name>++<organization.name>", "job_templates": "<name>++<organization.name>", "projects": "<name>++<organization.name>", "inventories": "<name>++<organization.name>", "hosts": "<name>++<inventory.name>++<organization.name>", "groups": "<name>++<inventory.name>++<organization.name>", "inventory_sources": "<name>++<inventory.name>++<organization.name>", "inventory_scripts": "<name>++<organization.name>", "instance_groups": "<name>", "labels": "<name>++<organization.name>", "workflow_job_templates": "<name>++<organization.name>", "workflow_job_template_nodes": "<identifier>++<workflow_job_template.name>++<organization.name>", "applications": "<name>++<organization.name>", "users": "<username>", "instances": "<hostname>" } For each item in NAMED_URL_FORMATS , the key is the API name of the resource that can have a named URL. The value is a string indicating how to form a human-readable unique identifier for that resource. NAMED_URL_FORMATS only lists the resources that can have a named URL; any resource not listed there has no named URL. If a resource can have a named URL, its objects must have a named_url field that represents the object-specific named URL. That field is only visible under detail view, not list view. You can access a specified resource object by using an accurately generated named URL. This applies both to the object itself and to its related URLs. For example, if /api/v2/res_name/obj_slug/ is valid, /api/v2/res_name/obj_slug/related_res_name/ is also valid. NAMED_URL_FORMATS is instructive enough for you to compose human-readable unique identifiers and named URLs yourself. For ease-of-use, every object of a resource that can have a named URL has a related field named_url that displays that object's named URL. You can copy and paste that field for your own custom use. For more information, see the help text of the API browser if a resource object has a named URL. Suppose you want to manually determine the named URL of a label resource object, for example a label with ID 5. To compose the named URL for this specific resource object by using NAMED_URL_FORMATS , first look up the labels field of NAMED_URL_FORMATS to get the identifier format <name>++<organization.name> : The first part of the URL format is <name> , which indicates that you can find the label resource detail in /api/v2/labels/5/ , and look for the name field in the returned JSON. If you have the name field with value Foo , then the first part of the unique identifier is Foo . The second part of the format is the double plus sign ++. That is the delimiter that separates different parts of a unique identifier. Append it to the unique identifier to get Foo++ .
The third part of the format is <organization.name> , which indicates that the field is not in the current label object under investigation, but in an organization that the label object points to. As the format indicates, look up the organization in the related field of the current returned JSON. That field might not exist. If it exists, follow the URL given in that field, for example, /api/v2/organizations/3/ , to get the details of the specific organization, extract its name field, for example, "Default", and append it to the current unique identifier. Since <organizations.name> is the last part of the format, it generates the following named URL: /api/v2/labels/Foo++Default/ . In the case where an organization does not exist in the related field of the label object detail, append an empty string instead. This does not alter the current identifier. Therefore, Foo++ becomes the final unique identifier and the resulting generated named URL becomes /api/v2/labels/Foo++/ . An important aspect of generating a unique identifier for named URL has to do with reserved characters. As the identifier is part of a URL, the following reserved characters by URL standard are encoded by percentage symbols: ;/?:@=&[] . For example, if an organization is named ;/?:@=&[] , its unique identifier should be %3B%2F%3F%3A%40%3D%26%5B%5D . Another special reserved character is + , which is not reserved by URL standard but used by named URL to link different parts of an identifier. It is encoded by [+] . For example, if an organization is named [+] , its unique identifier is %5B[+]%5D , where original [ and ] are percent encoded and + is converted to [+] . Although you cannot manually change NAMED_URL_FORMATS , modifications do occur automatically and expand over time, reflecting underlying resource modification and expansion. Consult the NAMED_URL_FORMATS on the same cluster where you want to use the named URL feature. NAMED_URL_GRAPH_NODES is another read-only list of key-value pairs that exposes the internal graph data structure used to manage named URLs. This is not intended to be human-readable but must be used for programmatically generating named URLs. An example script for generating named URL given the primary key of arbitrary resource objects that can have a named URL, using info provided by NAMED_URL_GRAPH_NODES , can be found in GitHub . 8.2. Identifier format protocol Resources are identifiable by their unique keys, which are tuples of resource fields. Every resource is guaranteed to have its primary key number alone as a unique key, but there might be many other unique keys. A resource can generate an identifier format and, therefore, have a named URL if it has at least one unique key that satisfies the following rules: The key must contain only fields that are either the name field, or text fields with a finite number of possible choices (such as credential type resource's kind field). The only permitted exceptional field that breaks the preceding rule is a many-to-one related field relating to a resource other than itself, which is also allowed to have a slug. If there are resources Foo and Bar , both Foo and Bar contain a name field and a choice field that can only have values "yes" or "no". Additionally, resource Foo has a many-to-one field (a foreign key) relating to Bar , for example fk . Foo has a unique key tuple ( name , choice , fk ) and Bar has a unique key tuple ( name , choice ). Bar can have named URL because it satisfies the preceding first rule. 
Foo can also have a named URL. Even though it breaks the first rule, the extra field that breaks rule number one is the fk field, which is many-to-one related to Bar , and Bar can have a named URL. For resources satisfying rule number one, their human-readable unique identifiers are combinations of their unique key fields, delimited by + . Specifically, resource Bar in the preceding example has the slug format <name>+<choice> . Note that the field order matters in slug format and the name field always comes first if present, followed by the remaining fields arranged in lexicographic order of field name. For example, if Bar also has an a_choice field satisfying rule one and the unique key becomes ( name , choice , a_choice ), its slug format becomes <name>+<a_choice>+<choice> . For resources satisfying rule number two, if traced back through the extra foreign key fields, the result is a tree of resources that identify objects of that resource. To generate the identifier format, each resource in the traceback tree generates its own standalone part of the format, using all fields except the foreign keys. Finally, all parts are combined by ++ in the following order: Put the standalone format as the first identifier part. Recursively generate unique identifier formats for each resource that the underlying resource points to by using a foreign key (a child of a traceback tree node). Treat the generated unique identifier formats as the rest of the identifier components. Sort them in lexicographic order of the corresponding foreign keys. Combine all components together using ++ to generate the final identifier format. When generating an identifier format for resource Foo , automation controller generates the standalone formats, <name>+<choice> for Foo and <fk.name>+<fk.choice> for Bar , then combines them together to be <name>+<choice>++<fk.name>+<fk.choice> . When generating identifiers according to the given identifier format, there are cases where a foreign key might point to nowhere. In this case, automation controller substitutes the part of the format corresponding to the resource the foreign key should point to with an empty string. For example, if a Foo object has name ="alice" and choice ="yes", but its fk field is None , its resulting identifier is alice+yes++ .
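The following shell sketch illustrates the protocol described above by retrieving a label object through its named URL. It is a hypothetical example: the controller hostname, the OAuth bearer token, and the Foo++Default identifier are placeholders that assume the label and organization names used in the example above.
# Read the named_url field from the label's detail view, then fetch the same object through its named URL.
curl -s -H "Authorization: Bearer <oauth_token>" https://controller.example.com/api/v2/labels/5/ | grep named_url
curl -s -H "Authorization: Bearer <oauth_token>" "https://controller.example.com/api/v2/labels/Foo++Default/"
# Reserved characters in names must be percent encoded, for example an organization named ";/?:@=&[]":
curl -s -H "Authorization: Bearer <oauth_token>" "https://controller.example.com/api/v2/labels/Foo++%3B%2F%3F%3A%40%3D%26%5B%5D/"
The same pattern works for any resource listed in NAMED_URL_FORMATS, including related URLs such as /api/v2/labels/Foo++Default/related_res_name/.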
[ "/api/v2/hosts/host_name++inv_name++org_name/", "\"NAMED_URL_FORMATS\": { \"organizations\": \"<name>\", \"teams\": \"<name>++<organization.name>\", \"credential_types\": \"<name>+<kind>\", \"credentials\": \"<name>++<credential_type.name>+<credential_type.kind>++<organization.name>\", \"notification_templates\": \"<name>++<organization.name>\", \"job_templates\": \"<name>++<organization.name>\", \"projects\": \"<name>++<organization.name>\", \"inventories\": \"<name>++<organization.name>\", \"hosts\": \"<name>++<inventory.name>++<organization.name>\", \"groups\": \"<name>++<inventory.name>++<organization.name>\", \"inventory_sources\": \"<name>++<inventory.name>++<organization.name>\", \"inventory_scripts\": \"<name>++<organization.name>\", \"instance_groups\": \"<name>\", \"labels\": \"<name>++<organization.name>\", \"workflow_job_templates\": \"<name>++<organization.name>\", \"workflow_job_template_nodes\": \"<identifier>++<workflow_job_template.name>++<organization.name>\", \"applications\": \"<name>++<organization.name>\", \"users\": \"<username>\", \"instances\": \"<hostname>\" }" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_api_overview/controller-api-access-resources
Part X. Monitor Caches and Cache Managers
Part X. Monitor Caches and Cache Managers
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/part-monitor_caches_and_cache_managers
13.4. OpenLDAP Configuration Files
13.4. OpenLDAP Configuration Files OpenLDAP configuration files are installed into the /etc/openldap/ directory. The following is a brief list highlighting the most important directories and files: /etc/openldap/ldap.conf - This is the configuration file for all client applications which use the OpenLDAP libraries such as ldapsearch , ldapadd , Sendmail, Evolution , and Gnome Meeting . /etc/openldap/slapd.conf - This is the configuration file for the slapd daemon. Refer to Section 13.6.1, "Editing /etc/openldap/slapd.conf " for more information about this file. /etc/openldap/schema/ directory - This subdirectory contains the schema used by the slapd daemon. Refer to Section 13.5, "The /etc/openldap/schema/ Directory" for more information. Note If the nss_ldap package is installed, it creates a file named /etc/ldap.conf . This file is used by the PAM and NSS modules supplied by the nss_ldap package. Refer to Section 13.7, "Configuring a System to Authenticate Using OpenLDAP" for more information.
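As a brief, hypothetical illustration of how these files are used, the following shell sketch shows a client query that relies on the defaults (such as BASE and URI) read from /etc/openldap/ldap.conf, and a server restart after editing /etc/openldap/slapd.conf; verify the service name used on your system before running it.
# Simple anonymous search; ldapsearch takes its default server and search base from /etc/openldap/ldap.conf
# when the -H and -b options are not given on the command line.
ldapsearch -x
# After changing /etc/openldap/slapd.conf or the schema files, restart the directory server.
service ldap restart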
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-ldap-files
Preface
Preface The Red Hat Quay application programming interface (API) provides a comprehensive, RESTful interface for managing and automating tasks within Red Hat Quay. Designed around the OAuth 2.0 protocol , this API enables secure, fine-grained access to Red Hat Quay resources, and allows administrators and users to perform such actions as creating repositories, managing images, setting permissions, and more. Red Hat Quay follows Semantic Versioning (SemVer) principles, ensuring predictable API stability across releases, such as: Major releases : Introduce new capabilities. Might include breaking changes to API compatibility. For example, the API of Red Hat Quay 2.0 differs from Red Hat Quay 3.0 . Minor releases : Add new functionality in a backward-compatible manner. For example, a 3.y release adds functionality to the version 3 release. Patch releases : Deliver bug fixes and improvements while preserving backward compatibility with minor releases, such as 3.y.z . The following guide describes the Red Hat Quay API in more detail, and provides details on the following topics: OAuth 2 access tokens and how they compare to traditional API tokens and Red Hat Quay's robot tokens Generating an OAuth 2 access token Best practices for token management OAuth 2 access token capabilities Using the Red Hat Quay API Red Hat Quay API configuration examples
null
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_api_guide/pr01
A.2. Designer Metadata Usage Requirements In JBoss Data Virtualization Runtime
A.2. Designer Metadata Usage Requirements In JBoss Data Virtualization Runtime Based on the metadata exposed by the Teiid Designer the below table shows which fields are required and how that information is being used in JBoss Data Virtualization runtime. Table A.2. Data Usage for Tables TABLE Type In Designer In Metadata API Required Description FullName String Yes Yes Yes Name of the Table NameInSource String Yes Yes Yes Name of Table in the source system, for view this can be empty, also used on variety of use cases Cardinality Integer Yes Yes Yes Cardinality is used to calculate the cost of source node access TableType Integer Yes Yes Yes Table,View,Document,XmlMappingClass,XmlStagingTable,MaterializedTable IsVirtual Boolean Yes Yes Yes Used to find if this is source table Vs view IsSystem Boolean Yes Yes No Only used for System metadata IsMaterialized Boolean Yes Yes Yes To identify that the table is materialized SupportsUpdate Boolean Yes Yes Yes To allow updates on the table PrimaryKeyID String Yes KeyRecord Yes Used for creating indexes on temp tables and to create default update/delete procedures ForeignKeyIDs Collection Yes List<ForeignKey> Yes Used in Planning of query (rule raise access) IndexIDs Collection Yes List<KeyRecord> Yes Used for creating indexes on temp tables and in planning (estimate predicate cost) UniqueKeyIDs Collection Yes List<KeyRecord> Yes Used for query planning AccessPatternIDs Collection Yes List<KeyRecord> Yes Used for enforcing the criteria on query MaterializedTableID String Yes Table Yes Reference to Materialization table insertEnabled Boolean ** Yes Yes Flag for checking insert procedure is enabled for view deleteEnabled Boolean ** Yes Yes Flag for checking delete procedure is enabled for view updateEnabled Boolean ** Yes Yes Flag for checking update procedure is enabled for view Select Transformation String ** Yes Yes Transformation for Select in case of View Insert Plan String ** Yes Yes Transformation for Insert in case of View Update Plan String ** Yes Yes Transformation for Update in case of View Delete Plan String ** Yes Yes Transformation for Delete in case of View Bindings Collection ** Yes Yes XML Document SchemaPaths Collection ** Yes Yes XML Document Table A.3. Data Usage for Columns COLUMN Type In Designer In Metadata API Required Description FullName String Yes Yes Yes Name of the column NameInSource String Yes Yes Yes Name of the column in source system IsSelectable Boolean Yes Yes Yes Column is allowed in select IsUpdatable Boolean Yes Yes Yes Column is allowed in Update/Insert/Delete NullType Integer Yes Yes Yes Used for validation if null value allowed IsAutoIncrementable Boolean Yes Yes Yes During insert used to validate if a value is required or not IsCaseSensitive Boolean Yes Yes ?? ?? IsSigned Boolean Yes Yes ?? Used in System Metadata IsCurrency Boolean Yes Yes No Only used for System metadata IsFixedLength Boolean Yes Yes No Only used for System metadata IsTranformationInputParameter Boolean Yes ?? ?? ?? SearchType Integer Yes Yes Yes Used for defining the capability of the source Length Integer Yes Yes ?? Used in System Metadata Scale Integer Yes Yes ?? Used in System Metadata Precision Integer Yes Yes ?? Used in System Metadata CharOctetLength Integer Yes Yes No only used for System metadata Radix Integer Yes Yes ?? 
Used in System Metadata DistinctValues Integer Yes Yes Yes Used for cost calculations, System metadata NullValues Integer Yes Yes Yes Used for cost calculations, System metadata MinValue String Yes Yes Yes Used for cost calculations, System metadata MaxValue String Yes Yes Yes Used for cost calculations, System metadata Format String Yes Yes No Only used for System metadata RuntimeType String Yes DataType Yes Data Type NativeType String Yes Yes Yes Translators can use this field to further plan DatatypeObjectID String Yes ?? ?? DefaultValue String Yes Yes Yes Used for Insert and procedure execute operations when the values are not supplied Position Integer Yes Yes Yes Used in the index calculations Table A.4. Data Usage for Primary Keys PRIMARY KEY Type In Designer In Metadata API Required Description FullName String See the KeyRecord, See Table NameInSource String ColumnIDs Collection ForeignKeyIDs Collection Extends KeyRecord Table A.5. Data Usage for Unique Keys UNIQUE KEY Type In Designer In Metadata API Required Description FullName String See the KeyRecord, See Table NameInSource String ColumnIDs Collection ForeignKeyIDs Collection Table A.6. Data Usage for Indexes INDEX Type In Designer In Metadata API Required Description FullName String See the KeyRecord, See Table NameInSource String ColumnIDs Collection Table A.7. Data Usage for Access Patterns ACCESS PATTERNS Type In Designer In Metadata API Required Description FullName String See the KeyRecord, See Table NameInSource String ColumnIDs Collection Table A.8. Data Usage for Result Sets RESULT SET Type In Designer In Metadata API Required Description FullName String See DataType NameInSource String ColumnIDs Collection Table A.9. Data Usage for Foreign Keys FOREIGN KEY Type In Designer In Metadata API Required Description FullName String See the KeyRecord, See Table NameInSource String ColumnIDs Collection UniqueKeyID String Table A.10. Data Usage for Data Types DATA TYPE Type In Designer In Metadata API Required Description FullName String No Only used for System metadata NameInSource String No Only used for System metadata Length Integer No Only used for System metadata PrecisionLength Integer No Only used for System metadata Scale Integer No Only used for System metadata Radix Integer No Only used for System metadata IsSigned Boolean No Only used for System metadata IsAutoIncrement Boolean No Only used for System metadata IsCaseSensitive Boolean No Only used for System metadata Type Integer No Only used for System metadata SearchType Integer No Only used for System metadata NullType Integer No Only used for System metadata JavaClassName String Yes Maps to runtime type based on java class name RuntimeTypeName String No Only used for System metadata DatatypeID String No Only used for System metadata BaseTypeID String No Only used for System metadata PrimitiveTypeID String No Only used for System metadata VarietyType Integer No Only used for System metadata VarietyProps Collection No Only used for System metadata Table A.11. 
Data Usage for Procedures PROCEDURE Type In Designer In Metadata API Required Description FullName String Yes Yes Yes Name of the column NameInSource String Yes Yes Yes Name of the column in source system IsFunction Boolean Yes Yes Determines if this function IsVirtual Boolean Yes Yes If Function then UDF else stored procedure ParametersIDs Collection Yes Yes Parameter List ResultSetID String Yes Yes Result set columns UpdateCount Integer Yes Yes Update count defines the number of sources being updated, only applicable for virtual procedures Table A.12. Data Usage for Procedure Parameters PROCEDURE PARAMETER Type In Designer In Metadata API Required Description ObjectID String Same as Column FullName String Same as Column nameInSource String Same as Column defaultValue String Same as Column RuntimeType String Same as Column DatatypeObjectID String Same as Column Length Integer Same as Column Radix Integer Same as Column Scale Integer Same as Column NullType Integer Same as Column Precision Integer Same as Column Position Integer Same as Column Type String Yes Defines parameter is IN/OUT/RETURN Optional Boolean No Defines if the parameter is optional or not, only used system metadata Table A.13. Data Usage for SQL Transformations SQL TRANSFORMATION(**) Type In Designer In Metadata API Required Description VirtualGroupName String Yes No Yes See Table, the properties defined on Table TransformedObjectID String Yes No Yes See Table, the properties defined on Table TransformationObjectID String Yes No Yes See Table, the properties defined on Table TransformationSql String Yes No Yes See Table, the properties defined on Table Bindings Collection Yes No Yes See Table, the properties defined on Table SchemaPaths Collection Yes No Yes See Table, the properties defined on Table Table A.14. Data Usage for VDBs VDB Type In Designer In Metadata API Required Description FullName String Yes vdb.xml Yes Name of the VDB NameInSource String ?? No No Not required Version String Yes vdb.xml Yes VDB version Identifier String Yes No No Not required Description String Yes vdb.xml No Used by System metadata ProducerName String Yes No No Not required ProducerVersion String Yes No No Not required Provider String Yes No No Not required TimeLastChanged String Yes No No Not required TimeLastProduced String Yes No No Not required ModelIDs Collection Yes vdb.xml Yes Defines the model list in a VDB Table A.15. Data Usage for Annotations ANNOTATION Type In Designer In Metadata API Required Description FullName String Yes Yes No System metadata, as description on procedure parameter NameInSource String Yes No No Not required Description String Yes No No Not required
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/designer_metadata_usage_requirements_in_teiid_runtime
Configuring a Cost-Optimized SAP S/4HANA HA cluster (HANA System Replication + ENSA2) using the RHEL HA Add-On
Configuring a Cost-Optimized SAP S/4HANA HA cluster (HANA System Replication + ENSA2) using the RHEL HA Add-On Red Hat Enterprise Linux for SAP Solutions 9 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_a_cost-optimized_sap_s4hana_ha_cluster_hana_system_replication_ensa2_using_the_rhel_ha_add-on/index
probe::ioscheduler.elv_completed_request
probe::ioscheduler.elv_completed_request Name probe::ioscheduler.elv_completed_request - Fires when a request is completed Synopsis ioscheduler.elv_completed_request Values name Name of the probe point rq Address of the request elevator_name The type of I/O elevator currently enabled disk_major Disk major number of the request disk_minor Disk minor number of the request rq_flags Request flags
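A minimal, hypothetical example of using this probe from the shell follows; it only prints the values listed above and assumes that the systemtap package and the matching kernel debuginfo are installed.
# Print the elevator name, device numbers, and request flags each time a request completes; stop with Ctrl+C.
stap -e 'probe ioscheduler.elv_completed_request { printf("%s completed request on %d:%d flags=%d\n", elevator_name, disk_major, disk_minor, rq_flags) }'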
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ioscheduler-elv-completed-request
C.2. Desktop Environments and Window Managers
C.2. Desktop Environments and Window Managers Once an X server is running, X client applications can connect to it and create a GUI for the user. A range of GUIs are available with Red Hat Enterprise Linux, from the rudimentary Tab Window Manager (twm) to the highly developed and interactive desktop environments (such as GNOME or KDE ) that most Red Hat Enterprise Linux users are familiar with. To create the latter, more comprehensive GUI, two main classes of X client application must connect to the X server: a window manager and a desktop environment . C.2.1. Maximum number of concurrent GUI sessions Multiple GUI sessions for different users can be run at the same time on the same machine. The maximum number of concurrent GUI sessions is limited by the hardware, especially by the memory size, and by the workload demands of the running applications. For common PCs, the maximum possible number of concurrent GUI sessions is not higher than 10 to 15, depending on the previously described circumstances. Logging the same user into GNOME more than once on the same machine is not supported, because some applications could terminate unexpectedly.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-x-clients
Chapter 8. Strategies for repartitioning a disk
Chapter 8. Strategies for repartitioning a disk There are different approaches to repartitioning a disk. These include: Unpartitioned free space is available. An unused partition is available. Free space in an actively used partition is available. Note The following examples are simplified for clarity and do not reflect the exact partition layout when actually installing Red Hat Enterprise Linux. 8.1. Using unpartitioned free space Partitions that are already defined and do not span the entire hard disk leave unallocated space that is not part of any defined partition. The following diagram shows what this might look like. Figure 8.1. Disk with unpartitioned free space The first diagram represents a disk with one primary partition and an undefined partition with unallocated space. The second diagram represents a disk with two defined partitions with allocated space. An unused hard disk also falls into this category. The only difference is that all the space is not part of any defined partition. On a new disk, you can create the necessary partitions from the unused space. Most preinstalled operating systems are configured to take up all available space on a disk drive. 8.2. Using space from an unused partition In the following example, the first diagram represents a disk with an unused partition. The second diagram represents reallocating an unused partition for Linux. Figure 8.2. Disk with an unused partition To use the space allocated to the unused partition, delete the partition and then create the appropriate Linux partition instead. Alternatively, during the installation process, delete the unused partition and manually create new partitions. 8.3. Using free space from an active partition This process can be difficult to manage because an active partition, which is already in use, contains the required free space. In most cases, hard disks of computers with preinstalled software contain one larger partition holding the operating system and data. Warning If you want to use an operating system (OS) on an active partition, you must reinstall the OS. Be aware that some computers, which include pre-installed software, do not include installation media to reinstall the original OS. Check whether this applies to your OS before you destroy an original partition and the OS installation. To optimize the use of available free space, you can use the methods of destructive or non-destructive repartitioning. 8.3.1. Destructive repartitioning Destructive repartitioning destroys the partition on your hard drive and creates several smaller partitions instead. Back up any needed data from the original partition, because this method deletes the complete contents. After creating a smaller partition for your existing operating system, you can: Reinstall software. Restore your data. Start your Red Hat Enterprise Linux installation. The following diagram is a simplified representation of using the destructive repartitioning method. Figure 8.3. Destructive repartitioning action on disk Warning This method deletes all data previously stored in the original partition. 8.3.2. Non-destructive repartitioning Non-destructive repartitioning resizes partitions without any data loss. This method is reliable; however, it takes longer processing time on large drives. The following is a list of methods that can help initiate non-destructive repartitioning. Compress existing data The storage location of some data cannot be changed.
This can prevent the resizing of a partition to the required size, and ultimately lead to a destructive repartition process. Compressing data in an already existing partition can help you resize your partitions as needed. It can also help to maximize the free space available. The following diagram is a simplified representation of this process. Figure 8.4. Data compression on a disk To avoid any possible data loss, create a backup before continuing with the compression process. Resize the existing partition By resizing an already existing partition, you can free up more space. Depending on your resizing software, the results may vary. In the majority of cases, you can create a new unformatted partition of the same type as the original partition. The steps you take after resizing can depend on the software you use. In the following example, the best practice is to delete the new DOS (Disk Operating System) partition, and create a Linux partition instead. Verify what is most suitable for your disk before initiating the resizing process. Figure 8.5. Partition resizing on a disk Optional: Create new partitions Some pieces of resizing software support Linux-based systems. In such cases, there is no need to delete the newly created partition after resizing. Creating a new partition afterwards depends on the software you use. The following diagram represents the disk state before and after creating a new partition. Figure 8.6. Disk with final partition configuration
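The following shell sketch is a hypothetical illustration of how you might inspect a disk before choosing one of these strategies; /dev/sda is a placeholder device name, and the print free output shows any unpartitioned free space described in the first strategy.
# List the existing partitions and any unallocated space on the disk (read-only inspection).
parted /dev/sda print free
# Check which block devices and file systems are currently in use before deleting or resizing anything.
lsblk -f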
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/strategies-for-repartitioning-a-disk_managing-file-systems
Appendix A. Versioning information
Appendix A. Versioning information Documentation last updated on Friday, July 14, 2023.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_optaplanner/8.38/html/developing_solvers_with_red_hat_build_of_optaplanner/versioning-information
7.3. Additional Resources
7.3. Additional Resources Use these sources to learn more about LVM. 7.3.1. Installed Documentation rpm -qd lvm - This command shows all the documentation available from the lvm package, including man pages. lvm help - This command shows all LVM commands available.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/logical_volume_manager_lvm-additional_resources
Chapter 2. Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Agent-based Installer
Chapter 2. Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Agent-based Installer In OpenShift Container Platform 4.16, you can use the Agent-based Installer to install a cluster on Oracle(R) Cloud Infrastructure (OCI), so that you can run cluster workloads on infrastructure that supports dedicated, hybrid, public, and multiple cloud environments. Note You can deploy OpenShift Container Platform on a Dedicated Region (Oracle documentation) the same as any region from Oracle Cloud Infrastructure (OCI). 2.1. The Agent-based Installer and OCI overview You can install an OpenShift Container Platform cluster on Oracle(R) Cloud Infrastructure (OCI) by using the Agent-based Installer. Both Red Hat and Oracle test, validate, and support running OCI and Oracle(R) Cloud VMware Solution (OCVS) workloads in an OpenShift Container Platform cluster on OCI. The Agent-based installer provides the ease of use of the Assisted Installation service, but with the capability to install a cluster in either a connected or disconnected environment. The following diagrams show workflows for connected and disconnected environments: Figure 2.1. Workflow for using the Agent-based installer in a connected environment to install a cluster on OCI Figure 2.2. Workflow for using the Agent-based installer in a disconnected environment to install a cluster on OCI OCI provides services that can meet your regulatory compliance, performance, and cost-effectiveness needs. OCI supports 64-bit x86 instances and 64-bit ARM instances. Additionally, OCI provides an OCVS service where you can move VMware workloads to OCI with minimal application re-architecture. Note Consider selecting a nonvolatile memory express (NVMe) drive or a solid-state drive (SSD) for your boot disk, because these drives offer low latency and high throughput capabilities for your boot disk. By running your OpenShift Container Platform cluster on OCI, you can access the following capabilities: Compute flexible shapes, where you can customize the number of Oracle(R) CPUs (OCPUs) and memory resources for your VM. With access to this capability, a cluster's workload can perform operations in a resource-balanced environment. You can find all RHEL-certified OCI shapes by going to the Oracle page on the Red Hat Ecosystem Catalog portal. Block Volume storage, where you can configure scaling and auto-tuning settings for your storage volume, so that the Block Volume service automatically adjusts the performance level to optimize performance. OCVS, where you can deploy a cluster in a public-cloud environment that operates on a VMware(R) vSphere software-defined data center (SDDC). You continue to retain full-administrative control over your VMware vSphere environment, but you can use OCI services to improve your applications on flexible, scalable, and secure infrastructure. Important To ensure the best performance conditions for your cluster workloads that operate on OCI and on the OCVS service, ensure volume performance units (VPUs) for your block volume is sized for your workloads. The following list provides some guidance in selecting the VPUs needed for specific performance needs: Test or proof of concept environment: 100 GB, and 20 to 30 VPUs. Basic environment: 500 GB, and 60 VPUs. Heavy production environment: More than 500 GB, and 100 or more VPUs. Consider reserving additional VPUs to provide sufficient capacity for updates and scaling activities. For more information about VPUs, see Volume Performance Units (Oracle documentation). 
Additional resources Installation process Internet access for OpenShift Container Platform Understanding the Agent-based Installer Overview of the Compute Service (Oracle documentation) Volume Performance Units (Oracle documentation) Instance Sizing Recommendations for OpenShift Container Platform on OCI Nodes (Oracle documentation) 2.2. Creating OCI infrastructure resources and services You must create an OCI environment on your virtual machine (VM) shape. By creating this environment, you can install OpenShift Container Platform and deploy a cluster on an infrastructure that supports a wide range of cloud options and strong security policies. Having prior knowledge of OCI components can help you with understanding the concept of OCI resources and how you can configure them to meet your organizational needs. The Agent-based installer method for installing an OpenShift Container Platform cluster on OCI requires that you manually create OCI resources and services. Important To ensure compatibility with OpenShift Container Platform, you must set A as the record type for each DNS record and name records as follows: api.<cluster_name>.<base_domain> , which targets the apiVIP parameter of the API load balancer. api-int.<cluster_name>.<base_domain> , which targets the apiVIP parameter of the API load balancer. *.apps.<cluster_name>.<base_domain> , which targets the ingressVIP parameter of the Ingress load balancer. The api.* and api-int.* DNS records relate to control plane machines, so you must ensure that all nodes in your installed OpenShift Container Platform cluster can access these DNS records. Prerequisites You configured an OCI account to host the OpenShift Container Platform cluster. See Prerequisites (Oracle documentation) . Procedure Create the required OCI resources and services. See OCI Resources Needed for Using the Agent-based Installer (Oracle documentation) . Additional resources Learn About Oracle Cloud Basics (Oracle documentation) 2.3. Creating configuration files for installing a cluster on OCI You need to create the install-config.yaml and the agent-config.yaml configuration files so that you can use the Agent-based Installer to generate a bootable ISO image. The Agent-based installation comprises a bootable ISO that has the Assisted discovery agent and the Assisted Service. Both of these components are required to perform the cluster installation, but the latter component runs on only one of the hosts. At a later stage, you must follow the steps in the Oracle documentation for uploading your generated agent ISO image to Oracle's default Object Storage bucket, which is the initial step for integrating your OpenShift Container Platform cluster on Oracle(R) Cloud Infrastructure (OCI). Note You can also use the Agent-based Installer to generate or accept Zero Touch Provisioning (ZTP) custom resources. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing the method for users. You have read the "Preparing to install with the Agent-based Installer" documentation. You downloaded the Agent-Based Installer and the command-line interface (CLI) from the Red Hat Hybrid Cloud Console. You have logged in to the OpenShift Container Platform with administrator privileges. Procedure For a disconnected environment, mirror the Mirror registry for Red Hat OpenShift to your local container image registry. 
Important Check that your openshift-install binary version relates to your local image container registry and not a shared registry, such as Red Hat Quay. USD ./openshift-install version Example output for a shared registry binary ./openshift-install 4.16.0 built from commit ae7977b7d1ca908674a0d45c5c243c766fa4b2ca release image registry.ci.openshift.org/origin/release:4.16ocp-release@sha256:0da6316466d60a3a4535d5fed3589feb0391989982fba59d47d4c729912d6363 release architecture amd64 Configure the install-config.yaml configuration file to meet the needs of your organization. Example install-config.yaml configuration file that demonstrates setting an external platform # install-config.yaml apiVersion: v1 baseDomain: <base_domain> 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networkType: OVNKubernetes machineNetwork: - cidr: <ip_address_from_cidr> 2 serviceNetwork: - 172.30.0.0/16 compute: - architecture: amd64 3 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 4 hyperthreading: Enabled name: master replicas: 3 platform: external: platformName: oci 5 cloudControllerManager: External sshKey: <public_ssh_key> 6 pullSecret: '<pull_secret>' 7 # ... 1 The base domain of your cloud provider. 2 The IP address from the virtual cloud network (VCN) that the CIDR allocates to resources and components that operate on your network. 3 4 Depending on your infrastructure, you can select either x86_64 or amd64 . 5 Set OCI as the external platform, so that OpenShift Container Platform can integrate with OCI. 6 Specify your SSH public key. 7 The pull secret that you need for authentication purposes when downloading container images for OpenShift Container Platform components and services, such as Quay.io. See Install OpenShift Container Platform 4 from the Red Hat Hybrid Cloud Console. Create a directory on your local system named openshift . Important Do not move the install-config.yaml and agent-config.yaml configuration files to the openshift directory. Complete the steps in the "Configuration Files" section of the Oracle documentation to download Oracle Cloud Controller Manager (CCM) and Oracle Container Storage Interface (CSI) manifests as an archive file and save the archive file in your openshift directory. You need the Oracle CCM manifests for deploying the Oracle CCM during cluster installation so that OpenShift Container Platform can connect to the external OCI platform. You need the Oracle CSI custom manifests for deploying the Oracle CSI driver during cluster installation so that OpenShift Container Platform can claim required objects from OCI. Access the custom manifest files that are provided in the "Configuration Files" section of the Oracle documentation. Change the oci-cloud-controller-manager secret that is defined in the oci-ccm.yml configuration file to match your organization's region, compartment OCID, VCN OCID, and the subnet OCID from the load balancer. Use the Agent-based Installer to generate a minimal ISO image, which excludes the rootfs image, by entering the following command in your OpenShift Container Platform CLI. You can use this image later in the process to boot all your cluster's nodes. USD ./openshift-install agent create image --log-level debug The command also completes the following actions: Creates a subdirectory, ./<installation_directory>/auth , and places the kubeadmin-password and kubeconfig files in the subdirectory.
Creates a rendezvousIP file based on the IP address that you specified in the agent-config.yaml configuration file. Optional: Any modifications you made to agent-config.yaml and install-config.yaml configuration files get imported to the Zero Touch Provisioning (ZTP) custom resources. Important The Agent-based Installer uses Red Hat Enterprise Linux CoreOS (RHCOS). The rootfs image, which is mentioned in a later listed item, is required for booting, recovering, and repairing your operating system. Configure the agent-config.yaml configuration file to meet your organization's requirements. Example agent-config.yaml configuration file that sets values for an IPv4 formatted network. apiVersion: v1alpha1 metadata: name: <cluster_name> 1 namespace: <cluster_namespace> 2 rendezvousIP: <ip_address_from_CIDR> 3 bootArtifactsBaseURL: <server_URL> 4 # ... 1 The cluster name that you specified in your DNS record. 2 The namespace of your cluster on OpenShift Container Platform. 3 If you use IPv4 as the network IP address format, ensure that you set the rendezvousIP parameter to an IPv4 address that the VCN's Classless Inter-Domain Routing (CIDR) method allocates on your network. Also ensure that at least one instance from the pool of instances that you booted with the ISO matches the IP address value you set for rendezvousIP . 4 The URL of the server where you want to upload the rootfs image. Apply one of the following two updates to your agent-config.yaml configuration file: For a disconnected network: After you run the command to generate a minimal ISO Image, the Agent-based installer saves the rootfs image into the ./<installation_directory>/boot-artifacts directory on your local system. Use your preferred web server, such as any Hypertext Transfer Protocol daemon ( httpd ), to upload rootfs to the location stated in the bootArtifactsBaseURL parameter in the agent-config.yaml configuration file. For example, if the bootArtifactsBaseURL parameter states http://192.168.122.20 , you would upload the generated rootfs image to this location, so that the Agent-based installer can access the image from http://192.168.122.20/agent.x86_64-rootfs.img . After the Agent-based installer boots the minimal ISO for the external platform, the Agent-based Installer downloads the rootfs image from the http://192.168.122.20/agent.x86_64-rootfs.img location into the system memory. Note The Agent-based Installer also adds the value of the bootArtifactsBaseURL to the minimal ISO Image's configuration, so that when the Operator boots a cluster's node, the Agent-based Installer downloads the rootfs image into system memory. For a connected network: You do not need to specify the bootArtifactsBaseURL parameter in the agent-config.yaml configuration file. The default behavior of the Agent-based Installer reads the rootfs URL location from https://rhcos.mirror.openshift.com . After the Agent-based Installer boots the minimal ISO for the external platform, the Agent-based Installer then downloads the rootfs file into your system's memory from the default RHCOS URL. Important Consider that the full ISO image, which is in excess of 1 GB, includes the rootfs image. The image is larger than the minimal ISO Image, which is typically less than 150 MB. 
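For the disconnected case described above, the following shell sketch shows one hypothetical way to publish the generated rootfs image at the bootArtifactsBaseURL location; the web root, IP address, and port mirror the http://192.168.122.20 example and are placeholders for your own web server setup.
# Copy the generated rootfs image from the boot-artifacts directory to a directory that is served over HTTP.
mkdir -p /var/www/html/
cp ./<installation_directory>/boot-artifacts/agent.x86_64-rootfs.img /var/www/html/
# Any web server works; one simple option for a lab setup is Python's built-in HTTP server on port 80.
cd /var/www/html/ && python3 -m http.server 80
# Nodes booted from the minimal ISO then download http://192.168.122.20/agent.x86_64-rootfs.img into memory.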
Additional resources About OpenShift Container Platform installation Selecting a cluster installation type Preparing to install with the Agent-based Installer Downloading the Agent-based Installer Mirroring the OpenShift Container Platform image repository Optional: Using ZTP manifests 2.4. Configuring your firewall for OpenShift Container Platform Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires. When using a firewall, make additional configurations to the firewall so that OpenShift Container Platform can access the sites that it requires to function. For a disconnected environment, you must mirror content from both Red Hat and Oracle. This environment requires that you create firewall rules to expose your firewall to specific ports and registries. Note If your environment has a dedicated load balancer in front of your OpenShift Container Platform cluster, review the allowlists between your firewall and load balancer to prevent unwanted network restrictions to your cluster. Procedure Set the following registry URLs for your firewall's allowlist: URL Port Function registry.redhat.io 443 Provides core container images access.redhat.com 443 Hosts a signature store that a container client requires for verifying images pulled from registry.access.redhat.com . In a firewall environment, ensure that this resource is on the allowlist. registry.access.redhat.com 443 Hosts all the container images that are stored on the Red Hat Ecosystem Catalog, including core container images. quay.io 443 Provides core container images cdn.quay.io 443 Provides core container images cdn01.quay.io 443 Provides core container images cdn02.quay.io 443 Provides core container images cdn03.quay.io 443 Provides core container images cdn04.quay.io 443 Provides core container images cdn05.quay.io 443 Provides core container images cdn06.quay.io 443 Provides core container images sso.redhat.com 443 The https://console.redhat.com site uses authentication from sso.redhat.com You can use the wildcards *.quay.io and *.openshiftapps.com instead of cdn.quay.io and cdn0[1-6].quay.io in your allowlist. You can use the wildcard *.access.redhat.com to simplify the configuration and ensure that all subdomains, including registry.access.redhat.com , are allowed. When you add a site, such as quay.io , to your allowlist, do not add a wildcard entry, such as *.quay.io , to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, image downloads are denied when the initial download request redirects to a hostname such as cdn01.quay.io . Set your firewall's allowlist to include any site that provides resources for a language or framework that your builds require. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights: URL Port Function cert-api.access.redhat.com 443 Required for Telemetry api.access.redhat.com 443 Required for Telemetry infogw.api.openshift.com 443 Required for Telemetry console.redhat.com 443 Required for Telemetry and for insights-operator Set your firewall's allowlist to include the following registry URLs: URL Port Function api.openshift.com 443 Required both for your cluster token and to check if updates are available for the cluster. rhcos.mirror.openshift.com 443 Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. 
Set your firewall's allowlist to include the following external URLs. Each repository URL hosts OCI containers. Consider mirroring images to as few repositories as possible to reduce any performance issues. URL Port Function k8s.gcr.io port A Kubernetes registry that hosts container images for a community-based image registry. This image registry is hosted on a custom Google Container Registry (GCR) domain. ghcr.io port A GitHub image registry where you can store and manage Open Container Initiative images. Requires an access token to publish, install, and delete private, internal, and public packages. storage.googleapis.com 443 A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. registry.k8s.io port Replaces the k8s.gcr.io image registry because the k8s.gcr.io image registry does not support other platforms and vendors. 2.5. Running a cluster on OCI To run a cluster on Oracle(R) Cloud Infrastructure (OCI), you must upload the generated agent ISO image to the default Object Storage bucket on OCI. Additionally, you must create a compute instance from the supplied base image, so that your OpenShift Container Platform and OCI can communicate with each other for the purposes of running the cluster on OCI. Note OCI supports the following OpenShift Container Platform cluster topologies: Installing an OpenShift Container Platform cluster on a single node. A highly available cluster that has a minimum of three control plane instances and two compute instances. A compact three-node cluster that has a minimum of three control plane instances. Prerequisites You generated an agent ISO image. See the "Creating configuration files for installing a cluster on OCI" section. Procedure Upload the agent ISO image to Oracle's default Object Storage bucket and import the agent ISO image as a custom image to this bucket. Ensure you that you configure the custom image to boot in Unified Extensible Firmware Interface (UEFI) mode. For more information, see Creating the OpenShift Container Platform ISO Image (Oracle documentation) . Create a compute instance from the supplied base image for your cluster topology. See Creating the OpenShift Container Platform cluster on OCI (Oracle documentation) . Important Before you create the compute instance, check that you have enough memory and disk resources for your cluster. Additionally, ensure that at least one compute instance has the same IP address as the address stated under rendezvousIP in the agent-config.yaml file. Additional resources Recommended resources for topologies Instance Sizing Recommendations for OpenShift Container Platform on OCI Nodes (Oracle documentation) Troubleshooting OpenShift Container Platform on OCI (Oracle documentation) 2.6. Verifying that your Agent-based cluster installation runs on OCI Verify that your cluster was installed and is running effectively on Oracle(R) Cloud Infrastructure (OCI). Prerequisites You created all the required OCI resources and services. See the "Creating OCI infrastructure resources and services" section. You created install-config.yaml and agent-config.yaml configuration files. See the "Creating configuration files for installing a cluster on OCI" section. You uploaded the agent ISO image to Oracle's default Object Storage bucket, and you created a compute instance on OCI. For more information, see "Running a cluster on OCI". 
Procedure After you deploy the compute instance on a self-managed node in your OpenShift Container Platform cluster, you can monitor the cluster's status by choosing one of the following options: From the OpenShift Container Platform CLI, enter the following command: USD ./openshift-install agent wait-for install-complete --log-level debug Check the status of the rendezvous host node that runs the bootstrap node. After the host reboots, the host forms part of the cluster. Use the kubeconfig API to check the status of various OpenShift Container Platform components. For the KUBECONFIG environment variable, set the relative path of the cluster's kubeconfig configuration file: USD export KUBECONFIG=~/auth/kubeconfig Check the status of each of the cluster's self-managed nodes. CCM applies a label to each node to designate the node as running in a cluster on OCI. USD oc get nodes -A Output example NAME STATUS ROLES AGE VERSION main-0.private.agenttest.oraclevcn.com Ready control-plane, master 7m v1.27.4+6eeca63 main-1.private.agenttest.oraclevcn.com Ready control-plane, master 15m v1.27.4+d7fa83f main-2.private.agenttest.oraclevcn.com Ready control-plane, master 15m v1.27.4+d7fa83f Check the status of each of the cluster's Operators, with the CCM Operator status being a good indicator that your cluster is running. USD oc get co Truncated output example NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.16.0-0 True False False 6m18s baremetal 4.16.0-0 True False False 2m42s network 4.16.0-0 True True False 5m58s Progressing: ... ... Additional resources Gathering log data from a failed Agent-based installation
[ "./openshift-install version", "./openshift-install 4.16.0 built from commit ae7977b7d1ca908674a0d45c5c243c766fa4b2ca release image registry.ci.openshift.org/origin/release:4.16ocp-release@sha256:0da6316466d60a3a4535d5fed3589feb0391989982fba59d47d4c729912d6363 release architecture amd64", "install-config.yaml apiVersion: v1 baseDomain: <base_domain> 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 network type: OVNKubernetes machineNetwork: - cidr: <ip_address_from_cidr> 2 serviceNetwork: - 172.30.0.0/16 compute: - architecture: amd64 3 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 4 hyperthreading: Enabled name: master replicas: 3 platform: external: platformName: oci 5 cloudControllerManager: External sshKey: <public_ssh_key> 6 pullSecret: '<pull_secret>' 7", "./openshift-install agent create image --log-level debug", "apiVersion: v1alpha1 metadata: name: <cluster_name> 1 namespace: <cluster_namespace> 2 rendezvousIP: <ip_address_from_CIDR> 3 bootArtifactsBaseURL: <server_URL> 4", "./openshift-install agent wait-for install-complete --log-level debug", "export KUBECONFIG=~/auth/kubeconfig", "oc get nodes -A", "NAME STATUS ROLES AGE VERSION main-0.private.agenttest.oraclevcn.com Ready control-plane, master 7m v1.27.4+6eeca63 main-1.private.agenttest.oraclevcn.com Ready control-plane, master 15m v1.27.4+d7fa83f main-2.private.agenttest.oraclevcn.com Ready control-plane, master 15m v1.27.4+d7fa83f", "oc get co", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.16.0-0 True False False 6m18s baremetal 4.16.0-0 True False False 2m42s network 4.16.0-0 True True False 5m58s Progressing: ... ..." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_oci/installing-oci-agent-based-installer
Chapter 4. Test the Configuration
Chapter 4. Test the Configuration Once you have configured Ceph Object Gateway to use LDAP to authenticate users, test the configuration. 4.1. Add an S3 User to the LDAP Server In the administrative console on the LDAP server, create at least one S3 user so that an S3 client can use the LDAP user credentials. Make a note of the user name and secret for use when passing the credentials to the S3 client. 4.2. Export an LDAP Token When running Ceph Object Gateway with LDAP, the access token is all that is required. However, the access token is created from the access key and secret. Export the access key and secret key as an LDAP token. Export the access key. Export the secret. Export the token. For LDAP, use ldap as the token type ( ttype ). For Active Directory, use ad as the token type. The result is a base-64 encoded string, which is the access token. Provide this access token to S3 clients in lieu of the access key. The secret is no longer required. (Optional) For added convenience, export the base-64 encoded string to the RGW_ACCESS_KEY_ID environment variable if the S3 client uses the environment variable. 4.3. Test the Configuration with an S3 Client Pick a Ceph Object Gateway client such as Python Boto. Configure it to use the RGW_ACCESS_KEY_ID environment variable. Alternatively, you may copy the base-64 encoded string and specify it as the access key. Then, run the Ceph client. Note The secret is no longer required.
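The export steps above can be combined into one short shell session. The user name and password here are placeholders only; substitute the S3 user that you created on the LDAP server.
export RGW_ACCESS_KEY_ID="ldapuser"                # placeholder LDAP user name
export RGW_SECRET_ACCESS_KEY="ldappassword"        # placeholder password
TOKEN="$(radosgw-token --encode --ttype=ldap)"
echo "$TOKEN" | base64 --decode                    # the token is base-64 encoded JSON wrapping the credentials
export RGW_ACCESS_KEY_ID="$TOKEN"                  # S3 clients now present the token in place of an access key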
[ "export RGW_ACCESS_KEY_ID=\"<username>\"", "export RGW_SECRET_ACCESS_KEY=\"<password>\"", "radosgw-token --encode --ttype=ldap", "radosgw-token --encode --ttype=ad", "export RGW_ACCESS_KEY_ID=\"ewogICAgIlJHV19UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAibGRhcCIsCiAgICAgICAgImlkIjogImNlcGgiLAogICAgICAgICJrZXkiOiAiODAwI0dvcmlsbGEiCiAgICB9Cn0K\"" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/object_gateway_with_ldap_and_ad_guide/rgw-ldap-test-the-configuration-ldap
Chapter 14. CertAndKeySecretSource schema reference
Chapter 14. CertAndKeySecretSource schema reference Used in: GenericKafkaListenerConfiguration , KafkaClientAuthenticationTls Property Property type Description secretName string The name of the Secret containing the certificate. certificate string The name of the file certificate in the Secret. key string The name of the private key in the Secret.
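As a usage sketch, this schema typically appears where a listener provides its own certificate, for example under a listener's brokerCertChainAndKey configuration. The Secret and file names below are hypothetical.
listeners:
  - name: external
    port: 9094
    type: route
    tls: true
    configuration:
      brokerCertChainAndKey:
        secretName: my-listener-certs   # secretName: the Secret holding the certificate
        certificate: tls.crt            # certificate: file name of the certificate in the Secret
        key: tls.key                    # key: file name of the private key in the Secret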
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-certandkeysecretsource-reference
Chapter 7. Adding file and object storage to an existing external OpenShift Data Foundation cluster
Chapter 7. Adding file and object storage to an existing external OpenShift Data Foundation cluster When OpenShift Data Foundation is configured in external mode, there are several ways to provide storage for persistent volume claims and object bucket claims. Persistent volume claims for block storage are provided directly from the external Red Hat Ceph Storage cluster. Persistent volume claims for file storage can be provided by adding a Metadata Server (MDS) to the external Red Hat Ceph Storage cluster. Object bucket claims for object storage can be provided either by using the Multicloud Object Gateway or by adding the Ceph Object Gateway to the external Red Hat Ceph Storage cluster. Use the following process to add file storage (using Metadata Servers) or object storage (using Ceph Object Gateway) or both to an external OpenShift Data Foundation cluster that was initially deployed to provide only block storage. Prerequisites OpenShift Data Foundation 4.15 is installed and running on the OpenShift Container Platform version 4.16 or above. Also, the OpenShift Data Foundation Cluster in external mode is in the Ready state. Your external Red Hat Ceph Storage cluster is configured with one or both of the following: a Ceph Object Gateway (RGW) endpoint that can be accessed by the OpenShift Container Platform cluster for object storage a Metadata Server (MDS) pool for file storage Ensure that you know the parameters used with the ceph-external-cluster-details-exporter.py script during external OpenShift Data Foundation cluster deployment. Procedure Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script using the following command: Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. Generate and save configuration details from the external Red Hat Ceph Storage cluster. Generate configuration details by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. --monitoring-endpoint Is optional. It accepts comma separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-endpoint Provide this parameter to provision object storage through Ceph Object Gateway for OpenShift Data Foundation. (optional parameter) --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. 
User permissions are updated as shown: Note Ensure that all the parameters (including the optional arguments) except the Ceph Object Gateway details (if provided), are the same as what was used during the deployment of OpenShift Data Foundation in external mode. Save the output of the script in an external-cluster-config.json file. The following example output shows the generated configuration changes in bold text. Upload the generated JSON file. Log in to the OpenShift web console. Click Workloads Secrets . Set project to openshift-storage . Click on rook-ceph-external-cluster-details . Click Actions (...) Edit Secret Click Browse and upload the external-cluster-config.json file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage Data foundation Storage Systems tab and then click on the storage system name. On the Overview Block and File tab, check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. If you added a Metadata Server for file storage: Click Workloads Pods and verify that csi-cephfsplugin-* pods are created new and are in the Running state. Click Storage Storage Classes and verify that the ocs-external-storagecluster-cephfs storage class is created. If you added the Ceph Object Gateway for object storage: Click Storage Storage Classes and verify that the ocs-external-storagecluster-ceph-rgw storage class is created. To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage Data foundation Storage Systems tab and then click on the storage system name. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy.
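If you prefer the command line for the verification steps, checks along these lines can be run with the oc client; the storage class names are the defaults that OpenShift Data Foundation creates in external mode, so adjust them if your deployment uses different names.
oc get storageclass | grep ocs-external-storagecluster     # the cephfs and ceph-rgw classes should be listed
oc get pods -n openshift-storage | grep csi-cephfsplugin   # the CSI pods should be in the Running state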
[ "get csv USD(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}' | base64 --decode >ceph-external-cluster-details-exporter.py", "python3 ceph-external-cluster-details-exporter.py --upgrade --run-as-user= ocs-client-name --rgw-pool-prefix rgw-pool-prefix", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbd-block-pool-name --monitoring-endpoint ceph-mgr-prometheus-exporter-endpoint --monitoring-endpoint-port ceph-mgr-prometheus-exporter-port --run-as-user ocs-client-name --rgw-endpoint rgw-endpoint --rgw-pool-prefix rgw-pool-prefix", "caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index", "[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}} ]" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_and_allocating_storage_resources/adding-file-and-object-storage-to-an-existing-external-ocs-cluster
Chapter 32. ExternalConfigurationReference schema reference
Chapter 32. ExternalConfigurationReference schema reference Used in: ExternalLogging , JmxPrometheusExporterMetrics Property Property type Description configMapKeyRef ConfigMapKeySelector Reference to the key in the ConfigMap containing the configuration.
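A minimal sketch of where this schema is used: external logging that points at a key in a ConfigMap. The ConfigMap name and key below are examples, not required values.
logging:
  type: external
  valueFrom:
    configMapKeyRef:            # ExternalConfigurationReference.configMapKeyRef
      name: my-logging-config   # name of the ConfigMap
      key: log4j.properties     # key within the ConfigMap that holds the configuration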
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-externalconfigurationreference-reference
Chapter 2. Defining Logical Data Units
Chapter 2. Defining Logical Data Units Abstract When describing a service in a WSDL contract complex data types are defined as logical units using XML Schema. 2.1. Introduction to Logical Data Units When defining a service, the first thing you must consider is how the data used as parameters for the exposed operations is going to be represented. Unlike applications that are written in a programming language that uses fixed data structures, services must define their data in logical units that can be consumed by any number of applications. This involves two steps: Breaking the data into logical units that can be mapped into the data types used by the physical implementations of the service Combining the logical units into messages that are passed between endpoints to carry out the operations This chapter discusses the first step. Chapter 3, Defining Logical Messages Used by a Service discusses the second step. 2.2. Mapping data into logical data units Overview The interfaces used to implement a service define the data representing operation parameters as XML documents. If you are defining an interface for a service that is already implemented, you must translate the data types of the implemented operations into discreet XML elements that can be assembled into messages. If you are starting from scratch, you must determine the building blocks from which your messages are built, so that they make sense from an implementation standpoint. Available type systems for defining service data units According to the WSDL specification, you can use any type system you choose to define data types in a WSDL contract. However, the W3C specification states that XML Schema is the preferred canonical type system for a WSDL document. Therefore, XML Schema is the intrinsic type system in Apache CXF. XML Schema as a type system XML Schema is used to define how an XML document is structured. This is done by defining the elements that make up the document. These elements can use native XML Schema types, like xsd:int , or they can use types that are defined by the user. User defined types are either built up using combinations of XML elements or they are defined by restricting existing types. By combining type definitions and element definitions you can create intricate XML documents that can contain complex data. When used in WSDL XML Schema defines the structure of the XML document that holds the data used to interact with a service. When defining the data units used by your service, you can define them as types that specify the structure of the message parts. You can also define your data units as elements that make up the message parts. Considerations for creating your data units You might consider simply creating logical data units that map directly to the types you envision using when implementing the service. While this approach works, and closely follows the model of building RPC-style applications, it is not necessarily ideal for building a piece of a service-oriented architecture. The Web Services Interoperability Organization's WS-I basic profile provides a number of guidelines for defining data units and can be accessed at http://www.ws-i.org/Profiles/BasicProfile-1.1-2004-08-24.html#WSDLTYPES . In addition, the W3C also provides the following guidelines for using XML Schema to represent data types in WSDL documents: Use elements, not attributes. Do not use protocol-specific types as base types. 2.3. 
Adding data units to a contract Overview Depending on how you choose to create your WSDL contract, creating new data definitions requires varying amounts of knowledge. The Apache CXF GUI tools provide a number of aids for describing data types using XML Schema. Other XML editors offer different levels of assistance. Regardless of the editor you choose, it is a good idea to have some knowledge about what the resulting contract should look like. Procedure Defining the data used in a WSDL contract involves the following steps: Determine all the data units used in the interface described by the contract. Create a types element in your contract. Create a schema element, shown in Example 2.1, "Schema entry for a WSDL contract" , as a child of the type element. The targetNamespace attribute specifies the namespace under which new data types are defined. Best practice is to also define the namespace that provides access to the target namespace. The remaining entries should not be changed. Example 2.1. Schema entry for a WSDL contract For each complex type that is a collection of elements, define the data type using a complexType element. See Section 2.5.1, "Defining data structures" . For each array, define the data type using a complexType element. See Section 2.5.2, "Defining arrays" . For each complex type that is derived from a simple type, define the data type using a simpleType element. See Section 2.5.4, "Defining types by restriction" . For each enumerated type, define the data type using a simpleType element. See Section 2.5.5, "Defining enumerated types" . For each element, define it using an element element. See Section 2.6, "Defining elements" . 2.4. XML Schema simple types Overview If a message part is going to be of a simple type it is not necessary to create a type definition for it. However, the complex types used by the interfaces defined in the contract are defined using simple types. Entering simple types XML Schema simple types are mainly placed in the element elements used in the types section of your contract. They are also used in the base attribute of restriction elements and extension elements. Simple types are always entered using the xsd prefix. For example, to specify that an element is of type int , you would enter xsd:int in its type attribute as shown in Example 2.2, "Defining an element with a simple type" . Example 2.2. Defining an element with a simple type Supported XSD simple types Apache CXF supports the following XML Schema simple types: xsd:string xsd:normalizedString xsd:int xsd:unsignedInt xsd:long xsd:unsignedLong xsd:short xsd:unsignedShort xsd:float xsd:double xsd:boolean xsd:byte xsd:unsignedByte xsd:integer xsd:positiveInteger xsd:negativeInteger xsd:nonPositiveInteger xsd:nonNegativeInteger xsd:decimal xsd:dateTime xsd:time xsd:date xsd:QName xsd:base64Binary xsd:hexBinary xsd:ID xsd:token xsd:language xsd:Name xsd:NCName xsd:NMTOKEN xsd:anySimpleType xsd:anyURI xsd:gYear xsd:gMonth xsd:gDay xsd:gYearMonth xsd:gMonthDay 2.5. Defining complex data types Abstract XML Schema provides a flexible and powerful mechanism for building complex data structures from its simple data types. You can create data structures by creating a sequence of elements and attributes. You can also extend your defined types to create even more complex types. 
In addition to building complex data structures, you can also describe specialized types such as enumerated types, data types that have a specific range of values, or data types that need to follow certain patterns by either extending or restricting the primitive types. 2.5.1. Defining data structures Overview In XML Schema, data units that are a collection of data fields are defined using complexType elements. Specifying a complex type requires three pieces of information: The name of the defined type is specified in the name attribute of the complexType element. The first child element of the complexType describes the behavior of the structure's fields when it is put on the wire. See the section called "Complex type varieties" . Each of the fields of the defined structure is defined in element elements that are grandchildren of the complexType element. See the section called "Defining the parts of a structure" . For example, the structure shown in Example 2.3, "Simple Structure" is defined in XML Schema as a complex type with two elements. Example 2.3. Simple Structure Example 2.4, "A complex type" shows one possible XML Schema mapping for the structure shown in Example 2.3, "Simple Structure" . The structure defined in Example 2.4, "A complex type" generates a message containing two elements: name and age . Example 2.4. A complex type Complex type varieties XML Schema has three ways of describing how the fields of a complex type are organized when represented as an XML document and passed on the wire. The first child element of the complexType element determines which variety of complex type is being used. Table 2.1, "Complex type descriptor elements" shows the elements used to define complex type behavior. Table 2.1. Complex type descriptor elements Element Complex Type Behavior sequence All of a complex type's fields can be present and they must be in the order in which they are specified in the type definition. all All of the complex type's fields can be present but they can be in any order. choice Only one of the elements in the structure can be placed in the message. If the structure is defined using a choice element, as shown in Example 2.5, "Simple complex choice type" , it generates a message with either a name element or an age element. Example 2.5. Simple complex choice type Defining the parts of a structure You define the data fields that make up a structure using element elements. Every complexType element should contain at least one element element. Each element element in the complexType element represents a field in the defined data structure. To fully describe a field in a data structure, element elements have two required attributes: The name attribute specifies the name of the data field and it must be unique within the defined complex type. The type attribute specifies the type of the data stored in the field. The type can be either one of the XML Schema simple types, or any named complex type that is defined in the contract. In addition to name and type , element elements have two other commonly used optional attributes: minOccurs and maxOccurs . These attributes place bounds on the number of times the field occurs in the structure. By default, each field occurs only once in a complex type. Using these attributes, you can change how many times a field must or can appear in a structure.
For example, you can define a field, previousJobs , that must occur at least three times, and no more than seven times, as shown in Example 2.6, "Simple complex type with occurrence constraints" . Example 2.6. Simple complex type with occurrence constraints You can also use the minOccurs to make the age field optional by setting the minOccurs to zero as shown in Example 2.7, "Simple complex type with minOccurs set to zero" . In this case age can be omitted and the data will still be valid. Example 2.7. Simple complex type with minOccurs set to zero Defining attributes In XML documents, attributes are contained in the element's tag. For example, in the complexType element in the code below, name is an attribute. To specify an attribute for a complex type, you define an attribute element in the complexType element definition. An attribute element can appear only after the all , sequence , or choice element. Specify one attribute element for each of the complex type's attributes. Any attribute elements must be direct children of the complexType element. Example 2.8. Complex type with an attribute In the code, the attribute element specifies that the personalInfo complex type has an age attribute. The attribute element has these attributes: name - A required attribute that specifies the string that identifies the attribute. type - Specifies the type of the data stored in the field. The type can be one of the XML Schema simple types. use - An optional attribute that specifies whether the complex type is required to have this attribute. Valid values are required or optional . The default is that the attribute is optional. In an attribute element, you can specify the optional default attribute, which lets you specify a default value for the attribute. 2.5.2. Defining arrays Overview Apache CXF supports two methods for defining arrays in a contract. The first is define a complex type with a single element whose maxOccurs attribute has a value greater than one. The second is to use SOAP arrays. SOAP arrays provide added functionality such as the ability to easily define multi-dimensional arrays and to transmit sparsely populated arrays. Complex type arrays Complex type arrays are a special case of a sequence complex type. You simply define a complex type with a single element and specify a value for the maxOccurs attribute. For example, to define an array of twenty floating point numbers you use a complex type similar to the one shown in Example 2.9, "Complex type array" . Example 2.9. Complex type array You can also specify a value for the minOccurs attribute. SOAP arrays SOAP arrays are defined by deriving from the SOAP-ENC:Array base type using the wsdl:arrayType element. The syntax for this is shown in Example 2.10, "Syntax for a SOAP array derived using wsdl:arrayType" . Ensure that the definitions element declares xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" . Example 2.10. Syntax for a SOAP array derived using wsdl:arrayType Using this syntax, TypeName specifies the name of the newly-defined array type. ElementType specifies the type of the elements in the array. ArrayBounds specifies the number of dimensions in the array. To specify a single dimension array use [] ; to specify a two-dimensional array use either [][] or [,] . For example, the SOAP Array, SOAPStrings, shown in Example 2.11, "Definition of a SOAP array" , defines a one-dimensional array of strings. 
The wsdl:arrayType attribute specifies the type of the array elements, xsd:string , and the number of dimensions, with [] implying one dimension. Example 2.11. Definition of a SOAP array You can also describe a SOAP Array using a simple element as described in the SOAP 1.1 specification. The syntax for this is shown in Example 2.12, "Syntax for a SOAP array derived using an element" . Example 2.12. Syntax for a SOAP array derived using an element When using this syntax, the element's maxOccurs attribute must always be set to unbounded . 2.5.3. Defining types by extension Like most major coding languages, XML Schema allows you to create data types that inherit some of their elements from other data types. This is called defining a type by extension. For example, you could create a new type called alienInfo , that extends the personalInfo structure defined in Example 2.4, "A complex type" by adding a new element called planet . Types defined by extension have four parts: The name of the type is defined by the name attribute of the complexType element. The complexContent element specifies that the new type will have more than one element. Note If you are only adding new attributes to the complex type, you can use a simpleContent element. The type from which the new type is derived, called the base type, is specified in the base attribute of the extension element. The new type's elements and attributes are defined in the extension element, the same as they are for a regular complex type. For example, alienInfo is defined as shown in Example 2.13, "Type defined by extension" . Example 2.13. Type defined by extension 2.5.4. Defining types by restriction Overview XML Schema allows you to create new types by restricting the possible values of an XML Schema simple type. For example, you can define a simple type, SSN , which is a string of exactly nine characters. New types defined by restricting simple types are defined using a simpleType element. The definition of a type by restriction requires three things: The name of the new type is specified by the name attribute of the simpleType element. The simple type from which the new type is derived, called the base type , is specified in the restriction element. See the section called "Specifying the base type" . The rules, called facets , defining the restrictions placed on the base type are defined as children of the restriction element. See the section called "Defining the restrictions" . Specifying the base type The base type is the type that is being restricted to define the new type. It is specified using a restriction element. The restriction element is the only child of a simpleType element and has one attribute, base , that specifies the base type. The base type can be any of the XML Schema simple types. For example, to define a new type by restricting the values of an xsd:int you use a definition like the one shown in Example 2.14, "Using int as the base type" . Example 2.14. Using int as the base type Defining the restrictions The rules defining the restrictions placed on the base type are called facets . Facets are elements with one attribute, value , that defines how the facet is enforced. The available facets and their valid value settings depend on the base type. For example, xsd:string supports six facets, including: length minLength maxLength pattern whitespace enumeration Each facet element is a child of the restriction element. 
Example Example 2.15, "SSN simple type description" shows an example of a simple type, SSN , which represents a social security number. The resulting type is a string of the form xxx-xx-xxxx . <SSN>032-43-9876<SSN> is a valid value for an element of this type, but <SSN>032439876</SSN> is not. Example 2.15. SSN simple type description 2.5.5. Defining enumerated types Overview Enumerated types in XML Schema are a special case of definition by restriction. They are described by using the enumeration facet which is supported by all XML Schema primitive types. As with enumerated types in most modern programming languages, a variable of this type can only have one of the specified values. Defining an enumeration in XML Schema The syntax for defining an enumeration is shown in Example 2.16, "Syntax for an enumeration" . Example 2.16. Syntax for an enumeration EnumName specifies the name of the enumeration type. EnumType specifies the type of the case values. CaseNValue , where N is any number one or greater, specifies the value for each specific case of the enumeration. An enumerated type can have any number of case values, but because it is derived from a simple type, only one of the case values is valid at a time. Example For example, an XML document with an element defined by the enumeration widgetSize , shown in Example 2.17, "widgetSize enumeration" , would be valid if it contained <widgetSize>big</widgetSize>, but it would not be valid if it contained <widgetSize>big,mungo</widgetSize>. Example 2.17. widgetSize enumeration 2.6. Defining elements Elements in XML Schema represent an instance of an element in an XML document generated from the schema. The most basic element consists of a single element element. Like the element element used to define the members of a complex type, they have three attributes: name - A required attribute that specifies the name of the element as it appears in an XML document. type - Specifies the type of the element. The type can be any XML Schema primitive type or any named complex type defined in the contract. This attribute can be omitted if the type has an in-line definition. nillable - Specifies whether an element can be omitted from a document entirely. If nillable is set to true , the element can be omitted from any document generated using the schema. An element can also have an in-line type definition. In-line types are specified using either a complexType element or a simpleType element. Once you specify if the type of data is complex or simple, you can define any type of data needed using the tools available for each type of data. In-line type definitions are discouraged because they are not reusable.
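As a closing sketch, the following element definition pulls several of these ideas together: a global element whose in-line simple type is defined by restricting xsd:string with a pattern facet. The names and pattern are illustrative only, and, as noted above, an in-line definition like this cannot be reused elsewhere in the contract.
<element name="zipCode" nillable="true">
  <simpleType>
    <restriction base="xsd:string">
      <pattern value="\d{5}"/>
    </restriction>
  </simpleType>
</element>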
[ "<schema targetNamespace=\"http://schemas.iona.com/bank.idl\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsd1=\"http://schemas.iona.com/bank.idl\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\">", "<element name=\"simpleInt\" type=\"xsd:int\" />", "struct personalInfo { string name; int age; };", "<complexType name=\"personalInfo\"> <sequence> <element name=\"name\" type=\"xsd:string\" /> <element name=\"age\" type=\"xsd:int\" /> </sequence> </complexType>", "<complexType name=\"personalInfo\"> <choice> <element name=\"name\" type=\"xsd:string\"/> <element name=\"age\" type=\"xsd:int\"/> </choice> </complexType>", "<complexType name=\"personalInfo\"> <all> <element name=\"name\" type=\"xsd:string\"/> <element name=\"age\" type=\"xsd:int\"/> <element name=\"previousJobs\" type=\"xsd:string: minOccurs=\"3\" maxOccurs=\"7\"/> </all> </complexType>", "<complexType name=\"personalInfo\"> <choice> <element name=\"name\" type=\"xsd:string\"/> <element name=\"age\" type=\"xsd:int\" minOccurs=\"0\"/> </choice> </complexType>", "<complexType name=\"personalInfo\"> <all> <element name=\"name\" type=\"xsd:string\"/> <element name=\"previousJobs\" type=\"xsd:string\" minOccurs=\"3\" maxOccurs=\"7\"/> </all> <attribute name=\"age\" type=\"xsd:int\" use=\"required\" /> </complexType>", "<complexType name=\"personalInfo\"> <element name=\"averages\" type=\"xsd:float\" maxOccurs=\"20\"/> </complexType>", "<complexType name=\" TypeName \"> <complexContent> <restriction base=\"SOAP-ENC:Array\"> <attribute ref=\"SOAP-ENC:arrayType\" wsdl:arrayType=\" ElementType<ArrayBounds> \"/> </restriction> </complexContent> </complexType>", "<complexType name=\"SOAPStrings\"> <complexContent> <restriction base=\"SOAP-ENC:Array\"> <attribute ref=\"SOAP-ENC:arrayType\" wsdl:arrayType=\"xsd:string[]\"/> </restriction> </complexContent> </complexType>", "<complexType name=\" TypeName \"> <complexContent> <restriction base=\"SOAP-ENC:Array\"> <sequence> <element name=\" ElementName \" type=\" ElementType \" maxOccurs=\"unbounded\"/> </sequence> </restriction> </complexContent> </complexType>", "<complexType name=\"alienInfo\"> <complexContent> <extension base=\"xsd1:personalInfo\"> <sequence> <element name=\"planet\" type=\"xsd:string\"/> </sequence> </extension> </complexContent> </complexType>", "<simpleType name=\"restrictedInt\"> <restriction base=\"xsd:int\"> </restriction> </simpleType>", "<simpleType name=\"SSN\"> <restriction base=\"xsd:string\"> <pattern value=\"\\d{3}-\\d{2}-\\d{4}\"/> </restriction> </simpleType>", "<simpleType name=\" EnumName \"> <restriction base=\" EnumType \"> <enumeration value=\" Case1Value \"/> <enumeration value=\" Case2Value \"/> <enumeration value=\" CaseNValue \"/> </restriction> </simpleType>", "<simpleType name=\"widgetSize\"> <restriction base=\"xsd:string\"> <enumeration value=\"big\"/> <enumeration value=\"large\"/> <enumeration value=\"mungo\"/> </restriction> </simpleType>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/wsdltypes
Chapter 8. Securing Kafka
Chapter 8. Securing Kafka A secure deployment of AMQ Streams might encompass one or more of the following security measures: Encryption for data exchange Authentication to prove identity Authorization to allow or decline actions executed by users Running AMQ Streams on FIPS-enabled OpenShift clusters to ensure data security and system interoperability 8.1. Encryption AMQ Streams supports Transport Layer Security (TLS), a protocol for encrypted communication. Communication is always encrypted between: Kafka brokers ZooKeeper nodes Operators and Kafka brokers Operators and ZooKeeper nodes Kafka Exporter You can also configure TLS encryption between Kafka brokers and clients. TLS is specified for external clients when configuring an external listener for the Kafka broker. AMQ Streams components and Kafka clients use digital certificates for encryption. The Cluster Operator sets up certificates to enable encryption within the Kafka cluster. You can provide your own server certificates, referred to as Kafka listener certificates , for communication between Kafka clients and Kafka brokers, and inter-cluster communication. AMQ Streams uses Secrets to store the certificates and private keys required for mTLS in PEM and PKCS #12 format. A TLS CA (certificate authority) issues certificates to authenticate the identity of a component. AMQ Streams verifies the certificates for the components against the CA certificate. AMQ Streams components are verified against the cluster CA. Kafka clients are verified against the clients CA. 8.2. Authentication Kafka listeners use authentication to ensure a secure client connection to the Kafka cluster. Supported authentication mechanisms: mTLS authentication (on listeners with TLS-enabled encryption) SASL SCRAM-SHA-512 OAuth 2.0 token based authentication Custom authentication The User Operator manages user credentials for mTLS and SCRAM authentication, but not OAuth 2.0. For example, through the User Operator you can create a user representing a client that requires access to the Kafka cluster, and specify tls as the authentication type. Using OAuth 2.0 token-based authentication, application clients can access Kafka brokers without exposing account credentials. An authorization server handles the granting of access and inquiries about access. Custom authentication allows for any type of Kafka-supported authentication. It can provide more flexibility, but also adds complexity. 8.3. Authorization Kafka clusters use authorization to control the operations that are permitted on Kafka brokers by specific clients or users. If applied to a Kafka cluster, authorization is enabled for all listeners used for client connection. If a user is added to a list of super users in a Kafka broker configuration, the user is allowed unlimited access to the cluster regardless of any authorization constraints implemented through authorization mechanisms. Supported authorization mechanisms: Simple authorization OAuth 2.0 authorization (if you are using OAuth 2.0 token-based authentication) Open Policy Agent (OPA) authorization Custom authorization Simple authorization uses AclAuthorizer , the default Kafka authorization plugin. AclAuthorizer uses Access Control Lists (ACLs) to define which users have access to which resources. For custom authorization, you configure your own Authorizer plugin to enforce ACL rules. OAuth 2.0 and OPA provide policy-based control from an authorization server.
Security policies and permissions used to grant access to resources on Kafka brokers are defined in the authorization server. URLs are used to connect to the authorization server and verify that an operation requested by a client or user is allowed or denied. Users and clients are matched against the policies created in the authorization server that permit access to perform specific actions on Kafka brokers. 8.4. Federal Information Processing Standards (FIPS) Federal Information Processing Standards (FIPS) are a set of security standards established by the US government to ensure the confidentiality, integrity, and availability of sensitive data and information that is processed or transmitted by information systems. The OpenJDK used in AMQ Streams container images automatically enables FIPS mode when running on a FIPS-enabled OpenShift cluster. Note If you don't want to use FIPS, you can disable it in the deployment configuration of the Cluster Operator using the FIPS_MODE environment variable.
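To make the listener-level options concrete, an encrypted listener with mTLS authentication plus simple authorization might be declared roughly as follows inside a Kafka resource. This is a sketch, not a complete custom resource, and the listener name and port are arbitrary.
spec:
  kafka:
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true              # TLS encryption on this listener
        authentication:
          type: tls            # mTLS client authentication
    authorization:
      type: simple             # ACL-based authorization via AclAuthorizer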
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_on_openshift_overview/security-overview_str
7.14. RHEA-2014:1441 - new packages: libmicrohttpd
7.14. RHEA-2014:1441 - new packages: libmicrohttpd New libmicrohttpd packages are now available for Red Hat Enterprise Linux 6. GNU libmicrohttpd is a lightweight C library that can be used to easily embed an HTTP server in another application. This enhancement update adds the libmicrohttpd packages to Red Hat Enterprise Linux 6. (BZ# 1087821 ) All users who require libmicrohttpd are advised to install these new packages.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rhea-2014-1441
Chapter 2. Configuring logging
Chapter 2. Configuring logging This chapter describes how to configure logging for various Ceph subsystems. Important Logging is resource intensive. Also, verbose logging can generate a huge amount of data in a relatively short time. If you are encountering problems in a specific subsystem of the cluster, enable logging only of that subsystem. See Section 2.1, "Ceph subsystems" for more information. In addition, consider setting up a rotation of log files. See Section 2.4, "Accelerating log rotation" for details. Once you fix any problems you encounter, change the subsystems log and memory levels to their default values. See Appendix A, Ceph subsystems default logging level values for a list of all Ceph subsystems and their default values. You can configure Ceph logging by: Using the ceph command at runtime. This is the most common approach. See Section 2.2, "Configuring logging at runtime" for details. Updating the Ceph configuration file. Use this approach if you are encountering problems when starting the cluster. See Section 2.3, "Configuring logging in configuration file" for details. Prerequisites A running Red Hat Ceph Storage cluster. 2.1. Ceph subsystems This section contains information about Ceph subsystems and their logging levels. Understanding Ceph Subsystems and Their Logging Levels Ceph consists of several subsystems. Each subsystem has a logging level of its: Output logs that are stored by default in /var/log/ceph/ CLUSTER_FSID / directory (log level) Logs that are stored in a memory cache (memory level) In general, Ceph does not send logs stored in memory to the output logs unless: A fatal signal is raised An assert in source code is triggered You request it You can set different values for each of these subsystems. Ceph logging levels operate on a scale of 1 to 20 , where 1 is terse and 20 is verbose. Use a single value for the log level and memory level to set them both to the same value. For example, debug_osd = 5 sets the debug level for the ceph-osd daemon to 5 . To use different values for the output log level and the memory level, separate the values with a forward slash ( / ). For example, debug_mon = 1/5 sets the debug log level for the ceph-mon daemon to 1 and its memory log level to 5 . Table 2.1. Ceph Subsystems and the Logging Default Values Subsystem Log Level Memory Level Description asok 1 5 The administration socket auth 1 5 Authentication client 0 5 Any application or library that uses librados to connect to the cluster bluestore 1 5 The BlueStore OSD backend journal 1 5 The OSD journal mds 1 5 The Metadata Servers monc 0 5 The Monitor client handles communication between most Ceph daemons and Monitors mon 1 5 Monitors ms 0 5 The messaging system between Ceph components osd 0 5 The OSD Daemons paxos 0 5 The algorithm that Monitors use to establish a consensus rados 0 5 Reliable Autonomic Distributed Object Store, a core component of Ceph rbd 0 5 The Ceph Block Devices rgw 1 5 The Ceph Object Gateway Example Log Outputs The following examples show the type of messages in the logs when you increase the verbosity for the Monitors and OSDs. Monitor Debug Settings Example Log Output of Monitor Debug Settings OSD Debug Settings Example Log Output of OSD Debug Settings Additional Resources Configuring logging at runtime Configuring logging in configuration file 2.2. Configuring logging at runtime You can configure the logging of Ceph subsystems at system runtime to help troubleshoot any issues that might occur. 
Prerequisites A running Red Hat Ceph Storage cluster. Access to Ceph debugger. Procedure To activate the Ceph debugging output, dout() , at runtime: Replace: TYPE with the type of Ceph daemons ( osd , mon , or mds ) ID with a specific ID of the Ceph daemon. Alternatively, use * to apply the runtime setting to all daemons of a particular type. SUBSYSTEM with a specific subsystem. VALUE with a number from 1 to 20 , where 1 is terse and 20 is verbose. For example, to set the log level for the OSD subsystem on the OSD named osd.0 to 0 and the memory level to 5: To see the configuration settings at runtime: Log in to the host with a running Ceph daemon, for example, ceph-osd or ceph-mon . Display the configuration: Syntax Example Additional Resources See Ceph subsystems for details. See Configuration logging in configuration file for details. The Ceph Debugging and Logging Configuration Reference chapter in the Configuration Guide for Red Hat Ceph Storage 8. 2.3. Configuring logging in configuration file Configure Ceph subsystems to log informational, warning, and error messages to the log file. You can specify the debugging level in the Ceph configuration file, by default /etc/ceph/ceph.conf . Prerequisites A running Red Hat Ceph Storage cluster. Procedure To activate Ceph debugging output, dout() at boot time, add the debugging settings to the Ceph configuration file. For subsystems common to each daemon, add the settings under the [global] section. For subsystems for particular daemons, add the settings under a daemon section, such as [mon] , [osd] , or [mds] . Example Additional Resources Ceph subsystems Configuring logging at runtime The Ceph Debugging and Logging Configuration Reference chapter in the Configuration Guide for Red Hat Ceph Storage 8 2.4. Accelerating log rotation Increasing debugging level for Ceph components might generate a huge amount of data. If you have almost full disks, you can accelerate log rotation by modifying the Ceph log rotation file at /etc/logrotate.d/ceph-<fsid> . The Cron job scheduler uses this file to schedule log rotation. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Add the size setting after the rotation frequency to the log rotation file: For example, to rotate a log file when it reaches 500 MB: Note The size value can be expressed as '500 MB' or '500M'. Open the crontab editor: Add an entry to check the /etc/logrotate.d/ceph-<fsid> file. For example, to instruct Cron to check /etc/logrotate.d/ceph-<fsid> every 30 minutes: 2.5. Creating and collecting operation logs for Ceph Object Gateway User identity information is added to the operation log output. This is used to enable customers to access this information for auditing of S3 access. Track user identities reliably by S3 request in all versions of the Ceph Object Gateway operation log. Procedure Find where the logs are located: Syntax Example List the logs within the specified location: Syntax Example List the current buckets: Example Create a bucket: Syntax Example List the current logs: Syntax Example Collect the logs: Syntax Example
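As a hedged aside, current Red Hat Ceph Storage releases can also store the same debug levels centrally in the monitor configuration database, where they survive daemon restarts. The commands below assume that facility is available in your release and are offered as an alternative to injectargs, not a replacement for the procedures above.
ceph config set osd.0 debug_osd 0/5   # persist the setting in the configuration database
ceph config get osd.0 debug_osd       # confirm the stored value
ceph config rm osd.0 debug_osd        # return the daemon to its default level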
[ "debug_ms = 5 debug_mon = 20 debug_paxos = 20 debug_auth = 20", "2022-05-12 12:37:04.278761 7f45a9afc700 10 mon.cephn2@0(leader).osd e322 e322: 2 osds: 2 up, 2 in 2022-05-12 12:37:04.278792 7f45a9afc700 10 mon.cephn2@0(leader).osd e322 min_last_epoch_clean 322 2022-05-12 12:37:04.278795 7f45a9afc700 10 mon.cephn2@0(leader).log v1010106 log 2022-05-12 12:37:04.278799 7f45a9afc700 10 mon.cephn2@0(leader).auth v2877 auth 2022-05-12 12:37:04.278811 7f45a9afc700 20 mon.cephn2@0(leader) e1 sync_trim_providers 2022-05-12 12:37:09.278914 7f45a9afc700 11 mon.cephn2@0(leader) e1 tick 2022-05-12 12:37:09.278949 7f45a9afc700 10 mon.cephn2@0(leader).pg v8126 v8126: 64 pgs: 64 active+clean; 60168 kB data, 172 MB used, 20285 MB / 20457 MB avail 2022-05-12 12:37:09.278975 7f45a9afc700 10 mon.cephn2@0(leader).paxosservice(pgmap 7511..8126) maybe_trim trim_to 7626 would only trim 115 < paxos_service_trim_min 250 2022-05-12 12:37:09.278982 7f45a9afc700 10 mon.cephn2@0(leader).osd e322 e322: 2 osds: 2 up, 2 in 2022-05-12 12:37:09.278989 7f45a9afc700 5 mon.cephn2@0(leader).paxos(paxos active c 1028850..1029466) is_readable = 1 - now=2021-08-12 12:37:09.278990 lease_expire=0.000000 has v0 lc 1029466 . 2022-05-12 12:59:18.769963 7f45a92fb700 1 -- 192.168.0.112:6789/0 <== osd.1 192.168.0.114:6800/2801 5724 ==== pg_stats(0 pgs tid 3045 v 0) v1 ==== 124+0+0 (2380105412 0 0) 0x5d96300 con 0x4d5bf40 2022-05-12 12:59:18.770053 7f45a92fb700 1 -- 192.168.0.112:6789/0 --> 192.168.0.114:6800/2801 -- pg_stats_ack(0 pgs tid 3045) v1 -- ?+0 0x550ae00 con 0x4d5bf40 2022-05-12 12:59:32.916397 7f45a9afc700 0 mon.cephn2@0(leader).data_health(1) update_stats avail 53% total 1951 MB, used 780 MB, avail 1053 MB . 2022-05-12 13:01:05.256263 7f45a92fb700 1 -- 192.168.0.112:6789/0 --> 192.168.0.113:6800/2410 -- mon_subscribe_ack(300s) v1 -- ?+0 0x4f283c0 con 0x4d5b440", "debug_ms = 5 debug_osd = 20", "2022-05-12 11:27:53.869151 7f5d55d84700 1 -- 192.168.17.3:0/2410 --> 192.168.17.4:6801/2801 -- osd_ping(ping e322 stamp 2021-08-12 11:27:53.869147) v2 -- ?+0 0x63baa00 con 0x578dee0 2022-05-12 11:27:53.869214 7f5d55d84700 1 -- 192.168.17.3:0/2410 --> 192.168.0.114:6801/2801 -- osd_ping(ping e322 stamp 2021-08-12 11:27:53.869147) v2 -- ?+0 0x638f200 con 0x578e040 2022-05-12 11:27:53.870215 7f5d6359f700 1 -- 192.168.17.3:0/2410 <== osd.1 192.168.0.114:6801/2801 109210 ==== osd_ping(ping_reply e322 stamp 2021-08-12 11:27:53.869147) v2 ==== 47+0+0 (261193640 0 0) 0x63c1a00 con 0x578e040 2022-05-12 11:27:53.870698 7f5d6359f700 1 -- 192.168.17.3:0/2410 <== osd.1 192.168.17.4:6801/2801 109210 ==== osd_ping(ping_reply e322 stamp 2021-08-12 11:27:53.869147) v2 ==== 47+0+0 (261193640 0 0) 0x6313200 con 0x578dee0 . 2022-05-12 11:28:10.432313 7f5d6e71f700 5 osd.0 322 tick 2022-05-12 11:28:10.432375 7f5d6e71f700 20 osd.0 322 scrub_random_backoff lost coin flip, randomly backing off 2022-05-12 11:28:10.432381 7f5d6e71f700 10 osd.0 322 do_waiters -- start 2022-05-12 11:28:10.432383 7f5d6e71f700 10 osd.0 322 do_waiters -- finish", "ceph tell TYPE . 
ID injectargs --debug- SUBSYSTEM VALUE [-- NAME VALUE ]", "ceph tell osd.0 injectargs --debug-osd 0/5", "ceph daemon NAME config show | less", "ceph daemon osd.0 config show | less", "[global] debug_ms = 1/5 [mon] debug_mon = 20 debug_paxos = 1/5 debug_auth = 2 [osd] debug_osd = 1/5 debug_monc = 5/20 [mds] debug_mds = 1", "rotate 7 weekly size SIZE compress sharedscripts", "rotate 7 weekly size 500 MB compress sharedscripts size 500M", "crontab -e", "30 * * * * /usr/sbin/logrotate /etc/logrotate.d/ceph-d3bb5396-c404-11ee-9e65-002590fc2a2e >/dev/null 2>&1", "logrotate -f", "logrotate -f /etc/logrotate.d/ceph-12ab345c-1a2b-11ed-b736-fa163e4f6220", "ll LOG_LOCATION", "ll /var/log/ceph/12ab345c-1a2b-11ed-b736-fa163e4f6220 -rw-r--r--. 1 ceph ceph 412 Sep 28 09:26 opslog.log.1.gz", "/usr/local/bin/s3cmd ls", "/usr/local/bin/s3cmd mb s3:// NEW_BUCKET_NAME", "/usr/local/bin/s3cmd mb s3://bucket1 Bucket `s3://bucket1` created", "ll LOG_LOCATION", "ll /var/log/ceph/12ab345c-1a2b-11ed-b736-fa163e4f6220 total 852 -rw-r--r--. 1 ceph ceph 920 Jun 29 02:17 opslog.log -rw-r--r--. 1 ceph ceph 412 Jun 28 09:26 opslog.log.1.gz", "tail -f LOG_LOCATION /opslog.log", "tail -f /var/log/ceph/12ab345c-1a2b-11ed-b736-fa163e4f6220/opslog.log {\"bucket\":\"\",\"time\":\"2022-09-29T06:17:03.133488Z\",\"time_local\":\"2022-09- 29T06:17:03.133488+0000\",\"remote_addr\":\"10.0.211.66\",\"user\":\"test1\", \"operation\":\"list_buckets\",\"uri\":\"GET / HTTP/1.1\",\"http_status\":\"200\",\"error_code\":\"\",\"bytes_sent\":232, \"bytes_received\":0,\"object_size\":0,\"total_time\":9,\"user_agent\":\"\",\"referrer\": \"\",\"trans_id\":\"tx00000c80881a9acd2952a-006335385f-175e5-primary\", \"authentication_type\":\"Local\",\"access_key_id\":\"1234\",\"temp_url\":false} {\"bucket\":\"cn1\",\"time\":\"2022-09-29T06:17:10.521156Z\",\"time_local\":\"2022-09- 29T06:17:10.521156+0000\",\"remote_addr\":\"10.0.211.66\",\"user\":\"test1\", \"operation\":\"create_bucket\",\"uri\":\"PUT /cn1/ HTTP/1.1\",\"http_status\":\"200\",\"error_code\":\"\",\"bytes_sent\":0, \"bytes_received\":0,\"object_size\":0,\"total_time\":106,\"user_agent\":\"\", \"referrer\":\"\",\"trans_id\":\"tx0000058d60c593632c017-0063353866-175e5-primary\", \"authentication_type\":\"Local\",\"access_key_id\":\"1234\",\"temp_url\":false}" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/troubleshooting_guide/configuring-logging
Chapter 11. Identity (keystone) Parameters
Chapter 11. Identity (keystone) Parameters You can modify the keystone service with identity parameters. Parameter Description AdminEmail The email for the OpenStack Identity (keystone) admin account. The default value is [email protected] . AdminToken The OpenStack Identity (keystone) secret and database password. ApacheCertificateKeySize Override the private key size used when creating the certificate for this service. CertificateKeySize Specifies the private key size used when creating the certificate. The default value is 2048 . EnableCache Enable caching with memcached. The default value is True . EnablePublicTLS Whether to enable TLS on the public interface or not. The default value is True . KeystoneAuthMethods A list of methods used for authentication. KeystoneChangePasswordUponFirstUse Enabling this option requires users to change their password when the user is created, or upon administrative reset. KeystoneCorsAllowedOrigin Indicate whether this resource may be shared with the domain received in the request "origin" header. KeystoneCredential0 The first OpenStack Identity (keystone) credential key. Must be a valid key. KeystoneCredential1 The second OpenStack Identity (keystone) credential key. Must be a valid key. KeystoneDisableUserAccountDaysInactive The maximum number of days a user can go without authenticating before being considered "inactive" and automatically disabled (locked). KeystoneEnableMember Create the member role, useful for undercloud deployment. The default value is False . KeystoneFederationEnable Enable support for federated authentication. The default value is False . KeystoneFernetKeys Mapping containing OpenStack Identity (keystone) fernet keys and their paths. KeystoneFernetMaxActiveKeys The maximum active keys in the OpenStack Identity (keystone) fernet key repository. The default value is 5 . KeystoneLDAPBackendConfigs Hash containing the configurations for the LDAP backends configured in keystone. KeystoneLDAPDomainEnable Trigger to call ldap_backend puppet keystone define. The default value is False . KeystoneLockoutDuration The number of seconds a user account will be locked when the maximum number of failed authentication attempts (as specified by KeystoneLockoutFailureAttempts) is exceeded. KeystoneLockoutFailureAttempts The maximum number of times that a user can fail to authenticate before the user account is locked for the number of seconds specified by KeystoneLockoutDuration. KeystoneMinimumPasswordAge The number of days that a password must be used before the user can change it. This prevents users from changing their passwords immediately in order to wipe out their password history and reuse an old password. KeystoneNotificationDriver Comma-separated list of Oslo notification drivers used by OpenStack Identity (keystone). KeystoneNotificationFormat The OpenStack Identity (keystone) notification format. The default value is basic . KeystoneNotificationTopics OpenStack Identity (keystone) notification topics to enable. KeystoneOpenIdcClientId The client ID to use when handshaking with your OpenID Connect provider. KeystoneOpenIdcClientSecret The client secret to use when handshaking with your OpenID Connect provider. KeystoneOpenIdcCryptoPassphrase Passphrase to use when encrypting data for OpenID Connect handshake. The default value is openstack . KeystoneOpenIdcEnable Enable support for OpenIDC federation. The default value is False . KeystoneOpenIdcEnableOAuth Enable OAuth 2.0 integration. The default value is False . 
KeystoneOpenIdcIdpName The name associated with the IdP in OpenStack Identity (keystone). KeystoneOpenIdcIntrospectionEndpoint OAuth 2.0 introspection endpoint for mod_auth_openidc. KeystoneOpenIdcProviderMetadataUrl The url that points to your OpenID Connect provider metadata. KeystoneOpenIdcRemoteIdAttribute Attribute to be used to obtain the entity ID of the Identity Provider from the environment. The default value is HTTP_OIDC_ISS . KeystoneOpenIdcResponseType Response type to be expected from the OpenID Connect provider. The default value is id_token . KeystonePasswordExpiresDays The number of days for which a password will be considered valid before requiring it to be changed. KeystonePasswordRegex The regular expression used to validate password strength requirements. KeystonePasswordRegexDescription Describe your password regular expression here in language for humans. KeystoneSSLCertificate OpenStack Identity (keystone) certificate for verifying token validity. KeystoneSSLCertificateKey OpenStack Identity (keystone) key for signing tokens. KeystoneTokenProvider The OpenStack Identity (keystone) token format. The default value is fernet . KeystoneTrustedDashboards A list of dashboard URLs trusted for single sign-on. KeystoneUniqueLastPasswordCount This controls the number of user password iterations to keep in history, in order to enforce that newly created passwords are unique. KeystoneWorkers Set the number of workers for the OpenStack Identity (keystone) service. Note that more workers creates a larger number of processes on systems, which results in excess memory consumption. It is recommended to choose a suitable non-default value on systems with high CPU core counts. 0 sets to the OpenStack internal default, which is equal to the number of CPU cores on the node. The default value is equal to the number of vCPU cores on the physical node. ManageKeystoneFernetKeys Whether director should manage the OpenStack Identity (keystone) fernet keys or not. If set to True, the fernet keys will get the values from the saved keys repository in OpenStack Workflow (mistral) from the KeystoneFernetKeys variable. If set to false, only the stack creation initializes the keys, but subsequent updates will not touch them. The default value is True . MemcachedTLS Set to True to enable TLS on Memcached service. Because not all services support Memcached TLS, during the migration period, Memcached will listen on 2 ports - on the port set with MemcachedPort parameter (above) and on 11211, without TLS. The default value is False . NotificationDriver Driver or drivers to handle sending notifications. The default value is noop . PublicSSLCertificateAutogenerated Whether the public SSL certificate was autogenerated or not. The default value is False . PublicTLSCAFile Specifies the default CA cert to use if TLS is used for services in the public network. SSLCertificate The content of the SSL certificate (without Key) in PEM format. TokenExpiration Set a token expiration time in seconds. The default value is 3600 .
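Parameters such as these are normally collected in a custom environment file and passed to the overcloud deployment command with -e. The file name and the chosen values below are examples only, not recommended settings.
parameter_defaults:
  KeystoneLockoutFailureAttempts: 5
  KeystoneLockoutDuration: 600
  KeystoneNotificationFormat: basic
  KeystoneCorsAllowedOrigin: 'https://dashboard.example.com'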
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/overcloud_parameters/ref_identity-keystone-parameters_overcloud_parameters
3.6. Image Builder blueprint format
3.6. Image Builder blueprint format Image Builder blueprints are stored as plain text in the Tom's Obvious, Minimal Language (TOML) format. The elements of a typical blueprint file include: The blueprint metadata Replace BLUEPRINT-NAME and LONGER BLUEPRINT DESCRIPTION with a name and a description for your blueprint. Replace VERSION with a version number according to the Semantic Versioning scheme. This part is present only once for the whole blueprint file. The modules entry describes the package names and matching version globs to be installed into the image, and the groups entry describes a group of packages to be installed into the image. If you do not add these entries, the blueprint treats them as empty lists. Packages included in the image Replace package-name with the name of the package, such as httpd , gdb-doc , or coreutils . Replace package-version with the version to use. This field supports dnf version specifications: For a specific version, use the exact version number, such as 7.30 . For the latest available version, use the asterisk * . For the latest minor version, use a format such as 7.* . Repeat this block for every package to be included.
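Taken together, the fragments above form a complete blueprint. The following is a minimal illustrative example; the blueprint name, description, and package selection are arbitrary placeholders:

name = "example-web-image"
description = "An illustrative blueprint that installs a web server"
version = "0.0.1"
modules = []
groups = []

[[packages]]
name = "httpd"
version = "*"

[[packages]]
name = "coreutils"
version = "*"

Used as a blueprint, this file results in an image that contains the latest available versions of the listed packages.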
[ "name = \" BLUEPRINT-NAME \" description = \" LONGER BLUEPRINT DESCRIPTION \" version = \" VERSION \" modules = [] groups = []", "[[packages]] name = \" package-name \" version = \" package-version \"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/image_builder_guide/sect-Documentation-Image_Builder-Test_Chapter3-Test_Section_6
Chapter 11. Client registration CLI
Chapter 11. Client registration CLI The Client Registration CLI is a command-line interface (CLI) tool for application developers to configure new clients in a self-service manner when integrating with Red Hat build of Keycloak. It is specifically designed to interact with Red Hat build of Keycloak Client Registration REST endpoints. It is necessary to create or obtain a client configuration for any application to be able to use Red Hat build of Keycloak. You usually configure a new client for each new application hosted on a unique host name. When an application interacts with Red Hat build of Keycloak, the application identifies itself with a client ID so Red Hat build of Keycloak can provide a login page, single sign-on (SSO) session management, and other services. You can configure application clients from a command line with the Client Registration CLI, and you can use it in shell scripts. To allow a particular user to use the Client Registration CLI, the Red Hat build of Keycloak administrator typically uses the Admin Console to configure a new user with proper roles or to configure a new client and client secret to grant access to the Client Registration REST API. 11.1. Configuring a new regular user for use with Client Registration CLI Procedure Log in to the Admin Console (for example, http://localhost:8080 ) as admin . Select a realm to administer. If you want to use an existing user, select that user to edit; otherwise, create a new user. Select Role Mapping , Assign role . From the option list, click Filter by clients . In the search bar, type manage-clients . Select the role, or if you are in the master realm, select the one with NAME-realm , where NAME is the name of the target realm. You can grant access to any other realm to users in the master realm. Click Assign to grant a full set of client management permissions. Another option is to choose view-clients for read-only access or create-client to create new clients. Note These permissions grant the user the capability to perform operations without the use of an Initial Access Token or a Registration Access Token (see Client registration service for more information). It is possible to not assign any realm-management roles to a user. In that case, a user can still log in with the Client Registration CLI but cannot use it without an Initial Access Token. Trying to perform any operations without a token results in a 403 Forbidden error. The administrator can issue Initial Access Tokens from the Admin Console in the Clients area on the Initial Access Token tab. 11.2. Configuring a client for use with the Client Registration CLI By default, the server recognizes the Client Registration CLI as the admin-cli client, which is configured automatically for every new realm. No additional client configuration is necessary when logging in with a user name. Procedure Create a client (for example, reg-cli ) if you want to use a separate client configuration for the Client Registration CLI. Uncheck Standard Flow Enabled . Strengthen the security by toggling Client authentication to On . Choose the type of account that you want to use. If you want to use a service account associated with the client, check Service accounts roles . If you prefer to use a regular user account, check Direct access grants . Click . Click Save . Click the Credentials tab.
Configure either Client Id and Secret or Signed JWT . If you are using service account roles, click the Service Account Roles tab. Select the roles to configure the access for the service account. For the details on what roles to select, see Section 11.1, "Configuring a new regular user for use with Client Registration CLI" . Click Save . When you run the kcreg config credentials , use the --secret option to supply the configured secret. Specify which clientId to use (for example, --client reg-cli ) when running kcreg config credentials . With the service account enabled, you can omit specifying the user when running kcreg config credentials and only provide the client secret or keystore information. 11.3. Installing the Client Registration CLI The Client Registration CLI is packaged inside the Red Hat build of Keycloak Server distribution. You can find execution scripts inside the bin directory. The Linux script is called kcreg.sh , and the Windows script is called kcreg.bat . Add the Red Hat build of Keycloak server directory to your PATH when setting up the client for use from any location on the file system. For example, on: Linux: Windows: KEYCLOAK_HOME refers to a directory where the Red Hat build of Keycloak Server distribution was unpacked. 11.4. Using the Client Registration CLI Procedure Start an authenticated session by logging in with your credentials. Run commands on the Client Registration REST endpoint. For example, on: Linux: Windows: Note In a production environment, Red Hat build of Keycloak has to be accessed with https: to avoid exposing tokens to network sniffers. If a server's certificate is not issued by one of the trusted certificate authorities (CAs) that are included in Java's default certificate truststore, prepare a truststore.jks file and instruct the Client Registration CLI to use it. For example, on: Linux: Windows: 11.4.1. Logging in Procedure Specify a server endpoint URL and a realm when you log in with the Client Registration CLI. Specify a user name or a client id, which results in a special service account being used. When using a user name, you must use a password for the specified user. When using a client ID, you use a client secret or a Signed JWT instead of a password. Regardless of the login method, the account that logs in needs proper permissions to be able to perform client registration operations. Keep in mind that any account in a non-master realm can only have permissions to manage clients within the same realm. If you need to manage different realms, you can either configure multiple users in different realms, or you can create a single user in the master realm and add roles for managing clients in different realms. You cannot configure users with the Client Registration CLI. Use the Admin Console web interface or the Admin Client CLI to configure users. See Server Administration Guide for more details. When kcreg successfully logs in, it receives authorization tokens and saves them in a private configuration file so the tokens can be used for subsequent invocations. See Section 11.4.2, "Working with alternative configurations" for more information on configuration files. See the built-in help for more information on using the Client Registration CLI. For example, on: Linux: Windows: See kcreg config credentials --help for more information about starting an authenticated session. 11.4.2. 
Working with alternative configurations By default, the Client Registration CLI automatically maintains a configuration file at a default location, ./.keycloak/kcreg.config , under the user's home directory. You can use the --config option to point to a different file or location to maintain multiple authenticated sessions in parallel. It is the safest way to perform operations tied to a single configuration file from a single thread. Important Do not make the configuration file visible to other users on the system. The configuration file contains access tokens and secrets that should be kept private. You might want to avoid storing secrets inside a configuration file by using the --no-config option with all of your commands, even though it is less convenient and requires more token requests to do so. Specify all authentication information with each kcreg invocation. 11.4.3. Initial Access and Registration Access Tokens Developers who do not have an account configured at the Red Hat build of Keycloak server they want to use can use the Client Registration CLI. This is possible only when the realm administrator issues a developer an Initial Access Token. It is up to the realm administrator to decide how and when to issue and distribute these tokens. The realm administrator can limit the maximum age of the Initial Access Token and the total number of clients that can be created with it. Once a developer has an Initial Access Token, the developer can use it to create new clients without authenticating with kcreg config credentials . The Initial Access Token can be stored in the configuration file or specified as part of the kcreg create command. For example, on: Linux: or Windows: or When using an Initial Access Token, the server response includes a newly issued Registration Access Token. Any subsequent operation for that client needs to be performed by authenticating with that token, which is only valid for that client. The Client Registration CLI automatically uses its private configuration file to save and use this token with its associated client. As long as the same configuration file is used for all client operations, the developer does not need to authenticate to read, update, or delete a client that was created this way. See Client registration service for more information about Initial Access and Registration Access Tokens. Run the kcreg config initial-token --help and kcreg config registration-token --help commands for more information on how to configure tokens with the Client Registration CLI. 11.4.4. Creating a client configuration The first task after authenticating with credentials or configuring an Initial Access Token is usually to create a new client. Often you might want to use a prepared JSON file as a template and set or override some of the attributes. The following example shows how to read a JSON file, override any client id it may contain, set any other attributes, and print the configuration to a standard output after successful creation. Linux: Windows: Run the kcreg create --help for more information about the kcreg create command. You can use kcreg attrs to list available attributes. Keep in mind that many configuration attributes are not checked for validity or consistency. It is up to you to specify proper values. Remember that you should not have any id fields in your template and should not specify them as arguments to the kcreg create command. 11.4.5. Retrieving a client configuration You can retrieve an existing client by using the kcreg get command. 
For example, on: Linux: Windows: You can also retrieve the client configuration as an adapter configuration file, which you can package with your web application. For example, on: Linux: Windows: Run the kcreg get --help command for more information about the kcreg get command. 11.4.6. Modifying a client configuration There are two methods for updating a client configuration. One method is to submit a complete new state to the server after getting the current configuration, saving it to a file, editing it, and posting it back to the server. For example, on: Linux: Windows: The second method fetches the current client, sets or deletes fields on it, and posts it back in one step. For example, on: Linux: Windows: You can also use a file that contains only changes to be applied so you do not have to specify too many values as arguments. In this case, specify --merge to tell the Client Registration CLI that rather than treating the JSON file as a full, new configuration, it should treat it as a set of attributes to be applied over the existing configuration. For example, on: Linux: Windows: Run the kcreg update --help command for more information about the kcreg update command. 11.4.7. Deleting a client configuration Use the following example to delete a client. Linux: Windows: Run the kcreg delete --help command for more information about the kcreg delete command. 11.4.8. Refreshing invalid Registration Access Tokens When performing a create, read, update, and delete (CRUD) operation using the --no-config mode, the Client Registration CLI cannot handle Registration Access Tokens for you. In that case, it is possible to lose track of the most recently issued Registration Access Token for a client, which makes it impossible to perform any further CRUD operations on that client without authenticating with an account that has manage-clients permissions. If you have permissions, you can issue a new Registration Access Token for the client and have it printed to a standard output or saved to a configuration file of your choice. Otherwise, you have to ask the realm administrator to issue a new Registration Access Token for your client and send it to you. You can then pass it to any CRUD command via the --token option. You can also use the kcreg config registration-token command to save the new token in a configuration file and have the Client Registration CLI automatically handle it for you from that point on. Run the kcreg update-token --help command for more information about the kcreg update-token command. 11.5. Troubleshooting Q: When logging in, I get an error: Parameter client_assertion_type is missing [invalid_client] . A: This error means your client is configured with Signed JWT token credentials, which means you have to use the --keystore parameter when logging in.
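As a rough illustration of the template-driven workflow described in Section 11.4.4, the following client-template.json is a hypothetical starting point; every attribute value here is an assumption, and id fields are deliberately omitted, as recommended above:

{
  "clientId": "placeholder-client",
  "enabled": true,
  "protocol": "openid-connect",
  "publicClient": true,
  "redirectUris": [ "http://localhost:8980/placeholder/*" ]
}

Passing this file to kcreg create with -f client-template.json and overriding the client ID with -s clientId=myclient , as in the earlier examples, creates the client, and adding the -o option prints the resulting configuration.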
[ "export PATH=USDPATH:USDKEYCLOAK_HOME/bin kcreg.sh", "c:\\> set PATH=%PATH%;%KEYCLOAK_HOME%\\bin c:\\> kcreg", "kcreg.sh config credentials --server http://localhost:8080 --realm demo --user user --client reg-cli kcreg.sh create -s clientId=my_client -s 'redirectUris=[\"http://localhost:8980/myapp/*\"]' kcreg.sh get my_client", "c:\\> kcreg config credentials --server http://localhost:8080 --realm demo --user user --client reg-cli c:\\> kcreg create -s clientId=my_client -s \"redirectUris=[\\\"http://localhost:8980/myapp/*\\\"]\" c:\\> kcreg get my_client", "kcreg.sh config truststore --trustpass USDPASSWORD ~/.keycloak/truststore.jks", "c:\\> kcreg config truststore --trustpass %PASSWORD% %HOMEPATH%\\.keycloak\\truststore.jks", "kcreg.sh help", "c:\\> kcreg help", "kcreg.sh config initial-token USDTOKEN kcreg.sh create -s clientId=myclient", "kcreg.sh create -s clientId=myclient -t USDTOKEN", "c:\\> kcreg config initial-token %TOKEN% c:\\> kcreg create -s clientId=myclient", "c:\\> kcreg create -s clientId=myclient -t %TOKEN%", "kcreg.sh create -f client-template.json -s clientId=myclient -s baseUrl=/myclient -s 'redirectUris=[\"/myclient/*\"]' -o", "C:\\> kcreg create -f client-template.json -s clientId=myclient -s baseUrl=/myclient -s \"redirectUris=[\\\"/myclient/*\\\"]\" -o", "kcreg.sh get myclient", "C:\\> kcreg get myclient", "kcreg.sh get myclient -e install > keycloak.json", "C:\\> kcreg get myclient -e install > keycloak.json", "kcreg.sh get myclient > myclient.json vi myclient.json kcreg.sh update myclient -f myclient.json", "C:\\> kcreg get myclient > myclient.json C:\\> notepad myclient.json C:\\> kcreg update myclient -f myclient.json", "kcreg.sh update myclient -s enabled=false -d redirectUris", "C:\\> kcreg update myclient -s enabled=false -d redirectUris", "kcreg.sh update myclient --merge -d redirectUris -f mychanges.json", "C:\\> kcreg update myclient --merge -d redirectUris -f mychanges.json", "kcreg.sh delete myclient", "C:\\> kcreg delete myclient" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/securing_applications_and_services_guide/client-registration-cli-
Chapter 4. RoleBinding [rbac.authorization.k8s.io/v1]
Chapter 4. RoleBinding [rbac.authorization.k8s.io/v1] Description RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace. Type object Required roleRef 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. roleRef object RoleRef contains information that points to the role being used subjects array Subjects holds references to the objects the role applies to. subjects[] object Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. 4.1.1. .roleRef Description RoleRef contains information that points to the role being used Type object Required apiGroup kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 4.1.2. .subjects Description Subjects holds references to the objects the role applies to. Type array 4.1.3. .subjects[] Description Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. Type object Required kind name Property Type Description apiGroup string APIGroup holds the API group of the referenced subject. Defaults to "" for ServiceAccount subjects. Defaults to "rbac.authorization.k8s.io" for User and Group subjects. kind string Kind of object being referenced. Values defined by this API group are "User", "Group", and "ServiceAccount". If the Authorizer does not recognized the kind value, the Authorizer should report an error. name string Name of the object being referenced. namespace string Namespace of the referenced object. If the object kind is non-namespace, such as "User" or "Group", and this value is not empty the Authorizer should report an error. 4.2. API endpoints The following API endpoints are available: /apis/rbac.authorization.k8s.io/v1/rolebindings GET : list or watch objects of kind RoleBinding /apis/rbac.authorization.k8s.io/v1/watch/rolebindings GET : watch individual changes to a list of RoleBinding. deprecated: use the 'watch' parameter with a list operation instead. /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings DELETE : delete collection of RoleBinding GET : list or watch objects of kind RoleBinding POST : create a RoleBinding /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings GET : watch individual changes to a list of RoleBinding. 
deprecated: use the 'watch' parameter with a list operation instead. /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings/{name} DELETE : delete a RoleBinding GET : read the specified RoleBinding PATCH : partially update the specified RoleBinding PUT : replace the specified RoleBinding /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings/{name} GET : watch changes to an object of kind RoleBinding. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/rbac.authorization.k8s.io/v1/rolebindings Table 4.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind RoleBinding Table 4.2. HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty 4.2.2. /apis/rbac.authorization.k8s.io/v1/watch/rolebindings Table 4.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of RoleBinding. deprecated: use the 'watch' parameter with a list operation instead. Table 4.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings Table 4.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of RoleBinding Table 4.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 4.8. Body parameters Parameter Type Description body DeleteOptions schema Table 4.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind RoleBinding Table 4.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.11. HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty HTTP method POST Description create a RoleBinding Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. Body parameters Parameter Type Description body RoleBinding schema Table 4.14. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 202 - Accepted RoleBinding schema 401 - Unauthorized Empty 4.2.4. /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings Table 4.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of RoleBinding. deprecated: use the 'watch' parameter with a list operation instead. Table 4.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings/{name} Table 4.18. Global path parameters Parameter Type Description name string name of the RoleBinding namespace string object name and auth scope, such as for teams and projects Table 4.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a RoleBinding Table 4.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.21. Body parameters Parameter Type Description body DeleteOptions schema Table 4.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified RoleBinding Table 4.23. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RoleBinding Table 4.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.25. Body parameters Parameter Type Description body Patch schema Table 4.26. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RoleBinding Table 4.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.28. Body parameters Parameter Type Description body RoleBinding schema Table 4.29. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 401 - Unauthorized Empty 4.2.6. /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings/{name} Table 4.30. Global path parameters Parameter Type Description name string name of the RoleBinding namespace string object name and auth scope, such as for teams and projects Table 4.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind RoleBinding. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
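To make the schema concrete, the following is a minimal sketch of a RoleBinding manifest that could be submitted to the create endpoint described in section 4.2.3; the namespace, role, and subject names are illustrative assumptions:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-reader-binding
  namespace: example-namespace
roleRef:
  # roleRef is required; it can reference a Role in the same namespace or a ClusterRole
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: example-reader-role
subjects:
# a ServiceAccount subject defaults its apiGroup to "" per the spec above
- kind: ServiceAccount
  name: example-service-account
  namespace: example-namespace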
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/rbac_apis/rolebinding-rbac-authorization-k8s-io-v1
2.29. RHEA-2011:0627 - new package: pki-core
2.29. RHEA-2011:0627 - new package: pki-core New pki-core packages are now available for Red Hat Enterprise Linux 6. Red Hat Certificate System is an enterprise software system designed to manage enterprise public key infrastructure (PKI) deployments. PKI Core contains fundamental packages required by Red Hat Certificate System, which comprise the Certificate Authority (CA) subsystem. Note: The Certificate Authority component provided by this errata cannot be used as a standalone server. It is installed and operates as a part of the Red Hat Enterprise Identity (IPA). This enhancement update adds the pki-core packages to Red Hat Enterprise Linux 6. (BZ# 645097 ) All users should install these new packages.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/pki-core_new
Chapter 42. Barcode DataFormat
Chapter 42. Barcode DataFormat Available as of Camel version 2.14 The barcode data format is based on the zxing library . The goal of this component is to create a barcode image from a String (marshal) and a String from a barcode image (unmarshal). You're free to use all features that zxing offers. 42.1. Dependencies To use the barcode data format in your camel routes you need to add a dependency on camel-barcode which implements this data format. If you use maven you could just add the following to your pom.xml, substituting the version number for the latest & greatest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-barcode</artifactId> <version>x.x.x</version> </dependency> 42.2. Barcode Options The Barcode dataformat supports 5 options, which are listed below. Name Default Java Type Description width Integer Width of the barcode height Integer Height of the barcode imageType String Image type of the barcode such as png barcodeFormat String Barcode format such as QR-Code contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 42.3. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.dataformat.barcode.barcode-format Barcode format such as QR-Code String camel.dataformat.barcode.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.barcode.enabled Enable barcode dataformat true Boolean camel.dataformat.barcode.height Height of the barcode Integer camel.dataformat.barcode.image-type Image type of the barcode such as png String camel.dataformat.barcode.width Width of the barcode Integer 42.4. Using the Java DSL First you have to initialize the barcode data format class. You can use the default constructor, or one of the parameterized constructors (see JavaDoc). The default values are: Parameter Default Value image type (BarcodeImageType) PNG width 100 px height 100 px encoding UTF-8 barcode format (BarcodeFormat) QR-Code // QR-Code default DataFormat code = new BarcodeDataFormat(); If you want to use zxing hints, you can use the 'addToHintMap' method of your BarcodeDataFormat instance: code.addToHintMap(DecodeHintType.TRY_HARDER, Boolean.TRUE); For possible hints, please consult the zxing documentation. 42.4.1. Marshalling from("direct://code") .marshal(code) .to("file://barcode_out"); You can call the route from a test class with: template.sendBody("direct://code", "This is a testmessage!"); You should find the generated QR-Code image inside the 'barcode_out' folder. 42.4.2. Unmarshalling The unmarshaller is generic. For unmarshalling you can use any BarcodeDataFormat instance. If you have two instances, one for (generating) QR-Code and one for PDF417, it doesn't matter which one will be used. from("file://barcode_in?noop=true") .unmarshal(code) // for unmarshalling, the instance doesn't matter .to("mock:out"); If you paste the generated QR-Code image into the 'barcode_in' folder, you should find 'This is a testmessage!' inside the mock.
You can find the barcode data format as a header variable: Name Type Description BarcodeFormat String Value of com.google.zxing.BarcodeFormat.
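As a sketch only, the Spring Boot property names from the auto-configuration table above could be set in application.properties; the values below are illustrative assumptions, not defaults:
# application.properties - example barcode data format tuning
camel.dataformat.barcode.width=200
camel.dataformat.barcode.height=200
camel.dataformat.barcode.image-type=png
camel.dataformat.barcode.barcode-format=QR-Code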
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-barcode</artifactId> <version>x.x.x</version> </dependency>", "// QR-Code default DataFormat code = new BarcodeDataFormat();", "code.addToHintMap(DecodeHintType.TRY_HARDER, Boolean.true);", "from(\"direct://code\") .marshal(code) .to(\"file://barcode_out\");", "template.sendBody(\"direct://code\", \"This is a testmessage!\");", "from(\"file://barcode_in?noop=true\") .unmarshal(code) // for unmarshalling, the instance doesn't matter .to(\"mock:out\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/barcode-dataformat
Chapter 31. Configuring a custom PKI
Chapter 31. Configuring a custom PKI Some platform components, such as the web console, use Routes for communication and must trust other components' certificates to interact with them. If you are using a custom public key infrastructure (PKI), you must configure it so its privately signed CA certificates are recognized across the cluster. You can leverage the Proxy API to add cluster-wide trusted CA certificates. You must do this either during installation or at runtime. During installation , configure the cluster-wide proxy . You must define your privately signed CA certificates in the install-config.yaml file's additionalTrustBundle setting. The installation program generates a ConfigMap that is named user-ca-bundle that contains the additional CA certificates you defined. The Cluster Network Operator then creates a trusted-ca-bundle ConfigMap that merges these CA certificates with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle; this ConfigMap is referenced in the Proxy object's trustedCA field. At runtime , modify the default Proxy object to include your privately signed CA certificates (part of cluster's proxy enablement workflow). This involves creating a ConfigMap that contains the privately signed CA certificates that should be trusted by the cluster, and then modifying the proxy resource with the trustedCA referencing the privately signed certificates' ConfigMap. Note The installer configuration's additionalTrustBundle field and the proxy resource's trustedCA field are used to manage the cluster-wide trust bundle; additionalTrustBundle is used at install time and the proxy's trustedCA is used at runtime. The trustedCA field is a reference to a ConfigMap containing the custom certificate and key pair used by the cluster component. 31.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. 
The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 31.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Warning Enabling the cluster-wide proxy causes the Machine Config Operator (MCO) to trigger node reboot. Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a config map that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 
2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The config map name that will be referenced from the Proxy object. 4 The config map must be in the openshift-config namespace. Create the config map from this file: USD oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: USD oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https . Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http . This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses. 3 A comma-separated list of destination domain names, domains, IP addresses (or other network CIDRs), and port numbers to exclude proxying. Note Port numbers are only supported when configuring IPv6 addresses. Port numbers are not supported when configuring IPv4 addresses. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 31.3. Certificate injection using Operators Once your custom CA certificate is added to the cluster via ConfigMap, the Cluster Network Operator merges the user-provided and system CA certificates into a single bundle and injects the merged bundle into the Operator requesting the trust bundle injection. Important After adding a config.openshift.io/inject-trusted-cabundle="true" label to the config map, existing data in it is deleted. The Cluster Network Operator takes ownership of a config map and only accepts ca-bundle as data. You must use a separate config map to store service-ca.crt by using the service.beta.openshift.io/inject-cabundle=true annotation or a similar configuration. Adding a config.openshift.io/inject-trusted-cabundle="true" label and service.beta.openshift.io/inject-cabundle=true annotation on the same config map can cause issues. 
Operators request this injection by creating an empty ConfigMap with the following label: config.openshift.io/inject-trusted-cabundle="true" An example of the empty ConfigMap: apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: "true" name: ca-inject 1 namespace: apache 1 Specifies the empty ConfigMap name. The Operator mounts this ConfigMap into the container's local trust store. Note Adding a trusted CA certificate is only needed if the certificate is not included in the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. Certificate injection is not limited to Operators. The Cluster Network Operator injects certificates across any namespace when an empty ConfigMap is created with the config.openshift.io/inject-trusted-cabundle=true label. The ConfigMap can reside in any namespace, but the ConfigMap must be mounted as a volume to each container within a pod that requires a custom CA. For example: apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: ... spec: ... containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: ca-inject items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2 1 ca-bundle.crt is required as the ConfigMap key. 2 tls-ca-bundle.pem is required as the ConfigMap path.
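As a hedged example that mirrors the ca-inject ConfigMap above, the same result can be achieved from the command line, and the injected bundle can then be inspected:
# Create the empty config map, label it for injection, and confirm the merged CA bundle
oc create configmap ca-inject -n apache
oc label configmap ca-inject -n apache config.openshift.io/inject-trusted-cabundle=true
oc get configmap ca-inject -n apache -o yaml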
[ "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "config.openshift.io/inject-trusted-cabundle=\"true\"", "apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: \"true\" name: ca-inject 1 namespace: apache", "apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: spec: containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: ca-inject items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/configuring-a-custom-pki
Chapter 21. Execution Environment Setup Reference
Chapter 21. Execution Environment Setup Reference This section contains reference information associated with the definition of an execution environment. You define the content of your execution environment in a YAML file. By default, this file is called execution_environment.yml . This file tells Ansible Builder how to create the build instruction file (Containerfile for Podman, Dockerfile for Docker) and build context for your container image. Note The definition schema for Ansible Builder 3.x is documented here. If you are running an older version of Ansible Builder, you need an older schema version. For more information, see older versions of this documentation. We recommend using version 3, which offers substantially more configurable options and functionality than versions. 21.1. Execution environment definition example You must create a definition file to build an image for an execution environment. The file is in YAML format. You must specify the version of Ansible Builder in the definition file. The default version is 1. The following definition file is using Ansible Builder version 3: version: 3 build_arg_defaults: ANSIBLE_GALAXY_CLI_COLLECTION_OPTS: '--pre' dependencies: galaxy: requirements.yml python: - six - psutil system: bindep.txt images: base_image: name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest additional_build_files: - src: files/ansible.cfg dest: configs additional_build_steps: prepend_galaxy: - ADD _build/configs/ansible.cfg /home/runner/.ansible.cfg prepend_final: | RUN whoami RUN cat /etc/os-release append_final: - RUN echo This is a post-install command! - RUN ls -la /etc 21.2. Configuration options Use the following configuration YAML keys in your definition file. The Ansible Builder 3.x execution environment definition file accepts seven top-level sections: additional_build_files additional_build_steps build_arg_defaults dependencies images image verification options version 21.2.1. additional_build_files The build files specify what are to be added to the build context directory. These can then be referenced or copied by additional_build_steps during any build stage. The format is a list of dictionary values, each with a src and dest key and value. Each list item must be a dictionary containing the following required keys: src Specifies the source files to copy into the build context directory. This can be an absolute path, for example, /home/user/.ansible.cfg , or a path that is relative to the file. Relative paths can be a glob expression matching one or more files, for example, files/\*.cfg . Note that an absolute path must not include a regular expression. If src is a directory, the entire contents of that directory are copied to dest . dest Specifies a subdirectory path underneath the _build subdirectory of the build context directory that contains the source files, for example, files/configs . This cannot be an absolute path or contain .. within the path. This directory is created for you if it does not exist. Note When using an ansible.cfg file to pass a token and other settings for a private account to an automation hub server, listing the configuration file path here as a string enables it to be included as a build argument in the initial phase of the build. 21.2.2. additional_build_steps The build steps specify custom build commands for any build phase. These commands are inserted directly into the build instruction file for the container runtime, for example, Containerfile or Dockerfile. 
The commands must conform to any rules required by the containerization tool. You can add build steps before or after any stage of the image creation process. For example, if you need git to be installed before you install your dependencies, you can add a build step at the end of the base build stage. The following are the valid keys. Each supports either a multi-line string, or a list of strings. append_base Commands to insert after building of the base image. append_builder Commands to insert after building of the builder image. append_final Commands to insert after building of the final image. append_galaxy Commands to insert after building of the galaxy image. prepend_base Commands to insert before building of the base image. prepend_builder Commands to insert before building of the builder image. prepend_final Commands to insert before building of the final image. prepend_galaxy Commands to insert before building of the galaxy image. 21.2.3. build_arg_defaults This specifies the default values for build arguments as a dictionary. This is an alternative to using the --build-arg CLI flag. Ansible Builder uses the following build arguments: ANSIBLE_GALAXY_CLI_COLLECTION_OPTS Enables the user to pass the -pre flag and other flags to enable the installation of pre-release collections. ANSIBLE_GALAXY_CLI_ROLE_OPTS This enables the user to pass any flags, such as --no-deps , to the role installation. PKGMGR_PRESERVE_CACHE This controls how often the package manager cache is cleared during the image build process. If this value is not set, which is the default, the cache is cleared frequently. If the value is always , the cache is never cleared. Any other value forces the cache to be cleared only after the system dependencies are installed in the final build stage. Ansible Builder hard-codes values given inside of build_arg_defaults into the build instruction file, so they persist if you run your container build manually. If you specify the same variable in the definition and at the command line with the CLI build-arg flag, the CLI value overrides the value in the definition. 21.2.4. Dependencies Specifies dependencies to install into the final image, including ansible-core , ansible-runner , Python packages, system packages, and collections. Ansible Builder automatically installs dependencies for any Ansible collections you install. In general, you can use standard syntax to constrain package versions. Use the same syntax you would pass to dnf , pip , ansible-galaxy , or any other package management utility. You can also define your packages or collections in separate files and reference those files in the dependencies section of your definition file. The following keys are valid: ansible_core The version of the ansible-core Python package to be installed. This value is a dictionary with a single key, package_pip . The package_pip value is passed directly to pip for installation and can be in any format that pip supports. The following are some example values: ansible_core: package_pip: ansible-core ansible_core: package_pip: ansible-core==2.14.3 ansible_core: package_pip: https://github.com/example_user/ansible/archive/refs/heads/ansible.tar.gz ansible_runner The version of the Ansible Runner Python package to be installed. This value is a dictionary with a single key, package_pip . The package_pip value is passed directly to pip for installation and can be in any format that pip supports. 
The following are some example values: ansible_runner: package_pip: ansible-runner ansible_runner: package_pip: ansible-runner==2.3.2 ansible_runner: package_pip: https://github.com/example_user/ansible-runner/archive/refs/heads/ansible-runner.tar.gz galaxy Collections to be installed from Ansible Galaxy. This can be a filename, a dictionary, or a multi-line string representation of an Ansible Galaxy requirements.yml file. For more information about the requirements file format, see the Galaxy User Guide . python The Python installation requirements. This can be a filename, or a list of requirements. Ansible Builder combines all the Python requirements files from all collections into a single file using the requirements-parser library. This library supports complex syntax, including references to other files. If many collections require the same package name, Ansible Builder combines them into a single entry and combines the constraints. Ansible Builder excludes some packages in the combined file of Python dependencies even if a collection lists them as dependencies. These include test packages and packages that provide Ansible itself. The full list is available under EXCLUDE_REQUIREMENTS in src/ansible_builder/_target_scripts/introspect.py . If you need to include one of these excluded package names, use the --user-pip option of the introspect command to list it in the user requirements file. Packages supplied this way are not processed against the list of excluded Python packages. python_interpreter A dictionary that defines the Python system package name to be installed by dnf ( package_system ) or a path to the Python interpreter to be used ( python_path) . system The system packages to be installed, in bindep format. This can be a filename or a list of requirements. For more information about bindep, see the OpenDev documentation . For system packages, use the bindep format to specify cross-platform requirements, so they can be installed by whichever package management system the execution environment uses. Collections must specify necessary requirements for [platform:rpm] . Ansible Builder combines system package entries from multiple collections into a single file. Only requirements with no profiles (runtime requirements) are installed to the image. Entries from many collections which are duplicates of each other can be consolidated in the combined file. The following example uses filenames that contain the various dependencies: dependencies: python: requirements.txt system: bindep.txt galaxy: requirements.yml ansible_core: package_pip: ansible-core==2.14.2 ansible_runner: package_pip: ansible-runner==2.3.1 python_interpreter: package_system: "python310" python_path: "/usr/bin/python3.10" This example uses inline values: dependencies: python: - pywinrm system: - iputils [platform:rpm] galaxy: collections: - name: community.windows - name: ansible.utils version: 2.10.1 ansible_core: package_pip: ansible-core==2.14.2 ansible_runner: package_pip: ansible-runner==2.3.1 python_interpreter: package_system: "python310" python_path: "/usr/bin/python3.10" Note If any of these dependency files ( requirements.txt, bindep.txt, and requirements.yml ) are in the build_ignore of the collection, the build fails. Collection maintainers can verify that ansible-builder recognizes the requirements they expect by using the introspect command: ansible-builder introspect --sanitize ~/.ansible/collections/ The --sanitize option reviews all of the collection requirements and removes duplicates.
It also removes any Python requirements that are normally excluded (see python dependencies). Use the -v3 option to introspect to see logging messages about requirements that are being excluded. 21.2.5. images Specifies the base image to be used. At a minimum you must specify a source, image, and tag for the base image. The base image provides the operating system and can also provide some packages. Use the standard host/namespace/container:tag syntax to specify images. You can use Podman or Docker shortcut syntax instead, but the full definition is more reliable and portable. Valid keys for this section are: base_image A dictionary defining the parent image for the execution environment. A name key must be supplied with the container image to use. Use the signature_original_name key if the image is mirrored within your repository, but signed with the original image's signature key. 21.2.6. Image verification You can verify signed container images if you are using the podman container runtime. Set the container-policy CLI option to control how this data is used in relation to a Podman policy.json file for container image signature validation. ignore_all policy: Generate a policy.json file in the build context directory <context> where no signature validation is performed. system policy: Signature validation is performed using pre-existing policy.json files in standard system locations. ansible-builder assumes no responsibility for the content within these files, and the user has complete control over the content. signature_required policy: ansible-builder uses the container image definitions to generate a policy.json file in the build context directory <context> that is used during the build to validate the images. 21.2.7. options A dictionary of keywords or options that can affect the runtime functionality of Ansible Builder. Valid keys for this section are: container_init : A dictionary with keys that allow for customization of the container ENTRYPOINT and CMD directives (and related behaviors). Customizing these behaviors is an advanced task, and can result in failures that are difficult to debug. Because the provided defaults control several intertwined behaviors, overriding any value skips all remaining defaults in this dictionary. Valid keys are: cmd : Literal value for the CMD Containerfile directive. The default value is ["bash"] . entrypoint : Literal value for the ENTRYPOINT Containerfile directive. The default entrypoint behavior handles signal propagation to subprocesses, as well as attempting to ensure at runtime that the container user has a proper environment with a valid writeable home directory, represented in /etc/passwd , with the HOME environment variable set to match. The default entrypoint script can emit warnings to stderr in cases where it is unable to suitably adjust the user runtime environment. This behavior can be ignored or elevated to a fatal error; consult the source for the entrypoint target script for more details. The default value is ["/opt/builder/bin/entrypoint", "dumb-init"] . package_pip : Package to install with pip for entrypoint support. This package is installed in the final build image. The default value is dumb-init==1.2.5 . package_manager_path : string with the path to the package manager (dnf or microdnf) to use. The default is /usr/bin/dnf . This value is used to install a Python interpreter, if specified in dependencies , and during the build phase by the assemble script.
skip_ansible_check : This boolean value controls whether or not the check for an installation of Ansible and Ansible Runner is performed on the final image. Set this value to True to not perform this check. The default is False . relax_passwd_permissions : This boolean value controls whether the root group (GID 0) is explicitly granted write permission to /etc/passwd in the final container image. The default entrypoint script can attempt to update /etc/passwd under some container runtimes with dynamically created users to ensure a fully-functional POSIX user environment and home directory. Disabling this capability can cause failures of software features that require users to be listed in /etc/passwd with a valid and writeable home directory, for example, async in ansible-core, and the ~username shell expansion. The default is True . workdir : Default current working directory for new processes started under the final container image. Some container runtimes also use this value as HOME for dynamically-created users in the root (GID 0) group. When this value is specified, if the directory does not already exist, it is created, set to root group ownership, and rwx group permissions are recursively applied to it. The default value is /runner . user : This sets the username or UID to use as the default user for the final container image. The default value is 1000 . Example options: 21.2.8. version An integer value that sets the schema version of the execution environment definition file. Defaults to 1 . The value must be 3 if you are using Ansible Builder 3.x. 21.3. Default execution environment for AWX The example in test/data/pytz requires the awx.awx collection in the definition. The lookup plugin awx.awx.tower_schedule_rrule requires the PyPI pytz and another library to work. If the test/data/pytz/execution-environment.yml file is provided to the ansible-builder build command, it installs the collection inside the image, reads the requirements.txt file inside of the collection, and then installs pytz into the image. The image produced can be used inside of an ansible-runner project by placing these variables inside the env/settings file, inside the private data directory. --- container_image: image-name process_isolation_executable: podman # or docker process_isolation: true The awx.awx collection is a subset of content included in the default AWX . For further information, see the awx-ee repository .
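As a sketch under the assumption that the definition file is named execution-environment.yml and Podman is available (the image tag is a placeholder), the image can be built and checked as follows:
# Build the execution environment image from the definition file
ansible-builder build --file execution-environment.yml --tag my_ee:latest --container-runtime podman -v3
# Verify that the resulting image exists
podman images my_ee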
[ "version: 3 build_arg_defaults: ANSIBLE_GALAXY_CLI_COLLECTION_OPTS: '--pre' dependencies: galaxy: requirements.yml python: - six - psutil system: bindep.txt images: base_image: name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest additional_build_files: - src: files/ansible.cfg dest: configs additional_build_steps: prepend_galaxy: - ADD _build/configs/ansible.cfg /home/runner/.ansible.cfg prepend_final: | RUN whoami RUN cat /etc/os-release append_final: - RUN echo This is a post-install command! - RUN ls -la /etc", "ansible_core: package_pip: ansible-core ansible_core: package_pip: ansible-core==2.14.3 ansible_core: package_pip: https://github.com/example_user/ansible/archive/refs/heads/ansible.tar.gz", "ansible_runner: package_pip: ansible-runner ansible_runner: package_pip: ansible-runner==2.3.2 ansible_runner: package_pip: https://github.com/example_user/ansible-runner/archive/refs/heads/ansible-runner.tar.gz", "dependencies: python: requirements.txt system: bindep.txt galaxy: requirements.yml ansible_core: package_pip: ansible-core==2.14.2 ansible_runner: package_pip: ansible-runner==2.3.1 python_interpreter: package_system: \"python310\" python_path: \"/usr/bin/python3.10\"", "dependencies: python: - pywinrm system: - iputils [platform:rpm] galaxy: collections: - name: community.windows - name: ansible.utils version: 2.10.1 ansible_core: package_pip: ansible-core==2.14.2 ansible_runner: package_pip: ansible-runner==2.3.1 python_interpreter: package_system: \"python310\" python_path: \"/usr/bin/python3.10\"", "ansible-builder introspect --sanitize ~/.ansible/collections/", "options: container_init: package_pip: dumb-init>=1.2.5 entrypoint: '[\"dumb-init\"]' cmd: '[\"csh\"]' package_manager_path: /usr/bin/microdnf relax_password_permissions: false skip_ansible_check: true workdir: /myworkdir user: bob", "--- container_image: image-name process_isolation_executable: podman # or docker process_isolation: true" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/assembly-controller-ee-setup-reference
Chapter 5. Reference
Chapter 5. Reference 5.1. key-manager attributes You can configure a key-manager by setting its attributes. Table 5.1. key-manager attributes Attribute Description algorithm The name of the algorithm to use to create the underlying KeyManagerFactory . This is provided by the JDK. For example, a JDK that uses SunJSSE provides the PKIX and SunX509 algorithms. For more information, see the Support Classes and Interfaces on the Oracle website. alias-filter A filter to apply to the aliases returned from the keystore. This can either be a comma-separated list of aliases to return or one of the following formats: ALL:-alias1:-alias2 NONE:+alias1:+alias2 credential-reference The credential reference to decrypt keystore item. This can be specified in clear text or as a reference to a credential stored in a credential-store . This is not a password of the keystore. generate-self-signed-certificate-host If the file that backs the keystore does not exist and this attribute is set, then a self-signed certificate is generated for the specified host name. Do not set this attribute in a production environment. key-store Reference to the key-store to use to initialize the underlying KeyManagerFactory . provider-name The name of the provider to use to create the underlying KeyManagerFactory . providers Reference to obtain the Provider[] to use when creating the underlying KeyManagerFactory . 5.2. key-store attributes You can configure a key-store by setting its attributes. Table 5.2. key-store attributes Attribute Description alias-filter A filter to apply to the aliases returned from the keystore, can either be a comma separated list of aliases to return or one of the following formats: ALL:-alias1:-alias2 NONE:+alias1:+alias2 Note The alias-filter attribute is case sensitive. Because the use of mixed-case or uppercase aliases, such as elytronAppServer , might not be recognized by some keystore providers, it is recommended to use lowercase aliases, such as elytronappserver . credential-reference The password to use to access the keystore. This can be specified in clear text or as a reference to a credential stored in a credential-store . path The path to the keystore file. provider-name The name of the provider to use to load the keystore. When you set this attribute, the search for the first provider that can create a key store of the specified type is disabled. providers A reference to the providers that should be used to obtain the list of provider instances to search. If not specified, the global list of providers will be used instead. relative-to The base path this store is relative to. This can be a full path or a predefined path such as jboss.server.config.dir . required If set to true , the key store file referenced must exist at the time the key store service starts. The default value is false . type The type of the key store, for example, JKS . Note The following key store types are automatically detected: JKS JCEKS PKCS12 BKS BCFKS UBER You must manually specify the other key store types. A full list of key store types can be found in Java Cryptography Architecture Standard Algorithm Name Documentation for JDK 11 in the Oracle JDK documentation. 5.3. server-ssl-context attributes You can configure the server SSL context, server-ssl-context , by setting its attributes. Table 5.3. server-ssl-context attributes Attribute Description authentication-optional If true rejecting of the client certificate by the security domain will not prevent the connection. 
This allows a fall through to use other authentication mechanisms, such as form login, when the client certificate is rejected by the security domain. This has an effect only when the security domain is set. This defaults to false . cipher-suite-filter The filter to apply to specify the enabled cipher suites. This filter takes a list of items delimited by colons, commas, or spaces. Each item may be an OpenSSL-style cipher suite name, a standard SSL/TLS cipher suite name, or a keyword such as TLSv1.2 or DES . A full list of keywords as well as additional details on creating a filter can be found in the Javadoc for the CipherSuiteSelector class. The default value is DEFAULT , which corresponds to all known cipher suites that do not have NULL encryption and excludes any cipher suites that have no authentication. cipher-suite-names The filter to apply to specify the enabled cipher suites for TLSv1.3. final-principal-transformer A final principal transformer to apply for this mechanism realm. key-manager Reference to the key managers to use within the SSLContext . maximum-session-cache-size The maximum number of SSL/TLS sessions to be cached. need-client-auth If set to true , a client certificate is required on SSL handshake. Connection without a trusted client certificate will be rejected. This defaults to false . post-realm-principal-transformer A principal transformer to apply after the realm is selected. pre-realm-principal-transformer A principal transformer to apply before the realm is selected. protocols The enabled protocols. Allowed options are SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2 TLSv1.3 This defaults to enabling TLSv1 , TLSv1.1 , TLSv1.2 , and TLSv1.3 . Warning Use TLSv1.2 or TLSv1.3 instead of SSLv2, SSLv3, and TLSv1.0. Using SSLv2, SSLv3, or TLSv1.0 poses a security risk, therefore you must explicitly disable them. If you do not specify a protocol, configuring cipher-suite-names sets the value of protocols to TLSv1.3 . provider-name The name of the provider to use. If not specified, all providers from providers will be passed to the SSLContext . providers The name of the providers to obtain the Provider[] to use to load the SSLContext . realm-mapper The realm mapper to be used for SSL/TLS authentication. security-domain The security domain to use for authentication during SSL/TLS session establishment. session-timeout The timeout for SSL sessions, in seconds. The value -1 directs Elytron to use the Java Virtual Machine (JVM) default value. The value 0 indicates that there is no timeout. The default value is -1 . trust-manager Reference to the trust-manager to use within the SSLContext. use-cipher-suites-order If set to true the cipher suites order defined on the server is used. If set to false the cipher suites order presented by the client is used. Defaults to true . want-client-auth If set to true a client certificate is requested, but not required, on SSL handshake. If a security domain is referenced and supports X509 evidence, want-client-auth is set to true automatically. This is ignored when need-client-auth is set. This defaults to false . wrap If true , the returned SSLEngine , SSLSocket , and SSLServerSocket instances are wrapped to protect against further modification. This defaults to false . Note The realm-mapper and principal-transformer attributes for server-ssl-context apply only for the SASL EXTERNAL mechanism, where the certificate is verified by the trust manager. HTTP CLIENT-CERT authentication settings are configured in an http-authentication-factory . 5.4.
trust-manager attributes You can configure the trust manager, trust-manager , by setting its attributes. Table 5.4. trust-manager attributes Attribute Description algorithm The name of the algorithm to use to create the underlying TrustManagerFactory . This is provided by the JDK. For example, a JDK that uses SunJSSE provides the PKIX and SunX509 algorithms. More details on SunJSSE can be found in the Support Classes and Interfaces in Java Secure Socket Extension (JSSE) Reference Guide in Oracle documentation. alias-filter A filter to apply to the aliases returned from the key store. This can either be a comma-separated list of aliases to return or one of the following formats: ALL:-alias1:-alias2 NONE:+alias1:+alias2 certificate-revocation-list Enables certificate revocation list checks in a trust manager. You can only define a single CRL path using this attribute. To define multiple CRL paths, use certificate-revocation-lists . The attributes of certificate-revocation-list are: maximum-cert-path - The maximum number of non-self-issued intermediate certificates that can exist in a certification path. The default value is 5 . This attribute has been deprecated. Use maximum-cert-path in trust-manager instead. path - The path to the certificate revocation list. relative-to - The base path of the certificate revocation list file. certificate-revocation-lists Enables certificate revocation list checks in a trust manager using multiple certificate revocation lists. The attributes of certificate-revocation-list are: path - The path to the certificate revocation list. relative-to - The base path of the certificate revocation list file. key-store Reference to the key-store to use to initialize the underlying TrustManagerFactory . maximum-cert-path The maximum number of non-self-issued intermediate certificates that can exist in a certification path. The default value is 5 . This attribute has been moved to trust-manager from certificate-revocation-list inside trust-manager in JBoss EAP 7.3. For backward compatibility, the attribute is also present in certificate-revocation-list . Going forward, use maximum-cert-path in trust-manager . Note Define maximum-cert-path in either trust-manager or in certificate-revocation-list not in both. ocsp Enables online certificate status protocol (OCSP) checks in a trust manager. The attributes of ocsp are: responder - Overrides the OCSP Responder URI resolved from the certificate. responder-certificate - Alias for responder certificate located in responder-keystore or trust-manager key store if responder-keystore is not defined. responder-keystore - Alternative keystore for responder certificate. responder-certificate must be defined. prefer-crls - When both OCSP and CRL mechanisms are configured, OCSP mechanism is called first. When prefer-crls is set to true , the CRL mechanism is called first. only-leaf-cert Check revocation status of only the leaf certificate. This is an optional attribute. The default values is false . provider-name The name of the provider to use to create the underlying TrustManagerFactory . providers Reference to obtain the providers to use when creating the underlying TrustManagerFactory .
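As a minimal sketch only, assuming placeholder resource names, keystore paths, and passwords (none of these are defaults), these resources are typically wired together with the management CLI in the following order: key store, then key manager, then server SSL context.
# Create the key store that backs the key manager
/subsystem=elytron/key-store=exampleKS:add(path=server.keystore, relative-to=jboss.server.config.dir, credential-reference={clear-text=keystorePass}, type=JKS)
# Create the key manager that reads from the key store
/subsystem=elytron/key-manager=exampleKM:add(key-store=exampleKS, credential-reference={clear-text=keystorePass})
# Create the server SSL context that uses the key manager
/subsystem=elytron/server-ssl-context=exampleSSC:add(key-manager=exampleKM, protocols=["TLSv1.2"])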
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/configuring_ssltls_in_jboss_eap/reference
Chapter 2. The value of registering your RHEL system to Red Hat
Chapter 2. The value of registering your RHEL system to Red Hat Registration establishes an authorized connection between your system and Red Hat. Red Hat issues the registered system, whether a physical or virtual machine, a certificate that identifies and authenticates the system so that it can receive protected content, software updates, security patches, support, and managed services from Red Hat. With a valid subscription, you can register a Red Hat Enterprise Linux (RHEL) system in the following ways: During the installation process, using an installer graphical user interface (GUI) or text user interface (TUI) After installation, using the command line (CLI) Automatically, during or after installation, using a kickstart script or an activation key. The specific steps to register your system depend on the version of RHEL that you are using and the registration method that you choose. Registering your system to Red Hat enables features and capabilities that you can use to manage your system and report data. For example, a registered system is authorized to access protected content repositories for subscribed products through the Red Hat Content Delivery Network (CDN) or a Red Hat Satellite Server. These content repositories contain Red Hat software packages and updates that are available only to customers with an active subscription. These packages and updates include security patches, bug fixes, and new features for RHEL and other Red Hat products. Important The entitlement-based subscription model is deprecated and will be retired in the future. Simple content access is now the default subscription model. It provides an improved subscription experience that eliminates the need to attach a subscription to a system before you can access Red Hat subscription content on that system. If your Red Hat account uses the entitlement-based subscription model, contact your Red Hat account team, for example, a technical account manager (TAM) or solution architect (SA), to prepare for migration to simple content access. For more information, see Transition of subscription services to the hybrid cloud .
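For instance, a minimal post-installation registration from the command line might look like the following; the credentials are placeholders, and this sketch assumes the default simple content access model:
# Register the system with Red Hat Subscription Management
subscription-manager register --username <your_username> --password <your_password>
# Confirm the registration and content access status
subscription-manager status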
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/the-value-of-registering-your-rhel-system-to-red-hat_rhel-installer
Chapter 2. Configuring SAP HANA System Replication
Chapter 2. Configuring SAP HANA System Replication Before the HA cluster can be configured, SAP HANA System Replication must be configured and tested according to the guidelines from SAP: SAP HANA System Replication: Configuration . The following example shows how to enable SAP HANA System Replication on the nodes that will later become part of the HA cluster that will manage the SAP HANA System Replication setup. Please refer to RHEL for SAP Subscriptions and Repositories , for more information on how to ensure the correct subscription and repos are enabled on each HA cluster node. SAP HANA configuration used in the example: SID: RH1 Instance Number: 02 node1 FQDN: node1.example.com node2 FQDN: node2.example.com node1 SAP HANA site name: DC1 node2 SAP HANA site name: DC2 SAP HANA 'SYSTEM' user password: <HANA_SYSTEM_PASSWORD> SAP HANA administrative user: rh1adm 2.1. Prerequisites Ensure that both systems can resolve the FQDN of both systems without issues. To ensure that FQDNs can be resolved even without DNS you can place them into /etc/hosts like in the example below: [root]# cat /etc/hosts ... 192.168.0.11 node1.example.com node1 192.168.0.12 node2.example.com node2 Note As documented at hostname | SAP Help Portal SAP HANA only supports hostnames with lowercase characters. For the system replication to work, the SAP HANA log_mode variable must be set to normal, which is also the default value. Please refer to SAP Note 3221437 - System replication is failed due to "Connection refused: Primary has to run in log mode normal for system replication!" , for more information. This can be verified as the SAP HANA administrative user using the command below on both nodes. [rh1adm]USD hdbsql -u system -p <HANA_SYSTEM_PASSWORD> -i 02 "select value from "SYS"."M_INIFILE_CONTENTS" where key='log_mode'" VALUE "normal" 1 row selected A lot of the configuration steps are performed by the SAP HANA administrative user for the SID that was selected during installation. For the example setup described in this document, the user id rh1adm is used for the SAP HANA administrative user, since the SID used is RH1 . To switch from the root user to the SAP HANA administrative user, you can use the following command: [root]# sudo -i -u rh1adm [rh1adm]USD 2.2. Performing an initial SAP HANA database backup SAP HANA System Replication will only work after an initial backup has been performed on the HANA instance that will be the primary instance for the SAP HANA System Replication setup. The following shows an example for creating an initial backup in /tmp/foo directory. Please note that the size of the backup depends on the database size and may take some time to complete. The directory to which the backup will be placed must be writable by the SAP HANA administrative user. On single-tenant SAP HANA setups, the following command can be used to create the initial backup: [rh1adm]USD hdbsql -i 02 -u system -p <HANA_SYSTEM_PASSWORD> "BACKUP DATA USING FILE ('/tmp/foo')" 0 rows affected (overall time xx.xxx sec; server time xx.xxx sec) On multi-tenant SAP HANA setups, the SYSTEMDB and all tenant databases need to be backed up. 
The following example shows how to backup the SYSTEMDB : [rh1adm]USD hdbsql -i 02 -u system -p <HANA_SYSTEM_PASSWORD> -d SYSTEMDB "BACKUP DATA USING FILE ('/tmp/foo')" 0 rows affected (overall time xx.xxx sec; server time xx.xxx sec) [rh1adm]# hdbsql -i 02 -u system -p <HANA_SYSTEM_PASSWORD> -d SYSTEMDB "BACKUP DATA FOR RH1 USING FILE ('/tmp/foo-RH1')" 0 rows affected (overall time xx.xxx sec; server time xx.xxx sec) Please check the SAP HANA documentation on how to backup the tenant databases. 2.3. Configuring the SAP HANA primary replication instance After the initial backup has been successfully completed, initialize SAP HANA System Replication with the following command: [rh1adm]USD hdbnsutil -sr_enable --name=DC1 checking for active nameserver ... nameserver is active, proceeding ... successfully enabled system as system replication source site done. Verify that after the initialization the SAP HANA System Replication status shows the current node as 'primary': [rh1adm]#USD hdbnsutil -sr_state checking for active or inactive nameserver ... System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ mode: primary site id: 1 site name: DC1 Host Mappings: 2.4. Configuring the SAP HANA secondary replication instance After installing the secondary SAP HANA instance on the other HA cluster node using the same SID and instance number as the SAP HANA primary instance, it needs to be registered to the already running SAP HANA primary instance. The SAP HANA instance that will become the secondary replication instance needs to be stopped first before it can be registered to the primary instance: [rh1adm]USD HDB stop When the secondary SAP HANA instance has been stopped, copy the SAP HANA system PKI SSFS_RH1.KEY and SSFS_RH1.DAT files from the primary SAP HANA instance to the secondary SAP HANA instance: [rh1adm]USD scp root@node1:/usr/sap/RH1/SYS/global/security/rsecssfs/key/SSFS_RH1.KEY /usr/sap/RH1/SYS/global/security/rsecssfs/key/SSFS_RH1.KEY ... [rh1adm]USD scp root@node1:/usr/sap/RH1/SYS/global/security/rsecssfs/data/SSFS_RH1.DAT /usr/sap/RH1/SYS/global/security/rsecssfs/data/SSFS_RH1.DAT ... Please refer to SAP Note 2369981 - Required configuration steps for authentication with HANA System Replication , for more information. Now the SAP HANA secondary replication instance can be registered to the SAP HANA primary replication instance with the following command: [rh1adm]USD hdbnsutil -sr_register --remoteHost=node1 --remoteInstance=USD{TINSTANCE} --replicationMode=syncmem --operationMode=logreplay --name=DC2 adding site ... checking for inactive nameserver ... nameserver node2:30201 not responding. collecting information ... updating local ini files ... done. Please choose the values for replicationMode and operationMode according to your requirements for HANA System Replication. Please refer to Replication Modes for SAP HANA System Replication and Operation Modes for SAP HANA System Replication , for more information. When the registration is successful, the SAP HANA secondary replication instance can be started again: [rh1adm]USD HDB start Verify that the secondary node is running and that 'mode' matches the value used for the replicationMode parameter in the hdbnsutil -sr_register command. If registration was successful, the SAP HANA System Replication status on the SAP HANA secondary replication instance should look similar to the following: [rh1adm]USD hdbnsutil -sr_state checking for active or inactive nameserver ... 
System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ mode: syncmem site id: 2 site name: DC2 active primary site: 1 Host Mappings: ~~~~~~~~~~~~~~ node2 -> [DC1] node1 node2 -> [DC2] node2 2.5. Checking SAP HANA System Replication state To check the current state of SAP HANA System Replication, you can use the systemReplicationStatus.py Python script provided by SAP HANA as the SAP HANA administrative user on the current primary SAP HANA node. On single-tenant SAP HANA setups, the output should look similar to the following: [rh1adm]USD python /usr/sap/RH1/HDB02/exe/python_support/systemReplicationStatus.py | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details | | ----- | ----- | ------------ | --------- | ------- | --------- | --------- | --------- | --------- | --------- | ------------- | ----------- | ----------- | -------------- | | node1 | 30201 | nameserver | 1 | 1 | DC1 | node2 | 30201 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | node1 | 30207 | xsengine | 2 | 1 | DC1 | node2 | 30207 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | node1 | 30203 | indexserver | 3 | 1 | DC1 | node2 | 30203 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | status system replication site "2": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1 On multi-tenant SAP HANA setups, the output should look similar to the following: [rh1adm]USD python /usr/sap/RH1/HDB02/exe/python_support/systemReplicationStatus.py | Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication | | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details | | -------- | ----- | ----- | ------------ | --------- | ------- | --------- | ----------| --------- | --------- | --------- | ------------- | ----------- | ----------- | -------------- | | SYSTEMDB | node1 | 30201 | nameserver | 1 | 1 | DC1 | node2 | 30201 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | RH1 | node1 | 30207 | xsengine | 2 | 1 | DC1 | node2 | 30207 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | RH1 | node1 | 30203 | indexserver | 3 | 1 | DC1 | node2 | 30203 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | status system replication site "2": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1 In both cases, please also check the return code: echo USD? 15 A return code of 15 (Active) is fine, 14 means synchronizing, and 13 means initializing. 2.6. Testing SAP HANA System Replication The test phase is very important to verify whether the KPIs are met and the landscape performs the way it was configured. If the SAP HANA System Replication setup does not work as expected without the HA cluster, it can lead to unexpected behavior when the HA cluster is configured later on to manage the SAP HANA System Replication setup. Therefore, a few test cases are suggested below as guidelines, which should be enhanced by your specific requirements. The tests should be performed with realistic data loads and sizes. 
Test case Description Full Replication Measure how long the initial synchronization takes, from when the secondary instance is registered until both systems are fully in sync. Lost Connection Measure how long it takes until primary and secondary are back in sync after the connection between them has been interrupted. Takeover Measure how long it takes for the secondary system to be fully available as the new primary after a takeover. Data Consistency Create or change data, then perform a takeover and check if the data is still available. Client Reconnect Test client access after a takeover, to check if the DNS/Virtual IP switch worked. Primary becomes secondary Measure how long it takes until both systems are in sync, when the former primary becomes the secondary after a takeover. Please refer to section "9. Testing" in How To Perform System Replication for SAP HANA , for more information.
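The following outline illustrates how the "Takeover" and "Primary becomes secondary" test cases could be exercised manually for the example setup used in this document (SID RH1 , instance number 02 , sites DC1 / DC2 ). It is only a sketch: it assumes the HA cluster is not yet managing the SAP HANA instances, the takeover command and exact timing measurements should follow the SAP documentation referenced above, and the commands must be run on the indicated nodes as the SAP HANA administrative user.
[rh1adm]USD hdbnsutil -sr_takeover    (run on node2 to make DC2 the new primary)
[rh1adm]USD hdbnsutil -sr_state       (verify that node2 now reports mode: primary)
[rh1adm]USD HDB stop                  (run on node1, the former primary)
[rh1adm]USD hdbnsutil -sr_register --remoteHost=node2 --remoteInstance=02 --replicationMode=syncmem --operationMode=logreplay --name=DC1
[rh1adm]USD HDB start                 (node1 now starts as the new secondary)
Afterwards, systemReplicationStatus.py can be run on node2, the new primary, to confirm that replication is ACTIVE again, as described in section 2.5.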
[ "SID: RH1 Instance Number: 02 node1 FQDN: node1.example.com node2 FQDN: node2.example.com node1 SAP HANA site name: DC1 node2 SAP HANA site name: DC2 SAP HANA 'SYSTEM' user password: <HANA_SYSTEM_PASSWORD> SAP HANA administrative user: rh1adm", "cat /etc/hosts 192.168.0.11 node1.example.com node1 192.168.0.12 node2.example.com node2", "[rh1adm]USD hdbsql -u system -p <HANA_SYSTEM_PASSWORD> -i 02 \"select value from \"SYS\".\"M_INIFILE_CONTENTS\" where key='log_mode'\" VALUE \"normal\" 1 row selected", "sudo -i -u rh1adm [rh1adm]USD", "[rh1adm]USD hdbsql -i 02 -u system -p <HANA_SYSTEM_PASSWORD> \"BACKUP DATA USING FILE ('/tmp/foo')\" 0 rows affected (overall time xx.xxx sec; server time xx.xxx sec)", "[rh1adm]USD hdbsql -i 02 -u system -p <HANA_SYSTEM_PASSWORD> -d SYSTEMDB \"BACKUP DATA USING FILE ('/tmp/foo')\" 0 rows affected (overall time xx.xxx sec; server time xx.xxx sec) hdbsql -i 02 -u system -p <HANA_SYSTEM_PASSWORD> -d SYSTEMDB \"BACKUP DATA FOR RH1 USING FILE ('/tmp/foo-RH1')\" 0 rows affected (overall time xx.xxx sec; server time xx.xxx sec)", "[rh1adm]USD hdbnsutil -sr_enable --name=DC1 checking for active nameserver nameserver is active, proceeding successfully enabled system as system replication source site done.", "USD hdbnsutil -sr_state checking for active or inactive nameserver System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ mode: primary site id: 1 site name: DC1 Host Mappings:", "[rh1adm]USD HDB stop", "[rh1adm]USD scp root@node1:/usr/sap/RH1/SYS/global/security/rsecssfs/key/SSFS_RH1.KEY /usr/sap/RH1/SYS/global/security/rsecssfs/key/SSFS_RH1.KEY [rh1adm]USD scp root@node1:/usr/sap/RH1/SYS/global/security/rsecssfs/data/SSFS_RH1.DAT /usr/sap/RH1/SYS/global/security/rsecssfs/data/SSFS_RH1.DAT", "[rh1adm]USD hdbnsutil -sr_register --remoteHost=node1 --remoteInstance=USD{TINSTANCE} --replicationMode=syncmem --operationMode=logreplay --name=DC2 adding site checking for inactive nameserver nameserver node2:30201 not responding. 
collecting information updating local ini files done.", "[rh1adm]USD HDB start", "[rh1adm]USD hdbnsutil -sr_state checking for active or inactive nameserver System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~ mode: syncmem site id: 2 site name: DC2 active primary site: 1 Host Mappings: ~~~~~~~~~~~~~~ node2 -> [DC1] node1 node2 -> [DC2] node2", "[rh1adm]USD python /usr/sap/RH1/HDB02/exe/python_support/systemReplicationStatus.py | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details | | ----- | ----- | ------------ | --------- | ------- | --------- | --------- | --------- | --------- | --------- | ------------- | ----------- | ----------- | -------------- | | node1 | 30201 | nameserver | 1 | 1 | DC1 | node2 | 30201 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | node1 | 30207 | xsengine | 2 | 1 | DC1 | node2 | 30207 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | node1 | 30203 | indexserver | 3 | 1 | DC1 | node2 | 30203 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1", "[rh1adm]USD python /usr/sap/RH1/HDB02/exe/python_support/systemReplicationStatus.py | Database | Host | Port | Service Name | Volume ID | Site ID | Site Name | Secondary | Secondary | Secondary | Secondary | Secondary | Replication | Replication | Replication | | | | | | | | | Host | Port | Site ID | Site Name | Active Status | Mode | Status | Status Details | | -------- | ----- | ----- | ------------ | --------- | ------- | --------- | ----------| --------- | --------- | --------- | ------------- | ----------- | ----------- | -------------- | | SYSTEMDB | node1 | 30201 | nameserver | 1 | 1 | DC1 | node2 | 30201 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | RH1 | node1 | 30207 | xsengine | 2 | 1 | DC1 | node2 | 30207 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | | RH1 | node1 | 30203 | indexserver | 3 | 1 | DC1 | node2 | 30203 | 2 | DC2 | YES | SYNCMEM | ACTIVE | | status system replication site \"2\": ACTIVE overall system replication status: ACTIVE Local System Replication State ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ mode: PRIMARY site id: 1 site name: DC1", "echo USD? 15" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/automating_sap_hana_scale-up_system_replication_using_the_rhel_ha_add-on/asmb_configure_sap_hana_replication_v9-automating-sap-hana-scale-up-system-replication
10.6. Host Resilience
10.6. Host Resilience 10.6.1. Host High Availability The Red Hat Virtualization Manager uses fencing to keep hosts in a cluster responsive. A Non Responsive host is different from a Non Operational host. Non Operational hosts can be communicated with by the Manager, but have an incorrect configuration, for example a missing logical network. Non Responsive hosts cannot be communicated with by the Manager. Fencing allows a cluster to react to unexpected host failures and enforce power saving, load balancing, and virtual machine availability policies. You should configure the fencing parameters for your host's power management device and test their correctness from time to time. In a fencing operation, a non-responsive host is rebooted, and if the host does not return to an active status within a prescribed time, it remains non-responsive pending manual intervention and troubleshooting. Note To automatically check the fencing parameters, you can configure the PMHealthCheckEnabled (false by default) and PMHealthCheckIntervalInSec (3600 sec by default) engine-config options. When set to true, PMHealthCheckEnabled will check all host agents at the interval specified by PMHealthCheckIntervalInSec , and raise warnings if it detects issues. See Section 22.2.2, "Syntax for the engine-config Command" for more information about configuring engine-config options. Power management operations can be performed by Red Hat Virtualization Manager after it reboots, by a proxy host, or manually in the Administration Portal. All the virtual machines running on the non-responsive host are stopped, and highly available virtual machines are started on a different host. At least two hosts are required for power management operations. After the Manager starts up, it automatically attempts to fence non-responsive hosts that have power management enabled after the quiet time (5 minutes by default) has elapsed. The quiet time can be configured by updating the DisableFenceAtStartupInSec engine-config option. Note The DisableFenceAtStartupInSec engine-config option helps prevent a scenario where the Manager attempts to fence hosts while they boot up. This can occur after a data center outage because a host's boot process is normally longer than the Manager boot process. Hosts can be fenced automatically by the proxy host using the power management parameters, or manually by right-clicking on a host and using the options on the menu. Important If a host runs virtual machines that are highly available, power management must be enabled and configured. 10.6.2. Power Management by Proxy in Red Hat Virtualization The Red Hat Virtualization Manager does not communicate directly with fence agents. Instead, the Manager uses a proxy to send power management commands to a host power management device. The Manager uses VDSM to execute power management device actions, so another host in the environment is used as a fencing proxy. You can select between: Any host in the same cluster as the host requiring fencing. Any host in the same data center as the host requiring fencing. A viable fencing proxy host has a status of either UP or Maintenance . 10.6.3. Setting Fencing Parameters on a Host The parameters for host fencing are set using the Power Management fields on the New Host or Edit Host windows. Power management enables the system to fence a troublesome host using an additional interface such as a Remote Access Card (RAC). 
All power management operations are done using a proxy host, as opposed to directly by the Red Hat Virtualization Manager. At least two hosts are required for power management operations. Setting fencing parameters on a host Click Compute Hosts and select the host. Click Edit . Click the Power Management tab. Select the Enable Power Management check box to enable the fields. Select the Kdump integration check box to prevent the host from fencing while performing a kernel crash dump. Important If you enable or disable Kdump integration on an existing host, you must reinstall the host . Optionally, select the Disable policy control of power management check box if you do not want your host's power management to be controlled by the Scheduling Policy of the host's cluster. Click the + button to add a new power management device. The Edit fence agent window opens. Enter the Address , User Name , and Password of the power management device. Select the power management device Type from the drop-down list. Note For more information on how to set up a custom power management device, see https://access.redhat.com/articles/1238743 . Enter the SSH Port number used by the power management device to communicate with the host. Enter the Slot number used to identify the blade of the power management device. Enter the Options for the power management device. Use a comma-separated list of 'key=value' entries. Select the Secure check box to enable the power management device to connect securely to the host. Click the Test button to ensure the settings are correct. Test Succeeded, Host Status is: on will display upon successful verification. Warning Power management parameters (userid, password, options, etc) are tested by Red Hat Virtualization Manager only during setup and manually after that. If you choose to ignore alerts about incorrect parameters, or if the parameters are changed on the power management hardware without the corresponding change in Red Hat Virtualization Manager, fencing is likely to fail when most needed. Click OK to close the Edit fence agent window. In the Power Management tab, optionally expand the Advanced Parameters and use the up and down buttons to specify the order in which the Manager will search the host's cluster and dc (datacenter) for a fencing proxy. Click OK . You are returned to the list of hosts. Note that the exclamation mark to the host's name has now disappeared, signifying that power management has been successfully configured. 10.6.4. fence_kdump Advanced Configuration kdump Click the name of a host to view the status of the kdump service in the General tab of the details view: Enabled : kdump is configured properly and the kdump service is running. Disabled : the kdump service is not running (in this case kdump integration will not work properly). Unknown : happens only for hosts with an earlier VDSM version that does not report kdump status. For more information on installing and using kdump, see the Red Hat Enterprise Linux 7 Kernel Crash Dump Guide . fence_kdump Enabling Kdump integration in the Power Management tab of the New Host or Edit Host window configures a standard fence_kdump setup. If the environment's network configuration is simple and the Manager's FQDN is resolvable on all hosts, the default fence_kdump settings are sufficient for use. However, there are some cases where advanced configuration of fence_kdump is necessary. 
Environments with more complex networking may require manual changes to the configuration of the Manager, fence_kdump listener, or both. For example, if the Manager's FQDN is not resolvable on all hosts with Kdump integration enabled, you can set a proper host name or IP address using engine-config : The following example cases may also require configuration changes: The Manager has two NICs, where one of these is public-facing, and the second is the preferred destination for fence_kdump messages. You need to execute the fence_kdump listener on a different IP or port. You need to set a custom interval for fence_kdump notification messages, to prevent possible packet loss. Customized fence_kdump detection settings are recommended for advanced users only, as changes to the default configuration are only necessary in more complex networking setups. For configuration options for the fence_kdump listener see fence_kdump listener Configuration . For configuration of kdump on the Manager see Configuring fence_kdump on the Manager . 10.6.4.1. fence_kdump listener Configuration Edit the configuration of the fence_kdump listener. This is only necessary in cases where the default configuration is not sufficient. Manually Configuring the fence_kdump Listener Create a new file (for example, my-fence-kdump.conf ) in /etc/ovirt-engine/ovirt-fence-kdump-listener.conf.d/ . Enter your customization with the syntax OPTION = value and save the file. Important The edited values must also be changed in engine-config as outlined in the fence_kdump Listener Configuration Options table in Section 10.6.4.2, "Configuring fence_kdump on the Manager" . Restart the fence_kdump listener: The following options can be customized if required: Table 10.9. fence_kdump Listener Configuration Options Variable Description Default Note LISTENER_ADDRESS Defines the IP address to receive fence_kdump messages on. 0.0.0.0 If the value of this parameter is changed, it must match the value of FenceKdumpDestinationAddress in engine-config . LISTENER_PORT Defines the port to receive fence_kdump messages on. 7410 If the value of this parameter is changed, it must match the value of FenceKdumpDestinationPort in engine-config . HEARTBEAT_INTERVAL Defines the interval in seconds of the listener's heartbeat updates. 30 If the value of this parameter is changed, it must be half the size or smaller than the value of FenceKdumpListenerTimeout in engine-config . SESSION_SYNC_INTERVAL Defines the interval in seconds to synchronize the listener's host kdumping sessions in memory to the database. 5 If the value of this parameter is changed, it must be half the size or smaller than the value of KdumpStartedTimeout in engine-config . REOPEN_DB_CONNECTION_INTERVAL Defines the interval in seconds to reopen the database connection which was previously unavailable. 30 - KDUMP_FINISHED_TIMEOUT Defines the maximum timeout in seconds after the last received message from kdumping hosts after which the host kdump flow is marked as FINISHED. 60 If the value of this parameter is changed, it must be double the size or higher than the value of FenceKdumpMessageInterval in engine-config . 10.6.4.2. Configuring fence_kdump on the Manager Edit the Manager's kdump configuration. This is only necessary in cases where the default configuration is not sufficient. 
The current configuration values can be found using: Manually Configuring Kdump with engine-config Edit kdump's configuration using the engine-config command: Important The edited values must also be changed in the fence_kdump listener configuration file as outlined in the Kdump Configuration Options table. See Section 10.6.4.1, "fence_kdump listener Configuration" . Restart the ovirt-engine service: Reinstall all hosts with Kdump integration enabled, if required (see the table below). The following options can be configured using engine-config : Table 10.10. Kdump Configuration Options Variable Description Default Note FenceKdumpDestinationAddress Defines the hostname(s) or IP address(es) to send fence_kdump messages to. If empty, the Manager's FQDN is used. Empty string (Manager FQDN is used) If the value of this parameter is changed, it must match the value of LISTENER_ADDRESS in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled. FenceKdumpDestinationPort Defines the port to send fence_kdump messages to. 7410 If the value of this parameter is changed, it must match the value of LISTENER_PORT in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled. FenceKdumpMessageInterval Defines the interval in seconds between messages sent by fence_kdump. 5 If the value of this parameter is changed, it must be half the size or smaller than the value of KDUMP_FINISHED_TIMEOUT in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled. FenceKdumpListenerTimeout Defines the maximum timeout in seconds since the last heartbeat to consider the fence_kdump listener alive. 90 If the value of this parameter is changed, it must be double the size or higher than the value of HEARTBEAT_INTERVAL in the fence_kdump listener configuration file. KdumpStartedTimeout Defines the maximum timeout in seconds to wait until the first message from the kdumping host is received (to detect that host kdump flow has started). 30 If the value of this parameter is changed, it must be double the size or higher than the value of SESSION_SYNC_INTERVAL in the fence_kdump listener configuration file, and FenceKdumpMessageInterval . 10.6.5. Soft-Fencing Hosts Hosts can sometimes become non-responsive due to an unexpected problem, and though VDSM is unable to respond to requests, the virtual machines that depend upon VDSM remain alive and accessible. In these situations, restarting VDSM returns VDSM to a responsive state and resolves this issue. "SSH Soft Fencing" is a process where the Manager attempts to restart VDSM via SSH on non-responsive hosts. If the Manager fails to restart VDSM via SSH, the responsibility for fencing falls to the external fencing agent if an external fencing agent has been configured. Soft-fencing over SSH works as follows. Fencing must be configured and enabled on the host, and a valid proxy host (a second host, in an UP state, in the data center) must exist. When the connection between the Manager and the host times out, the following happens: On the first network failure, the status of the host changes to "connecting". The Manager then makes three attempts to ask VDSM for its status, or it waits for an interval determined by the load on the host. 
The formula for determining the length of the interval is configured by the configuration values TimeoutToResetVdsInSeconds (the default is 60 seconds) + [DelayResetPerVmInSeconds (the default is 0.5 seconds)]*(the count of running virtual machines on host) + [DelayResetForSpmInSeconds (the default is 20 seconds)] * 1 (if host runs as SPM) or 0 (if the host does not run as SPM). To give VDSM the maximum amount of time to respond, the Manager chooses the longer of the two options mentioned above (three attempts to retrieve the status of VDSM or the interval determined by the above formula). If the host does not respond when that interval has elapsed, vdsm restart is executed via SSH. If vdsm restart does not succeed in re-establishing the connection between the host and the Manager, the status of the host changes to Non Responsive and, if power management is configured, fencing is handed off to the external fencing agent. Note Soft-fencing over SSH can be executed on hosts that have no power management configured. This is distinct from "fencing": fencing can be executed only on hosts that have power management configured. 10.6.6. Using Host Power Management Functions When power management has been configured for a host, you can access a number of options from the Administration Portal interface. While each power management device has its own customizable options, they all support the basic options to start, stop, and restart a host. Using Host Power Management Functions Click Compute Hosts and select the host. Click the Management drop-down menu and select one of the following Power Management options: Restart : This option stops the host and waits until the host's status changes to Down . When the agent has verified that the host is down, the highly available virtual machines are restarted on another host in the cluster. The agent then restarts this host. When the host is ready for use its status displays as Up . Start : This option starts the host and lets it join a cluster. When it is ready for use its status displays as Up . Stop : This option powers off the host. Before using this option, ensure that the virtual machines running on the host have been migrated to other hosts in the cluster. Otherwise the virtual machines will crash and only the highly available virtual machines will be restarted on another host. When the host has been stopped its status displays as Non-Operational . Note If Power Management is not enabled, you can restart or stop the host by selecting it, clicking the Management drop-down menu, and selecting an SSH Management option, Restart or Stop . Important When two fencing agents are defined on a host, they can be used concurrently or sequentially. For concurrent agents, both agents have to respond to the Stop command for the host to be stopped; and when one agent responds to the Start command, the host will go up. For sequential agents, to start or stop a host, the primary agent is used first; if it fails, the secondary agent is used. Click OK . 10.6.7. Manually Fencing or Isolating a Non-Responsive Host If a host unpredictably goes into a non-responsive state, for example, due to a hardware failure, it can significantly affect the performance of the environment. If you do not have a power management device, or if it is incorrectly configured, you can reboot the host manually. Warning Do not use the Confirm host has been rebooted option unless you have manually rebooted the host. 
Using this option while the host is still running can lead to a virtual machine image corruption. Manually fencing or isolating a non-responsive host In the Administration Portal, click Compute Hosts and confirm the host's status is Non Responsive . Manually reboot the host. This could mean physically entering the lab and rebooting the host. In the Administration Portal, select the host and click More Actions ( ), then click Confirm 'Host has been Rebooted' . Select the Approve Operation check box and click OK . If your hosts take an unusually long time to boot, you can set ServerRebootTimeout to specify how many seconds to wait before determining that the host is Non Responsive :
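engine-config --set ServerRebootTimeout=integer
For example, to give hosts up to 20 minutes to come back up before they are treated as Non Responsive (the value 1200 is only an illustration; choose a timeout that matches the boot time of your hardware), run the following on the Manager machine and then restart the ovirt-engine service so that the new value is picked up:
engine-config --set ServerRebootTimeout=1200
systemctl restart ovirt-engine.service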
[ "engine-config -s FenceKdumpDestinationAddress= A.B.C.D", "systemctl restart ovirt-fence-kdump-listener.service", "engine-config -g OPTION", "engine-config -s OPTION = value", "systemctl restart ovirt-engine.service", "engine-config --set ServerRebootTimeout= integer" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-Host_Resilience
19.2. Preparing for a Hard Drive Installation
19.2. Preparing for a Hard Drive Installation Use this option to install Red Hat Enterprise Linux on hardware systems without a DVD drive and if you do not want to access installation phase 3 and the package repository over a network. 19.2.1. Accessing Installation Phase 3 and the Package Repository on a Hard Drive Note Hard drive installations using DASD or FCP-attached SCSI storage only work from native ext2, ext3, or ext4 partitions. If you have a file system based on devices other than native ext2, ext3, or ext4 (particularly a file system based on RAID or LVM partitions) you will not be able to use it as a source to perform a hard drive installation. Hard drive installations use an ISO image of the installation DVD (a file that contains an exact copy of the content of the DVD), and an install.img file extracted from the ISO image. With these files present on a hard drive, you can choose Hard drive as the installation source when you boot the installation program. Hard drive installations use the following files: an ISO image of the installation DVD. An ISO image is a file that contains an exact copy of the content of a DVD. an install.img file extracted from the ISO image. optionally, a product.img file extracted from the ISO image. With these files present on a hard drive, you can choose Hard drive as the installation source when you boot the installation program (refer to Section 22.4, "Installation Method" ). Ensure that you have boot media available as described in Chapter 20, Booting (IPL) the Installer . To prepare a DASD or FCP-attached device as an installation source, follow these steps: Obtain an ISO image of the Red Hat Enterprise Linux installation DVD (refer to Chapter 1, Obtaining Red Hat Enterprise Linux ). Alternatively, if you have the DVD on physical media, you can create an image of it with the following command on a Linux system: where dvd is your DVD drive device, name_of_image is the name you give to the resulting ISO image file, and path_to_image is the path to the location on your system where the resulting ISO image will be stored. Transfer the ISO images to the DASD or SCSI device. The ISO files must be located on a hard drive that is activated in installation phase 1 (refer to Chapter 21, Installation Phase 1: Configuring a Network Device ) or in installation phase 2 (refer to Chapter 22, Installation Phase 2: Configuring Language and Installation Source ). This is automatically possible with DASDs. For an FCP LUN, you must either boot (IPL) from the same FCP LUN or use the rescue shell provided by the installation phase 1 menus to manually activate the FCP LUN holding the ISOs as described in Section 25.2.1, "Dynamically Activating an FCP LUN" . Use a SHA256 checksum program to verify that the ISO image that you copied is intact. Many SHA256 checksum programs are available for various operating systems. On a Linux system, run: where name_of_image is the name of the ISO image file. The SHA256 checksum program displays a string of 64 characters called a hash . Compare this hash to the hash displayed for this particular image on the Downloads page in the Red Hat Customer Portal (refer to Chapter 1, Obtaining Red Hat Enterprise Linux ). The two hashes should be identical. Copy the images/ directory from inside the ISO image to the same directory in which you stored the ISO image file itself. 
Enter the following commands: where path_to_image is the path to the ISO image file, name_of_image is the name of the ISO image file, and mount_point is a mount point on which to mount the image while you copy files from the image. For example: The ISO image file and an images/ directory are now present, side-by-side, in the same directory. Verify that the images/ directory contains at least the install.img file, without which installation cannot proceed. Optionally, the images/ directory should contain the product.img file, without which only the packages for a Minimal installation will be available during the package group selection stage (refer to Section 23.17, "Package Group Selection" ). Important install.img and product.img must be the only files in the images/ directory. Make the DASD or SCSI LUN accessible to the new z/VM guest virtual machine or LPAR, and then proceed with installation. (Refer to Chapter 20, Booting (IPL) the Installer ) or alternatively with Section 19.2.1.1, "Preparing for Booting the Installer from a Hard Drive" . Note The Red Hat Enterprise Linux installation program can test the integrity of the installation medium. It works with the DVD, hard drive ISO, and NFS ISO installation methods. We recommend that you test all installation media before starting the installation process, and before reporting any installation-related bugs. To use this test, add the mediacheck parameter to your parameter file (refer to Section 26.7, "Miscellaneous Parameters" ). 19.2.1.1. Preparing for Booting the Installer from a Hard Drive If you would like to boot (IPL) the installer from a hard drive, in addition to accessing installation phase 3 and the package repository, you can optionally install the zipl boot loader on the same (or a different) disk. Be aware that zipl only supports one boot record per disk. If you have multiple partitions on a disk, they all "share" the disk's one boot record. In the following, assume the hard drive is prepared as described in Section 19.2.1, "Accessing Installation Phase 3 and the Package Repository on a Hard Drive" , mounted under /mnt , and you do not need to preserve an existing boot record. To prepare a hard drive to boot the installer, install the zipl boot loader on the hard drive by entering the following command: For more details on zipl.conf, refer to the chapter on zipl in Linux on System z Device Drivers, Features, and Commands on Red Hat Enterprise Linux 6 . Warning If you have an operating system installed on the disk, and you still plan to access it later on, refer the chapter on zipl in Linux on System z Device Drivers, Features, and Commands on Red Hat Enterprise Linux 6 for how to add a new entry in the zipl boot loader (that is, in zipl.conf ).
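Assuming the ISO image was stored under /var/isos/ as in the example above, the resulting layout can be checked before starting the installation. The listing below is only illustrative; the ISO file name depends on the image you downloaded, and product.img is optional:
ls /var/isos/
RHEL6.iso  images
ls /var/isos/images/
install.img  product.img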
[ "dd if=/dev/ dvd of=/ path_to_image / name_of_image .iso", "sha256sum name_of_image .iso", "mount -t iso9660 / path_to_image / name_of_image .iso / mount_point -o loop,ro cp -pr / mount_point /images / publicly_available_directory / umount / mount_point", "mount -t iso9660 /var/isos/RHEL6.iso /mnt/tmp -o loop,ro cp -pr /mnt/tmp/images /var/isos/ umount /mnt/tmp", "zipl -V -t /mnt/ -i /mnt/images/kernel.img -r /mnt/images/initrd.img -p /mnt/images/generic.prm" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-steps-hd-installs-s390
Release notes for Red Hat build of OpenJDK 11.0.14
Release notes for Red Hat build of OpenJDK 11.0.14 Red Hat build of OpenJDK 11 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.14/index
Chapter 3. Installing and configuring automation controller on Red Hat OpenShift Container Platform web console
Chapter 3. Installing and configuring automation controller on Red Hat OpenShift Container Platform web console You can use these instructions to install the automation controller operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database. Automation controller configuration can be done through the automation controller extra_settings or directly in the user interface after deployment. However, it is important to note that configurations made in extra_settings take precedence over settings made in the user interface. Note When an instance of automation controller is removed, the associated PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation controller instance in the same namespace. See Finding and deleting PVCs for more information. 3.1. Prerequisites You have installed the Red Hat Ansible Automation Platform catalog in Operator Hub. 3.2. Installing the automation controller operator Navigate to Operators Installed Operators , then click on the Ansible Automation Platform operator. Locate the Automation controller tab, then click Create instance . You can proceed with configuring the instance using either the Form View or YAML view. 3.2.1. Creating your automation controller form-view Ensure Form view is selected. It should be selected by default. Enter the name of the new controller. Optional: Add any labels necessary. Click Advanced configuration . Enter Hostname of the instance. The hostname is optional. The default hostname will be generated based upon the deployment name you have selected. Enter the Admin account username . Enter the Admin email address . Under the Admin password secret drop-down menu, select the secret. Under Database configuration secret drop-down menu, select the secret. Under Old Database configuration secret drop-down menu, select the secret. Under Secret key secret drop-down menu, select the secret. Under Broadcast Websocket Secret drop-down menu, select the secret. Enter any Service Account Annotations necessary. 3.2.2. Configuring your controller image pull policy Under Image Pull Policy , click on the radio button to select Always Never IfNotPresent To display the option under Image Pull Secrets , click the arrow. Click + beside Add Image Pull Secret and enter a value. To display fields under the Web container resource requirements drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display fields under the Task container resource requirements drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display fields under the EE Control Plane container resource requirements drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display fields under the PostgreSQL init container resource requirements (when using a managed service) drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display fields under the Redis container resource requirements drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . 
To display fields under the PostgreSQL container resource requirements (when using a managed instance) * drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . To display the PostgreSQL container storage requirements (when using a managed instance) drop-down list, click the arrow. Under Limits , and Requests , enter values for CPU cores , Memory , and Storage . Under Replicas, enter the number of instance replicas. Under Remove used secrets on instance removal , select true or false . The default is false. Under Preload instance with data upon creation , select true or false . The default is true. 3.2.3. Configuring your controller LDAP security Procedure If you do not have a ldap_cacert_secret , you can create one with the following command: USD oc create secret generic <resourcename>-custom-certs \ --from-file=ldap-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE> \ 1 1 Modify this to point to where your CA cert is stored. This will create a secret that looks like this: USD oc get secret/mycerts -o yaml apiVersion: v1 data: ldap-ca.crt: <mysecret> 1 kind: Secret metadata: name: mycerts namespace: awx type: Opaque 1 Automation controller looks for the data field ldap-ca.crt in the specified secret when using the ldap_cacert_secret . Under LDAP Certificate Authority Trust Bundle click the drop-down menu and select your ldap_cacert_secret . Under LDAP Password Secret , click the drop-down menu and select a secret. Under EE Images Pull Credentials Secret , click the drop-down menu and select a secret. Under Bundle Cacert Secret , click the drop-down menu and select a secret. Under Service Type , click the drop-down menu and select ClusterIP LoadBalancer NodePort 3.2.4. Configuring your automation controller operator route options The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation controller operator route options under Advanced configuration . Click Advanced configuration . Under Ingress type , click the drop-down menu and select Route . Under Route DNS host , enter a common host name that the route answers to. Under Route TLS termination mechanism , click the drop-down menu and select Edge or Passthrough . For most instances Edge should be selected. Under Route TLS credential secret , click the drop-down menu and select a secret from the list. Under Enable persistence for /var/lib/projects directory select either true or false by moving the slider. 3.2.5. Configuring the Ingress type for your automation controller operator The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation controller operator Ingress under Advanced configuration . Procedure Click Advanced Configuration . Under Ingress type , click the drop-down menu and select Ingress . Under Ingress annotations , enter any annotations to add to the ingress. Under Ingress TLS secret , click the drop-down menu and select a secret from the list. After you have configured your automation controller operator, click Create at the bottom of the form view. Red Hat OpenShift Container Platform will now create the pods. This may take a few minutes. You can view the progress by navigating to Workloads Pods and locating the newly created instance. 
Verification Verify that the following operator pods provided by the Ansible Automation Platform Operator installation from automation controller are running: Operator manager controllers automation controller automation hub The operator manager controllers for each of the 3 operators, include the following: automation-controller-operator-controller-manager automation-hub-operator-controller-manager resource-operator-controller-manager After deploying automation controller, you will see the addition of these pods: controller controller-postgres After deploying automation hub, you will see the addition of these pods: hub-api hub-content hub-postgres hub-redis hub-worker Note A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod. 3.3. Configuring an external database for automation controller on Red Hat Ansible Automation Platform operator For users who prefer to deploy Ansible Automation Platform with an external database, they can do so by configuring a secret with instance credentials and connection information, then applying it to their cluster using the oc create command. By default, the Red Hat Ansible Automation Platform operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Red Hat Ansible Automation Platform operator automatically creates. Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations. Note The same external database (PostgreSQL instance) can be used for both automation hub and automation controller as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance. The following section outlines the steps to configure an external database for your automation controller on a Ansible Automation Platform operator. Prerequisite The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform. Note Ansible Automation Platform 2.3 supports PostgreSQL 13. Procedure The external postgres instance credentials and connection information will need to be stored in a secret, which will then be set on the automation controller spec. Create a postgres_configuration_secret .yaml file, following the template below: apiVersion: v1 kind: Secret metadata: name: external-postgres-configuration namespace: <target_namespace> 1 stringData: host: "<external_ip_or_url_resolvable_by_the_cluster>" 2 port: "<external_port>" 3 database: "<desired_database_name>" username: "<username_to_connect_as>" password: "<password_to_connect_with>" 4 sslmode: "prefer" 5 type: "unmanaged" type: Opaque 1 Namespace to create the secret in. This should be the same namespace you wish to deploy to. 2 The resolvable hostname for your database node. 3 External port defaults to 5432 . 4 Value for variable password should not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup or restoration. 5 The variable sslmode is valid for external databases only. 
The allowed values are: prefer , disable , allow , require , verify-ca , and verify-full . Apply external-postgres-configuration-secret.yml to your cluster using the oc create command. USD oc create -f external-postgres-configuration-secret.yml When creating your AutomationController custom resource object, specify the secret on your spec, following the example below: apiVersion: awx.ansible.com/v1beta1 kind: AutomationController metadata: name: controller-dev spec: postgres_configuration_secret: external-postgres-configuration 3.4. Finding and deleting PVCs A persistent volume claim (PVC) is a storage volume used to store data that automation hub and automation controller applications use. These PVCs are independent from the applications and remain even when the application is deleted. If you are confident that you no longer need a PVC, or have backed it up elsewhere, you can manually delete them. Procedure List the existing PVCs in your deployment namespace: oc get pvc -n <namespace> Identify the PVC associated with your deployment by comparing the old deployment name and the PVC name. Delete the old PVC: oc delete pvc -n <namespace> <pvc-name> 3.5. Additional resources For more information on running operators on OpenShift Container Platform, navigate to the OpenShift Container Platform product documentation and click the Operators - Working with Operators in OpenShift Container Platform guide.
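As a minimal illustration of the verification and PVC cleanup steps described above, assume the automation controller was deployed into a namespace called aap (the namespace and the PVC name below are placeholders for the values in your own cluster):
oc get pods -n aap
oc describe pod <pod-name> -n aap
oc get pvc -n aap
oc delete pvc -n aap <old-pvc-name>
The oc describe pod command is only needed when a pod is missing or stuck, for example with an ImagePullBackOff error, and a PVC should only be deleted after confirming that it belongs to the old deployment and that its data is no longer needed.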
[ "oc create secret generic <resourcename>-custom-certs --from-file=ldap-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE> \\ 1", "oc get secret/mycerts -o yaml apiVersion: v1 data: ldap-ca.crt: <mysecret> 1 kind: Secret metadata: name: mycerts namespace: awx type: Opaque", "apiVersion: v1 kind: Secret metadata: name: external-postgres-configuration namespace: <target_namespace> 1 stringData: host: \"<external_ip_or_url_resolvable_by_the_cluster>\" 2 port: \"<external_port>\" 3 database: \"<desired_database_name>\" username: \"<username_to_connect_as>\" password: \"<password_to_connect_with>\" 4 sslmode: \"prefer\" 5 type: \"unmanaged\" type: Opaque", "oc create -f external-postgres-configuration-secret.yml", "apiVersion: awx.ansible.com/v1beta1 kind: AutomationController metadata: name: controller-dev spec: postgres_configuration_secret: external-postgres-configuration", "get pvc -n <namespace>", "delete pvc -n <namespace> <pvc-name>" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/installing-controller-operator
7.165. ppc64-diag
7.165. ppc64-diag 7.165.1. RHSA-2015:1320 - Moderate: ppc64-diag security, bug fix and enhancement update Updated ppc64-diag packages that fix two security issues, several bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links in the References section. The ppc64-diag packages provide diagnostic tools for Linux on the 64-bit PowerPC platforms. The platform diagnostics write events reported by the firmware to the service log, provide automated responses to urgent events, and notify system administrators or connected service frameworks about the reported events. Security Fix CVE-2014-4038 , CVE-2014-4039 Multiple insecure temporary file use flaws were found in the way the ppc64-diag utility created certain temporary files. A local attacker could possibly use either of these flaws to perform a symbolic link attack and overwrite arbitrary files with the privileges of the user running ppc64-diag, or obtain sensitive information from the temporary files. The ppc64-diag packages have been upgraded to upstream version 2.6.7, which provides a number of bug fixes and enhancements over the previous version. (BZ# 1148142 ) Bug Fixes BZ# 1139655 Previously, the "explain_syslog" and "syslog_to_svclog" commands failed with a "No such file or directory" error message. With this update, the ppc64-diag package specifies the location of the message_catalog directory correctly, which prevents the described error from occurring. BZ# 1131501 Prior to this update, the /var/lock/subsys/rtas_errd file was incorrectly labeled for SELinux as "system_u:object_r:var_lock_t:s0". This update corrects the SELinux label to "system_u:object_r:rtas_errd_var_lock_t:s0". Users of ppc64-diag are advised to upgrade to these updated packages, which correct these issues and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-ppc64-diag
Chapter 5. Exporting applications
Chapter 5. Exporting applications As a developer, you can export your application in the ZIP file format. Based on your needs, import the exported application to another project in the same cluster or a different cluster by using the Import YAML option in the +Add view. Exporting your application helps you to reuse your application resources and saves your time. 5.1. Prerequisites You have installed the gitops-primer Operator from the OperatorHub. Note The Export application option is disabled in the Topology view even after installing the gitops-primer Operator. You have created an application in the Topology view to enable Export application . 5.2. Procedure In the developer perspective, perform one of the following steps: Navigate to the +Add view and click Export application in the Application portability tile. Navigate to the Topology view and click Export application . Click OK in the Export Application dialog box. A notification opens to confirm that the export of resources from your project has started. Optional steps that you might need to perform in the following scenarios: If you have started exporting an incorrect application, click Export application Cancel Export . If your export is already in progress and you want to start a fresh export, click Export application Restart Export . If you want to view logs associated with exporting an application, click Export application and the View Logs link. After a successful export, click Download in the dialog box to download application resources in ZIP format onto your machine.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/building_applications/odc-exporting-applications
Chapter 1. bundle
Chapter 1. bundle 1.1. bundle:capabilities 1.1.1. Description Displays OSGi capabilities of a given bundles. 1.1.2. Syntax bundle:capabilities [options] [ids] 1.1.3. Arguments Name Description ids The list of bundle (identified by IDs or name or name/version) separated by whitespaces 1.1.4. Options Name Description --help Display this help message --namespace --context, -c Use the given bundle context 1.2. bundle:classes 1.2.1. Description Displays a list of classes/resources contained in the bundle 1.2.2. Syntax bundle:classes [options] [ids] 1.2.3. Arguments Name Description ids The list of bundle (identified by IDs or name or name/version) separated by whitespaces 1.2.4. Options Name Description --help Display this help message -a, --display-all-files List all classes and files in the bundle --context, -c Use the given bundle context 1.3. bundle:diag 1.3.1. Description Displays diagnostic information why a bundle is not Active 1.3.2. Syntax bundle:diag [options] [ids] 1.3.3. Arguments Name Description ids The list of bundle (identified by IDs or name or name/version) separated by whitespaces 1.3.4. Options Name Description --help Display this help message --context, -c Use the given bundle context 1.4. bundle:dynamic-import 1.4.1. Description Enables/disables dynamic-import for a given bundle. 1.4.2. Syntax bundle:dynamic-import [options] id 1.4.3. Arguments Name Description id The bundle ID or name or name/version 1.4.4. Options Name Description --help Display this help message --context Use the given bundle context 1.5. bundle:find-class 1.5.1. Description Locates a specified class in any deployed bundle 1.5.2. Syntax bundle:find-class [options] className 1.5.3. Arguments Name Description className Class name or partial class name to be found 1.5.4. Options Name Description --help Display this help message 1.6. bundle:headers 1.6.1. Description Displays OSGi headers of a given bundles. 1.6.2. Syntax bundle:headers [options] [ids] 1.6.3. Arguments Name Description ids The list of bundle (identified by IDs or name or name/version) separated by whitespaces 1.6.4. Options Name Description --help Display this help message --no-uses Print or not the Export-Package uses section --indent Indentation method --context, -c Use the given bundle context 1.7. bundle:id 1.7.1. Description Gets the bundle ID. 1.7.2. Syntax bundle:id [options] id 1.7.3. Arguments Name Description id The bundle ID or name or name/version 1.7.4. Options Name Description --help Display this help message --context Use the given bundle context 1.8. bundle:info 1.8.1. Description Displays detailed information of a given bundles. 1.8.2. Syntax bundle:info [options] [ids] 1.8.3. Arguments Name Description ids The list of bundle (identified by IDs or name or name/version) separated by whitespaces 1.8.4. Options Name Description --help Display this help message --context, -c Use the given bundle context 1.9. bundle:install 1.9.1. Description Installs one or more bundles. 1.9.2. Syntax bundle:install [options] urls 1.9.3. Arguments Name Description urls Bundle URLs separated by whitespaces 1.9.4. Options Name Description -l, --start-level Sets the start level of the bundles --help Display this help message --force, -f Forces the command to execute --r3-bundles Allow OSGi R3 bundles without the Bundle-ManifestVersion: 2 header. -s, --start Starts the bundles after installation 1.10. bundle:list 1.10.1. Description Lists all installed bundles. 1.10.2. Syntax bundle:list [options] [ids] 1.10.3. 
Arguments Name Description ids The list of bundle (identified by IDs or name or name/version) separated by whitespaces 1.10.4. Options Name Description -name, -n Show bundle name --help Display this help message -u Shows the update locations -r Shows the bundle revisions --no-ellipsis -l Show the locations -s Shows the symbolic name --context, -c Use the given bundle context -t Specifies the bundle threshold; bundles with a start-level less than this value will not get printed out. --no-format Disable table rendered output 1.11. bundle:load-test 1.11.1. Description Load test bundle lifecycle 1.11.2. Syntax bundle:load-test [options] 1.11.3. Options Name Description --help Display this help message --refresh percentage of bundle refresh vs restart --excludes List of bundles (ids or symbolic names) to exclude --iterations number of iterations per thread --delay maximum delay between actions --threads number of concurrent threads 1.12. bundle:refresh 1.12.1. Description Refresh bundles. 1.12.2. Syntax bundle:refresh [options] [ids] 1.12.3. Arguments Name Description ids The list of bundle (identified by IDs or name or name/version) separated by whitespaces 1.12.4. Options Name Description --help Display this help message --context, -c Use the given bundle context 1.13. bundle:requirements 1.13.1. Description Displays OSGi requirements of a given bundles. 1.13.2. Syntax bundle:requirements [options] [ids] 1.13.3. Arguments Name Description ids The list of bundle (identified by IDs or name or name/version) separated by whitespaces 1.13.4. Options Name Description --help Display this help message --namespace --context, -c Use the given bundle context 1.14. bundle:resolve 1.14.1. Description Resolve bundles. 1.14.2. Syntax bundle:resolve [options] [ids] 1.14.3. Arguments Name Description ids The list of bundle (identified by IDs or name or name/version) separated by whitespaces 1.14.4. Options Name Description --help Display this help message --context, -c Use the given bundle context 1.15. bundle:restart 1.15.1. Description Restarts bundles. 1.15.2. Syntax bundle:restart [options] [ids] 1.15.3. Arguments Name Description ids The list of bundle (identified by IDs or name or name/version) separated by whitespaces 1.15.4. Options Name Description --help Display this help message --context, -c Use the given bundle context 1.16. bundle:services 1.16.1. Description Lists OSGi services per Bundle 1.16.2. Syntax bundle:services [options] [ids] 1.16.3. Arguments Name Description ids The list of bundle (identified by IDs or name or name/version) separated by whitespaces 1.16.4. Options Name Description --help Display this help message -p Shows the properties of the services -u Shows the services each bundle uses. (By default the provided services are shown) --context, -c Use the given bundle context -a Shows all services. (Karaf commands and completers are hidden by default) 1.17. bundle:start-level 1.17.1. Description Gets or sets the start level of a bundle. 1.17.2. Syntax bundle:start-level [options] id [startLevel] 1.17.3. Arguments Name Description id The bundle ID or name or name/version startLevel The bundle's new start level 1.17.4. Options Name Description --help Display this help message --context Use the given bundle context 1.18. bundle:start 1.18.1. Description Starts bundles. 1.18.2. Syntax bundle:start [options] [ids] 1.18.3. Arguments Name Description ids The list of bundle (identified by IDs or name or name/version) separated by whitespaces 1.18.4. 
Options Name Description --help Display this help message -t, --transient Keep the bundle as auto-start --context, -c Use the given bundle context 1.19. bundle:status 1.19.1. Description Get the bundle current status 1.19.2. Syntax bundle:status [options] id 1.19.3. Arguments Name Description id The bundle ID or name or name/version 1.19.4. Options Name Description --help Display this help message --context Use the given bundle context 1.20. bundle:stop 1.20.1. Description Stop bundles. 1.20.2. Syntax bundle:stop [options] [ids] 1.20.3. Arguments Name Description ids The list of bundle (identified by IDs or name or name/version) separated by whitespaces 1.20.4. Options Name Description --help Display this help message -t, --transient Keep the bundle as auto-start --context, -c Use the given bundle context 1.21. bundle:tree-show 1.21.1. Description Shows the tree of bundles based on the wiring information. 1.21.2. Syntax bundle:tree-show [options] id 1.21.3. Arguments Name Description id The bundle ID or name or name/version 1.21.4. Options Name Description --help Display this help message -v, --version Show bundle versions --context Use the given bundle context 1.22. bundle:uninstall 1.22.1. Description Uninstall bundles. 1.22.2. Syntax bundle:uninstall [options] [ids] 1.22.3. Arguments Name Description ids The list of bundle (identified by IDs or name or name/version) separated by whitespaces 1.22.4. Options Name Description --help Display this help message --context, -c Use the given bundle context 1.23. bundle:update 1.23.1. Description Update bundle. 1.23.2. Syntax bundle:update [options] id [location] 1.23.3. Arguments Name Description id The bundle ID or name or name/version location The bundles update location 1.23.4. Options Name Description --help Display this help message --context Use the given bundle context --raw Do not update the bundles's Bundle-UpdateLocation manifest header -r, --refresh Perform a refresh after the bundle update 1.24. bundle:watch 1.24.1. Description Watches and updates bundles 1.24.2. Syntax bundle:watch [options] [urls] 1.24.3. Arguments Name Description urls The bundle IDs or URLs 1.24.4. Options Name Description -i Watch interval --help Display this help message --stop Stops watching all bundles --remove Removes bundles from the watch list --start Starts watching the selected bundles --list Displays the watch list 1.24.5. Details Watches the local maven repo for changes in snapshot jars and redploys changed jars
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_console_reference/bundle
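As a usage sketch tying these reference entries together, the sequence below installs a bundle and then inspects and restarts it from the Karaf console. The Maven coordinates and bundle name are placeholders, not part of this reference; the options used are the ones documented above.

```
bundle:install -s mvn:com.example/example-bundle/1.0.0
bundle:list -s -t 0 example-bundle
bundle:headers example-bundle
bundle:diag example-bundle
bundle:restart example-bundle
```

Here -s on bundle:install starts the bundle after installation, -s on bundle:list prints the symbolic name, -t 0 lowers the start-level threshold so the bundle is not filtered out of the listing, and bundle:diag reports why the bundle is not Active.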
Appendix E. Admin Client configuration parameters
Appendix E. Admin Client configuration parameters bootstrap.servers Type: list Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). ssl.key.password Type: password Default: null Importance: high The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. This is required for clients only if two-way authentication is configured. ssl.keystore.certificate.chain Type: password Default: null Importance: high Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates. ssl.keystore.key Type: password Default: null Importance: high Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: high The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: high The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.truststore.certificates Type: password Default: null Importance: high Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: high The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: high The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. client.dns.lookup Type: string Default: use_all_dns_ips Valid Values: [use_all_dns_ips, resolve_canonical_bootstrap_servers_only] Importance: medium Controls how the client uses DNS lookups. If set to use_all_dns_ips , connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (note that both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only , resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips . client.id Type: string Default: "" Importance: medium An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. 
connections.max.idle.ms Type: long Default: 300000 (5 minutes) Importance: medium Close idle connections after the number of milliseconds specified by this config. default.api.timeout.ms Type: int Default: 60000 (1 minute) Valid Values: [0,... ] Importance: medium Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a timeout parameter. receive.buffer.bytes Type: int Default: 65536 (64 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. request.timeout.ms Type: int Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: medium The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. sasl.client.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.jaas.config Type: password Default: null Importance: medium JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*; . For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.service.name Type: string Default: null Importance: medium The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.login.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.mechanism Type: string Default: GSSAPI Importance: medium SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. sasl.oauthbearer.jwks.endpoint.url Type: string Default: null Importance: medium The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. 
However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.token.endpoint.url Type: string Default: null Importance: medium The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization. security.protocol Type: string Default: PLAINTEXT Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. socket.connection.setup.timeout.max.ms Type: long Default: 30000 (30 seconds) Importance: medium The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Importance: medium The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.keystore.type Type: string Default: JKS Importance: medium The file format of the key store file. This is optional for client. ssl.protocol Type: string Default: TLSv1.3 Importance: medium The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium The name of the security provider used for SSL connections. 
Default value is the default security provider of the JVM. ssl.truststore.type Type: string Default: JKS Importance: medium The file format of the trust store file. metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metric.reporters Type: list Default: "" Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG, TRACE] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. retries Type: int Default: 2147483647 Valid Values: [0,... ,2147483647] Importance: low Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is recommended to set the value to either zero or MAX_VALUE and use corresponding timeout parameters to control how long a client should retry a request. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request. This avoids repeatedly sending requests in a tight loop under some failure scenarios. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: low Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: low Login thread sleep time between refresh attempts. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: low Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: low Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.connect.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER. sasl.login.read.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider read timeout. 
Currently applies only to OAUTHBEARER. sasl.login.refresh.buffer.seconds Type: short Default: 300 Valid Values: [0,... ,3600] Importance: low The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Valid Values: [0,... ,900] Importance: low The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Valid Values: [0.5,... ,1.0] Importance: low Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Valid Values: [0.0,... ,0.25] Importance: low The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.oauthbearer.clock.skew.seconds Type: int Default: 30 Importance: low The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. sasl.oauthbearer.expected.audience Type: list Default: null Importance: low The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. 
The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.expected.issuer Type: string Default: null Importance: low The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.jwks.endpoint.refresh.ms Type: long Default: 3600000 (1 hour) Importance: low The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT. sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.jwks.endpoint.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.scope.claim.name Type: string Default: scope Importance: low The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. sasl.oauthbearer.sub.claim.name Type: string Default: sub Importance: low The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. security.providers Type: string Default: null Importance: low A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface. ssl.cipher.suites Type: list Default: null Importance: low A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low The endpoint identification algorithm to validate server hostname using server certificate. 
ssl.engine.factory.class Type: class Default: null Importance: low The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: low The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.secure.random.implementation Type: string Default: null Importance: low The SecureRandom PRNG implementation to use for SSL cryptography operations. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: low The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/using_amq_streams_on_rhel/admin-client-configuration-parameters-str
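When these options are supplied to the Admin Client they are usually collected into a Java-style properties file (for example, one passed to the Kafka command line tools through their --command-config option). The short Python sketch below simply generates such a file from a handful of the options documented above; every value shown is a placeholder and the selection of keys is illustrative, not a recommended configuration.

```python
# Minimal sketch: write an Admin Client properties file from a dict of options
# documented above. All values below are placeholders/assumptions.
admin_props = {
    "bootstrap.servers": "broker1:9093,broker2:9093",
    "client.id": "my-admin-client",
    "security.protocol": "SSL",
    "ssl.truststore.location": "/etc/kafka/truststore.jks",
    "ssl.truststore.password": "changeit",
    "request.timeout.ms": "30000",
    "retries": "2147483647",
}

with open("adminclient.properties", "w") as f:
    for key, value in admin_props.items():
        f.write(f"{key}={value}\n")  # standard key=value properties format

print(open("adminclient.properties").read())
```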
Chapter 2. Creating an S3 client
Chapter 2. Creating an S3 client To interact with data stored in an S3-compatible object store from a workbench, you must create a local client to handle requests to the AWS S3 service by using an AWS SDK such as Boto3. Boto3 is an AWS SDK for Python that provides an API for creating and managing AWS services, such as AWS S3 or S3-compatible object storage. After you have configured a Boto3 client for the S3 service from a workbench, you can connect and work with data in your S3-compatible object store. Prerequisites You have access to an S3-compatible object store. You have stored files in a bucket on your object store. You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have created a data science project. You have added a workbench to the project using a Jupyter notebook image. You have configured a connection for your workbench based on the credentials of your S3-compatible storage account. Procedure From the OpenShift AI dashboard, click Data Science Projects . Click the name of the project that contains the workbench. Click the Workbenches tab. If the status of the workbench is Running , skip to the step. If the status of the workbench is Stopped , in the Status column for the workbench, click Start . The Status column changes from Stopped to Starting when the workbench server is starting, and then to Running when the workbench has successfully started. Click the Open link to the workbench. Your Jupyter environment window opens. On the toolbar, click the Git Clone icon and then select Clone a Repository . In the Clone a repo dialog, enter the following URL https://github.com/opendatahub-io/odh-doc-examples.git and then click Clone . In the file browser, select the newly-created odh-doc-examples folder. Double-click the newly created storage folder. You see a Jupyter notebook named s3client_examples.ipynb . Double-click the s3client_examples.ipynb file to launch the notebook. The notebook opens. You see code examples for the following tasks: Installing Boto3 and required Boto3 libraries Creating an S3 client session Creating an S3 client connection Listing files Creating a bucket Uploading a file to a bucket Downloading a file from a bucket Copying files between buckets Deleting an object from a bucket Deleting a bucket In the notebook, locate the following instructions to install Boto3 and its required libraries, and run the code cell: The instructions in the code cell update the Python Package Manager (pip) to the latest version, install Boto3 and its required libraries, and display the version of Boto3 installed. Locate the following instructions to create an S3 client and session. Run the code cell. The instructions in the code cell configure an S3 client and establish a session to your S3-compatible object store. Verification To use the S3 client to connect to your object store and list the available buckets, locate the following instructions to list buckets and run the code cell: A successful response includes a HTTPStatusCode of 200 and a list of buckets similar to the following output:
[ "#Upgrade pip to the latest version !pip3 install --upgrade pip #Install Boto3 !pip3 install boto3 #Install Boto3 libraries import os import boto3 from botocore.client import Config from boto3 import session #Check Boto3 version !pip3 show boto3", "#Creating an S3 client #Define credentials key_id = os.environ.get('AWS_ACCESS_KEY_ID') secret_key = os.environ.get('AWS_SECRET_ACCESS_KEY') endpoint = os.environ.get('AWS_S3_ENDPOINT') region = os.environ.get('AWS_DEFAULT_REGION') #Define client session session = Boto3.session.Session(aws_access_key_id=key_id, aws_secret_access_key=secret_key) #Define client connection s3_client = Boto3.client('s3', aws_access_key_id=key_id, aws_secret_access_key=secret_key,aws_session_token=None, config=Boto3.session.Config(signature_version='s3v4'), endpoint_url=endpoint, region_name=region)", "s3_client.list_buckets()", "'HTTPStatusCode': 200, 'Buckets': [{'Name': 'aqs086-image-registry', 'CreationDate': datetime.datetime(2024, 1, 16, 20, 21, 36, 244000, tzinfo=tzlocal ())}]" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_data_in_an_s3-compatible_object_store/creating-an-s3-client_s3
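Beyond listing buckets, the s3_client created in the notebook can be reused for the other tasks in the list above, such as uploading and downloading files. The sketch below assumes that client already exists in the notebook session; the bucket, key, and file names are placeholders.

```python
# Assumes `s3_client` was created as shown in the notebook above.
# Bucket, key, and file names below are placeholders.
bucket_name = "my-example-bucket"

# Upload a local file to the bucket
s3_client.upload_file("data/train.csv", bucket_name, "datasets/train.csv")

# List the objects stored under a prefix
response = s3_client.list_objects_v2(Bucket=bucket_name, Prefix="datasets/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download the object back to the local file system
s3_client.download_file(bucket_name, "datasets/train.csv", "data/train_copy.csv")
```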
Chapter 1. JBoss EAP 8.0 update methods
Chapter 1. JBoss EAP 8.0 update methods You can update JBoss EAP 8.0 using the following methods: JBoss EAP Installation Manager Management CLI Web console RPM From JBoss EAP 8.0 onward, the JBoss EAP server can be updated in either online or offline mode. These modes are supported by all the update methods. Online update: You can update JBoss EAP directly from an online repository. You must have access to the Red Hat repositories or their mirrors to use this mode. This option always updates to the latest available JBoss EAP 8.0 update. Offline update: You can update JBoss EAP from a local file-system. Use the offline update mode if you do not have online access to the Red Hat repositories or their mirrors. You will need to download the latest update and distribute it to your systems. Depending on your requirements, choose one of the listed update methods. The following table provides a brief overview of each type of update method. Table 1.1. Update Methods Method Description JBoss EAP Installation Manager Use this method if you want to update a local JBoss EAP 8.x server that is not running. Management CLI Use this method if you want to update a remote JBoss EAP 8.x server. Web console Use this method if you want to update an 8.x server that is running either in a standalone or managed domain mode using the management console GUI. RPM Installation Use this method if you want to update a JBoss EAP 8.x server installed using the RPM installation method. You can run JBoss EAP on the following cloud platforms. This documentation does not cover provisioning on other cloud platforms. See the related documentation. JBoss EAP on OpenShift. Additional resources JBoss EAP on OpenShift .
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/updating_red_hat_jboss_enterprise_application_platform/jboss-eap-8-update-methods_default
Chapter 16. Red Hat Quay build enhancements
Chapter 16. Red Hat Quay build enhancements Red Hat Quay builds can be run on virtualized platforms. Backwards compatibility to run build configurations is also available. 16.1. Red Hat Quay enhanced build architecture The following image shows the expected design flow and architecture of the enhanced build features: With this enhancement, the build manager first creates the Job Object . The Job Object then creates a pod using the quay-builder-image . The quay-builder-image will contain the quay-builder binary and the Podman service. The created pod runs as unprivileged . The quay-builder binary then builds the image while communicating status and retrieving build information from the Build Manager. 16.2. Red Hat Quay build limitations Running builds in Red Hat Quay in an unprivileged context might cause some commands that were working under the previous build strategy to fail. Attempts to change the build strategy could potentially cause performance and reliability issues with the build. Running builds directly in a container does not have the same isolation as using virtual machines. Changing the build environment might also cause builds that were previously working to fail. 16.3. Creating a Red Hat Quay builders environment with OpenShift Container Platform The procedures in this section explain how to create a Red Hat Quay virtual builders environment with OpenShift Container Platform. 16.3.1. OpenShift Container Platform TLS component The tls component allows you to control TLS configuration. Note Red Hat Quay 3.9 does not support builders when the TLS component is managed by the Operator. If you set tls to unmanaged , you supply your own ssl.cert and ssl.key files. In this instance, if you want your cluster to support builders, you must add both the Quay route and the builder route name to the SAN list in the cert, or use a wildcard. To add the builder route, use the following format: [quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]:443 16.3.2. Using OpenShift Container Platform for Red Hat Quay builders Builders require SSL/TLS certificates. For more information about SSL/TLS certificates, see Adding TLS certificates to the Red Hat Quay container . If you are using Amazon Web Services (AWS) S3 storage, you must modify your storage bucket in the AWS console, prior to running builders. See "Modifying your AWS S3 storage bucket" in the following section for the required parameters. 16.3.2.1. Preparing OpenShift Container Platform for virtual builders Use the following procedure to prepare OpenShift Container Platform for Red Hat Quay virtual builders. Note This procedure assumes you already have a cluster provisioned and a Quay Operator running. This procedure is for setting up a virtual namespace on OpenShift Container Platform. Procedure Log in to your Red Hat Quay cluster using a cluster administrator account. 
Create a new project where your virtual builders will be run, for example, virtual-builders , by running the following command: USD oc new-project virtual-builders Create a ServiceAccount in the project that will be used to run builds by entering the following command: USD oc create sa -n virtual-builders quay-builder Provide the created service account with editing permissions so that it can run the build: USD oc adm policy -n virtual-builders add-role-to-user edit system:serviceaccount:virtual-builders:quay-builder Grant the Quay builder anyuid scc permissions by entering the following command: USD oc adm policy -n virtual-builders add-scc-to-user anyuid -z quay-builder Note This action requires cluster admin privileges. This is required because builders must run as the Podman user for unprivileged or rootless builds to work. Obtain the token for the Quay builder service account. If using OpenShift Container Platform 4.10 or an earlier version, enter the following command: oc sa get-token -n virtual-builders quay-builder If using OpenShift Container Platform 4.11 or later, enter the following command: USD oc create token quay-builder -n virtual-builders Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ... Determine the builder route by entering the following command: USD oc get route -n quay-enterprise Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD ... example-registry-quay-builder example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org example-registry-quay-app grpc edge/Redirect None ... Generate a self-signed SSL/TlS certificate with the .crt extension by entering the following command: USD oc extract cm/kube-root-ca.crt -n openshift-apiserver Example output ca.crt Rename the ca.crt file to extra_ca_cert_build_cluster.crt by entering the following command: USD mv ca.crt extra_ca_cert_build_cluster.crt Locate the secret for you configuration bundle in the Console , and select Actions Edit Secret and add the appropriate builder configuration: FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - <superusername> FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: <sample_build_route> 1 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 2 ORCHESTRATOR: REDIS_HOST: <sample_redis_hostname> 3 REDIS_PASSWORD: "" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: <sample_builder_namespace> 4 SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 BUILDER_CONTAINER_IMAGE: <sample_builder_container_image> 5 # Kubernetes resource options K8S_API_SERVER: <sample_k8s_api_server> 6 K8S_API_TLS_CA: <sample_crt_file> 7 VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 300m 8 CONTAINER_CPU_LIMITS: 1G 9 CONTAINER_MEMORY_REQUEST: 300m 10 CONTAINER_CPU_REQUEST: 1G 11 NODE_SELECTOR_LABEL_KEY: "" NODE_SELECTOR_LABEL_VALUE: "" SERVICE_ACCOUNT_NAME: <sample_service_account_name> SERVICE_ACCOUNT_TOKEN: <sample_account_token> 12 1 The build route is obtained by running oc get route -n with the name of your OpenShift Operator's namespace. A port must be provided at the end of the route, and it should use the following format: [quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]:443 . 
2 If the JOB_REGISTRATION_TIMEOUT parameter is set too low, you might receive the following error: failed to register job to build manager: rpc error: code = Unauthenticated desc = Invalid build token: Signature has expired . It is suggested that this parameter be set to at least 240. 3 If your Redis host has a password or SSL/TLS certificates, you must update accordingly. 4 Set to match the name of your virtual builders namespace, for example, virtual-builders . 5 For early access, the BUILDER_CONTAINER_IMAGE is currently quay.io/projectquay/quay-builder:3.7.0-rc.2 . Note that this might change during the early access window. If this happens, customers are alerted. 6 The K8S_API_SERVER is obtained by running oc cluster-info . 7 You must manually create and add your custom CA cert, for example, K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt . 8 Defaults to 5120Mi if left unspecified. 9 For virtual builds, you must ensure that there are enough resources in your cluster. Defaults to 1000m if left unspecified. 10 Defaults to 3968Mi if left unspecified. 11 Defaults to 500m if left unspecified. 12 Obtained when running oc create sa . Sample configuration FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org:443 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 ORCHESTRATOR: REDIS_HOST: example-registry-quay-redis REDIS_PASSWORD: "" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: virtual-builders SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 BUILDER_CONTAINER_IMAGE: quay.io/projectquay/quay-builder:3.7.0-rc.2 # Kubernetes resource options K8S_API_SERVER: api.docs.quayteam.org:6443 K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 1G CONTAINER_CPU_LIMITS: 1080m CONTAINER_MEMORY_REQUEST: 1G CONTAINER_CPU_REQUEST: 580m NODE_SELECTOR_LABEL_KEY: "" NODE_SELECTOR_LABEL_VALUE: "" SERVICE_ACCOUNT_NAME: quay-builder SERVICE_ACCOUNT_TOKEN: "eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ" 16.3.2.2. Manually adding SSL/TLS certificates Due to a known issue with the configuration tool, you must manually add your custom SSL/TLS certificates to properly run builders. Use the following procedure to manually add custom SSL/TLS certificates. For more information creating SSL/TLS certificates, see Adding TLS certificates to the Red Hat Quay container . 16.3.2.2.1. Creating and signing certificates Use the following procedure to create and sign an SSL/TLS certificate. Procedure Create a certificate authority and sign a certificate. For more information, see Create a Certificate Authority and sign a certificate . openssl.cnf [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = example-registry-quay-quay-enterprise.apps.docs.quayteam.org 1 DNS.2 = example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org 2 1 An alt_name for the URL of your Red Hat Quay registry must be included. 
2 An alt_name for the BUILDMAN_HOSTNAME Sample commands USD openssl genrsa -out rootCA.key 2048 USD openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem USD openssl genrsa -out ssl.key 2048 USD openssl req -new -key ssl.key -out ssl.csr USD openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf 16.3.2.2.2. Setting TLS to unmanaged Use the following procedure to set king:tls to unmanaged. Procedure In your Red Hat Quay Registry YAML, set kind: tls to managed: false : - kind: tls managed: false On the Events page, the change is blocked until you set up the appropriate config.yaml file. For example: - lastTransitionTime: '2022-03-28T12:56:49Z' lastUpdateTime: '2022-03-28T12:56:49Z' message: >- required component `tls` marked as unmanaged, but `configBundleSecret` is missing necessary fields reason: ConfigInvalid status: 'True' 16.3.2.2.3. Creating temporary secrets Use the following procedure to create temporary secrets for the CA certificate. Procedure Create a secret in your default namespace for the CA certificate: Create a secret in your default namespace for the ssl.key and ssl.cert files: 16.3.2.2.4. Copying secret data to the configuration YAML Use the following procedure to copy secret data to your config.yaml file. Procedure Locate the new secrets in the console UI at Workloads Secrets . For each secret, locate the YAML view: kind: Secret apiVersion: v1 metadata: name: temp-crt namespace: quay-enterprise uid: a4818adb-8e21-443a-a8db-f334ace9f6d0 resourceVersion: '9087855' creationTimestamp: '2022-03-28T13:05:30Z' ... data: extra_ca_cert_build_cluster.crt: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0l.... type: Opaque kind: Secret apiVersion: v1 metadata: name: quay-config-ssl namespace: quay-enterprise uid: 4f5ae352-17d8-4e2d-89a2-143a3280783c resourceVersion: '9090567' creationTimestamp: '2022-03-28T13:10:34Z' ... data: ssl.cert: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT... ssl.key: >- LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc... type: Opaque Locate the secret for your Red Hat Quay registry configuration bundle in the UI, or through the command line by running a command like the following: USD oc get quayregistries.quay.redhat.com -o jsonpath="{.items[0].spec.configBundleSecret}{'\n'}" -n quay-enterprise In the OpenShift Container Platform console, select the YAML tab for your configuration bundle secret, and add the data from the two secrets you created: kind: Secret apiVersion: v1 metadata: name: init-config-bundle-secret namespace: quay-enterprise uid: 4724aca5-bff0-406a-9162-ccb1972a27c1 resourceVersion: '4383160' creationTimestamp: '2022-03-22T12:35:59Z' ... data: config.yaml: >- RkVBVFVSRV9VU0VSX0lOSVRJQUxJWkU6IHRydWUKQlJ... extra_ca_cert_build_cluster.crt: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0ldw.... ssl.cert: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT... ssl.key: >- LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc... type: Opaque Click Save . Enter the following command to see if your pods are restarting: USD oc get pods -n quay-enterprise Example output NAME READY STATUS RESTARTS AGE ... 
example-registry-quay-app-6786987b99-vgg2v 0/1 ContainerCreating 0 2s example-registry-quay-app-7975d4889f-q7tvl 1/1 Running 0 5d21h example-registry-quay-app-7975d4889f-zn8bb 1/1 Running 0 5d21h example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 0/1 ContainerCreating 0 2s example-registry-quay-config-editor-c6c4d9ccd-2mwg2 1/1 Running 0 5d21h example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-764d7b68d9-jmlkk 1/1 Terminating 0 5d21h example-registry-quay-mirror-764d7b68d9-jqzwg 1/1 Terminating 0 5d21h example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h After your Red Hat Quay registry has reconfigured, enter the following command to check if the Red Hat Quay app pods are running: USD oc get pods -n quay-enterprise Example output example-registry-quay-app-6786987b99-sz6kb 1/1 Running 0 7m45s example-registry-quay-app-6786987b99-vgg2v 1/1 Running 0 9m1s example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 1/1 Running 0 9m1s example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-758fc68ff7-5wxlp 1/1 Running 0 8m29s example-registry-quay-mirror-758fc68ff7-lbl82 1/1 Running 0 8m29s example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h In your browser, access the registry endpoint and validate that the certificate has been updated appropriately. For example: Common Name (CN) example-registry-quay-quay-enterprise.apps.docs.quayteam.org Organisation (O) DOCS Organisational Unit (OU) QUAY 16.3.2.3. Using the UI to create a build trigger Use the following procedure to use the UI to create a build trigger. Procedure Log in to your Red Hat Quay repository. Click Create New Repository and create a new registry, for example, testrepo . On the Repositories page, click the Builds tab on the navigation pane. Alternatively, use the corresponding URL directly: Important In some cases, the builder might have issues resolving hostnames. This issue might be related to the dnsPolicy being set to default on the job object. Currently, there is no workaround for this issue. It will be resolved in a future version of Red Hat Quay. Click Create Build Trigger Custom Git Repository Push . Enter the HTTPS or SSH style URL used to clone your Git repository, then click Continue . For example: Check Tag manifest with the branch or tag name and then click Continue . Enter the location of the Dockerfile to build when the trigger is invoked, for example, /Dockerfile and click Continue . Enter the location of the context for the Docker build, for example, / , and click Continue . If warranted, create a Robot Account. Otherwise, click Continue . Click Continue to verify the parameters. On the Builds page, click Options icon of your Trigger Name, and then click Run Trigger Now . Enter a commit SHA from the Git repository and click Start Build . You can check the status of your build by clicking the commit in the Build History page, or by running oc get pods -n virtual-builders . For example: Example output USD oc get pods -n virtual-builders Example output Example output When the build is finished, you can check the status of the tag under Tags on the navigation pane. Note With early access, full build logs and timestamps of builds are currently unavailable. 16.3.2.4. 
Modifying your AWS S3 storage bucket If you are using AWS S3 storage, you must change your storage bucket in the AWS console, prior to running builders. Procedure Log in to your AWS console at s3.console.aws.com . In the search bar, search for S3 and then click S3 . Click the name of your bucket, for example, myawsbucket . Click the Permissions tab. Under Cross-origin resource sharing (CORS) , include the following parameters: [ { "AllowedHeaders": [ "Authorization" ], "AllowedMethods": [ "GET" ], "AllowedOrigins": [ "*" ], "ExposeHeaders": [], "MaxAgeSeconds": 3000 }, { "AllowedHeaders": [ "Content-Type", "x-amz-acl", "origin" ], "AllowedMethods": [ "PUT" ], "AllowedOrigins": [ "*" ], "ExposeHeaders": [], "MaxAgeSeconds": 3000 } ] 16.3.2.5. Modifying your Google Cloud Platform object bucket Use the following procedure to configure cross-origin resource sharing (CORS) for virtual builders. Note Without CORS configuration, uploading a build Dockerfile fails. Procedure Use the following reference to create a JSON file for your specific CORS needs. For example: USD cat gcp_cors.json Example output [ { "origin": ["*"], "method": ["GET"], "responseHeader": ["Authorization"], "maxAgeSeconds": 3600 }, { "origin": ["*"], "method": ["PUT"], "responseHeader": [ "Content-Type", "x-goog-acl", "origin"], "maxAgeSeconds": 3600 } ] Enter the following command to update your GCP storage bucket: USD gcloud storage buckets update gs://<bucket_name> --cors-file=./gcp_cors.json Example output Updating Completed 1 You can display the updated CORS configuration of your GCP bucket by running the following command: USD gcloud storage buckets describe gs://<bucket_name> --format="default(cors)" Example output cors: - maxAgeSeconds: 3600 method: - GET origin: - '*' responseHeader: - Authorization - maxAgeSeconds: 3600 method: - PUT origin: - '*' responseHeader: - Content-Type - x-goog-acl - origin
[ "[quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]:443", "oc new-project virtual-builders", "oc create sa -n virtual-builders quay-builder", "oc adm policy -n virtual-builders add-role-to-user edit system:serviceaccount:virtual-builders:quay-builder", "oc adm policy -n virtual-builders add-scc-to-user anyuid -z quay-builder", "sa get-token -n virtual-builders quay-builder", "oc create token quay-builder -n virtual-builders", "eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ", "oc get route -n quay-enterprise", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD example-registry-quay-builder example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org example-registry-quay-app grpc edge/Redirect None", "oc extract cm/kube-root-ca.crt -n openshift-apiserver", "ca.crt", "mv ca.crt extra_ca_cert_build_cluster.crt", "FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - <superusername> FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: <sample_build_route> 1 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 2 ORCHESTRATOR: REDIS_HOST: <sample_redis_hostname> 3 REDIS_PASSWORD: \"\" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: <sample_builder_namespace> 4 SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 BUILDER_CONTAINER_IMAGE: <sample_builder_container_image> 5 # Kubernetes resource options K8S_API_SERVER: <sample_k8s_api_server> 6 K8S_API_TLS_CA: <sample_crt_file> 7 VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 300m 8 CONTAINER_CPU_LIMITS: 1G 9 CONTAINER_MEMORY_REQUEST: 300m 10 CONTAINER_CPU_REQUEST: 1G 11 NODE_SELECTOR_LABEL_KEY: \"\" NODE_SELECTOR_LABEL_VALUE: \"\" SERVICE_ACCOUNT_NAME: <sample_service_account_name> SERVICE_ACCOUNT_TOKEN: <sample_account_token> 12", "FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org:443 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 ORCHESTRATOR: REDIS_HOST: example-registry-quay-redis REDIS_PASSWORD: \"\" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: virtual-builders SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 BUILDER_CONTAINER_IMAGE: quay.io/projectquay/quay-builder:3.7.0-rc.2 # Kubernetes resource options K8S_API_SERVER: api.docs.quayteam.org:6443 K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 1G CONTAINER_CPU_LIMITS: 1080m CONTAINER_MEMORY_REQUEST: 1G CONTAINER_CPU_REQUEST: 580m NODE_SELECTOR_LABEL_KEY: \"\" NODE_SELECTOR_LABEL_VALUE: \"\" SERVICE_ACCOUNT_NAME: quay-builder SERVICE_ACCOUNT_TOKEN: \"eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ\"", "[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = 
example-registry-quay-quay-enterprise.apps.docs.quayteam.org 1 DNS.2 = example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org 2", "openssl genrsa -out rootCA.key 2048 openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem openssl genrsa -out ssl.key 2048 openssl req -new -key ssl.key -out ssl.csr openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf", "- kind: tls managed: false", "- lastTransitionTime: '2022-03-28T12:56:49Z' lastUpdateTime: '2022-03-28T12:56:49Z' message: >- required component `tls` marked as unmanaged, but `configBundleSecret` is missing necessary fields reason: ConfigInvalid status: 'True'", "oc create secret generic -n quay-enterprise temp-crt --from-file extra_ca_cert_build_cluster.crt", "oc create secret generic -n quay-enterprise quay-config-ssl --from-file ssl.cert --from-file ssl.key", "kind: Secret apiVersion: v1 metadata: name: temp-crt namespace: quay-enterprise uid: a4818adb-8e21-443a-a8db-f334ace9f6d0 resourceVersion: '9087855' creationTimestamp: '2022-03-28T13:05:30Z' data: extra_ca_cert_build_cluster.crt: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0l. type: Opaque", "kind: Secret apiVersion: v1 metadata: name: quay-config-ssl namespace: quay-enterprise uid: 4f5ae352-17d8-4e2d-89a2-143a3280783c resourceVersion: '9090567' creationTimestamp: '2022-03-28T13:10:34Z' data: ssl.cert: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT ssl.key: >- LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc type: Opaque", "oc get quayregistries.quay.redhat.com -o jsonpath=\"{.items[0].spec.configBundleSecret}{'\\n'}\" -n quay-enterprise", "kind: Secret apiVersion: v1 metadata: name: init-config-bundle-secret namespace: quay-enterprise uid: 4724aca5-bff0-406a-9162-ccb1972a27c1 resourceVersion: '4383160' creationTimestamp: '2022-03-22T12:35:59Z' data: config.yaml: >- RkVBVFVSRV9VU0VSX0lOSVRJQUxJWkU6IHRydWUKQlJ extra_ca_cert_build_cluster.crt: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0ldw. 
ssl.cert: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT ssl.key: >- LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc type: Opaque", "oc get pods -n quay-enterprise", "NAME READY STATUS RESTARTS AGE example-registry-quay-app-6786987b99-vgg2v 0/1 ContainerCreating 0 2s example-registry-quay-app-7975d4889f-q7tvl 1/1 Running 0 5d21h example-registry-quay-app-7975d4889f-zn8bb 1/1 Running 0 5d21h example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 0/1 ContainerCreating 0 2s example-registry-quay-config-editor-c6c4d9ccd-2mwg2 1/1 Running 0 5d21h example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-764d7b68d9-jmlkk 1/1 Terminating 0 5d21h example-registry-quay-mirror-764d7b68d9-jqzwg 1/1 Terminating 0 5d21h example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h", "oc get pods -n quay-enterprise", "example-registry-quay-app-6786987b99-sz6kb 1/1 Running 0 7m45s example-registry-quay-app-6786987b99-vgg2v 1/1 Running 0 9m1s example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 1/1 Running 0 9m1s example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-758fc68ff7-5wxlp 1/1 Running 0 8m29s example-registry-quay-mirror-758fc68ff7-lbl82 1/1 Running 0 8m29s example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h", "Common Name (CN) example-registry-quay-quay-enterprise.apps.docs.quayteam.org Organisation (O) DOCS Organisational Unit (OU) QUAY", "https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/repository/quayadmin/testrepo?tab=builds", "https://github.com/gabriel-rh/actions_test.git", "oc get pods -n virtual-builders", "NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s", "oc get pods -n virtual-builders", "NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Terminating 0 9s", "oc get pods -n virtual-builders", "No resources found in virtual-builders namespace.", "[ { \"AllowedHeaders\": [ \"Authorization\" ], \"AllowedMethods\": [ \"GET\" ], \"AllowedOrigins\": [ \"*\" ], \"ExposeHeaders\": [], \"MaxAgeSeconds\": 3000 }, { \"AllowedHeaders\": [ \"Content-Type\", \"x-amz-acl\", \"origin\" ], \"AllowedMethods\": [ \"PUT\" ], \"AllowedOrigins\": [ \"*\" ], \"ExposeHeaders\": [], \"MaxAgeSeconds\": 3000 } ]", "cat gcp_cors.json", "[ { \"origin\": [\"*\"], \"method\": [\"GET\"], \"responseHeader\": [\"Authorization\"], \"maxAgeSeconds\": 3600 }, { \"origin\": [\"*\"], \"method\": [\"PUT\"], \"responseHeader\": [ \"Content-Type\", \"x-goog-acl\", \"origin\"], \"maxAgeSeconds\": 3600 } ]", "gcloud storage buckets update gs://<bucket_name> --cors-file=./gcp_cors.json", "Updating Completed 1", "gcloud storage buckets describe gs://<bucket_name> --format=\"default(cors)\"", "cors: - maxAgeSeconds: 3600 method: - GET origin: - '*' responseHeader: - Authorization - maxAgeSeconds: 3600 method: - PUT origin: - '*' responseHeader: - Content-Type - x-goog-acl - origin" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/use_red_hat_quay/red-hat-quay-builders-enhancement
Chapter 43. Using the standalone library perspective
Chapter 43. Using the standalone library perspective You can use the library perspective of Business Central to select a project you want to edit. You can also perform all the authoring functions on the selected project. The standalone library perspective can be used in two ways, with and without the header=UberfireBreadcrumbsContainer parameter. The difference is that the address with the header parameter displays a breadcrumb trail on top of the library perspective. When the breadcrumb trail is displayed, you can use it to create additional Spaces for your projects. Procedure Log in to Business Central. In a web browser, enter the appropriate web address: For accessing the standalone library perspective without the header parameter http://localhost:8080/business-central/kie-wb.jsp?standalone=true&perspective=LibraryPerspective The standalone library perspective without the breadcrumb trail opens in the browser. For accessing the standalone library perspective with the header parameter http://localhost:8080/business-central/kie-wb.jsp?standalone=true&perspective=LibraryPerspective&header=UberfireBreadcrumbsContainer The standalone library perspective with the breadcrumb trail opens in the browser.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/using-standalone-perspectives-library-proc
Chapter 3. Tuned
Chapter 3. Tuned 3.1. Tuned Overview Tuned is a daemon that uses udev to monitor connected devices and statically and dynamically tunes system settings according to a selected profile. Tuned is distributed with a number of predefined profiles for common use cases like high throughput, low latency, or powersave. It is possible to modify the rules defined for each profile and customize how to tune a particular device. To revert all changes made to the system settings by a certain profile, you can either switch to another profile or deactivate the tuned service. Note Starting with Red Hat Enterprise Linux 7.2, you can run Tuned in no-daemon mode , which does not require any resident memory. In this mode, tuned applies the settings and exits. The no-daemon mode is disabled by default because a lot of tuned functionality is missing in this mode, including D-Bus support, hot-plug support, or rollback support for settings. To enable no-daemon mode , set the following in the /etc/tuned/tuned-main.conf file: daemon = 0 . Static tuning mainly consists of the application of predefined sysctl and sysfs settings and one-shot activation of several configuration tools like ethtool . Tuned also monitors the use of system components and tunes system settings dynamically based on that monitoring information. Dynamic tuning accounts for the way that various system components are used differently throughout the uptime for any given system. For example, the hard drive is used heavily during startup and login, but is barely used later when the user might mainly work with applications such as web browsers or email clients. Similarly, the CPU and network devices are used differently at different times. Tuned monitors the activity of these components and reacts to the changes in their use. As a practical example, consider a typical office workstation. Most of the time, the Ethernet network interface is very inactive. Only a few emails go in and out every once in a while or some web pages might be loaded. For those kinds of loads, the network interface does not have to run at full speed all the time, as it does by default. Tuned has a monitoring and tuning plug-in for network devices that can detect this low activity and then automatically lower the speed of that interface, typically resulting in a lower power usage. If the activity on the interface increases for a longer period of time, for example because a DVD image is being downloaded or an email with a large attachment is opened, tuned detects this and sets the interface speed to maximum to offer the best performance while the activity level is so high. This principle is used for other plug-ins for CPU and hard disks as well. Dynamic tuning is globally disabled in Red Hat Enterprise Linux and can be enabled by editing the /etc/tuned/tuned-main.conf file and changing the dynamic_tuning flag to 1 . 3.1.1. Plug-ins Tuned uses two types of plugins: monitoring plugins and tuning plugins . Monitoring plugins are used to get information from a running system. Currently, the following monitoring plugins are implemented: disk Gets disk load (number of IO operations) per device and measurement interval. net Gets network load (number of transferred packets) per network card and measurement interval. load Gets CPU load per CPU and measurement interval. The output of the monitoring plugins can be used by tuning plugins for dynamic tuning. 
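For reference, the two tuned-main.conf settings mentioned above could be expressed as follows; this is only a minimal excerpt of the /etc/tuned/tuned-main.conf file, with every other setting left at its default, and the values shown simply restate the defaults described above (set daemon = 0 for no-daemon mode, and dynamic_tuning = 1 to enable dynamic tuning):
# Excerpt from /etc/tuned/tuned-main.conf
# Whether to run tuned as a daemon; 0 applies the settings once and exits (no-daemon mode).
daemon = 1
# Whether dynamic tuning is enabled globally; it is disabled by default.
dynamic_tuning = 0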
Currently implemented dynamic tuning algorithms try to balance performance and power saving and are therefore disabled in the performance profiles (dynamic tuning for individual plugins can be enabled or disabled in the tuned profiles). Monitoring plugins are automatically instantiated whenever their metrics are needed by any of the enabled tuning plugins. If two tuning plugins require the same data, only one instance of the monitoring plugin is created and the data is shared. Each tuning plugin tunes an individual subsystem and takes several parameters that are populated from the tuned profiles. Each subsystem can have multiple devices (for example, multiple CPUs or network cards) that are handled by individual instances of the tuning plugins. Specific settings for individual devices are also supported. The supplied profiles use wildcards to match all devices of individual subsystems (for details on how to change this, refer to Section 3.1.3, "Custom Profiles" ), which allows the plugins to tune these subsystems according to the required goal (selected profile) and the only thing that the user needs to do is select the correct tuned profile. Currently, the following tuning plugins are implemented (only some of these plugins implement dynamic tuning; parameters supported by the plugins are also listed): cpu Sets the CPU governor to the value specified by the governor parameter and dynamically changes the PM QoS CPU DMA latency according to the CPU load. If the CPU load is lower than the value specified by the load_threshold parameter, the latency is set to the value specified by the latency_high parameter, otherwise it is set to the value specified by latency_low . The latency can also be forced to a specific value without being dynamically changed further. This can be accomplished by setting the force_latency parameter to the required latency value. eeepc_she Dynamically sets the FSB speed according to the CPU load; this feature can be found on some netbooks and is also known as the Asus Super Hybrid Engine. If the CPU load is lower than or equal to the value specified by the load_threshold_powersave parameter, the plugin sets the FSB speed to the value specified by the she_powersave parameter (for details about the FSB frequencies and corresponding values, see the kernel documentation; the provided defaults should work for most users). If the CPU load is higher than or equal to the value specified by the load_threshold_normal parameter, it sets the FSB speed to the value specified by the she_normal parameter. Static tuning is not supported and the plugin is transparently disabled if the hardware support for this feature is not detected. net Configures wake-on-lan to the values specified by the wake_on_lan parameter (it uses the same syntax as the ethtool utility). It also dynamically changes the interface speed according to the interface utilization. sysctl Sets various sysctl settings specified by the plugin parameters. The syntax is name = value , where name is the same as the name provided by the sysctl tool. Use this plugin if you need to change settings that are not covered by other plugins (but prefer specific plugins if the settings are covered by them). usb Sets the autosuspend timeout of USB devices to the value specified by the autosuspend parameter. The value 0 means that autosuspend is disabled. vm Enables or disables transparent huge pages depending on the Boolean value of the transparent_hugepages parameter.
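To give a concrete feel for how these plugin parameters are written, the following is a minimal sketch of a cpu and a sysctl plugin section as they might appear in a profile's tuned.conf file; the section syntax itself is explained in Section 3.1.3, "Custom Profiles", and the particular values are illustrative assumptions rather than recommendations:
[cpu]
# Use the performance governor and force the PM QoS CPU DMA latency to a fixed value.
governor=performance
force_latency=1
[sysctl]
# Any name = value pair understood by the sysctl tool can be set here.
vm.swappiness=10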
audio Sets the autosuspend timeout for audio codecs to the value specified by the timeout parameter. Currently snd_hda_intel and snd_ac97_codec are supported. The value 0 means that the autosuspend is disabled. You can also enforce the controller reset by setting the Boolean parameter reset_controller to true . disk Sets the elevator to the value specified by the elevator parameter. It also sets ALPM to the value specified by the alpm parameter, ASPM to the value specified by the aspm parameter, scheduler quantum to the value specified by the scheduler_quantum parameter, disk spindown timeout to the value specified by the spindown parameter, disk readahead to the value specified by the readahead parameter, and can multiply the current disk readahead value by the constant specified by the readahead_multiply parameter. In addition, this plugin dynamically changes the advanced power management and spindown timeout setting for the drive according to the current drive utilization. The dynamic tuning can be controlled by the Boolean parameter dynamic and is enabled by default. Note Applying a tuned profile which stipulates a different disk readahead value overrides the disk readahead value settings if they have been configured using a udev rule. Red Hat recommends using the tuned tool to adjust the disk readahead values. mounts Enables or disables barriers for mounts according to the Boolean value of the disable_barriers parameter. script This plugin can be used for the execution of an external script that is run when the profile is loaded or unloaded. The script is called with one argument, which can be start or stop (depending on whether the script is called during the profile load or unload). The script file name can be specified by the script parameter. Note that you need to correctly implement the stop action in your script and revert all settings you changed during the start action, otherwise the rollback will not work. For your convenience, the functions Bash helper script is installed by default and allows you to import and use various functions defined in it. Note that this functionality is provided mainly for backwards compatibility and it is recommended that you use it as a last resort and prefer other plugins if they cover the required settings. sysfs Sets various sysfs settings specified by the plugin parameters. The syntax is name = value , where name is the sysfs path to use. Use this plugin if you need to change settings that are not covered by other plugins (but prefer specific plugins if they cover the required settings). video Sets various powersave levels on video cards (currently only the Radeon cards are supported). The powersave level can be specified by using the radeon_powersave parameter. Supported values are: default , auto , low , mid , high , and dynpm . For details, refer to http://www.x.org/wiki/RadeonFeature#KMS_Power_Management_Options . Note that this plugin is experimental and the parameter may change in future releases. bootloader Adds parameters to the kernel boot command line. This plugin supports the legacy GRUB 1, GRUB 2, and also GRUB with Extensible Firmware Interface (EFI). A customized, non-standard location of the grub2 configuration file can be specified by the grub2_cfg_file option. The parameters are added to the current grub configuration and its templates. The machine needs to be rebooted for the kernel parameters to take effect. The parameters can be specified by the following syntax: 3.1.2.
Installation and Usage To install the tuned package, run, as root, the following command: Installation of the tuned package also presets the profile which should be the best for your system. Currently the default profile is selected according to the following customizable rules: throughput-performance This is pre-selected on Red Hat Enterprise Linux 7 operating systems which act as compute nodes. The goal on such systems is the best throughput performance. virtual-guest This is pre-selected on virtual machines. The goal is best performance. If you are not interested in best performance, you would probably like to change it to the balanced or powersave profile (see below). balanced This is pre-selected in all other cases. The goal is balanced performance and power consumption. To start tuned , run, as root, the following command: To enable tuned to start every time the machine boots, type the following command: For other tuned control, such as the selection of profiles, use: This command requires the tuned service to be running. To view the available installed profiles, run: To view the currently activated profile, run: To select or activate a profile, run: For example: As an experimental feature it is possible to select more than one profile at once. The tuned application will try to merge them during loading. If there are conflicts, the settings from the last specified profile will take precedence. This is done automatically and there is no checking whether the resulting combination of parameters makes sense. If used without thinking, the feature may tune some parameters the opposite way, which may be counterproductive. An example of such a situation would be setting the disk for high throughput by using the throughput-performance profile and concurrently setting the disk spindown to a low value by the spindown-disk profile. The following example optimizes the system to run in a virtual machine for the best performance and concurrently tunes it for low power consumption, with low power consumption being the priority: To let tuned recommend the most suitable profile for your system without changing any existing profiles, using the same logic as used during the installation, run the following command: Tuned itself has additional options that you can use when you run it manually. However, this is not recommended and is mostly intended for debugging purposes. The available options can be viewed by using the following command: 3.1.3. Custom Profiles Distribution-specific profiles are stored in the /usr/lib/tuned/ directory. Each profile has its own directory. The profile consists of the main configuration file called tuned.conf , and optionally other files, for example helper scripts. If you need to customize a profile, copy the profile directory into the /etc/tuned/ directory, which is used for custom profiles. If there are two profiles of the same name, the profile included in /etc/tuned/ is used. You can also create your own profile in the /etc/tuned/ directory to use a profile included in /usr/lib/tuned/ with only certain parameters adjusted or overridden. The tuned.conf file contains several sections. There is one [main] section. The other sections are configurations for plugin instances. All sections are optional, including the [main] section. Lines starting with the hash sign (#) are comments. The [main] section has the following option: include= profile The specified profile will be included, e.g. include=powersave will include the powersave profile.
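As a sketch of the two customization approaches just described (the profile names used here are only examples): to override a distribution profile wholesale, copy its directory into /etc/tuned/ and edit the copy, for example:
# cp -r /usr/lib/tuned/throughput-performance /etc/tuned/
# vi /etc/tuned/throughput-performance/tuned.conf
Alternatively, to adjust only a few parameters, create a new directory such as /etc/tuned/my-profile/ containing a tuned.conf whose [main] section starts with an include= line (for instance include=throughput-performance ), followed by the plugin sections you want to override; the plugin section syntax is described next.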
Sections describing plugin instances are formatted in the following way: NAME is the name of the plugin instance as it is used in the logs. It can be an arbitrary string. TYPE is the type of the tuning plugin. For a list and descriptions of the tuning plugins refer to Section 3.1.1, "Plug-ins" . DEVICES is the list of devices this plugin instance will handle. The devices line can contain a list, a wildcard (*), and negation (!). You can also combine rules. If there is no devices line, all devices of the TYPE that are present or later attached to the system will be handled by the plugin instance. This is the same as using devices=* . If no instance of the plugin is specified, the plugin will not be enabled. If the plugin supports more options, they can also be specified in the plugin section. If an option is not specified, the default value will be used (if not previously specified in the included plugin). For the list of plugin options refer to Section 3.1.1, "Plug-ins" . Example 3.1. Describing Plug-ins Instances The following example will match everything starting with sd , such as sda or sdb , and does not disable barriers on them: The following example will match everything except sda1 and sda2 : In cases where you do not need custom names for the plugin instance and there is only one definition of the instance in your configuration file, Tuned supports the following short syntax: In this case, it is possible to omit the type line. The instance will then be referred to by a name that is the same as the type. The example could then be rewritten as: If the same section is specified more than once using the include option, then the settings are merged. If they cannot be merged due to a conflict, the last conflicting definition overrides the settings in conflict. Sometimes, you do not know what was previously defined. In such cases, you can use the replace boolean option and set it to true . This will cause all the definitions with the same name to be overwritten and the merge will not happen. You can also disable the plugin by specifying the enabled=false option. This has the same effect as if the instance was never defined. Disabling the plugin can be useful if you are redefining the definition from the include option and do not want the plugin to be active in your custom profile. The following is an example of a custom profile that is based on the balanced profile and extends it so that ALPM for all devices is set to maximal power saving. The following is an example of a custom profile that adds isolcpus=2 to the kernel boot command line: The machine needs to be rebooted after the profile is applied for the changes to take effect. 3.1.4. Tuned-adm A detailed analysis of a system can be very time-consuming. Red Hat Enterprise Linux 7 includes a number of predefined profiles for typical use cases that you can easily activate with the tuned-adm utility. You can also create, modify, and delete profiles. To list all available profiles and identify the currently active profile, run: To only display the currently active profile, run: To switch to one of the available profiles, run: for example: To disable all tuning: The following is a list of pre-defined profiles for typical use cases: Note The following profiles may or may not be installed with the base package, depending on the repo files being used.
To see the tuned profiles installed on your system, run the following command as root: To see the list of available tuned profiles to install, run the following command as root: To install a tuned profile on your system, run the following command as root: Replace profile-name with the profile you want to install. balanced The default power-saving profile. It is intended to be a compromise between performance and power consumption. It tries to use auto-scaling and auto-tuning whenever possible. It has good results for most loads. The only drawback is the increased latency. In the current tuned release it enables the CPU, disk, audio and video plugins and activates the conservative governor. The radeon_powersave is set to auto . powersave A profile for maximum power saving. It can throttle the performance in order to minimize the actual power consumption. In the current tuned release it enables USB autosuspend, Wi-Fi power saving and ALPM power savings for SATA host adapters. It also schedules multi-core power savings for systems with a low wakeup rate and activates the ondemand governor. It enables AC97 audio power saving or, depending on your system, HDA-Intel power savings with a 10-second timeout. If your system contains a supported Radeon graphics card with KMS enabled, the card is configured for automatic power saving. On Asus Eee PCs a dynamic Super Hybrid Engine is enabled. Note The powersave profile may not always be the most efficient. Consider that there is a defined amount of work that needs to be done, for example a video file that needs to be transcoded. Your machine can consume less energy if the transcoding is done at full power, because the task will be finished quickly, the machine will start to idle and can automatically step down to very efficient power save modes. On the other hand, if you transcode the file with a throttled machine, the machine will consume less power during the transcoding, but the process will take longer and the overall consumed energy can be higher. That is why the balanced profile can generally be a better option. throughput-performance A server profile optimized for high throughput. It disables power savings mechanisms, enables sysctl settings that improve the throughput performance of disk and network IO, and switches to the deadline scheduler. The CPU governor is set to performance . latency-performance A server profile optimized for low latency. It disables power savings mechanisms and enables sysctl settings that improve the latency. The CPU governor is set to performance and the CPU is locked to the low C states (by PM QoS). network-latency A profile for low latency network tuning. It is based on the latency-performance profile. It additionally disables transparent hugepages, NUMA balancing and tunes several other network related sysctl parameters. network-throughput A profile for throughput network tuning. It is based on the throughput-performance profile. It additionally increases kernel network buffers. virtual-guest A profile designed for Red Hat Enterprise Linux 7 virtual machines as well as VMware guests based on the enterprise-storage profile that, among other tasks, decreases virtual memory swappiness and increases disk readahead values. It does not disable disk barriers. virtual-host A profile designed for virtual hosts based on the enterprise-storage profile that, among other tasks, decreases virtual memory swappiness, increases disk readahead values and enables a more aggressive value of dirty pages.
oracle A profile optimized for Oracle database loads, based on the throughput-performance profile. It additionally disables transparent huge pages and modifies some other performance-related kernel parameters. This profile is provided by the tuned-profiles-oracle package. It is available in Red Hat Enterprise Linux 6.8 and later. desktop A profile optimized for desktops, based on the balanced profile. It additionally enables scheduler autogroups for better response of interactive applications. cpu-partitioning The cpu-partitioning profile partitions the system CPUs into isolated and housekeeping CPUs. To reduce jitter and interruptions on an isolated CPU, the profile clears the isolated CPU from user-space processes, movable kernel threads, interrupt handlers, and kernel timers. A housekeeping CPU can run all services, shell processes, and kernel threads. You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file. The configuration options are: isolated_cores= cpu-list Lists CPUs to isolate. The list of isolated CPUs is comma-separated, or you can specify a range. You can specify a range using a dash, such as 3-5 . This option is mandatory. Any CPU missing from this list is automatically considered a housekeeping CPU. no_balance_cores= cpu-list Lists CPUs which are not considered by the kernel during system-wide process load-balancing. This option is optional. This is usually the same list as isolated_cores . For more information on cpu-partitioning , see the tuned-profiles-cpu-partitioning (7) man page. Note There may be more product-specific or third-party Tuned profiles available. Such profiles are usually provided by separate RPM packages. Additional predefined profiles can be installed with the tuned-profiles-compat package available in the Optional channel. These profiles are intended for backward compatibility and are no longer developed. The generalized profiles from the base package will mostly perform the same or better. If you do not have a specific reason for using them, prefer the above-mentioned profiles from the base package. The compat profiles are as follows: default This has the lowest impact on power saving of the available profiles and only enables the CPU and disk plugins of tuned . desktop-powersave A power-saving profile directed at desktop systems. Enables ALPM power saving for SATA host adapters as well as the CPU, Ethernet, and disk plugins of tuned . laptop-ac-powersave A medium-impact power-saving profile directed at laptops running on AC. Enables ALPM power saving for SATA host adapters, Wi-Fi power saving, as well as the CPU, Ethernet, and disk plugins of tuned . laptop-battery-powersave A high-impact power-saving profile directed at laptops running on battery. In the current tuned implementation it is an alias for the powersave profile. spindown-disk A power-saving profile for machines with classic HDDs to maximize spindown time. It disables the tuned power savings mechanism, disables USB autosuspend, disables Bluetooth, enables Wi-Fi power saving, disables log syncing, increases disk write-back time, and lowers disk swappiness. All partitions are remounted with the noatime option. enterprise-storage A server profile directed at enterprise-class storage, maximizing I/O throughput. It activates the same settings as the throughput-performance profile, multiplies readahead settings, and disables barriers on non-root and non-boot partitions.
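Returning to the cpu-partitioning profile described earlier in this section, a minimal /etc/tuned/cpu-partitioning-variables.conf might contain the following; the CPU numbers are purely illustrative assumptions and must be chosen to match your hardware:
# Mandatory: CPUs to isolate; every CPU not listed here becomes a housekeeping CPU.
isolated_cores=2-5
# Optional: CPUs excluded from system-wide process load-balancing, usually the same list.
no_balance_cores=2-5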
Note Use the atomic-host profile on physical machines, and the atomic-guest profile on virtual machines. To enable the tuned profiles for Red Hat Enterprise Linux Atomic Host, install the tuned-profiles-atomic package. Run, as root, the following command: The two tuned profiles for Red Hat Enterprise Linux Atomic Host are: atomic-host A profile optimized for Red Hat Enterprise Linux Atomic Host, when used as a host system on a bare-metal server, using the throughput-performance profile. It additionally increases the SELinux AVC cache and the PID limit, and tunes netfilter connection tracking. atomic-guest A profile optimized for Red Hat Enterprise Linux Atomic Host, when used as a guest system based on the virtual-guest profile. It additionally increases the SELinux AVC cache and the PID limit, and tunes netfilter connection tracking. Note There may be more product-specific or third-party tuned profiles available. These profiles are usually provided by separate RPM packages. Three tuned profiles are available that enable editing the kernel command line: realtime , realtime-virtual-host and realtime-virtual-guest . To enable the realtime profile, install the tuned-profiles-realtime package. Run, as root, the following command: To enable the realtime-virtual-host and realtime-virtual-guest profiles, install the tuned-profiles-nfv package. Run, as root, the following command: 3.1.5. powertop2tuned The powertop2tuned utility is a tool that allows you to create custom tuned profiles from the PowerTOP suggestions. To install the powertop2tuned application, run the following command as root: To create a custom profile, run the following command as root: By default, it creates the profile in the /etc/tuned directory and bases it on the currently selected tuned profile. For safety reasons, all PowerTOP tunings are initially disabled in the new profile. To enable them, uncomment the tunings of interest in /etc/tuned/ profile /tuned.conf . You can use the --enable or -e option to generate the new profile with most of the tunings suggested by PowerTOP enabled. Some dangerous tunings, like USB autosuspend, will still be disabled. If you really need them, you have to uncomment them manually. By default, the new profile is not activated. To activate it, run the following command: For a complete list of the options powertop2tuned supports, type the following command:
[ "cmdline = arg 1 arg 2 ... arg n .", "install tuned", "systemctl start tuned", "systemctl enable tuned", "tuned-adm", "tuned-adm list", "tuned-adm active", "tuned-adm profile profile", "tuned-adm profile powersave", "tuned-adm profile virtual-guest powersave", "tuned-adm recommend", "tuned --help", "[NAME] type=TYPE devices=DEVICES", "[data_disk] type=disk devices=sd* disable_barriers=false", "[data_disk] type=disk devices=!sda1, !sda2 disable_barriers=false", "[TYPE] devices=DEVICES", "[disk] devices=sdb* disable_barriers=false", "[main] include=balanced [disk] alpm=min_power", "[bootloader] cmdline=isolcpus=2", "tuned-adm list", "tuned-adm active", "tuned-adm profile profile_name", "tuned-adm profile latency-performance", "tuned-adm off", "tuned-adm list", "search tuned-profiles", "install tuned-profiles- profile-name", "install tuned-profiles-atomic", "install tuned-profiles-realtime", "install tuned-profiles-nfv", "install tuned-utils", "powertop2tuned new_profile_name", "tuned-adm profile new_profile_name", "powertop2tuned --help" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/chap-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tuned
Chapter 5. Support
Chapter 5. Support Only the configuration options described in this documentation are supported for logging. Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. Note If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged . An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed . Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. Important For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. Logging is not: A high scale log collection system Security Information and Event Monitoring (SIEM) compliant A "bring your own" (BYO) log collector configuration Historical or long term log retention or storage A guaranteed log sink Secure storage - audit logs are not stored by default 5.1. Supported API custom resource definitions The following table describes the supported Logging APIs. Table 5.1. Loki API support states CustomResourceDefinition (CRD) ApiVersion Support state LokiStack lokistack.loki.grafana.com/v1 Supported from 5.5 RulerConfig rulerconfig.loki.grafana/v1 Supported from 5.7 AlertingRule alertingrule.loki.grafana/v1 Supported from 5.7 RecordingRule recordingrule.loki.grafana/v1 Supported from 5.7 LogFileMetricExporter LogFileMetricExporter.logging.openshift.io/v1alpha1 Supported from 5.8 ClusterLogForwarder clusterlogforwarder.logging.openshift.io/v1 Supported from 4.5. 5.2. Unsupported configurations You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: The fluent.conf file The Fluentd daemon set The vector.toml file for Vector collector deployments Explicitly unsupported cases include: Configuring the collected log location . You cannot change the location of the log collector output file, which by default is /var/log/fluentd/fluentd.log . Throttling log collection . You cannot throttle down the rate at which the logs are read in by the log collector. Configuring the logging collector using environment variables . You cannot use environment variables to modify the log collector. 
Configuring how the log collector normalizes logs . You cannot modify default log normalization. 5.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed. 5.4. Support exception for the Logging UI Plugin Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA. 5.5. Collecting logging data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. For prompt support, supply diagnostic information for both OpenShift Container Platform and logging. 5.5.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. 
For your logging, must-gather collects the following information: Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level Cluster-level resources, including nodes, roles, and role bindings at the cluster level OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer When you run oc adm must-gather , a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. 5.5.2. Collecting logging data You can use the oc adm must-gather CLI command to collect information about logging. Procedure To collect logging information with must-gather : Navigate to the directory where you want to store the must-gather information. Run the oc adm must-gather command against the logging image: USD oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408 . Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: USD tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 Attach the compressed file to your support case on the Red Hat Customer Portal .
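As an illustration of the individual Operator configuration described in the support policy section above, setting the Red Hat OpenShift Logging Operator to the unmanaged state is typically done by editing the managementState field of the custom resource it manages. The following sketch assumes a ClusterLogging resource named instance in the openshift-logging namespace; treat the resource kind and names as assumptions that depend on your installed logging version:
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Unmanaged  # return this to Managed to resume reconciliation, updates, and support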
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')", "tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/logging/support-2