title | content | commands | url |
---|---|---|---|
Architecture | Architecture OpenShift Container Platform 4.13 An overview of the architecture for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/architecture/index |
7.207. system-config-kickstart | 7.207. system-config-kickstart 7.207.1. RHBA-2015:1356 - system-config-kickstart bug fix update An updated system-config-kickstart package that fixes one bug is now available for Red Hat Enterprise Linux 6. The system-config-kickstart package contains Kickstart Configurator, a graphical tool for creating kickstart files. Bug Fix BZ# 1022372 Previously, system-config-kickstart tried to display the user manual by executing /usr/bin/htmlview even though this program did not exist, and the underlying code did not handle this situation properly. Consequently, system-config-kickstart terminated. With this update, the user manual, which was in fact outdated and not translated like the rest of the user interface, has been removed from the system-config-kickstart package, and the corresponding menu item has also been removed from the user interface. As a result, system-config-kickstart no longer terminates unexpectedly. Users of system-config-kickstart are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-system-config-kickstart |
21.10. Using the API from Programming Languages | 21.10. Using the API from Programming Languages The libguestfs API can be used directly from the following languages in Red Hat Enterprise Linux 7: C, C++, Perl, Python, Java, Ruby and OCaml. To install C and C++ bindings, enter the following command: To install Perl bindings: To install Python bindings: To install Java bindings: To install Ruby bindings: To install OCaml bindings: The binding for each language is essentially the same, but with minor syntactic changes. A C statement: Would appear like the following in Perl: Or like the following in OCaml: Only the API from C is detailed in this section. In the C and C++ bindings, you must manually check for errors. In the other bindings, errors are converted into exceptions; the additional error checks shown in the examples below are not necessary for other languages, but conversely you may wish to add code to catch exceptions. See the following list for some points of interest regarding the architecture of the libguestfs API: The libguestfs API is synchronous. Each call blocks until it has completed. If you want to make calls asynchronously, you have to create a thread. The libguestfs API is not thread safe: each handle should be used only from a single thread, or if you want to share a handle between threads you should implement your own mutex to ensure that two threads cannot execute commands on one handle at the same time. You should not open multiple handles on the same disk image. It is permissible if all the handles are read-only, but still not recommended. You should not add a disk image for writing if anything else could be using that disk image (a live VM, for example). Doing this will cause disk corruption. Opening a read-only handle on a disk image that is currently in use (for example by a live VM) is possible. However, the results may be unpredictable or inconsistent, particularly if the disk image is being heavily written to at the time you are reading it. 21.10.1. Interaction with the API using a C program Your C program should start by including the <guestfs.h> header file, and creating a handle: Save this program to a file ( test.c ). Compile this program and run it with the following two commands: At this stage it should print no output. The rest of this section demonstrates an example showing how to extend this program to create a new disk image, partition it, format it with an ext4 file system, and create some files in the file system. The disk image will be called disk.img and be created in the current directory. The outline of the program is: Create the handle. Add disk(s) to the handle. Launch the libguestfs back end. Create the partition, file system and files. Close the handle and exit. Here is the modified program: Compile and run this program with the following two commands: If the program runs to completion successfully, you should be left with a disk image called disk.img , which you can examine with guestfish: By default (for C and C++ bindings only), libguestfs prints errors to stderr. You can change this behavior by setting an error handler. The guestfs(3) man page discusses this in detail. | [
"yum install libguestfs-devel",
"yum install 'perl(Sys::Guestfs)'",
"yum install python-libguestfs",
"yum install libguestfs-java libguestfs-java-devel libguestfs-javadoc",
"yum install ruby-libguestfs",
"yum install ocaml-libguestfs ocaml-libguestfs-devel",
"guestfs_launch (g);",
"USDg->launch ()",
"g#launch ()",
"#include <stdio.h> #include <stdlib.h> #include <guestfs.h> int main (int argc, char *argv[]) { guestfs_h *g; g = guestfs_create (); if (g == NULL) { perror (\"failed to create libguestfs handle\"); exit (EXIT_FAILURE); } /* ... */ guestfs_close (g); exit (EXIT_SUCCESS); }",
"gcc -Wall test.c -o test -lguestfs ./test",
"#include <stdio.h> #include <stdlib.h> #include <string.h> #include <fcntl.h> #include <unistd.h> #include <guestfs.h> int main (int argc, char *argv[]) { guestfs_h *g; size_t i; g = guestfs_create (); if (g == NULL) { perror (\"failed to create libguestfs handle\"); exit (EXIT_FAILURE); } /* Create a raw-format sparse disk image, 512 MB in size. */ int fd = open (\"disk.img\", O_CREAT|O_WRONLY|O_TRUNC|O_NOCTTY, 0666); if (fd == -1) { perror (\"disk.img\"); exit (EXIT_FAILURE); } if (ftruncate (fd, 512 * 1024 * 1024) == -1) { perror (\"disk.img: truncate\"); exit (EXIT_FAILURE); } if (close (fd) == -1) { perror (\"disk.img: close\"); exit (EXIT_FAILURE); } /* Set the trace flag so that we can see each libguestfs call. */ guestfs_set_trace (g, 1); /* Set the autosync flag so that the disk will be synchronized * automatically when the libguestfs handle is closed. */ guestfs_set_autosync (g, 1); /* Add the disk image to libguestfs. */ if (guestfs_add_drive_opts (g, \"disk.img\", GUESTFS_ADD_DRIVE_OPTS_FORMAT, \"raw\", /* raw format */ GUESTFS_ADD_DRIVE_OPTS_READONLY, 0, /* for write */ -1 /* this marks end of optional arguments */ ) == -1) exit (EXIT_FAILURE); /* Run the libguestfs back-end. */ if (guestfs_launch (g) == -1) exit (EXIT_FAILURE); /* Get the list of devices. Because we only added one drive * above, we expect that this list should contain a single * element. */ char **devices = guestfs_list_devices (g); if (devices == NULL) exit (EXIT_FAILURE); if (devices[0] == NULL || devices[1] != NULL) { fprintf (stderr, \"error: expected a single device from list-devices\\n\"); exit (EXIT_FAILURE); } /* Partition the disk as one single MBR partition. */ if (guestfs_part_disk (g, devices[0], \"mbr\") == -1) exit (EXIT_FAILURE); /* Get the list of partitions. We expect a single element, which * is the partition we have just created. */ char **partitions = guestfs_list_partitions (g); if (partitions == NULL) exit (EXIT_FAILURE); if (partitions[0] == NULL || partitions[1] != NULL) { fprintf (stderr, \"error: expected a single partition from list-partitions\\n\"); exit (EXIT_FAILURE); } /* Create an ext4 filesystem on the partition. */ if (guestfs_mkfs (g, \"ext4\", partitions[0]) == -1) exit (EXIT_FAILURE); /* Now mount the filesystem so that we can add files. */ if (guestfs_mount_options (g, \"\", partitions[0], \"/\") == -1) exit (EXIT_FAILURE); /* Create some files and directories. */ if (guestfs_touch (g, \"/empty\") == -1) exit (EXIT_FAILURE); const char *message = \"Hello, world\\n\"; if (guestfs_write (g, \"/hello\", message, strlen (message)) == -1) exit (EXIT_FAILURE); if (guestfs_mkdir (g, \"/foo\") == -1) exit (EXIT_FAILURE); /* This uploads the local file /etc/resolv.conf into the disk image. */ if (guestfs_upload (g, \"/etc/resolv.conf\", \"/foo/resolv.conf\") == -1) exit (EXIT_FAILURE); /* Because 'autosync' was set (above) we can just close the handle * and the disk contents will be synchronized. You can also do * this manually by calling guestfs_umount_all and guestfs_sync. */ guestfs_close (g); /* Free up the lists. */ for (i = 0; devices[i] != NULL; ++i) free (devices[i]); free (devices); for (i = 0; partitions[i] != NULL; ++i) free (partitions[i]); free (partitions); exit (EXIT_SUCCESS); }",
"gcc -Wall test.c -o test -lguestfs ./test",
"guestfish --ro -a disk.img -m /dev/sda1 ><fs> ll / ><fs> cat /foo/resolv.conf"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_disk_access_with_offline_tools-using_the_api_from_programming_languages |
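The libguestfs entry above details only the C API, while noting that the Perl, Python, Java, Ruby, and OCaml bindings are essentially the same with minor syntactic changes and exception-based error handling. As a rough illustration only (not text from the original guide), the following is a minimal sketch of the same disk-creation workflow using the Python binding from python-libguestfs; method names mirror the C calls without the guestfs_ prefix, and errors surface as exceptions rather than -1 return values.

```python
# Minimal sketch (illustrative, not from the original guide): the same workflow
# as the C example, using the Python binding installed with python-libguestfs.
import guestfs

# Create a 512 MB sparse raw disk image in the current directory.
with open("disk.img", "wb") as f:
    f.truncate(512 * 1024 * 1024)

g = guestfs.GuestFS()                 # create the handle
g.set_trace(1)                        # show each libguestfs call
g.add_drive_opts("disk.img", format="raw", readonly=0)
g.launch()                            # start the libguestfs back end

devices = g.list_devices()            # we added one drive, so expect one device
g.part_disk(devices[0], "mbr")        # one single MBR partition

partitions = g.list_partitions()      # expect the single partition just created
g.mkfs("ext4", partitions[0])         # create an ext4 file system
g.mount(partitions[0], "/")           # mount it inside the appliance

g.touch("/empty")                     # create some files and directories
g.write("/hello", b"Hello, world\n")
g.mkdir("/foo")
g.upload("/etc/resolv.conf", "/foo/resolv.conf")

g.shutdown()                          # sync and shut down the back end
g.close()                             # release the handle
```

As with the C version, the resulting disk.img can then be inspected with guestfish.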
Chapter 10. High Availability and Failover | Chapter 10. High Availability and Failover After creating a cluster configuration, you can link broker instances together to form high availability (HA) pairs. An HA pair consists of a master broker that serves client requests, and one or more slave brokers that replace the master if it can no longer communicate with clients. In AMQ Broker 7, a cluster configuration is required for HA. Broker clusters can consist of either a set of non-HA brokers or HA pairs. AMQ Broker 7 provides the following HA policies: Replication Replication synchronizes the data between the master and slave brokers over the network. With replication, you can enable failback to return control to the master broker when it comes back online after a failure event and allow clients to fail back to it. You can also create HA groups in which multiple master brokers share one or more slave brokers, and colocate slave brokers in the same JVM as the master broker. Important Starting in 7.5, network pinging, which was previously available for use with the replication HA policy, is a deprecated feature. Network pinging cannot protect a broker cluster from network isolation issues that can lead to irrecoverable message loss. This feature will be removed in a future release. Red Hat continues to support existing AMQ Broker deployments that use network pinging. However, Red Hat no longer recommends use of network pinging in new deployments. For guidance on configuring a broker cluster for high availability and to avoid network isolation issues, see Implementing high availability . Shared Store Shared store provides a location for the master and slave brokers to share messaging data. Using a shared store is generally preferable, as it offers the following benefits over replication: Performance (shared stores are faster) No split-brain issues Fewer brokers required to maintain quorum (replication requires at least three) Like with replication, you can enable failback to return control to the master broker after a failure event and allow clients to fail back to it. You can configure multiple slave brokers for a master broker, and colocate slave brokers. For more information about HA and failover, see Implementing high availability in Configuring AMQ Broker . 10.1. High Availability and Failover Changes High availability in AMQ Broker 7 differs from AMQ 6 based on how the master is determined and when the broker connections become active. In AMQ Broker 7, the master and slave roles are fixed. You specify which broker instance is the master, and the slave only becomes active in certain conditions. In AMQ 6, the master and slave roles were not fixed. Instead, the brokers in an HA pair would compete for a lock, and the winner would become the master. In AMQ Broker 7, in an HA pair, the slave broker's acceptors are active even if the broker is inactive. In AMQ 6, the slave broker's transport connectors did not become active until the broker became active. 10.2. How High Availability is Configured You configure HA by adding an HA policy configuration to the BROKER_INSTANCE_DIR /etc/broker.xml configuration file of each broker. Example: HA Pair with Shared Store The master broker is configured like this. By setting failover-on-shutdown to true , the HA pair will fail over to the slave broker if the master broker is shut down: <configuration> <core> ... <ha-policy> <shared-store> <master/> <failover-on-shutdown>true</failover-on-shutdown> </shared-store> </ha-policy> ... 
</core> </configuration> The slave broker is configured like this. By setting failover-on-shutdown to true , this slave broker will become the master if the current master broker is shut down: <configuration> <core> ... <ha-policy> <shared-store> <slave/> <failover-on-shutdown>true</failover-on-shutdown> </shared-store> </ha-policy> ... </core> </configuration> Related Information For full details on configuring HA policies, see the following topics in Configuring AMQ Broker : Configuring shared store high availability Configuring replication high availability Configuring limited high availability with live-only Configuring high availability with colocated backups Revised on 2021-07-26 10:08:13 UTC | [
"<configuration> <core> <ha-policy> <shared-store> <master/> <failover-on-shutdown>true</failover-on-shutdown> </shared-store> </ha-policy> </core> </configuration>",
"<configuration> <core> <ha-policy> <shared-store> <slave/> <failover-on-shutdown>true</failover-on-shutdown> </shared-store> </ha-policy> </core> </configuration>"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/migrating_to_red_hat_amq_7/ha-failover |
Chapter 7. Bug fixes | Chapter 7. Bug fixes This part describes bugs fixed in Red Hat Enterprise Linux 8.4 that have a significant impact on users. 7.1. Installer and image creation Anaconda now shows a dialog for ldl or unformatted DASD disks in text mode Previously, during an installation in text mode, Anaconda failed to show a dialog for Linux disk layout ( ldl ) or unformatted Direct-Access Storage Device (DASD) disks. As a result, users were unable to utilize those disks for the installation. With this update, in text mode Anaconda recognizes ldl and unformatted DASD disks and shows a dialog where users can format them properly for use during the installation. ( BZ#1874394 ) RHEL installer failed to start when InfiniBand network interfaces were configured using installer boot options Previously, when you configured InfiniBand network interfaces at an early stage of RHEL installation using installer boot options (for example, an installer image downloaded using a PXE server), the installer failed to activate the network interfaces. This issue occurred because the RHEL NetworkManager failed to recognize the network interfaces in InfiniBand mode, and instead configured Ethernet connections for the interfaces. As a result, connection activation failed, and if the connectivity over the InfiniBand interface was required at an early stage, RHEL installer failed to start the installation. With this release, the installer successfully activates the InfiniBand network interfaces that you configure at an early stage of RHEL installation using installer boot options, and the installation completes successfully. (BZ#1890009) The automatic partitioning can be scheduled in Anaconda Previously, during automatic partitioning on LVM type disks, the installer tried to create a partition for an LVM PV on each selected disk. If these disks already had a partitioning layout, scheduling the automatic partitioning could fail with an error message. With this update, the problem has been fixed. Now you can schedule the automatic partitioning in the installer. (BZ#1642391) Configuring a wireless network using Anaconda GUI is fixed Previously, configuring the wireless network while using the Anaconda graphical user interface (GUI) caused the installation to crash. With this update, the problem has been fixed. You can configure the wireless network during the installation while using the Anaconda GUI. (BZ#1847681) 7.2. Software management New -m and -M parameters are now supported for the %autopatch rpm macro With this update, the -m (min) and -M (max) parameters have been added to the %autopatch macro to apply only a range of patches with given parameters. ( BZ#1834931 ) popt rebased to version 1.18 The popt packages have been upgraded to the upstream version 1.18, which provides the following notable changes over the previous version: Overall codebase cleanup and modernization. Failing to drop privileges on the alias exec command has been fixed. Various bugs, including resource leaks, have been fixed. ( BZ#1843787 ) 7.3. Shells and command-line tools snmpbulkget now provides valid output for a non-existing PID Previously, the snmpbulkget command did not provide valid output for a non-existing PID. Consequently, this command would fail with the output no results found . With this update, snmpbulkget provides valid output for a non-existing PID. ( BZ#1817190 ) The CRON command now sends an email as per the trigger conditions. 
Previously, when the Relax-and-Recover ( ReaR ) utility was configured incorrectly, the CRON command triggered an error message that was sent to the administrator through an email. Consequently, the administrator would receive emails even if the configuration was not performed for ReaR . With this update, the CRON command is modified and sends an email as per the trigger conditions. ( BZ#1729499 ) Using NetBackup version 8.2 as the backup mechanism in ReaR now works. Previously, when using NetBackup as a backup method, the Relax-and-Recover ( ReaR ) utility did not start the vxpbx_exchanged service in the rescue system. Consequently, restoring the data from the backup in the rescue system with NetBackup 8.2 failed with the following error messages logged on the NetBackup server: Error bpbrm (pid=... ) cannot execute cmd on client Info tar (pid=... ) done. status: 25: cannot connect on socket Error bpbrm (pid=... ) client restore EXIT STATUS 25: cannot connect on socket With this update, ReaR adds the vxpbx_exchanged service and related required files to the rescue system, and starts the service when the rescue system launches. (BZ#1898080) libvpd rebased to version 2.2.8. Notable changes include: Improved performance of vpdupdate by making the sqlite operations asynchronous. (BZ#1844429) ReaR utility now restores system using LUKS2 encrypted partition Previously, when at least one LUKS2 encrypted partition was present on the system to backup with Relax-and-Recover ( Rear ) utility, the user was not informed that ReaR does not support LUKS2 encrypted partition. Consequently, the ReaR utility was unable to recreate the original state of the system during the restore phase. With this update, support of basic LUKS2 configuration, error checking, and improved output has been added to the ReaR utility. The ReaR utility now restores systems using basic LUKS2 encrypted partitions or notifies users in the opposite case. ( BZ#1832394 ) Texlive now correctly works with Poppler Previously, the Poppler utility underwent an update for API changes. Consequently, due to these API changes the Texlive build did not function. With this update, the Texlive build now functions correctly with the new Poppler utility. ( BZ#1889802 ) 7.4. Infrastructure services RPZ now works with wildcard characters Previously, the dns_rpz_find_name function in the lib/dns/rpz.c file did not consider wildcard characters when a record for the same suffix was present. Consequently, some records containing wildcard characters were ignored. With this update, the dns_rpz_find_name function has been fixed and it now considers wildcard characters. ( BZ#1876492 ) 7.5. Security Improved padding for pkcs11 Previously, the pkcs11 token label had extra padding for some smart cards. As a consequence, the wrong padding could cause issues matching cards based on the label attribute. With this update, the padding is fixed for all the cards and defined PKCS #11 URIs and matching against them in application should work as expected. ( BZ#1877973 ) Fixed sealert connection issue handling Previously, a crash of the setroubleshoot daemon could cause the sealert process to stop responding. Consequently, the GUI did not show any analysis and also became unresponsive, the command line tool did not print any output and kept running until killed. This update improves handling of connection issues between sealert and setroubleshootd . Now sealert reports an error message and exits in case the setroubleshoot daemon crashes. 
( BZ#1875290 ) Optimized audit record analysis by setroubleshoot Previously, new features introduced in setroubleshoot-3.3.23-1 had a negative impact on performance, which led to the AVC analysis being up to 8 times slower than before. This update provides optimizations that significantly reduce the AVC analysis times. (BZ#1794807) Fixed SELinux policy interface parser Previously, the policy interface parser caused syntax error messages to appear when installing a custom policy that contained an ifndef block in its interface file. This update improves the interface file parsing, and thus resolves this issue. ( BZ#1868717 ) setfiles does not stop on labeling error Previously, the setfiles utility stopped whenever it failed to relabel a file. Consequently, mislabeled files were left in the target directory. With this update, setfiles skips files it cannot relabel, and as a result, setfiles processes all files in the target directory. ( BZ#1926386 ) Rebuilds of the SELinux policy store are now more resistant to power failures Previously, SELinux-policy rebuilds were not resistant to power failures due to write caching. Consequently, the SELinux policy store may become corrupted after a power failure during a policy rebuild. With this update, the libsemanage library writes all pending modifications to metadata and cached file data to the file system that contains the policy store before using it. As a result, the policy store is now more resistant to power failures and other interruptions. ( BZ#1913224 ) libselinux now determines the default context of SELinux users correctly Previously, the libselinux library failed to determine the default context of SELinux users on some systems, due to the use of the deprecated security_compute_user() function. As a consequence, some system services were unavailable on systems with complex security policies. With this update, libselinux no longer uses security_compute_user() and determines the SELinux user's default context properly, regardless of policy complexity. (BZ#1879368) Geo-replication in rsync mode no longer fails due to SELinux Previously, SELinux policy did not allow processes running under rsync_t to set the value of the security.trusted extended attribute. As a consequence, geo-replication in Red Hat Gluster Storage (RHGS) failed. This update includes the new SELinux boolean rsync_sys_admin that allows the rsync_t processes to set security.trusted . As a result, if the rsync_sys_admin boolean is enabled, rsync can set the security.trusted extended attribute and geo-replication no longer fails. ( BZ#1889673 ) OpenSCAP can now scan systems with large numbers of files without running out of memory Previously, when scanning systems with low RAM and large numbers of files, the OpenSCAP scanner sometimes caused the system to run out of memory. With this update, OpenSCAP scanner memory management has been improved. As a result, the scanner no longer runs out of memory on systems with low RAM when scanning large numbers of files, for example package groups Server with GUI and Workstation . ( BZ#1824152 ) CIS-remediated systems with FAT no longer fail on boot Previously, the Center for Internet Security (CIS) profile in the SCAP Security Guide (SSG) contained a rule which disabled loading of the kernel module responsible for access to FAT file systems. As a consequence, if SSG remediated this rule, the system could not access partitions formatted with FAT12, FAT16, and FAT32 file systems, including EFI System Partitions (ESP). 
This caused the systems to fail to boot. With this update, the rule has been removed from the profile. As a result, systems that use these file systems no longer fail to boot. ( BZ#1927019 ) OVAL checks consider GPFS as remote Previously, the OpenSCAP scanner did not identify mounted General Parallel File Systems (GPFS) as remote file systems (FS). As a consequence, OpenSCAP scanned GPFS even for OVAL checks that applied only to local systems. This sometimes caused the scanner to run out of resources and fail to complete the scan. With this update, GPFS has been included in the list of remote FS. As a result, OVAL checks correctly consider GPFS as a remote FS, and the scans are faster. ( BZ#1840579 ) The fapolicyd-selinux SELinux policy now covers all file types Previously, the fapolicyd-selinux SELinux policy did not cover all file types. Consequently, the fapolicyd service could not access files located on non-monitored locations such as sysfs . With this update, the fapolicyd service covers and analyzes all file system types. ( BZ#1940289 ) fapolicyd no longer prevents RHEL updates When an update replaces the binary of a running application, the kernel modifies the application binary path in memory by appending the (deleted) suffix. Previously, the fapolicyd file access policy daemon treated such applications as untrusted. As a consequence, fapolicyd prevented these applications from opening and executing any other files. With this update, fapolicyd ignores the suffix in the binary path so the binary can match the trust database. As a result, fapolicyd enforces the rules correctly and the update process can finish. ( BZ#1896875 ) USBGuard rebased to 1.0.0-1 The usbguard packages have been rebased to the upstream version 1.0.0-1. This update provides improvements and bug fixes, most notably: Stable public API ensures backwards compatibility. Rule files inside the rules.d directory now load in alphanumeric order. Some use cases when the policy of multiple devices could not be changed by a single rule have been fixed. Filtering rules by their labels no longer produces errors. ( BZ#1887448 ) USBGuard now can send Audit messages As part of service hardening, the capabilities of usbguard.service were limited while the CAP_AUDIT_WRITE capability was missing. As a consequence, usbguard running as a system service could not send Audit events. With this update, the service configuration has been updated, and as a result, USBGuard can send Audit messages. ( BZ#1940060 ) tangd now handles invalid requests correctly Previously, the tangd daemon returned an error exit code for some invalid requests. As a consequence, [email protected] failed, which in turn might have caused problems if the number of such failed units increased. With this update, tangd exits with an error code only when the tangd server itself is facing problems. As a result, tangd handles invalid requests correctly. ( BZ#1828558 ) 7.6. Networking Migrating an iptables rule set from RHEL 7 to RHEL 8 with rules involving ipset lookups no longer fails Previously, the ipset counters were updated only if all the additional constraints match while referring to an ipset command with enabled counters from an iptables rule set. Consequently, the rules involving ipset lookups, e.g. -m set --match-set xxx src --bytes-gt 100 will never get chance to match, because the member's counter of ipset will not be added up. With this update, migrating an iptables rule set with rules involving ipset lookups works as expected. 
(BZ#1806882) The iptraf-ng no longer exposes raw memory content Previously, when setting %p in a filter in iptraf-ng , the application displayed raw memory content in the status bar. Consequently, inessential information was getting displayed. With this update, the iptraf-ng processes do not show any raw memory content on the status bar at the bottom. (BZ#1842690) Network access is now available when using DHCP in the Anaconda ip boot option The initial RAM disk ( initrd ) uses NetworkManager to manage networking. Previously, the dracut NetworkManager module provided by the RHEL 8.3 ISO file incorrectly assumed that the first field of the ip option in the Anaconda boot options was always set. As a consequence, if you used DHCP and set ip=::::<host_name>::dhcp , NetworkManager did not retrieve an IP address, and the network was not available in Anaconda. This problem has been fixed. As a result, the Anaconda ip boot option works as expected when you use the RHEL 8.4 ISO to install a host in the mentioned scenario. (BZ#1900260) Unloading XDP programs no longer fails on Netronome network cards that use the nfp driver Previously, the nfp driver for Netronome network cards contained a bug. As a consequence, unloading eXpress Data Path (XDP) programs failed if you used such a card and loaded the XDP program using the IFLA_XDP_EXPECTED_FD feature with the XDP_FLAGS_REPLACE flag. For example, this affected XDP programs that were loaded using the libxdp library. This bug has been fixed. As a result, unloading an XDP program from Netronome network cards works as expected. ( BZ#1880268 ) NetworkManager now tries to retrieve the host name using DHCP and reverse DNS lookups on all interfaces Previously, if the host name was not set in the /etc/hostname file, NetworkManager tried to obtain the host name using DHCP or a reverse DNS lookup only through the interface with the default route with the lowest metric value. As a consequence, it was not possible to automatically assign a host name on networks without a default route. This update changes the behavior, and NetworkManager now first tries to retrieve the host name using the default route interface. If this process fails, NetworkManager tries other available interfaces. As a result, NetworkManager tries to retrieve the host name using DHCP and reverse DNS lookups on all interfaces if it is not set in /etc/hostname . To configure that NetworkManager uses the old behavior: Create the /etc/NetworkManager/conf.d/10-hostname.conf file with the following content: Reload the NetworkManager service: ( BZ#1766944 ) 7.7. Kernel The kernel no longer returns false positive warnings on IBM Z systems Previously, IBM Z systems on RHEL 8 were missing an allowed entry for the ZONE_DMA memory zone to allow user access. Consequently, the kernel returned false positive warnings such as: The warnings appeared when accessing certain system information through the sysfs interface. For example, by running the debuginfo.sh script. This update adds a flag in the Direct Memory Access (DMA) buffer, so that user space applications can access the buffer. As a result, no warning messages are displayed in the described scenario. (BZ#1660290) RHEL systems boot as expected from the tboot GRUB entry Previously, the tboot utility of version 1.9.12-2 caused some RHEL systems with Trusted Platform Module (TPM) 2.0 enabled to fail to boot in legacy mode. As a consequence, the system halted when it attempted to boot from the tboot Grand Unified Bootloader (GRUB) entry. 
With a new version of RHEL 8 and the update of the tboot utility, the problem has been fixed and RHEL systems boot as expected. (BZ#1947839) The kernel successfully reclaims memory in heavy-workload container scenarios When a volume was constrained for I/O and memory within a container, the kernel code responsible for reclaiming memory experienced soft-lockup due to a data race condition. Data race is a phenomenon that happens if: At least two CPU threads try to modify the same set of data simultaneously. At least one of these CPU threads tries to do a write operation on the dataset. Based on the exact timing of each thread to modify the dataset, the result can be A, B, or AB (indeterminate). When a container was under memory pressure, the situation likely led to multiple Out of Memory (OOM) kills, causing the container locking up and becoming unresponsive. In this release, the RHEL kernel code for locking and optimization has been updated. As a result, the kernel no longer becomes unresponsive, and the data does not become subject to race conditions. (BZ#1860031) RHEL 8 with offline memory no longer causes kernel panics Previously, when running RHEL 8 with memory that was initiated but marked as offline, the kernel in some cases attempted to access uninitialized memory pages. As a consequence, a kernel panic occurred. This update fixes the kernel mechanism for idle page tracking, which prevents the problem from occurring. (BZ#1867490) The NUMA systems no longer experience unexpected memory layout Previously, ARM64 and S390 architectures experienced unexpected memory layouts on NUMA systems due to missing of the CONFIG_NODES_SPAN_OTHER_NODES option. As a consequence, the memory regions from different NUMA nodes intersected and the intersecting memory regions from low NUMA nodes were added into the high NUMA. With this update, the NUMA systems no longer experience the memory layouts issue. (BZ#1844157) The rngd service no longer busy-waits on poll() system call A new kernel entropy source for FIPS mode was added for kernels, starting with version 4.18.0-193.10. Consequently, the rngd service busy-waited on the poll() system call for the /dev/random device. This situation caused consumption of 100% of CPU time, when a system was in a FIPS mode. With this update, in FIPS mode, a poll() handler for the /dev/random device has been changed from a default one to a handler developed especially for the /dev/random device. As a result, the rngd service no longer busy-waits on poll() in the described scenario. (BZ#1884857) HRTICK support for SCHED_DEADLINE scheduler is enabled Previously, the feature for high resolution system timers ( HRTICK ) was not armed for certain tasks configured with the SCHED_DEADLINE policy. Consequently, the throttling mechanism for these tasks using the SCHED_DEADLINE scheduler, consumed all the runtime configured for those tasks. This behavior caused an unexpected latency spike in the real-time environment. This update enables the HRTICK feature, which provides high resolution preemption. HRTICK uses a high resolution timer, which enforces the throttling mechanism when a task completes its runtime. As a result, this problem no longer occurs in the described scenario. (BZ#1885850) tpm2-abrmd rebased to version 2.3.3.2 The tpm2-abrmd package has been upgraded to version 2.3.3.2, which provides multiple bug fixes. 
Notable changes include: Fixed the usage of transient handles Fixed partial reads in TPM Command Transmission Interface (TCTI) Refactored the access broker ( BZ#1855177 ) The cxgb4 driver no longer causes a crash in the kdump kernel Previously, the kdump kernel would crash while trying to save information in the vmcore file. Consequently, the cxgb4 driver prevented the kdump kernel from saving a core for later analysis. To work around this problem, add the novmcoredd parameter to the kdump kernel command line to allow saving core files. With the release of the RHSA-2020:1769 advisory, the kdump kernel handles this situation properly and no longer crashes. ( BZ#1708456 ) 7.8. File systems and storage Accessing SMB targets no longer fails with an EREMOTE error Previously, a DFS namespace mounted on a RHEL SMB client with the cifsacl mount option was inaccessible, and listing it failed with an EREMOTE error. This update fixes the kernel to account for EREMOTE , and thus makes the SMB share accessible. (BZ#1871246) Performance improvements for NFS readdir function Previously, a process on an NFS client listing a directory could take a long time to complete the listing, and possibly never complete. With this update, the NFS client directory listing performance is improved in the following scenarios: Listing of large directories with 100,000 or more files. Listing of directories that are being modified. (BZ#1893882) 7.9. High availability and clusters Default token timeout value in corosync.conf file increased from 1 second to 3 seconds Previously, the TOTEM token timeout value in the corosync.conf file was set to 1 second. This short timeout makes the cluster react quickly, but in the case of network delays it may result in premature failover. The default value is now set to 3 seconds to provide a better trade-off between quick response and broader applicability. For information on modifying the token timeout value, see How to change totem token timeout value in a RHEL 5, 6, 7, or 8 High Availability cluster? ( BZ#1870449 ) 7.10. Dynamic programming languages, web and database servers An in-place upgrade is now possible when perl-Time-HiRes is installed Previously, the perl-Time-HiRes package distributed in RHEL 8 was missing an epoch number that was included in the RHEL 7 version of the package. As a consequence, it was impossible to perform an in-place upgrade from RHEL 7 to RHEL 8 when perl-Time-HiRes was installed. The missing epoch number has been added, and the in-place upgrade no longer fails when perl-Time-HiRes is installed. ( BZ#1895852 ) 7.11. Compilers and development tools The glibc DNS stub resolver correctly processes parallel queries with identical transaction IDs Prior to this update, the DNS stub resolver in the GNU C library glibc did not process responses to parallel queries with identical transaction IDs correctly. Consequently, when the transaction IDs were equal, the second parallel response was never matched to a query, resulting in a timeout and retry. With this update, the second parallel response is now recognized as valid. As a result, the glibc DNS stub resolver avoids excessive timeouts due to unrecognized responses. ( BZ#1868106 ) Reading configuration files with fgetsgent() and fgetsgent_r() is now more robust Specifically structured entries in the /etc/gshadow file, or changes in file sizes while reading, sometimes caused the fgetsgent() and fgetsgent_r() functions to return invalid pointers. 
Consequently, applications that used these functions to read /etc/gshadow , or other configuration files in /etc/ , failed with a segmentation fault error. This update modifies fgetsgent() and fgetsgent_r() to make reading of configuration files more robust. As a result, applications are now able to read configuration files successfully. ( BZ#1871397 ) The glibc string functions now avoid negative impact on system cache on AMD64 and Intel 64 processors Previously, the glibc implementation of string functions incorrectly estimated the amount of last-level cache available to a thread on the 64-bit AMD and Intel processors. As a consequence, calling the memcpy function on large buffers either negatively impacted the overall cache performance of the system or slowed down the memcpy system call. With this update, the last-level cache size is no longer scaled with the number of reported hardware threads in the system. As a result, the string functions now bypass caches for large buffers, avoiding negative impact on the rest of the system cache. ( BZ#1880670 ) The glibc dynamic loader now avoids certain failures of libc.so.6 Previously, when the libc.so.6 shared object ran as a main program (for example, to display the glibc version information), the glibc dynamic loader did not order relocation of libc.so.6 correctly in relation to the objects loaded using the LD_PRELOAD environment variable. Consequently, when LD_PRELOAD was set, invoking libc.so.6 sometimes caused libc.so.6 to terminate unexpectedly with a segmentation fault. This update fixes the bug, and the dynamic loader now correctly handles the relocation of libc.so.6 . As a result, the described problem no longer occurs. (BZ#1882466) The glibc dynamic linker now restricts part of the static thread-local storage space to static TLS allocations Previously, the glibc dynamic linker used all available static thread-local storage (TLS) space for dynamic TLS, on a first come, first served basis. Consequently, loading additional shared objects at run time using the dlopen function sometimes failed, because dynamic TLS allocations had already consumed all available static TLS space. This problem occurred particularly on the 64-bit ARM architecture and IBM Power Systems. Now, the dynamic linker restricts part of the static TLS area to static TLS allocations and does not use this space for dynamic TLS optimizations. As a result, dlopen calls succeed in more cases with the default setting. Applications that require more allocated static TLS than the default setting allows can use a new glibc.rtld.optional_static_tls tunable. ( BZ#1871396 ) The glibc dynamic linker now disables lazy binding for the 64-bit ARM variant calling convention Previously, the glibc dynamic linker did not disable lazy binding for functions using the 64-bit ARM (AArch64) variant calling convention. As a consequence, the dynamic linker corrupted arguments in such function calls, leading to incorrect results or process failures. With this update, the dynamic linker now disables lazy binding in the described scenario, and the function arguments are passed correctly. ( BZ#1893662 ) gcc rebased to version 8.4 The GNU Compiler Collection (GCC) has been rebased to upstream version 8.4, which provides a number of bug fixes over the version. ( BZ#1868446 ) 7.12. Identity Management The Samba wide links feature has been converted to a VFS module Previously, the wide links parameter was part of the smbd service's core functionality. 
Enabling this feature is insecure and, therefore, has been moved into a separate virtual file system (VFS) module named widelinks . For backward compatibility, Samba in RHEL 8.4 automatically loads this module for shares that have wide links = yes set in their configuration. Important: Red Hat recommends not to use the insecure wide links feature. Instead, use a bind mount to mount a part of the file hierarchy to a directory that you shared in Samba. For details about configuring a bind mount, see the Bind mount operation section in the mount(8) man page. To switch from a configuration that uses wide links to bind mount : For every symbolic link that links outside of a share, replace the link with a bind mount . For details, see the Bind mount operation section in the mount(8) man page. Remove all wide links = yes entries from the /etc/samba/smb.conf file. Reload Samba: ( BZ#1925192 ) Network connection idle timeouts are no longer reported as resource errors Previously, Directory Server reported a misleading error that a resource was temporarily unavailable when an idle network connection timed out. With this update, the error macro for network connection idle timeouts has been changed from EAGAIN to ETIMEDOUT , and an accurate error message describing a timeout is written to the Directory Server access logs. ( BZ#1859301 ) Certificates issued by PKI ACME Responder connected to PKI CA no longer fail OCSP validation Previously, the default ACME certificate profile provided by PKI CA contained a sample OCSP URL that did not point to an actual OCSP service. As a consequence, if PKI ACME Responder was configured to use a PKI CA issuer, the certificates issued by the responder could fail OCSP validation. This update removes hard-coded URLs in the ACME certificate profile and adds an upgrade script to fix the profile configuration file in case you did not customize it. ( BZ#1868233 ) 7.13. Graphics infrastructures Display backlight now works reliably on recent Intel laptops Certain recent laptops with Intel CPUs require a proprietary interface to control display backlight. Previously, RHEL did not support the proprietary interface, and attempted to use the VESA interface, which was unreliable on the laptops. As a consequence, RHEL could not control display backlight on those laptops. With this update, RHEL adds support for the proprietary backlight interface, and as a result, display control now works as expected. (BZ#1885406) 7.14. Red Hat Enterprise Linux system roles tests_luks.yml no longer cause partition case fail with NVME disk Previously, NVME disks used a different partition naming convention than the one used by virtio/scsi and the Storage role did not reflect it. As a consequence, running the Storage role with NVME disks resulted in a crash. With this fix, the Storage RHEL system role now obtains the partition name from the blivet module. ( BZ#1865990 ) The selinux RHEL system role no longer uses variable named present Previously, some tasks in the selinux RHEL system role were incorrectly using a variable named present instead of using the string present . As a consequence, the selinux RHEL system role returned an error informing that there is no variable named present . This update fixes this issue, changing those tasks to use the string present . As a result, the selinux RHEL system role works as expected, with no error message. 
( BZ#1926947 ) Logging output no longer fails when the rsyslog-gnutls package is missing A global tls rsyslog-gnutls package is required when the logging RHEL system role is configured to provide secure remote input and secure forward output. Previously, the tls rsyslog-gnutls package was changed to install unconditionally in the version. As a consequence, when the tls rsyslog-gnutls package was not available on the managed nodes, the logging role configuration failed, even if the secure remote input and secure forward output were not included as part of the configuration. This update fixes the issue by examining if the secure connection is configured and checking the global tls logging_pki_files variable. The rsyslog-gnutls package is installed only when the secure connection is configured. As a result, the operation to configure Red Hat Enterprise Virtualization Hypervisor to integrate elasticsearch as the logging output no longer fails due to the missing rsyslog-gnutls package. ( BZ#1927943 ) 7.15. Virtualization Connecting to the RHEL 8 guest console on a Windows Server 2019 host is no longer slowed down Previously, when using RHEL 8 as a guest operating system in multi-user mode on a Windows Server 2019 host, connecting to a console output of the guest took significantly longer than expected. This update improves the performance of VRAM on the Hyper-V hypervisor, which fixes the problem. (BZ#1908893) Displaying multiple monitors of virtual machines that use Wayland is now possible with QXL Previously, using the remote-viewer utility to display more than one monitor of a virtual machine (VM) that was using the Wayland display server caused the VM to become unresponsive and the Waiting for display status message to be displayed indefinitely. The underlying code has been fixed, which prevents the described problem from occurring. (BZ#1642887) 7.16. RHEL in cloud environments GPU-optimized Azure instances now work correctly after hibernation When running RHEL 8 as a guest operating system on a Microsoft Azure instance with a GPU-optimized virtual machine (VM) size, such as NV6, resuming the VM from hibernation previously caused the VM's GPU to work incorrectly. When this occurred, the kernel logged the following message: With this update, the impacted VMs on Microsoft Azure handle their GPUs correctly after resuming, which prevents the problem from occurring. (BZ#1846838) The TX/RX packet counters increase as intended after virtual machines resume from hibernation Previously, the TX/RX packet counters stopped increasing when a RHEL 8 virtual machine using a CX4 VF NIC resumed from hibernation on Microsoft Azure. This update resolves the issue, and the packet counters increase as intended. (BZ#1876527) RHEL 8 virtual machines no longer fail to resume from hibernation on Azure Previously, the GUID of the virtual function (VF), vmbus device , changed when a RHEL 8 virtual machine (VM), with SR-IOV enabled, was hibernated and deallocated on Microsoft Azure. Consequently, when the VM was restarted, it failed to resume and terminated unexpectedly. With this update, the vmbus device VF no longer changes, and the VM resumes from hibernation successfully. (BZ#1876519) Removed a redundant error message in Hyper-V and KVM guests Previously, when a RHEL 8 guest operating system was running in a KVM or Hyper-V virtual machine, the following error message was reported in the /var/log/messages file: This was a redundant error message and has now been removed. 
For more information on the problem, see the Red Hat Knowledgebase solution . (BZ#1919745) 7.17. Containers podman system connection add automatically set the default connection Previously, the podman system connection add command did not automatically set the first connection to be the default connection. As a consequence, you must manually run the podman system connection default <connection_name> command to set the default connection. With this update, the podman system connection add command works as expected. ( BZ#1881894 ) The podman run --pid=host works in a rootless mode Previously, running the podman run --pid=host command as a rootless user did not work. Consequently, an OCI permission error occurred: With this update, the problem has been fixed. (BZ#1940854) | [
"[connection-hostname-only-from-default] hostname.only-from-default=1",
"systemctl reload NetworkManager",
"Bad or missing usercopy whitelist? Kernel memory exposure attempt detected from SLUB object 'dma-kmalloc-192' (offset 0, size 144)! WARNING: CPU: 0 PID: 8519 at mm/usercopy.c:83 usercopy_warn+0xac/0xd8",
"smbcontrol all reload-config",
"hv_irq_unmask() failed: 0x5",
"serial8250: too much work for irq4",
"podman run --rm --pid=host quay.io/libpod/testimage:20200929 cat -v /proc/self/attr/current Error: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: readonly path /proc/bus: operation not permitted: OCI permission denied"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.4_release_notes/bug_fixes |
Data Services Builder Guide | Data Services Builder Guide Red Hat JBoss Data Virtualization 6.4 David Le Sage [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/data_services_builder_guide/index |
Chapter 5. Sending and receiving messages from a topic | Chapter 5. Sending and receiving messages from a topic Send messages to and receive messages from a Kafka cluster installed on OpenShift. This procedure describes how to use Kafka clients to produce and consume messages. You can deploy clients to OpenShift or connect local Kafka clients to the OpenShift cluster. You can use either or both options to test your Kafka cluster installation. For the local clients, you access the Kafka cluster using an OpenShift route connection. You will use the oc command-line tool to deploy and run the Kafka clients. Prerequisites You have created a Kafka cluster on OpenShift . You have created a route for external access to the Kafka cluster running in OpenShift . You can access the latest version of the Red Hat AMQ Streams archive from the AMQ Streams software downloads page . Sending and receiving messages from Kafka clients deployed to the OpenShift cluster Deploy producer and consumer clients to the OpenShift cluster. You can then use the clients to send and receive messages from the Kafka cluster in the same namespace. The deployment uses the AMQ Streams container image for running Kafka. Use the oc command-line interface to deploy a Kafka producer. This example deploys a Kafka producer that connects to the Kafka cluster my-cluster . A topic named my-topic is created. Deploying a Kafka producer to OpenShift oc run kafka-producer -ti \ --image=registry.redhat.io/amq7/amq-streams-kafka-31-rhel8:2.1.0 \ --rm=true \ --restart=Never \ -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 \ --topic my-topic From the command prompt, enter a number of messages. Navigate in the OpenShift web console to the Home > Projects page and select the amq-streams-kafka project you created. From the list of pods, click kafka-producer to view the producer pod details. Select the Logs page to check that the messages you entered are present. Use the oc command-line interface to deploy a Kafka consumer. Deploying a Kafka consumer to OpenShift oc run kafka-consumer -ti \ --image=registry.redhat.io/amq7/amq-streams-kafka-31-rhel8:2.1.0 \ --rm=true \ --restart=Never \ -- bin/kafka-console-consumer.sh \ --bootstrap-server my-cluster-kafka-bootstrap:9092 \ --topic my-topic \ --from-beginning The consumer consumed messages produced to my-topic . From the command prompt, confirm that you see the incoming messages in the consumer console. Navigate in the OpenShift web console to the Home > Projects page and select the amq-streams-kafka project you created. From the list of pods, click kafka-consumer to view the consumer pod details. Select the Logs page to check that the messages you consumed are present. Sending and receiving messages from Kafka clients running locally Use a command-line interface to run a Kafka producer and consumer on a local machine. Download and extract the AMQ Streams <version> installation and example files archive from the AMQ Streams software downloads page . Unzip the file to any destination. Open a command-line interface, and start the Kafka console producer with the topic my-topic and the authentication properties for TLS. Add the properties that are required for accessing the Kafka broker with an OpenShift route . Use the hostname and port 443 for the OpenShift route you are using. Use the password and reference to the truststore you created for the broker certificate. 
Starting a local Kafka producer kafka-console-producer.sh \ --bootstrap-server my-cluster-kafka-listener1-bootstrap-amq-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 \ --producer-property security.protocol=SSL \ --producer-property ssl.truststore.password=password \ --producer-property ssl.truststore.location=client.truststore.jks \ --topic my-topic Type your message into the command-line interface where the producer is running. Press Enter to send the message. Open a new command-line interface tab or window, and start the Kafka console consumer to receive the messages. Use the same connection details as the producer. Starting a local Kafka consumer kafka-console-consumer.sh \ --bootstrap-server my-cluster-kafka-listener1-bootstrap-amq-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 \ --consumer-property security.protocol=SSL \ --consumer-property ssl.truststore.password=password \ --consumer-property ssl.truststore.location=client.truststore.jks \ --topic my-topic --from-beginning Confirm that you see the incoming messages in the consumer console. Press Ctrl+C to exit the Kafka console producer and consumer. | [
"run kafka-producer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-31-rhel8:2.1.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic",
"run kafka-consumer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-31-rhel8:2.1.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning",
"kafka-console-producer.sh --bootstrap-server my-cluster-kafka-listener1-bootstrap-amq-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 --producer-property security.protocol=SSL --producer-property ssl.truststore.password=password --producer-property ssl.truststore.location=client.truststore.jks --topic my-topic",
"kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-listener1-bootstrap-amq-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 --consumer-property security.protocol=SSL --consumer-property ssl.truststore.password=password --consumer-property ssl.truststore.location=client.truststore.jks --topic my-topic --from-beginning"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/getting_started_with_amq_streams_on_openshift/proc-using-amq-streams-str |
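The Kafka entry above uses the console producer and consumer tools shipped with AMQ Streams. For comparison only, here is a hypothetical sketch of the same produce/consume flow using the third-party kafka-python client, which is not part of the guide; the bootstrap route host and the PEM CA file (exported from the truststore) are placeholder assumptions, not values from the documentation.

```python
# Hypothetical sketch using the third-party kafka-python client (not covered by
# the AMQ Streams guide). The route host and CA file below are placeholders.
from kafka import KafkaProducer, KafkaConsumer

bootstrap = "my-cluster-kafka-listener1-bootstrap.example.com:443"  # assumed route host

producer = KafkaProducer(
    bootstrap_servers=bootstrap,
    security_protocol="SSL",
    ssl_cafile="ca.crt",  # broker CA exported from client.truststore.jks (assumption)
)
producer.send("my-topic", b"Hello from a Python client")
producer.flush()
producer.close()

consumer = KafkaConsumer(
    "my-topic",
    bootstrap_servers=bootstrap,
    security_protocol="SSL",
    ssl_cafile="ca.crt",
    auto_offset_reset="earliest",   # read from the start, like --from-beginning
    consumer_timeout_ms=10000,      # stop iterating if no message arrives for 10 s
)
for message in consumer:
    print(message.value.decode("utf-8"))
consumer.close()
```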
Chapter 7. Making persistent changes to the GRUB boot loader | Chapter 7. Making persistent changes to the GRUB boot loader Use the grubby tool to make persistent changes in GRUB. 7.1. Prerequisites You have successfully installed RHEL on your system. You have root permission. 7.2. Listing the default kernel By listing the default kernel, you can find the file name and the index number of the default kernel to make permanent changes to the GRUB boot loader. Procedure To get the file name of the default kernel, enter: To get the index number of the default kernel, enter: 7.3. Viewing the GRUB menu entry for a kernel You can list all the kernel menu entries or view the GRUB menu entry for a specific kernel. Procedure To list all kernel menu entries, enter: To view the GRUB menu entry for a specific kernel, enter: Note Try tab completion to see available kernels within the /boot directory. 7.4. Editing a Kernel Argument You can change a value in an existing kernel argument. For example, you can change the virtual console (screen) font and size. Procedure Change the virtual console font to latarcyrheb-sun with a size of 32 : 7.5. Adding and removing arguments from a GRUB menu entry You can add, remove, or simultaneously add and remove arguments from the GRUB Menu. Procedure To add arguments to a GRUB menu entry, use the --update-kernel option in combination with --args . For example, the following command adds a serial console: The console arguments are attached to the end of the line, and the new console will take precedence over any other configured consoles. To remove arguments from a GRUB menu entry, use the --update-kernel option in combination with --remove-args . For example: This command removes the Red Hat graphical boot argument and enables log messages, that is, verbose mode. To add and remove arguments simultaneously, enter: Verification To review the permanent changes you have made, enter: 7.6. Adding a new boot entry You can add a new boot entry to the boot loader menu entries. Procedure Copy all the kernel arguments from your default kernel to this new kernel entry: Get the list of available boot entries: Create a new boot entry. For example, for the 4.18.0-193.el8.x86_64 kernel, issue the command as follows: Verification Verify that the newly added boot entry is listed among the available boot entries: 7.7. Changing the default boot entry with grubby With the grubby tool, you can change the default boot entry. Procedure To make a persistent change in the kernel designated as the default kernel, enter: 7.8. Updating all kernel menus with the same arguments You can add the same kernel boot arguments to all the kernel menu entries. Procedure To add the same kernel boot arguments to all the kernel menu entries, attach the --update-kernel=ALL parameter. For example, this command adds a serial console to all kernels: Note The --update-kernel parameter also accepts DEFAULT or a comma-separated list of kernel index numbers. 7.9. Changing default kernel options for current and future kernels By using the kernelopts variable, you can change the default kernel options for both current and future kernels. Procedure List the kernel parameters from the kernelopts variable: Make the changes to the kernel command-line parameters. You can add, remove or modify a parameter. For example, to add the debug parameter, enter: Optional: Verify that the new parameter was added to kernelopts : Reboot the system for the changes to take effect. 
Note As an alternative, you can use the grubby command to pass the arguments to current and future kernels: 7.10. Additional resources The /usr/share/doc/grub2-common directory. The info grub2 command. | [
"grubby --default-kernel /boot/vmlinuz-4.18.0-372.9.1.el8.x86_64",
"grubby --default-index 0",
"grubby --info=ALL index=0 kernel=\"/boot/vmlinuz-4.18.0-372.9.1.el8.x86_64\" args=\"ro crashkernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet USDtuned_params zswap.enabled=1\" root=\"/dev/mapper/rhel-root\" initrd=\"/boot/initramfs-4.18.0-372.9.1.el8.x86_64.img USDtuned_initrd\" title=\"Red Hat Enterprise Linux (4.18.0-372.9.1.el8.x86_64) 8.6 (Ootpa)\" id=\"67db13ba8cdb420794ef3ee0a8313205-4.18.0-372.9.1.el8.x86_64\" index=1 kernel=\"/boot/vmlinuz-0-rescue-67db13ba8cdb420794ef3ee0a8313205\" args=\"ro crashkernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet\" root=\"/dev/mapper/rhel-root\" initrd=\"/boot/initramfs-0-rescue-67db13ba8cdb420794ef3ee0a8313205.img\" title=\"Red Hat Enterprise Linux (0-rescue-67db13ba8cdb420794ef3ee0a8313205) 8.6 (Ootpa)\" id=\"67db13ba8cdb420794ef3ee0a8313205-0-rescue\"",
"grubby --info /boot/vmlinuz-4.18.0-372.9.1.el8.x86_64 grubby --info /boot/vmlinuz-4.18.0-372.9.1.el8.x86_64 index=0 kernel=\"/boot/vmlinuz-4.18.0-372.9.1.el8.x86_64\" args=\"ro crashkernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet USDtuned_params zswap.enabled=1\" root=\"/dev/mapper/rhel-root\" initrd=\"/boot/initramfs-4.18.0-372.9.1.el8.x86_64.img USDtuned_initrd\" title=\"Red Hat Enterprise Linux (4.18.0-372.9.1.el8.x86_64) 8.6 (Ootpa)\" id=\"67db13ba8cdb420794ef3ee0a8313205-4.18.0-372.9.1.el8.x86_64\"",
"grubby --args=vconsole.font=latarcyrheb-sun32 --update-kernel /boot/vmlinuz-4.18.0-372.9.1.el8.x86_64",
"grubby --args=console=ttyS0,115200 --update-kernel /boot/vmlinuz-4.18.0-372.9.1.el8.x86_64",
"grubby --remove-args=\"rhgb quiet\" --update-kernel /boot/vmlinuz-4.18.0-372.9.1.el8.x86_64",
"grubby --remove-args=\"rhgb quiet\" --args=console=ttyS0,115200 --update-kernel /boot/vmlinuz-4.18.0-372.9.1.el8.x86_64",
"grubby --info /boot/vmlinuz-4.18.0-372.9.1.el8.x86_64 index=0 kernel=\"/boot/vmlinuz-4.18.0-372.9.1.el8.x86_64\" args=\"ro crashkernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap USDtuned_params zswap.enabled=1 console=ttyS0,115200\" root=\"/dev/mapper/rhel-root\" initrd=\"/boot/initramfs-4.18.0-372.9.1.el8.x86_64.img USDtuned_initrd\" title=\"Red Hat Enterprise Linux (4.18.0-372.9.1.el8.x86_64) 8.6 (Ootpa)\" id=\"67db13ba8cdb420794ef3ee0a8313205-4.18.0-372.9.1.el8.x86_64\"",
"grubby --add-kernel=new_kernel --title=\"entry_title\" --initrd=\"new_initrd\" --copy-default",
"ls -l /boot/loader/entries/ * -rw-r--r--. 1 root root 408 May 27 06:18 /boot/loader/entries/67db13ba8cdb420794ef3ee0a8313205-0-rescue.conf -rw-r--r--. 1 root root 536 Jun 30 07:53 /boot/loader/entries/67db13ba8cdb420794ef3ee0a8313205-4.18.0-372.9.1.el8.x86_64.conf -rw-r--r-- 1 root root 336 Aug 15 15:12 /boot/loader/entries/d88fa2c7ff574ae782ec8c4288de4e85-4.18.0-193.el8.x86_64.conf",
"grubby --grub2 --add-kernel=/boot/vmlinuz-4.18.0-193.el8.x86_64 --title=\"Red Hat Enterprise 8 Test\" --initrd=/boot/initramfs-4.18.0-193.el8.x86_64.img --copy-default",
"ls -l /boot/loader/entries/ * -rw-r--r--. 1 root root 408 May 27 06:18 /boot/loader/entries/67db13ba8cdb420794ef3ee0a8313205-0-rescue.conf -rw-r--r--. 1 root root 536 Jun 30 07:53 /boot/loader/entries/67db13ba8cdb420794ef3ee0a8313205-4.18.0-372.9.1.el8.x86_64.conf -rw-r--r-- 1 root root 287 Aug 16 15:17 /boot/loader/entries/d88fa2c7ff574ae782ec8c4288de4e85-4.18.0-193.el8.x86_64.0~custom.conf -rw-r--r-- 1 root root 287 Aug 16 15:29 /boot/loader/entries/d88fa2c7ff574ae782ec8c4288de4e85-4.18.0-193.el8.x86_64.conf",
"grubby --set-default /boot/vmlinuz-4.18.0-372.9.1.el8.x86_64 The default is /boot/loader/entries/67db13ba8cdb420794ef3ee0a8313205-4.18.0-372.9.1.el8.x86_64.conf with index 0 and kernel /boot/vmlinuz-4.18.0-372.9.1.el8.x86_64",
"grubby --update-kernel=ALL --args=console=ttyS0,115200",
"grub2-editenv - list | grep kernelopts kernelopts=root=/dev/mapper/rhel-root ro crashkernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet",
"grub2-editenv - set \"USD(grub2-editenv - list | grep kernelopts) < debug >\"",
"grub2-editenv - list | grep kernelopts kernelopts=root=/dev/mapper/rhel-root ro crashkernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet debug",
"grubby --update-kernel ALL --args=\"< PARAMETER >\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/assembly_making-persistent-changes-to-the-grub-boot-loader_managing-monitoring-and-updating-the-kernel |
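As a quick illustration of the grubby workflow documented in the chapter above, the following sketch (an editorial addition, not part of the original procedure) removes the debug parameter again from every kernel menu entry and then checks the arguments of the example kernel. It uses only the --update-kernel=ALL, --remove-args, and --info options already shown above.

# grubby --update-kernel=ALL --remove-args="debug"
# grubby --info /boot/vmlinuz-4.18.0-372.9.1.el8.x86_64 | grep ^args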
Chapter 17. Replacing Controller nodes | Chapter 17. Replacing Controller nodes In certain circumstances a Controller node in a high availability cluster might fail. In these situations, you must remove the node from the cluster and replace it with a new Controller node. Complete the steps in this section to replace a Controller node. The Controller node replacement process involves running the openstack overcloud deploy command to update the overcloud with a request to replace a Controller node. Important The following procedure applies only to high availability environments. Do not use this procedure if you are using only one Controller node. 17.1. Preparing for Controller replacement Before you replace an overcloud Controller node, it is important to check the current state of your Red Hat OpenStack Platform environment. Checking the current state can help avoid complications during the Controller replacement process. Use the following list of preliminary checks to determine if it is safe to perform a Controller node replacement. Run all commands for these checks on the undercloud. Procedure Check the current status of the overcloud stack on the undercloud: The overcloud stack and its subsequent child stacks should have either a CREATE_COMPLETE or UPDATE_COMPLETE . Install the database client tools: Configure root user access to the database: Perform a backup of the undercloud databases: Check that your undercloud contains 10 GB free storage to accommodate for image caching and conversion when you provision the new node: If you are reusing the IP address for the new controller node, ensure that you delete the port used by the old controller: Check the status of Pacemaker on the running Controller nodes. For example, if 192.168.0.47 is the IP address of a running Controller node, use the following command to view the Pacemaker status: The output shows all services that are running on the existing nodes and that are stopped on the failed node. Check the following parameters on each node of the overcloud MariaDB cluster: wsrep_local_state_comment: Synced wsrep_cluster_size: 2 Use the following command to check these parameters on each running Controller node. In this example, the Controller node IP addresses are 192.168.0.47 and 192.168.0.46: Check the RabbitMQ status. For example, if 192.168.0.47 is the IP address of a running Controller node, use the following command to view the RabbitMQ status: The running_nodes key should show only the two available nodes and not the failed node. If fencing is enabled, disable it. For example, if 192.168.0.47 is the IP address of a running Controller node, use the following command to check the status of fencing: Run the following command to disable fencing: Check the Compute services are active on the director node: The output should show all non-maintenance mode nodes as up . Ensure all undercloud containers are running: Stop all the nova_* containers running on the failed Controller node: Optional: If you are using the Bare Metal Service (ironic) as the virt driver, you must manually update the service entries in your cell database for any bare metal instances whose instances.host is set to the controller that you are removing. Contact Red Hat Support for assistance. Note This manual update of the cell database when using Bare Metal Service (ironic) as the virt driver is a temporary workaround to ensure the nodes are rebalanced, until BZ2017980 is complete. 17.2. 
Removing a Ceph Monitor daemon If your Controller node is running a Ceph monitor service, complete the following steps to remove the ceph-mon daemon. Note Adding a new Controller node to the cluster also adds a new Ceph monitor daemon automatically. Procedure Connect to the Controller node that you want to replace and become the root user: Note If the Controller node is unreachable, skip steps 1 and 2 and continue the procedure at step 3 on any working Controller node. Stop the monitor: For example: Disconnect from the Controller node that you want to replace. Connect to one of the existing Controller nodes. Remove the monitor from the cluster: On all Controller nodes, remove the v1 and v2 monitor entries from /etc/ceph/ceph.conf . For example, if you remove controller-1, then remove the IPs and hostname for controller-1. Before: After: Note Director updates the ceph.conf file on the relevant overcloud nodes when you add the replacement Controller node. Normally, director manages this configuration file exclusively and you should not edit the file manually. However, you can edit the file manually if you want to ensure consistency in case the other nodes restart before you add the new node. (Optional) Archive the monitor data and save the archive on another server: 17.3. Preparing the cluster for Controller node replacement Before you replace the old node, you must ensure that Pacemaker is not running on the node and then remove that node from the Pacemaker cluster. Procedure To view the list of IP addresses for the Controller nodes, run the following command: If the old node is still reachable, log in to one of the remaining nodes and stop pacemaker on the old node. For this example, stop pacemaker on overcloud-controller-1: Note In case the old node is physically unavailable or stopped, it is not necessary to perform the operation, as pacemaker is already stopped on that node. After you stop Pacemaker on the old node, delete the old node from the pacemaker cluster. The following example command logs in to overcloud-controller-0 to remove overcloud-controller-1 : If the node that that you want to replace is unreachable (for example, due to a hardware failure), run the pcs command with additional --skip-offline and --force options to forcibly remove the node from the cluster: After you remove the old node from the pacemaker cluster, remove the node from the list of known hosts in pacemaker: You can run this command whether the node is reachable or not. To ensure that the new Controller node uses the correct STONITH fencing device after the replacement, delete the old devices from the node by entering the following command: Replace <stonith_resource_name> with the name of the STONITH resource that corresponds to the old node. The resource name uses the the format <resource_agent>-<host_mac> . You can find the resource agent and the host MAC address in the FencingConfig section of the fencing.yaml file. The overcloud database must continue to run during the replacement procedure. To ensure that Pacemaker does not stop Galera during this procedure, select a running Controller node and run the following command on the undercloud with the IP address of the Controller node: 17.4. Replacing a Controller node To replace a Controller node, identify the index of the node that you want to replace. If the node is a virtual node, identify the node that contains the failed disk and restore the disk from a backup. 
Ensure that the MAC address of the NIC used for PXE boot on the failed server remains the same after disk replacement. If the node is a bare metal node, replace the disk, prepare the new disk with your overcloud configuration, and perform a node introspection on the new hardware. If the node is a part of a high availability cluster with fencing, you might need recover the Galera nodes separately. For more information, see the article How Galera works and how to rescue Galera clusters in the context of Red Hat OpenStack Platform . Complete the following example steps to replace the overcloud-controller-1 node with the overcloud-controller-3 node. The overcloud-controller-3 node has the ID 75b25e9a-948d-424a-9b3b-f0ef70a6eacf . Important To replace the node with an existing bare metal node, enable maintenance mode on the outgoing node so that the director does not automatically reprovision the node. Procedure Source the stackrc file: Identify the index of the overcloud-controller-1 node: Identify the bare metal node associated with the instance: Set the node to maintenance mode: If the Controller node is a virtual node, run the following command on the Controller host to replace the virtual disk from a backup: Replace <VIRTUAL_DISK_BACKUP> with the path to the backup of the failed virtual disk, and replace <VIRTUAL_DISK> with the name of the virtual disk that you want to replace. If you do not have a backup of the outgoing node, you must use a new virtualized node. If the Controller node is a bare metal node, complete the following steps to replace the disk with a new bare metal disk: Replace the physical hard drive or solid state drive. Prepare the node with the same configuration as the failed node. List unassociated nodes and identify the ID of the new node: Tag the new node with the control profile: 17.5. Replacing a bootstrap Controller node If you want to replace the Controller node that you use for bootstrap operations and keep the node name, complete the following steps to set the name of the bootstrap Controller node after the replacement process. Procedure Find the name of the bootstrap Controller node: Replace <controller_ip> with the IP address of any active Controller node. Check if your environment files include the ExtraConfig section. If the ExtraConfig parameter does not exist, create the following environment file ~/templates/bootstrap-controller.yaml and add the following content: Replace <node_name> with the name of an existing Controller node that you want to use in bootstrap operations after the replacement process. If your environment files already include the ExtraConfig parameter, add only the lines that set the pacemaker_short_bootstrap_node_name and mysql_short_bootstrap_node_name parameters. Follow the steps to trigger the Controller node replacement and include the environment files in the overcloud deploy command . For more information, see Triggering the Controller node replacement . For information about troubleshooting the bootstrap Controller node replacement, see the article Replacement of the first Controller node fails at step 1 if the same hostname is used for a new node . 17.6. Preserving hostnames when replacing nodes that use predictable IP addresses and HostNameMap If you configured your overcloud to use predictable IP addresses, and HostNameMap to map heat-based hostnames to the hostnames of pre-provisioned nodes, then you must configure your overcloud to map the new replacement node index to an IP address and hostname. 
Procedure Log in to the undercloud as the stack user. Source the stackrc file: Retrieve the physical_resource_id and the removed_rsrc_list for the resource you want to replace: Replace <stack> with the name of the stack the resource belongs to, for example, overcloud . Replace <role> with the name of the role that you want to replace the node for, for example, Compute . Example output: +------------------------+-----------------------------------------------------------+ | Field | Value | +------------------------+-----------------------------------------------------------+ | attributes | {u'attributes': None, u'refs': None, u'refs_map': None, | | | u'removed_rsrc_list': [u'2', u'3']} | 1 | creation_time | 2017-09-05T09:10:42Z | | description | | | links | [{u'href': u'http://192.168.24.1:8004/v1/bd9e6da805594de9 | | | 8d4a1d3a3ee874dd/stacks/overcloud/1c7810c4-8a1e- | | | 4d61-a5d8-9f964915d503/resources/Compute', u'rel': | | | u'self'}, {u'href': u'http://192.168.24.1:8004/v1/bd9e6da | | | 805594de98d4a1d3a3ee874dd/stacks/overcloud/1c7810c4-8a1e- | | | 4d61-a5d8-9f964915d503', u'rel': u'stack'}, {u'href': u'h | | | ttp://192.168.24.1:8004/v1/bd9e6da805594de98d4a1d3a3ee874 | | | dd/stacks/overcloud-Compute-zkjccox63svg/7632fb0b- | | | 80b1-42b3-9ea7-6114c89adc29', u'rel': u'nested'}] | | logical_resource_id | Compute | | physical_resource_id | 7632fb0b-80b1-42b3-9ea7-6114c89adc29 | | required_by | [u'AllNodesDeploySteps', | | | u'ComputeAllNodesValidationDeployment', | | | u'AllNodesExtraConfig', u'ComputeIpListMap', | | | u'ComputeHostsDeployment', u'UpdateWorkflow', | | | u'ComputeSshKnownHostsDeployment', u'hostsConfig', | | | u'SshKnownHostsConfig', u'ComputeAllNodesDeployment'] | | resource_name | Compute | | resource_status | CREATE_COMPLETE | | resource_status_reason | state changed | | resource_type | OS::Heat::ResourceGroup | | updated_time | 2017-09-05T09:10:42Z | +------------------------+-----------------------------------------------------------+ 1 The removed_rsrc_list lists the indexes of nodes that have already been removed for the resource. Retrieve the resource_name to determine the maximum index that heat has applied to a node for this resource: Replace <physical_resource_id> with the ID you retrieved in step 3. Use the resource_name and the removed_rsrc_list to determine the index that heat will apply to a new node: If removed_rsrc_list is empty, then the index will be (current_maximum_index) + 1. If removed_rsrc_list includes the value (current_maximum_index) + 1, then the index will be the available index. Retrieve the ID of the replacement bare-metal node: Update the capability of the replacement node with the new index: Replace <role> with the name of the role that you want to replace the node for, for example, compute . Replace <index> with the index calculated in step 5. Replace <node> with the ID of the bare metal node. The Compute scheduler uses the node capability to match the node on deployment. Assign a hostname to the new node by adding the index to the HostnameMap configuration, for example: 1 Node that you are removing and replacing with the new node. 2 New node. 3 Node that you are removing and replacing with the new node. 4 New node. Note Do not delete the mapping for the removed node from HostnameMap . Add the IP address for the replacement node to the end of each network IP address list in your network IP address mapping file, ips-from-pool-all.yaml . 
In the following example, the IP address for the new index, overcloud-controller-3 , is added to the end of the IP address list for each ControllerIPs network, and is assigned the same IP address as overcloud-controller-1 because it replaces overcloud-controller-1 . The IP address for the new index, overcloud-compute-8 , is also added to the end of the IP address list for each ComputeIPs network, and is assigned the same IP address as the index it replaces, overcloud-compute-3 : 1 IP address assigned to index 0, host name overcloud-controller-prod-123-0 . 2 IP address assigned to index 1, host name overcloud-controller-prod-456-0 . This node is replaced by index 3. Do not remove this entry. 3 IP address assigned to index 2, host name overcloud-controller-prod-789-0 . 4 IP address assigned to index 3, host name overcloud-controller-prod-456-0 . This is the new node that replaces index 1. 5 IP address assigned to index 0, host name overcloud-compute-0 . 6 IP address assigned to index 1, host name overcloud-compute-3 . This node is replaced by index 2. Do not remove this entry. 7 IP address assigned to index 2, host name overcloud-compute-8 . This is the new node that replaces index 1. 17.7. Triggering the Controller node replacement Complete the following steps to remove the old Controller node and replace it with a new Controller node. Procedure Determine the UUID of the Controller node that you want to remove and store it in the <NODEID> variable. Ensure that you replace <node_name> with the name of the node that you want to remove: To identify the Heat resource ID, enter the following command: Create the following environment file ~/templates/remove-controller.yaml and include the node index of the Controller node that you want to remove: Enter the overcloud deployment command, and include the remove-controller.yaml environment file and any other environment files relevant to your environment: Note Include -e ~/templates/remove-controller.yaml only for this instance of the deployment command. Remove this environment file from subsequent deployment operations. Include ~/templates/bootstrap-controller.yaml if you are replacing a bootstrap Controller node and want to keep the node name. For more information, see Replacing a bootstrap Controller node . Director removes the old node, creates a new node, and updates the overcloud stack. You can check the status of the overcloud stack with the following command: When the deployment command completes, confirm that the old node is replaced with the new node: The new node now hosts running control plane services. 17.8. Cleaning up after Controller node replacement After you complete the node replacement, complete the following steps to finalize the Controller cluster. Procedure Log into a Controller node. Enable Pacemaker management of the Galera cluster and start Galera on the new node: Perform a final status check to ensure that the services are running correctly: Note If any services have failed, use the pcs resource refresh command to resolve and restart the failed services. Exit to director: Source the overcloudrc file so that you can interact with the overcloud: Check the network agents in your overcloud environment: If any agents appear for the old node, remove them: If necessary, add your router to the L3 agent host on the new node. Use the following example command to add a router named r1 to the L3 agent using the UUID 2d1c1dc1-d9d4-4fa9-b2c8-f29cd1a649d4: Clean the cinder services. 
List the cinder services: Log in to a controller node, connect to the cinder-api container and use the cinder-manage service remove command to remove leftover services: Clean the RabbitMQ cluster. Log into a Controller node. Use the podman exec command to launch bash, and verify the status of the RabbitMQ cluster: Use the rabbitmqctl command to forget the replaced controller node: If you replaced a bootstrap Controller node, you must remove the environment file ~/templates/bootstrap-controller.yaml after the replacement process, or delete the pacemaker_short_bootstrap_node_name and mysql_short_bootstrap_node_name parameters from your existing environment file. This step prevents director from attempting to override the Controller node name in subsequent replacements. For more information, see Replacing a bootstrap controller node . | [
"source stackrc (undercloud)USD openstack stack list --nested",
"(undercloud)USD sudo dnf -y install mariadb",
"(undercloud)USD sudo cp /var/lib/config-data/puppet-generated/mysql/root/.my.cnf /root/.",
"(undercloud)USD mkdir /home/stack/backup (undercloud)USD sudo mysqldump --all-databases --quick --single-transaction | gzip > /home/stack/backup/dump_db_undercloud.sql.gz",
"(undercloud)USD df -h",
"(undercloud)USD openstack port delete <port>",
"(undercloud)USD ssh [email protected] 'sudo pcs status'",
"(undercloud)USD for i in 192.168.0.46 192.168.0.47 ; do echo \"*** USDi ***\" ; ssh heat-admin@USDi \"sudo podman exec \\USD(sudo podman ps --filter name=galera-bundle -q) mysql -e \\\"SHOW STATUS LIKE 'wsrep_local_state_comment'; SHOW STATUS LIKE 'wsrep_cluster_size';\\\"\"; done",
"(undercloud)USD ssh [email protected] \"sudo podman exec \\USD(sudo podman ps -f name=rabbitmq-bundle -q) rabbitmqctl cluster_status\"",
"(undercloud)USD ssh [email protected] \"sudo pcs property show stonith-enabled\"",
"(undercloud)USD ssh [email protected] \"sudo pcs property set stonith-enabled=false\"",
"(undercloud)USD openstack hypervisor list",
"(undercloud)USD sudo podman ps",
"[root@controller-0 ~]USD sudo systemctl stop tripleo_nova_api.service [root@controller-0 ~]USD sudo systemctl stop tripleo_nova_api_cron.service [root@controller-0 ~]USD sudo systemctl stop tripleo_nova_compute.service [root@controller-0 ~]USD sudo systemctl stop tripleo_nova_conductor.service [root@controller-0 ~]USD sudo systemctl stop tripleo_nova_metadata.service [root@controller-0 ~]USD sudo systemctl stop tripleo_nova_placement.service [root@controller-0 ~]USD sudo systemctl stop tripleo_nova_scheduler.service",
"ssh [email protected] sudo su -",
"systemctl stop ceph-mon@<monitor_hostname>",
"systemctl stop ceph-mon@overcloud-controller-1",
"ssh [email protected] sudo su -",
"sudo podman exec -it ceph-mon-controller-0 ceph mon remove overcloud-controller-1",
"mon host = [v2:172.18.0.21:3300,v1:172.18.0.21:6789],[v2:172.18.0.22:3300,v1:172.18.0.22:6789],[v2:172.18.0.24:3300,v1:172.18.0.24:6789] mon initial members = overcloud-controller-2,overcloud-controller-1,overcloud-controller-0",
"mon host = [v2:172.18.0.21:3300,v1:172.18.0.21:6789],[v2:172.18.0.24:3300,v1:172.18.0.24:6789] mon initial members = overcloud-controller-2,overcloud-controller-0",
"mv /var/lib/ceph/mon/<cluster>-<daemon_id> /var/lib/ceph/mon/removed-<cluster>-<daemon_id>",
"(undercloud) USD openstack server list -c Name -c Networks +------------------------+-----------------------+ | Name | Networks | +------------------------+-----------------------+ | overcloud-compute-0 | ctlplane=192.168.0.44 | | overcloud-controller-0 | ctlplane=192.168.0.47 | | overcloud-controller-1 | ctlplane=192.168.0.45 | | overcloud-controller-2 | ctlplane=192.168.0.46 | +------------------------+-----------------------+",
"(undercloud) USD ssh [email protected] \"sudo pcs status | grep -w Online | grep -w overcloud-controller-1\" (undercloud) USD ssh [email protected] \"sudo pcs cluster stop overcloud-controller-1\"",
"(undercloud) USD ssh [email protected] \"sudo pcs cluster node remove overcloud-controller-1\"",
"(undercloud) USD ssh [email protected] \"sudo pcs cluster node remove overcloud-controller-1 --skip-offline --force\"",
"(undercloud) USD ssh [email protected] \"sudo pcs host deauth overcloud-controller-1\"",
"(undercloud) USD ssh [email protected] \"sudo pcs stonith delete <stonith_resource_name>\"",
"(undercloud) USD ssh [email protected] \"sudo pcs resource unmanage galera-bundle\"",
"source ~/stackrc",
"INSTANCE=USD(openstack server list --name overcloud-controller-1 -f value -c ID)",
"NODE=USD(openstack baremetal node list -f csv --quote minimal | grep USDINSTANCE | cut -f1 -d,)",
"openstack baremetal node maintenance set USDNODE",
"cp <VIRTUAL_DISK_BACKUP> /var/lib/libvirt/images/<VIRTUAL_DISK>",
"openstack baremetal node list --unassociated",
"(undercloud) USD openstack baremetal node set --property capabilities='profile:control,boot_option:local' 75b25e9a-948d-424a-9b3b-f0ef70a6eacf",
"ssh heat-admin@<controller_ip> \"sudo hiera -c /etc/puppet/hiera.yaml pacemaker_short_bootstrap_node_name\"",
"parameter_defaults: ExtraConfig: pacemaker_short_bootstrap_node_name: <node_name> mysql_short_bootstrap_node_name: <node_name>",
"source ~/stackrc",
"(undercloud)USD openstack stack resource show <stack> <role>",
"+------------------------+-----------------------------------------------------------+ | Field | Value | +------------------------+-----------------------------------------------------------+ | attributes | {u'attributes': None, u'refs': None, u'refs_map': None, | | | u'removed_rsrc_list': [u'2', u'3']} | 1 | creation_time | 2017-09-05T09:10:42Z | | description | | | links | [{u'href': u'http://192.168.24.1:8004/v1/bd9e6da805594de9 | | | 8d4a1d3a3ee874dd/stacks/overcloud/1c7810c4-8a1e- | | | 4d61-a5d8-9f964915d503/resources/Compute', u'rel': | | | u'self'}, {u'href': u'http://192.168.24.1:8004/v1/bd9e6da | | | 805594de98d4a1d3a3ee874dd/stacks/overcloud/1c7810c4-8a1e- | | | 4d61-a5d8-9f964915d503', u'rel': u'stack'}, {u'href': u'h | | | ttp://192.168.24.1:8004/v1/bd9e6da805594de98d4a1d3a3ee874 | | | dd/stacks/overcloud-Compute-zkjccox63svg/7632fb0b- | | | 80b1-42b3-9ea7-6114c89adc29', u'rel': u'nested'}] | | logical_resource_id | Compute | | physical_resource_id | 7632fb0b-80b1-42b3-9ea7-6114c89adc29 | | required_by | [u'AllNodesDeploySteps', | | | u'ComputeAllNodesValidationDeployment', | | | u'AllNodesExtraConfig', u'ComputeIpListMap', | | | u'ComputeHostsDeployment', u'UpdateWorkflow', | | | u'ComputeSshKnownHostsDeployment', u'hostsConfig', | | | u'SshKnownHostsConfig', u'ComputeAllNodesDeployment'] | | resource_name | Compute | | resource_status | CREATE_COMPLETE | | resource_status_reason | state changed | | resource_type | OS::Heat::ResourceGroup | | updated_time | 2017-09-05T09:10:42Z | +------------------------+-----------------------------------------------------------+",
"(undercloud)USD openstack stack resource list <physical_resource_id>",
"(undercloud)USD openstack baremetal node list",
"openstack baremetal node set --property capabilities='node:<role>-<index>,boot_option:local' <node>",
"parameter_defaults: ControllerSchedulerHints: 'capabilities:node': 'controller-%index%' ComputeSchedulerHints: 'capabilities:node': 'compute-%index%' HostnameMap: overcloud-controller-0: overcloud-controller-prod-123-0 overcloud-controller-1: overcloud-controller-prod-456-0 1 overcloud-controller-2: overcloud-controller-prod-789-0 overcloud-controller-3: overcloud-controller-prod-456-0 2 overcloud-compute-0: overcloud-compute-prod-abc-0 overcloud-compute-3: overcloud-compute-prod-abc-3 3 overcloud-compute-8: overcloud-compute-prod-abc-3 4 .",
"parameter_defaults: ControllerIPs: internal_api: - 192.168.1.10 1 - 192.168.1.11 2 - 192.168.1.12 3 - 192.168.1.11 4 storage: - 192.168.2.10 - 192.168.2.11 - 192.168.2.12 - 192.168.2.11 ComputeIPs: internal_api: - 172.17.0.10 5 - 172.17.0.11 6 - 172.17.0.11 7 storage: - 172.17.0.10 - 172.17.0.11 - 172.17.0.11",
"(undercloud)[stack@director ~]USD NODEID=USD(openstack server list -f value -c ID --name <node_name>)",
"(undercloud)[stack@director ~]USD openstack stack resource show overcloud ControllerServers -f json -c attributes | jq --arg NODEID \"USDNODEID\" -c '.attributes.value | keys[] as USDk | if .[USDk] == USDNODEID then \"Node index \\(USDk) for \\(.[USDk])\" else empty end'",
"parameters: ControllerRemovalPolicies: [{'resource_list': ['<node_index>']}]",
"(undercloud) USD openstack overcloud deploy --templates -e /home/stack/templates/remove-controller.yaml [OTHER OPTIONS]",
"(undercloud)USD openstack stack list --nested",
"(undercloud) USD openstack server list -c Name -c Networks +------------------------+-----------------------+ | Name | Networks | +------------------------+-----------------------+ | overcloud-compute-0 | ctlplane=192.168.0.44 | | overcloud-controller-0 | ctlplane=192.168.0.47 | | overcloud-controller-2 | ctlplane=192.168.0.46 | | overcloud-controller-3 | ctlplane=192.168.0.48 | +------------------------+-----------------------+",
"[heat-admin@overcloud-controller-0 ~]USD sudo pcs resource refresh galera-bundle [heat-admin@overcloud-controller-0 ~]USD sudo pcs resource manage galera-bundle",
"[heat-admin@overcloud-controller-0 ~]USD sudo pcs status",
"[heat-admin@overcloud-controller-0 ~]USD exit",
"source ~/overcloudrc",
"(overcloud) USD openstack network agent list",
"(overcloud) USD for AGENT in USD(openstack network agent list --host overcloud-controller-1.localdomain -c ID -f value) ; do openstack network agent delete USDAGENT ; done",
"(overcloud) USD openstack network agent add router --l3 2d1c1dc1-d9d4-4fa9-b2c8-f29cd1a649d4 r1",
"(overcloud) USD openstack volume service list",
"[heat-admin@overcloud-controller-0 ~]USD sudo podman exec -it cinder_api cinder-manage service remove cinder-backup <host> [heat-admin@overcloud-controller-0 ~]USD sudo podman exec -it cinder_api cinder-manage service remove cinder-scheduler <host>",
"[heat-admin@overcloud-controller-0 ~]USD podman exec -it rabbitmq-bundle-podman-0 bash [heat-admin@overcloud-controller-0 ~]USD rabbitmqctl cluster_status",
"[heat-admin@overcloud-controller-0 ~]USD rabbitmqctl forget_cluster_node <node_name>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_replacing-controller-nodes |
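A minimal post-replacement spot check, added here as an illustrative sketch rather than an extra required step. It assumes the example values used throughout this chapter: the heat-admin user and the new overcloud-controller-3 node on 192.168.0.48 from the server list output above.

(undercloud) $ openstack server list -c Name -c Networks
(undercloud) $ ssh [email protected] "sudo pcs status | grep -w Online"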
Chapter 6. Runtime verification of the real-time kernel | Chapter 6. Runtime verification of the real-time kernel Runtime verification is a lightweight and rigorous method to check the behavioral equivalence between system events and their formal specifications. Runtime verification has monitors integrated in the kernel that attach to tracepoints . If a system state deviates from defined specifications, the runtime verification program activates reactors to inform or enable a reaction, such as capturing the event in log files or a system shutdown to prevent failure propagation in an extreme case. 6.1. Runtime monitors and reactors The runtime verification (RV) monitors are encapsulated inside the RV monitor abstraction and coordinate between the defined specifications and the kernel trace to capture runtime events in trace files. The RV monitor includes: Reference Model is a reference model of the system. Monitor Instance(s) is a set of instance for a monitor, such as a per-CPU monitor or a per-task monitor. Helper functions that connect the monitor to the system. In addition to verifying and monitoring a system at runtime, you can enable a response to an unexpected system event. The forms of reaction can vary from capturing an event in the trace file to initiating an extreme reaction, such as a shut-down to avoid a system failure on safety critical systems. Reactors are reaction methods available for RV monitors to define reactions to system events as required. By default, monitors provide a trace output of the actions. 6.2. Online runtime monitors Runtime verification (RV) monitors are classified into following types: Online monitors capture events in the trace while the system is running. Online monitors are synchronous if the event processing is attached to the system execution. This will block the system during the event monitoring. Online monitors are asynchronous, if the execution is detached from the system and is run on a different machine. This however requires saved execution log files. Offline monitors process traces that are generated after the events have occurred. Offline runtime verification capture information by reading the saved trace log files generally from a permanent storage. Offline monitors can work only if you have the events saved in a file. 6.3. The user interface The user interface is located at /sys/kernel/tracing/rv and resembles the tracing interface. The user interface includes the mentioned files and folders. Settings Description Example commands available_monitors Displays the available monitors one per line. # cat available_monitors available_reactors Display the available reactors one per line. # cat available_reactors enabled_monitors Displays enabled monitors one per line. You can enable more than one monitor at the same time. Writing a monitor name with a '!' prefix disables the monitor and truncating the file disables all enabled monitors. # cat enabled_monitors # echo wip > enabled_monitors # echo '!wip'>> enabled_monitors monitors/ The monitors/ directory resembles the events directory on the tracefs file system with each monitor having its own directory inside monitors/ . # cd monitors/wip/ monitors/MONITOR/reactors Lists available reactors with the select reaction for a specific MONITOR inside "[]". The default is the no operation ( nop ) reactor. Writing the name of a reactor integrates it to a specific MONITOR. # cat monitors/wip/reactors monitoring_on Initiates the tracing_on and the tracing_off switcher in the trace interface. 
Writing 0 stops the monitoring and 1 continues the monitoring. The switcher does not disable enabled monitors but stops the per-entity monitors from monitoring the events. reacting_on Enables reactors. Writing 0 disables reactions and 1 enables reactions. monitors/MONITOR/desc Displays the Monitor description monitors/MONITOR/enable Displays the current status of the Monitor. Writing 0 disables the Monitor and 1 enables the Monitor. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/runtime-verification-of-the-real-time-kernel_optimizing-rhel9-for-real-time-for-low-latency-operation |
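The interface files described above can be exercised directly from a shell. The following sketch, offered as an illustration rather than part of the reference, enables the wip monitor, attaches a reactor, and switches monitoring and reactions on; the printk reactor name is an assumption, so check available_reactors on your kernel first.

# cd /sys/kernel/tracing/rv
# cat available_reactors
# echo wip > enabled_monitors
# echo printk > monitors/wip/reactors
# echo 1 > monitoring_on
# echo 1 > reacting_on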
Chapter 10. Understanding and creating service accounts | Chapter 10. Understanding and creating service accounts 10.1. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. For example, service accounts can allow: Replication controllers to make API calls to create or delete pods. Applications inside containers to make API calls for discovery purposes. External applications to make API calls for monitoring or integration purposes. Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. Each service account automatically contains two secrets: An API token Credentials for the OpenShift Container Registry The generated API token and registry credentials do not expire, but you can revoke them by deleting the secret. When you delete the secret, a new one is automatically generated to take its place. 10.2. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-token-f4khf robot-dockercfg-qzbhb Tokens: robot-token-f4khf robot-token-z8h44 10.3. Examples of granting roles to service accounts You can grant roles to service accounts in the same way that you grant roles to a regular user account. You can modify the service accounts for the current project. For example, to add the view role to the robot service account in the top-secret project: USD oc policy add-role-to-user view system:serviceaccount:top-secret:robot Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret You can also grant access to a specific service account in a project. 
For example, from the project to which the service account belongs, use the -z flag and specify the <service_account_name> USD oc policy add-role-to-user <role_name> -z <service_account_name> Important If you want to grant access to a specific service account in a project, use the -z flag. Using this flag helps prevent typos and ensures that access is granted to only the specified service account. Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name> To modify a different namespace, you can use the -n option to indicate the project namespace it applies to, as shown in the following examples. For example, to allow all service accounts in all projects to view resources in the my-project project: USD oc policy add-role-to-group view system:serviceaccounts -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts To allow all service accounts in the managers project to edit resources in the my-project project: USD oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers | [
"system:serviceaccount:<project>:<name>",
"oc get sa",
"NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d",
"oc create sa <service_account_name> 1",
"serviceaccount \"robot\" created",
"apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>",
"oc describe sa robot",
"Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-token-f4khf robot-dockercfg-qzbhb Tokens: robot-token-f4khf robot-token-z8h44",
"oc policy add-role-to-user view system:serviceaccount:top-secret:robot",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret",
"oc policy add-role-to-user <role_name> -z <service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name>",
"oc policy add-role-to-group view system:serviceaccounts -n my-project",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts",
"oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/authentication_and_authorization/understanding-and-creating-service-accounts |
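As a hedged illustration of how the token for the robot service account might be consumed, assuming the example secret name robot-token-f4khf and namespace project1 from the output above (both will differ on your cluster), you could extract the token and log in with it; <token> and <cluster_domain> are placeholders.

$ oc get secret robot-token-f4khf -n project1 -o jsonpath='{.data.token}' | base64 -d
$ oc login --token=<token> --server=https://api.<cluster_domain>:6443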
Chapter 7. Known issues | Chapter 7. Known issues This release includes the following known issue: Issue Description JWS-3092 Race condition in stop under stress: Context checks block Container stop | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_release_notes/known_issues |
Chapter 3. The gofmt formatting tool | Chapter 3. The gofmt formatting tool Instead of a style guide, the Go programming language uses the gofmt code formatting tool. gofmt automatically formats your code according to the Go layout rules. 3.1. Prerequisites Go Toolset is installed. For more information, see Installing Go Toolset . 3.2. Formatting code You can use the gofmt formatting tool to format code in a given path. When the path leads to a single file, the changes apply only to the file. When the path leads to a directory, all .go files in the directory are processed. Procedure To format your code in a given path, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to format. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to format. Note To print the formatted code to standard output instead of writing it to the original file, omit the -w option. 3.3. Previewing changes to code You can use the gofmt formatting tool to preview changes done by formatting code in a given path. The output in unified diff format is printed to standard output. Procedure To show differences in your code in a given path, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to compare. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to compare. 3.4. Simplifying code You can use the gofmt formatting tool to simplify your code. Procedure To simplify anf format code in a given path, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to simplify. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to simplify. 3.5. Refactoring code You can use the gofmt formatting tool to refactor your code by applying arbitrary substitutions. Procedure To refactor and format your code in a given path, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to refactor and < rewrite_rule > with the rule you want it to be rewritten by. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to refactor and < rewrite_rule > with the rule you want it to be rewritten by. 3.6. Additional resources The official gofmt documentation . | [
"gofmt -w < code_path >",
"gofmt -w < code_path >",
"gofmt -w -d < code_path >",
"gofmt -w -d < code_path >",
"gofmt -s -w < code_path >",
"gofmt -s -w < code_path >",
"gofmt -r -w < rewrite_rule > < code_path >",
"gofmt -r -w < rewrite_rule > < code_path >"
] | https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_go_1.20.10_toolset/assembly_the-gofmt-formatting-tool_using-go-toolset |
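A concrete rewrite rule makes the -r option easier to picture. The following sketch uses the slice-simplification rule from the upstream gofmt documentation; ./myproject is a hypothetical directory, and the single-character lowercase identifiers in the rule (a, b) act as pattern variables.

$ gofmt -r 'a[b:len(a)] -> a[b:]' -w ./myproject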
Chapter 4. General Updates | Chapter 4. General Updates In-place upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 An in-place upgrade offers a way of upgrading a system to a new major release of Red Hat Enterprise Linux by replacing the existing operating system. To perform an in-place upgrade, use the Preupgrade Assistant , a utility that checks the system for upgrade issues before running the actual upgrade, and that also provides additional scripts for the Red Hat Upgrade Tool . When you have solved all the problems reported by the Preupgrade Assistant , use the Red Hat Upgrade Tool to upgrade the system. For details regarding procedures and supported scenarios, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Migration_Planning_Guide/chap-Red_Hat_Enterprise_Linux-Migration_Planning_Guide-Upgrading.html and https://access.redhat.com/solutions/637583 . Note that the Preupgrade Assistant and the Red Hat Upgrade Tool are available in the Red Hat Enterprise Linux 6 Extras channel, see https://access.redhat.com/support/policy/updates/extras . (BZ#1432080) cloud-init moved to the Base channel As of Red Hat Enterprise Linux 7.4, the cloud-init package and its dependencies have been moved from the Red Hat Common channel to the Base channel. Cloud-init is a tool that handles early initialization of a system using metadata provided by the environment. It is typically used to configure servers booting in a cloud environment, such as OpenStack or Amazon Web Services. Note that the cloud-init package has not been updated since the latest version provided through the Red Hat Common channel. (BZ#1427280) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/new_features_general_updates |
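At a high level, the upgrade path described above might be driven as in the following sketch on a Red Hat Enterprise Linux 6 system. The preupg command belongs to the Preupgrade Assistant; the package names and the redhat-upgrade-tool options (--network and --instrepo, plus the <repository_url> placeholder) are given as assumptions and should be verified against the linked upgrade documentation before use.

# yum -y install preupgrade-assistant redhat-upgrade-tool
# preupg
# redhat-upgrade-tool --network 7.4 --instrepo <repository_url>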
function::proc_mem_rss | function::proc_mem_rss Name function::proc_mem_rss - Program resident set size in pages Synopsis Arguments None Description Returns the resident set size in pages of the current process, or zero when there is no current process or the number of pages couldn't be retrieved. | [
"function proc_mem_rss:long()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-proc-mem-rss |
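A minimal SystemTap one-liner showing the function in a process context; the choice of the brk system call as the probe point is purely illustrative.

# stap -e 'probe syscall.brk { printf("%s (%d): rss=%d pages\n", execname(), pid(), proc_mem_rss()) }'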
Chapter 2. APIService [apiregistration.k8s.io/v1] | Chapter 2. APIService [apiregistration.k8s.io/v1] Description APIService represents a server for a particular GroupVersion. Name must be "version.group". Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object APIServiceSpec contains information for locating and communicating with a server. Only https is supported, though you are able to disable certificate verification. status object APIServiceStatus contains derived information about an API server 2.1.1. .spec Description APIServiceSpec contains information for locating and communicating with a server. Only https is supported, though you are able to disable certificate verification. Type object Required groupPriorityMinimum versionPriority Property Type Description caBundle string CABundle is a PEM encoded CA bundle which will be used to validate an API server's serving certificate. If unspecified, system trust roots on the apiserver are used. group string Group is the API group name this server hosts groupPriorityMinimum integer GroupPriorityMininum is the priority this group should have at least. Higher priority means that the group is preferred by clients over lower priority ones. Note that other versions of this group might specify even higher GroupPriorityMininum values such that the whole group gets a higher priority. The primary sort is based on GroupPriorityMinimum, ordered highest number to lowest (20 before 10). The secondary sort is based on the alphabetical comparison of the name of the object. (v1.bar before v1.foo) We'd recommend something like: *.k8s.io (except extensions) at 18000 and PaaSes (OpenShift, Deis) are recommended to be in the 2000s insecureSkipTLSVerify boolean InsecureSkipTLSVerify disables TLS certificate verification when communicating with this server. This is strongly discouraged. You should use the CABundle instead. service object ServiceReference holds a reference to Service.legacy.k8s.io version string Version is the API version this server hosts. For example, "v1" versionPriority integer VersionPriority controls the ordering of this API version inside of its group. Must be greater than zero. The primary sort is based on VersionPriority, ordered highest to lowest (20 before 10). Since it's inside of a group, the number can be small, probably in the 10s. In case of equal version priorities, the version string will be used to compute the order inside a group. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). 
These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10. 2.1.2. .spec.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Property Type Description name string Name is the name of the service namespace string Namespace is the namespace of the service port integer If specified, the port on the service that hosting webhook. Default to 443 for backward compatibility. port should be a valid port number (1-65535, inclusive). 2.1.3. .status Description APIServiceStatus contains derived information about an API server Type object Property Type Description conditions array Current service state of apiService. conditions[] object APIServiceCondition describes the state of an APIService at a particular point 2.1.4. .status.conditions Description Current service state of apiService. Type array 2.1.5. .status.conditions[] Description APIServiceCondition describes the state of an APIService at a particular point Type object Required type status Property Type Description lastTransitionTime Time Last time the condition transitioned from one status to another. message string Human-readable message indicating details about last transition. reason string Unique, one-word, CamelCase reason for the condition's last transition. status string Status is the status of the condition. Can be True, False, Unknown. type string Type is the type of the condition. 2.2. API endpoints The following API endpoints are available: /apis/apiregistration.k8s.io/v1/apiservices DELETE : delete collection of APIService GET : list or watch objects of kind APIService POST : create an APIService /apis/apiregistration.k8s.io/v1/watch/apiservices GET : watch individual changes to a list of APIService. deprecated: use the 'watch' parameter with a list operation instead. /apis/apiregistration.k8s.io/v1/apiservices/{name} DELETE : delete an APIService GET : read the specified APIService PATCH : partially update the specified APIService PUT : replace the specified APIService /apis/apiregistration.k8s.io/v1/watch/apiservices/{name} GET : watch changes to an object of kind APIService. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apiregistration.k8s.io/v1/apiservices/{name}/status GET : read status of the specified APIService PATCH : partially update status of the specified APIService PUT : replace status of the specified APIService 2.2.1. /apis/apiregistration.k8s.io/v1/apiservices HTTP method DELETE Description delete collection of APIService Table 2.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind APIService Table 2.3. HTTP responses HTTP code Reponse body 200 - OK APIServiceList schema 401 - Unauthorized Empty HTTP method POST Description create an APIService Table 2.4. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body APIService schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 202 - Accepted APIService schema 401 - Unauthorized Empty 2.2.2. /apis/apiregistration.k8s.io/v1/watch/apiservices HTTP method GET Description watch individual changes to a list of APIService. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/apiregistration.k8s.io/v1/apiservices/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the APIService HTTP method DELETE Description delete an APIService Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified APIService Table 2.11. HTTP responses HTTP code Reponse body 200 - OK APIService schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified APIService Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified APIService Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body APIService schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty 2.2.4. /apis/apiregistration.k8s.io/v1/watch/apiservices/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the APIService HTTP method GET Description watch changes to an object of kind APIService. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.5. /apis/apiregistration.k8s.io/v1/apiservices/{name}/status Table 2.19. Global path parameters Parameter Type Description name string name of the APIService HTTP method GET Description read status of the specified APIService Table 2.20. HTTP responses HTTP code Reponse body 200 - OK APIService schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified APIService Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified APIService Table 2.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.24. Body parameters Parameter Type Description body APIService schema Table 2.25. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/extension_apis/apiservice-apiregistration-k8s-io-v1 |
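The endpoint reference above maps directly onto standard client usage. The following is a minimal sketch, assuming a logged-in session with cluster-admin rights; the APIService name and label key are illustrative only and are not part of the reference itself.
# List registered APIServices, then read one (GET /apis/apiregistration.k8s.io/v1/apiservices/{name})
oc get apiservices
oc get apiservice v1.apiregistration.k8s.io -o yaml

# The same list call issued directly against the REST endpoint with the session token
TOKEN=$(oc whoami -t)
APISERVER=$(oc whoami --show-server)
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/apis/apiregistration.k8s.io/v1/apiservices"

# Dry-run a merge patch (PATCH .../apiservices/{name}?dryRun=All&fieldValidation=Strict):
# nothing is persisted, but server-side field validation still runs
curl -sk -X PATCH \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"metadata":{"labels":{"example":"dry-run-only"}}}' \
  "${APISERVER}/apis/apiregistration.k8s.io/v1/apiservices/v1.apiregistration.k8s.io?dryRun=All&fieldValidation=Strict"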
Chapter 8. Installation configuration parameters for IBM Z and IBM LinuxONE | Chapter 8. Installation configuration parameters for IBM Z and IBM LinuxONE Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. 8.1. Available installation configuration parameters for IBM Z The following tables specify the required, optional, and IBM Z-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 8.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 8.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 8.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. 
For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 8.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 8.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 8.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. 
Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_z_and_ibm_linuxone/installation-config-parameters-ibm-z |
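To tie the parameter tables above together, here is a rough sketch of a minimal install-config.yaml for IBM Z written from a shell. The domain, cluster name, pull secret, and SSH key are placeholders, and only parameters described above are used; consult the full sample configuration for your footprint before installing.
# Write a skeletal install-config.yaml using only parameters from the tables above
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com              # cluster DNS is <metadata.name>.<baseDomain>
metadata:
  name: s390x-cluster                # lowercase letters, hyphens, and periods only
compute:
- name: worker
  architecture: s390x
  hyperthreading: Enabled
  replicas: 3
controlPlane:
  name: master
  architecture: s390x
  hyperthreading: Enabled
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16                # match the CIDR the preferred NIC resides in
platform: {}                         # listed as a valid value above; substitute a provider block if needed
fips: false
publish: External
pullSecret: '<pull secret JSON from Red Hat OpenShift Cluster Manager>'
sshKey: 'ssh-ed25519 AAAA... user@host'
EOF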
Chapter 13. SR-IOV | Chapter 13. SR-IOV 13.1. Introduction Developed by the PCI-SIG (PCI Special Interest Group), the Single Root I/O Virtualization (SR-IOV) specification is a standard for a type of PCI device assignment that can share a single device to multiple virtual machines. SR-IOV improves device performance for virtual machines. Figure 13.1. How SR-IOV works SR-IOV enables a Single Root Function (for example, a single Ethernet port), to appear as multiple, separate, physical devices. A physical device with SR-IOV capabilities can be configured to appear in the PCI configuration space as multiple functions. Each device has its own configuration space complete with Base Address Registers (BARs). SR-IOV uses two PCI functions: Physical Functions (PFs) are full PCIe devices that include the SR-IOV capabilities. Physical Functions are discovered, managed, and configured as normal PCI devices. Physical Functions configure and manage the SR-IOV functionality by assigning Virtual Functions. Virtual Functions (VFs) are simple PCIe functions that only process I/O. Each Virtual Function is derived from a Physical Function. The number of Virtual Functions a device may have is limited by the device hardware. A single Ethernet port, the Physical Device, may map to many Virtual Functions that can be shared to virtual machines. The hypervisor can map one or more Virtual Functions to a virtual machine. The Virtual Function's configuration space is then mapped to the configuration space presented to the guest. Each Virtual Function can only be mapped to a single guest at a time, as Virtual Functions require real hardware resources. A virtual machine can have multiple Virtual Functions. A Virtual Function appears as a network card in the same way as a normal network card would appear to an operating system. The SR-IOV drivers are implemented in the kernel. The core implementation is contained in the PCI subsystem, but there must also be driver support for both the Physical Function (PF) and Virtual Function (VF) devices. An SR-IOV capable device can allocate VFs from a PF. The VFs appear as PCI devices which are backed on the physical PCI device by resources such as queues and register sets. Advantages of SR-IOV SR-IOV devices can share a single physical port with multiple virtual machines. Virtual Functions have near-native performance and provide better performance than paravirtualized drivers and emulated access. Virtual Functions provide data protection between virtual machines on the same physical server as the data is managed and controlled by the hardware. These features allow for increased virtual machine density on hosts within a data center. SR-IOV is better able to utilize the bandwidth of devices with multiple guests. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/chap-virtualization_host_configuration_and_guest_installation_guide-sr_iov |
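As a rough illustration of the PF/VF split described above on a Red Hat Enterprise Linux 6 host, assuming an Intel 82576-based adapter driven by igb; the VF count, driver, and device names are examples, not requirements.
# Reload the Physical Function driver and ask it to create Virtual Functions
# (take the interface down first if it is in use)
modprobe -r igb
modprobe igb max_vfs=4

# The PF and its VFs now show up as separate PCI functions
lspci | grep -i 82576

# List PCI devices known to libvirt; a VF from this list can be assigned
# to a guest as a hostdev or interface device
virsh nodedev-list | grep pci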
5.232. pcsc-lite | 5.232. pcsc-lite 5.232.1. RHBA-2012:1343 - pcsc-lite bug fix update Updated pcsc-lite packages that fix one bug are now available for Red Hat Enterprise Linux 6. PC/SC Lite provides a Windows SCard compatible interface for communicating with smart cards, smart card readers, and other security tokens. Bug Fix BZ# 851199 Despite the update described in the RHBA-2012:0990 advisory, the chkconfig utility did not automatically place the pcscd init script after the start of the HAL daemon. Consequently, pcscd was unable to recognize USB readers. With this update, the pcscd init script has been changed to explicitly start only after HAL is up, thus fixing this bug. All pcsc-lite users are advised to upgrade to these updated packages, which fix this bug. 5.232.2. RHBA-2012:0990 - pcsc-lite bug fix update Updated pcsc-lite packages that fix one bug are now available for Red Hat Enterprise Linux 6. PC/SC Lite provides a Windows SCard compatible interface for communicating with smart cards, smart card readers, and other security tokens. Bug Fix BZ# 812469 Previously, the pcscd init script pointed to the wrong value to identify the HAL daemon. It also wrongly started at runlevel 2. As a result, chkconfig did not automatically place pcscd after the start of the HAL daemon, thus pcscd failed to see USB readers. With this update, the pcscd init script has been changed to properly identify the HAL daemon and only start at runlevels 3, 4, and 5, thus fixing this bug. All pcsc-lite users are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/pcsc-lite |
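A quick way to confirm the corrected ordering described in these errata on a Red Hat Enterprise Linux 6 host might look like the following; the start-priority numbers vary by system and are not significant on their own.
# pcscd should be enabled only for runlevels 3, 4, and 5
chkconfig --list pcscd

# In each runlevel directory, the pcscd start link must sort after the haldaemon link
ls /etc/rc.d/rc3.d/ | grep -E 'haldaemon|pcscd'

# After updating the package, re-read the init script headers and restart the service
chkconfig pcscd resetpriorities
service pcscd restart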
14.3. Problems After Installation | 14.3. Problems After Installation 14.3.1. Trouble With the Graphical Boot Sequence After you finish the installation and reboot your system for the first time, it is possible that the system stops responding during the graphical boot sequence, requiring a reset. In this case, the boot loader is displayed successfully, but selecting any entry and attempting to boot the system results in a halt. This usually means a problem with the graphical boot sequence; to solve this issue, you must disable graphical boot. To do this, temporarily alter the setting at boot time before changing it permanently. Procedure 14.3. Disabling Graphical Boot Temporarily Start your computer and wait until the boot loader menu appears. If you set your boot loader timeout period to 0, hold down the Esc key to access it. When the boot loader menu appears, use your cursor keys to highlight the entry you want to boot and press the e key to edit this entry's options. In the list of options, find the kernel line - that is, the line beginning with the keyword linux . On this line, locate the rhgb option and delete it. The option might not be immediately visible; use the cursor keys to scroll up and down. Press F10 or Ctrl + X to boot your system with the edited options. If the system started successfully, you can log in normally. Then you will need to disable the graphical boot permanently - otherwise you will have to perform the procedure every time the system boots. To permanently change boot options, do the following. Procedure 14.4. Disabling Graphical Boot Permanently Log in to the root account using the su - command: Use the grubby tool to find the default GRUB2 kernel: Use the grubby tool to remove the rhgb boot option from the default kernel, identified in the last step, in your GRUB2 configuration. For example: After you finish this procedure, you can reboot your computer. Red Hat Enterprise Linux will not use the graphical boot sequence any more. If you want to enable graphical boot in the future, follow the same procedure, replacing the --remove-args="rhgb" parameter with the --args="rhgb" parameter. This will restore the rhgb boot option to the default kernel in your GRUB2 configuration. See the Red Hat Enterprise Linux 7 System Administrator's Guide for more information about working with the GRUB2 boot loader. 14.3.2. Booting into a Graphical Environment If you have installed the X Window System but are not seeing a graphical desktop environment once you log into your system, you can start it manually using the startx command. Note, however, that this is just a one-time fix and does not change the login process for future logins. To set up your system so that you can log in at a graphical login screen, you must change the default systemd target to graphical.target . When you are finished, reboot the computer. You will be presented with a graphical login prompt after the system restarts. Procedure 14.5. Setting Graphical Login as Default Open a shell prompt. If you are in your user account, become root by typing the su - command. Change the default target to graphical.target . To do this, execute the following command: Graphical login is now enabled by default - you will be presented with a graphical login prompt after the reboot. If you want to reverse this change and keep using the text-based login prompt, execute the following command as root : For more information about targets in systemd , see the Red Hat Enterprise Linux 7 System Administrator's Guide . 14.3.3. 
No Graphical User Interface Present If you are having trouble getting X (the X Window System ) to start, it is possible that it has not been installed. Some of the preset base environments you can select during the installation, such as Minimal install or Web Server , do not include a graphical interface - it has to be installed manually. If you want X , you can install the necessary packages afterwards. See the Knowledgebase article at https://access.redhat.com/site/solutions/5238 for information on installing a graphical desktop environment. 14.3.4. X Server Crashing After User Logs In If you are having trouble with the X server crashing when a user logs in, one or more of your file systems can be full or nearly full. To verify that this is the problem you are experiencing, execute the following command: The output will help you diagnose which partition is full - in most cases, the problem will be on the /home partition. The following is a sample output of the df command: In the above example, you can see that the /home partition is full, which causes the crash. You can make some room on the partition by removing unneeded files. After you free up some disk space, start X using the startx command. For additional information about df and an explanation of the options available (such as the -h option used in this example), see the df(1) man page. 14.3.5. Is Your System Displaying Signal 11 Errors? A signal 11 error, commonly known as a segmentation fault , means that a program accessed a memory location that was not assigned to it. A signal 11 error can occur due to a bug in one of the software programs that is installed, or faulty hardware. If you receive a fatal signal 11 error during the installation, first make sure you are using the most recent installation images, and let Anaconda verify them to make sure they are not corrupted. Bad installation media (such as an improperly burned or scratched optical disk) are a common cause of signal 11 errors. Verifying the integrity of the installation media is recommended before every installation. For information about obtaining the most recent installation media, see Chapter 2, Downloading Red Hat Enterprise Linux . To perform a media check before the installation starts, append the rd.live.check boot option at the boot menu. See Section 23.2.2, "Verifying Boot Media" for details. Other possible causes are beyond this document's scope. Consult your hardware manufacturer's documentation for more information. 14.3.6. Unable to IPL from Network Storage Space (*NWSSTG) If you are experiencing difficulties when trying to IPL from Network Storage Space (*NWSSTG), in most cases the reason is a missing PReP partition. In this case, you must reinstall the system and make sure to create this partition during the partitioning phase or in the Kickstart file. 14.3.7. The GRUB2 next_entry variable can behave unexpectedly in a virtualized environment IBM Power System users booting their virtual environment with SLOF firmware must manually unset the next_entry grub environment variable after a system reboot. The SLOF firmware does not support block writes at boot time by design thus the bootloader is unable to clear this variable at boot time. | [
"su -",
"grubby --default-kernel /boot/vmlinuz-3.10.0-229.4.2.el7.ppc64",
"grubby --remove-args=\"rhgb\" --update-kernel /boot/vmlinuz-3.10.0-229.4.2.el7.ppc64",
"systemctl set-default graphical.target",
"systemctl set-default multi-user.target",
"df -h",
"Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_rhel-root 20G 6.0G 13G 32% / devtmpfs 1.8G 0 1.8G 0% /dev tmpfs 1.8G 2.7M 1.8G 1% /dev/shm tmpfs 1.8G 1012K 1.8G 1% /run tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup tmpfs 1.8G 2.6M 1.8G 1% /tmp /dev/sda1 976M 150M 760M 17% /boot /dev/dm-4 90G 90G 0 100% /home"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-trouble-after-ppc |
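Building on the grubby commands listed above, a short verification sketch, run as root, confirms whether rhgb is present on the default kernel entry; the kernel path differs from system to system.
# Identify the default kernel and show its current boot arguments
DEFAULT_KERNEL=$(grubby --default-kernel)
grubby --info "${DEFAULT_KERNEL}" | grep '^args'

# Remove graphical boot permanently, then confirm rhgb is gone from the args line
grubby --remove-args="rhgb" --update-kernel "${DEFAULT_KERNEL}"
grubby --info "${DEFAULT_KERNEL}" | grep '^args'

# Restore graphical boot later, if desired
grubby --args="rhgb" --update-kernel "${DEFAULT_KERNEL}"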
Chapter 6. Software Maintenance | Chapter 6. Software Maintenance Software maintenance is extremely important to maintaining a secure system. It is vital to patch software as soon as it becomes available in order to prevent attackers from using known holes to infiltrate your system. 6.1. Install Minimal Software It is a recommended practice to install only the packages you will use because each piece of software on your computer could possibly contain a vulnerability. If you are installing from the DVD media take the opportunity to select exactly what packages you want to install during the installation. When you find you need another package, you can always add it to the system later. For more information on minimal installation, see the "Package Group Selection" section of the Red Hat Enterprise Linux 6 Installation Guide . A minimal installation can also be performed via a kickstart file using the --nobase option. For more information, see the "Package Selection" section of the Red Hat Enterprise Linux 6 Installation Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/chap-security_guide-software_maintenance |
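For a Kickstart-driven installation, the minimal package selection described above can be expressed as in the following sketch; the extra package named here is only an example, and which groups you keep is a local policy decision.
# Kickstart fragment: skip the @base group and install only @core plus what you list
cat >> ks.cfg <<'EOF'
%packages --nobase
@core
openssh-server
%end
EOF

# After installation, audit what was actually installed
yum list installed | wc -l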
Chapter 3. Getting started | Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites You must complete the installation procedure for your environment. You must have an AMQP 1.0 message broker listening for connections on interface localhost and port 5672 . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named examples . For more information, see Creating a queue . 3.2. Running Hello World on Red Hat Enterprise Linux The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Change to the examples directory and run the helloworld.js example. USD cd <source-dir> /examples USD node helloworld.js Hello World! 3.3. Running Hello World on Microsoft Windows The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Change to the examples directory and run the helloworld.js example. > cd <source-dir> /examples > node helloworld.js Hello World! | [
"cd <source-dir> /examples node helloworld.js Hello World!",
"> cd <source-dir> /examples > node helloworld.js Hello World!"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_rhea/3.0/html/using_rhea/getting_started |
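Before running the example on Red Hat Enterprise Linux, it can help to confirm the prerequisites above from the shell; this is a rough check only, and how you verify that the examples queue exists depends on your broker.
# Confirm something is listening on the AMQP port before running the example
ss -lnt | grep ':5672' || echo "no listener on port 5672 - start the broker first"

# Then run the example exactly as shown above
cd <source-dir>/examples
node helloworld.js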
2.8. Firewalls | 2.8. Firewalls Information security is commonly thought of as a process and not a product. However, standard security implementations usually employ some form of dedicated mechanism to control access privileges and restrict network resources to users who are authorized, identifiable, and traceable. Red Hat Enterprise Linux includes several tools to assist administrators and security engineers with network-level access control issues. Firewalls are one of the core components of a network security implementation. Several vendors market firewall solutions catering to all levels of the marketplace: from home users protecting one PC to data center solutions safeguarding vital enterprise information. Firewalls can be stand-alone hardware solutions, such as firewall appliances by Cisco, Nokia, and Sonicwall. Vendors such as Checkpoint, McAfee, and Symantec have also developed proprietary software firewall solutions for home and business markets. Apart from the differences between hardware and software firewalls, there are also differences in the way firewalls function that separate one solution from another. Table 2.6, "Firewall Types" details three common types of firewalls and how they function: Table 2.6. Firewall Types Method Description Advantages Disadvantages NAT Network Address Translation (NAT) places private IP subnetworks behind one or a small pool of public IP addresses, masquerading all requests to one source rather than several. The Linux kernel has built-in NAT functionality through the Netfilter kernel subsystem. Can be configured transparently to machines on a LAN. Protection of many machines and services behind one or more external IP addresses simplifies administration duties. Restriction of user access to and from the LAN can be configured by opening and closing ports on the NAT firewall/gateway. Cannot prevent malicious activity once users connect to a service outside of the firewall. Packet Filter A packet filtering firewall reads each data packet that passes through a LAN. It can read and process packets by header information and filters the packet based on sets of programmable rules implemented by the firewall administrator. The Linux kernel has built-in packet filtering functionality through the Netfilter kernel subsystem. Customizable through the iptables front-end utility. Does not require any customization on the client side, as all network activity is filtered at the router level rather than the application level. Since packets are not transmitted through a proxy, network performance is faster due to direct connection from client to remote host. Cannot filter packets for content like proxy firewalls. Processes packets at the protocol layer, but cannot filter packets at an application layer. Complex network architectures can make establishing packet filtering rules difficult, especially if coupled with IP masquerading or local subnets and DMZ networks. Proxy Proxy firewalls filter all requests of a certain protocol or type from LAN clients to a proxy machine, which then makes those requests to the Internet on behalf of the local client. A proxy machine acts as a buffer between malicious remote users and the internal network client machines. Gives administrators control over what applications and protocols function outside of the LAN. Some proxy servers can cache frequently-accessed data locally rather than having to use the Internet connection to request it. This helps to reduce bandwidth consumption. 
Proxy services can be logged and monitored closely, allowing tighter control over resource utilization on the network. Proxies are often application-specific (HTTP, Telnet, etc.), or protocol-restricted (most proxies work with TCP-connected services only). Application services cannot run behind a proxy, so your application servers must use a separate form of network security. Proxies can become a network bottleneck, as all requests and transmissions are passed through one source rather than directly from a client to a remote service. 2.8.1. Netfilter and IPTables The Linux kernel features a powerful networking subsystem called Netfilter . The Netfilter subsystem provides stateful or stateless packet filtering as well as NAT and IP masquerading services. Netfilter also has the ability to mangle IP header information for advanced routing and connection state management. Netfilter is controlled using the iptables tool. 2.8.1.1. IPTables Overview The power and flexibility of Netfilter is implemented using the iptables administration tool, a command line tool similar in syntax to its predecessor, ipchains , which Netfilter/iptables replaced in the Linux kernel 2.4 and above. iptables uses the Netfilter subsystem to enhance network connection, inspection, and processing. iptables features advanced logging, pre- and post-routing actions, network address translation, and port forwarding, all in one command line interface. This section provides an overview of iptables . For more detailed information, see Section 2.8.9, "IPTables" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Security_Guide-Firewalls |
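To make the firewall types in the table above concrete, the following is a small iptables sketch for a Red Hat Enterprise Linux 6 gateway combining packet filtering with NAT/IP masquerading; the interface names and subnet are placeholders for your own topology.
# Packet filtering: allow loopback, established traffic, and SSH; drop other inbound traffic
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -P INPUT DROP

# NAT/IP masquerading: hide a private LAN (eth1, 192.168.1.0/24) behind the public interface eth0
# (IP forwarding must also be enabled: sysctl -w net.ipv4.ip_forward=1)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -s 192.168.1.0/24 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# Persist the rules across reboots
service iptables save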
8.3. Configuration Compliance Scanning | 8.3. Configuration Compliance Scanning 8.3.1. Configuration Compliance in RHEL 7 You can use configuration compliance scanning to conform to a baseline defined by a specific organization. For example, if you work with the US government, you might have to comply with the Operating System Protection Profile (OSPP), and if you are a payment processor, you might have to be compliant with the Payment Card Industry Data Security Standard (PCI-DSS). You can also perform configuration compliance scanning to harden your system security. Red Hat recommends you follow the Security Content Automation Protocol (SCAP) content provided in the SCAP Security Guide package because it is in line with Red Hat best practices for affected components. The SCAP Security Guide package provides content which conforms to the SCAP 1.2 and SCAP 1.3 standards. The openscap scanner utility is compatible with both SCAP 1.2 and SCAP 1.3 content provided in the SCAP Security Guide package. Important Performing a configuration compliance scanning does not guarantee the system is compliant. The SCAP Security Guide suite provides profiles for several platforms in a form of data stream documents. A data stream is a file that contains definitions, benchmarks, profiles, and individual rules. Each rule specifies the applicability and requirements for compliance. RHEL 7 provides several profiles for compliance with security policies. In addition to the industry standard, Red Hat data streams also contain information for remediation of failed rules. Structure of Compliance Scanning Resources A profile is a set of rules based on a security policy, such as Operating System Protection Profile (OSPP) or Payment Card Industry Data Security Standard (PCI-DSS). This enables you to audit the system in an automated way for compliance with security standards. You can modify (tailor) a profile to customize certain rules, for example, password length. For more information on profile tailoring, see Section 8.7.2, "Customizing a Security Profile with SCAP Workbench" Note To scan containers or container images for configuration compliance, see Section 8.9, "Scanning Containers and Container Images for Vulnerabilities" 8.3.2. Possible results of an OpenSCAP scan Depending on various properties of your system and the data stream and profile applied to an OpenSCAP scan, each rule may produce a specific result. This is a list of possible results with brief explanations of what they mean. Table 8.1. Possible results of OpenSCAP scan Result Explanation Pass The scan did not find any conflicts with this rule. Fail The scan found a conflict with this rule. Not checked OpenSCAP does not perform an automatic evaluation of this rule. Check whether your system conforms to this rule manually. Not applicable This rule does not apply to the current configuration. Not selected This rule is not part of the profile. OpenSCAP does not evaluate this rule and does not display these rules in the results. Error The scan encountered an error. For additional information, you can enter the oscap-scanner command with the - -verbose DEVEL option. Consider opening a bug report . Unknown The scan encountered an unexpected situation. For additional information, you can enter the oscap-scanner command with the - -verbose DEVEL option. Consider opening a bug report . 8.3.3. 
Viewing Profiles for Configuration Compliance Before you decide to use profiles for scanning or remediation, you can list them and check their detailed descriptions using the oscap info sub-command. Prerequisites The openscap-scanner and scap-security-guide packages are installed. Procedure List all available files with configuration compliance profiles provided by the SCAP Security Guide project: Display detailed information about a selected data stream using the oscap info sub-command. XML files containing data streams are indicated by the -ds string in their names. In the Profiles section, you can find a list of available profiles and their IDs: Select a profile from the data stream file and display additional details about the selected profile. To do so, use oscap info with the --profile option followed by the suffix of the ID displayed in the output of the command. For example, the ID of the PCI-DSS profile is: xccdf_org.ssgproject.content_profile_pci-dss , and the value for the --profile option can be _pci-dss : Alternatively, when using GUI, install the scap-security-guide-doc package and open the file:///usr/share/doc/scap-security-guide-doc-0.1.46/ssg-rhel7-guide-index.html file in a web browser. Select the required profile in the upper right field of the Guide to the Secure Configuration of Red Hat Enterprise Linux 7 document, and you can see the ID already included in the relevant command for the subsequent evaluation. Additional Resources The scap-security-guide(8) man page also contains the list of profiles. 8.3.4. Assessing Configuration Compliance with a Specific Baseline To determine whether your system conforms to a specific baseline, follow these steps. Prerequisites The openscap-scanner and scap-security-guide packages are installed. You know the ID of the profile within the baseline with which the system should comply. To find the ID, see Section 8.3.3, "Viewing Profiles for Configuration Compliance" . Procedure Evaluate the compliance of the system with the selected profile and save the scan results in the report.html HTML file, for example: Optional: Scan a remote system with the machine1 host name, SSH running on port 22, and the joesec user name for vulnerabilities and save results to the remote-report.html file: Additional Resources scap-security-guide(8) man page The SCAP Security Guide documentation installed in the file:///usr/share/doc/scap-security-guide-doc-0.1.46/ directory. The file:///usr/share/doc/scap-security-guide-doc-0.1.46/ssg-rhel7-guide-index.html" Guide to the Secure Configuration of Red Hat Enterprise Linux 7 installed with the scap-security-guide-doc package. | [
"Data stream ├── xccdf | ├── benchmark | ├── profile | ├──rule | ├── xccdf | ├── oval reference ├── oval ├── ocil reference ├── ocil ├── cpe reference └── cpe └── remediation",
"~]USD ls /usr/share/xml/scap/ssg/content/ ssg-firefox-cpe-dictionary.xml ssg-rhel6-ocil.xml ssg-firefox-cpe-oval.xml ssg-rhel6-oval.xml ssg-rhel6-ds-1.2.xml ssg-rhel8-xccdf.xml ssg-rhel6-ds.xml",
"~]USD oscap info /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml Profiles: Title: PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 7 Id: xccdf_org.ssgproject.content_profile_pci-dss Title: OSPP - Protection Profile for General Purpose Operating Systems v. 4.2.1 Id: xccdf_org.ssgproject.content_profile_ospp",
"~]USD oscap info --profile _pci-dss /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml Profile Title: PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 7 Id: xccdf_org.ssgproject.content_profile_pci-dss Description: Ensures PCI-DSS v3.2.1 related security configuration settings are applied.",
"~]USD sudo oscap xccdf eval --report report.html --profile ospp /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml",
"~]USD oscap-ssh joesec@machine1 22 xccdf eval --report remote_report.html --profile ospp /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/configuration-compliance-scanning_scanning-the-system-for-configuration-compliance-and-vulnerabilities |
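As a follow-on to the oscap commands above, the next sketch keeps machine-readable results and generates a remediation script for the same profile; the file names are arbitrary, and the exact generate fix options available depend on the openscap version shipped with your release.
# Evaluate, keeping XCCDF results alongside the HTML report
sudo oscap xccdf eval --profile pci-dss \
  --results results.xml --report report.html \
  /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

# Regenerate an HTML report later from the stored results
oscap xccdf generate report results.xml > report-from-results.html

# Generate a bash remediation script for the profile; review it before running it
oscap xccdf generate fix --profile pci-dss \
  --output pci-dss-remediation.sh \
  /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml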
Chapter 5. Generalized stretch cluster configuration for three availability zones | Chapter 5. Generalized stretch cluster configuration for three availability zones As a storage administrator, you can configure a generalized stretch cluster configuration for three availability zones with Ceph OSDs. Ceph can withstand the loss of Ceph OSDs because of its network and cluster, which are equally reliable with failures randomly distributed across the CRUSH map. If a number of OSDs are shut down, the remaining OSDs and monitors still manage to operate. Using a single cluster limits data availability to a single location with a single point of failure. However, in some situations, higher availability might be required. Using three availability zones allows the cluster to withstand power loss and even a full data center loss in the event of a natural disaster. With a generalized stretch cluster configuration for three availability zones, three data centers are supported, with each site holding two copies of the data. This helps ensure that even during a data center outage, the data remains accessible and writeable from another site. With this configuration, the pool replication size is 6 and the pool min_size is 3 . Note The standard Ceph configuration survives many failures of the network or data centers and it never compromises data consistency. If you restore enough Ceph servers following a failure, it recovers. Ceph maintains availability if you lose a data center, but can still form a quorum of monitors and have all the data available with enough copies to satisfy pools' min_size , or CRUSH rules that replicate again to meet the size. 5.1. Generalized stretch cluster deployment limitations When using generalized stretch clusters, the following limitations should be considered. Generalized stretch cluster configuration for three availability zones does not support I/O operations during a netsplit scenario between two or more zones. While the cluster remains accessible for basic Ceph commands, I/O usage remains unavailable until the netsplit is resolved. This is different from stretch mode, where the tiebreaker monitor can isolate one zone of the cluster and continue I/O operations in degraded mode during a netsplit. For more information about stretch mode, see Stretch mode for a storage cluster . In a three availability zone configuration, Red Hat Ceph Storage is designed to tolerate multiple host failures. However, if more than 25% of the OSDs in the cluster go down, Ceph may stop marking OSDs as out . This behavior is controlled by the mon_osd_min_in_ratio parameter. By default, mon_osd_min_in_ratio is set to 0.75, meaning that at least 75% of the OSDs in the cluster must remain in (active) before any additional OSDs can be marked out . This setting prevents too many OSDs from being marked out as this might lead to significant data movement. The data movement can cause high client I/O impact and long recovery times when the OSDs are returned to service. If Red Hat Ceph Storage stops marking OSDs as out, some placement groups (PGs) may fail to rebalance to surviving OSDs, potentially leading to inactive placement groups (PGs). Important While adjusting the mon_osd_min_in_ratio value can allow more OSDs to be marked out and trigger rebalancing, this should be done with caution. For more information on the mon_osd_min_in_ratio parameter, see Ceph Monitor and OSD configuration options . 5.2. 
Generalized stretch cluster deployment requirements This information details important hardware, software, and network requirements that are needed for deploying a generalized stretch cluster configuration for three availability zones. 5.2.1. Hardware requirements Use the following minimum hardware requirements before deploying generalized stretch cluster configuration for three availability zones. The following table lists the physical server locations and Ceph component layout for an example three availability zone deployment. Table 5.1. Hardware requirements Host name Datacenter Ceph services host01 DC1 OSD+MON+MGR host02 DC1 OSD+MON+MGR+RGW host03 DC1 OSD+MON+MDS host04 DC2 OSD+MON+MGR host05 DC2 OSD+MON+MGR+RGW host06 DC2 OSD+MON+MDS host07 DC3 OSD+MON+MGR host08 DC3 OSD+MON+MGR+RGW host09 DC3 OSD+MON+MDS 5.2.2. Network configuration requirements Use the following network configuration requirements before deploying generalized stretch cluster configuration for three availability zones. Have two separate networks, one public network and one cluster network. Have three different data centers that support VLANS and subnets for Ceph cluster and public networks for all data centers. Note You can use different subnets for each of the data centers. The latencies between data centers running the Red Hat Ceph Storage Object Storage Devices (OSDs) cannot exceed 10 ms RTT. For more information about network considerations, see Network considerations for Red Hat Ceph Storage in the Red Hat Ceph Storage Hardware Guide. 5.2.3. Cluster setup requirements Ensure that the hostname is configured by using the bare or short hostname in all hosts. Syntax Note The hostname command should only return the short hostname, when run on all nodes. If the FQDN is returned, the cluster configuration will not be successful. 5.3. Bootstrapping the Ceph cluster with a specification file Deploy the generalized stretch cluster by setting the CRUSH location to the daemons in the cluster with the spec configuration file. Set the CRUSH location to the daemons in the cluster with a service configuration file. Use the configuration file to add the hosts to the proper locations during deployment. For more information about Ceph bootstrapping and different cephadm bootstrap command options, see Bootstrapping a new storage cluster in the Red Hat Ceph Storage Installation Guide. Important Run cephadm bootstrap on the node that you want to be the initial Monitor node in the cluster. The IP_ADDRESS option should be the IP address of the node you are using to run cephadm bootstrap . Note If the storage cluster includes multiple networks and interfaces, be sure to choose a network that is accessible by any node that uses the storage cluster. To deploy a storage cluster by using IPV6 addresses, use the IPV6 address format for the --mon-ip <IP_ADDRESS> option. For example: cephadm bootstrap --mon-ip 2620:52:0:880:225:90ff:fefc:2536 --registry-json /etc/mylogin.json . To route the internal cluster traffic over the public network, omit the --cluster-network SUBNET option. Within this procedure the network Classless Inter-Domain Routing (CIDR) is referred to as subnet . Prerequisites Be sure that you have root-level access to the nodes. Procedure Create the service configuration YAML file. The YAML file adds the nodes to the Red Hat Ceph Storage cluster and also sets specific labels for where the services run. The following example depends on the specific OSD and Ceph Object Gateway (RGW) configuration that is needed. 
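A trimmed sketch of such a specification file, covering only one host per data center plus the mon and mgr placements (the hostnames and addresses are assumed, and the OSD and RGW sections are omitted); the official syntax block for this step follows below.
# Write a minimal multi-document service spec with per-host CRUSH locations
cat > cluster-spec.yaml <<'EOF'
service_type: host
hostname: host01
addr: 10.0.67.43
labels:
  - mon
  - mgr
location:
  root: default
  datacenter: DC1
---
service_type: host
hostname: host04
addr: 10.0.64.125
labels:
  - mon
  - mgr
location:
  datacenter: DC2
---
service_type: host
hostname: host07
addr: 10.0.66.252
labels:
  - mon
  - mgr
location:
  datacenter: DC3
---
service_type: mon
placement:
  label: mon
---
service_type: mgr
placement:
  label: mgr
EOF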
Syntax For more information about changing the custom spec for OSD and Object Gateway, see the following deployment instructions: * Deploying Ceph OSDs using advanced service specifications in the Red Hat Ceph Storage Operations Guide. * Deploying the Ceph Object Gateway using the service specification in the Red Hat Ceph Storage Object Gateway Guide. Bootstrap the storage cluster with the --apply-spec option. Syntax Example Important You can use different command options with the cephadm bootstrap command but always include the --apply-spec option to use the service configuration file and configure the host locations. Log into the cephadm shell. Syntax Example Configure the public network with the subnet. For more information about configuring multiple public networks to the cluster, see Configuring multiple public networks to the cluster in the Red Hat Ceph Storage Configuration Guide. Syntax Example Optional: Configure a cluster network. For more information about configuring multiple cluster networks to the cluster, see Configuring a private network in the Red Hat Ceph Storage Configuration Guide. Syntax Example Optional: Verify the network configurations. Syntax Example Restart the daemons. Ceph daemons bind dynamically, so you do not have to restart the entire cluster at once if you change the network configuration for a specific daemon. Syntax Optional: To restart the cluster on the admin node as a root user, run the systemctl restart command. Note To get the FSID of the cluster, use the ceph fsid command. Syntax Example Verification Verify the specification file details and that the bootstrap was installed successfully. Verify that all hosts were placed in the expected data centers, as specified in step 1 of the procedure. Syntax Check that there are three data centers under root and that the hosts are placed in each of the expected data centers. Note The hosts with OSDs will only be present after bootstrap if OSDs are deployed during bootstrap with the specification file. Example From the cephadm shell, verify that the mon daemons are deployed with CRUSH locations, as specified in step 1 of the procedure. Syntax Check that all mon daemons are in the output and that the correct CRUSH locations are added. Example [root@host01 ~]# ceph mon dump epoch 19 fsid b556497a-693a-11ef-b9d1-fa163e841fd7 last_changed 2024-09-03T12:47:08.419495+0000 created 2024-09-02T14:50:51.490781+0000 min_mon_release 19 (squid) election_strategy: 3 0: [v2:10.0.67.43:3300/0,v1:10.0.67.43:6789/0] mon.host01-installer; crush_location {datacenter=DC1} 1: [v2:10.0.67.20:3300/0,v1:10.0.67.20:6789/0] mon.host02; crush_location {datacenter=DC1} 2: [v2:10.0.64.242:3300/0,v1:10.0.64.242:6789/0] mon.host03; crush_location {datacenter=DC1} 3: [v2:10.0.66.17:3300/0,v1:10.0.66.17:6789/0] mon.host06; crush_location {datacenter=DC2} 4: [v2:10.0.66.228:3300/0,v1:10.0.66.228:6789/0] mon.host09; crush_location {datacenter=DC3} 5: [v2:10.0.65.125:3300/0,v1:10.0.65.125:6789/0] mon.host05; crush_location {datacenter=DC2} 6: [v2:10.0.66.252:3300/0,v1:10.0.66.252:6789/0] mon.host07; crush_location {datacenter=DC3} 7: [v2:10.0.64.145:3300/0,v1:10.0.64.145:6789/0] mon.host08; crush_location {datacenter=DC3} 8: [v2:10.0.64.125:3300/0,v1:10.0.64.125:6789/0] mon.host04; crush_location {datacenter=DC2} dumped monmap epoch 19 Verify that the service spec and all location attributes are added correctly. Check the service name for mon daemons on the cluster, by using the ceph orch ls command. 
Example Confirm the mon daemon services, by using the ceph orch ls mon --export command. Example Verify that the bootstrap was installed successfully, by running the cephadm shell ceph -s command. For more information, see Verifying the cluster installation. 5.4. Enabling three availability zones on the pool Use this information to enable and integrate three availability zones within a generalized stretch cluster configuration. Prerequisites Before you begin, make sure that you have the following prerequisites in place: * Root-level access to the nodes. * The CRUSH location is set to the hosts. Procedure Get the most recent CRUSH map and decompile the map into a text file. Syntax Example Add the new CRUSH rule into the decompiled CRUSH map file from the previous step. In this example, the rule name is 3az_rule . Syntax With this rule, the placement groups will be replicated with two copies in each of the three data centers. Inject the CRUSH map to make the rule available to the cluster. Syntax Example You can verify that the rule was injected successfully, by using the following steps. List the rules on the cluster. Syntax Example Dump the CRUSH rule. Syntax Example Set the MON election strategy to connectivity. Syntax When updated successfully, the election_strategy is updated to 3 . The default election_strategy is 1 . Optional: Verify the election strategy that was set in the previous step. Syntax Check that all mon daemons are in the output and that the correct CRUSH locations are added. Example Set the pool to associate with three availability zone stretch clusters. For more information about available pool values, see Pool values in the Red Hat Ceph Storage Storage Strategies Guide. Syntax Replace the variables as follows: POOL_NAME The name of the pool. It must be an existing pool; this command does not create a new pool. PEERING_CRUSH_BUCKET_COUNT The value is used along with peering_crush_bucket_barrier to determine whether the set of OSDs in the chosen acting set can peer with each other, based on the number of distinct buckets there are in the acting set. PEERING_CRUSH_BUCKET_TARGET This value is used along with peering_crush_bucket_barrier and size to calculate the value bucket_max , which limits the number of OSDs in the same bucket from getting chosen to be in the acting set of a PG. PEERING_CRUSH_BUCKET_BARRIER The type of bucket a pool is stretched across. For example, rack, row, or datacenter. CRUSH_RULE The crush rule to use for the stretch pool. The type of pool must match the type of crush_rule (replicated or erasure). SIZE The number of replicas for objects in the stretch pool. MIN_SIZE The minimum number of replicas required for I/O in the stretch pool. Important The --yes-i-really-mean-it flag is required when setting the PEERING_CRUSH_BUCKET_COUNT and PEERING_CRUSH_BUCKET_TARGET to be more than the number of buckets in the CRUSH map. Use the optional flag to confirm that you want to bypass the safety checks and set the values for a stretch pool. Example Note To revert a pool to a nonstretched cluster, use the ceph osd pool stretch unset POOL_NAME command. Using this command does not unset the crush_rule , size , and min_size values. If needed, these need to be reset manually. A success message is emitted when the pool stretch values are set correctly. Optional: Verify the pools associated with the stretch clusters, by using the ceph osd pool stretch show command. Example 5.5.
Adding OSD hosts with three availability zones You can add Ceph OSDs with three availability zones on a generalized stretch cluster. The procedure is similar to the addition of the OSD hosts on a cluster where a generalized stretch cluster is not enabled. For more information, see Adding OSDs in the Red Hat Ceph Storage Installing Guide. Prerequisites Before you begin, make sure that you have the following prerequisites in place: * A running Red Hat Ceph Storage cluster. * Three availability zones enabled on a cluster. For more information, see Enabling three availability zones on the pool . * Root-level access to the nodes. Procedure From the node that contains the admin keyring, install the storage cluster's public SSH key in the root user's authorized_keys file on the new host. Syntax Example Optional: Verify the status of the storage cluster and that each new host has been added by using the ceph orch host ls command. See that the new host has been added and that the Status of each host is blank in the output. List the available devices to deploy OSDs. Deploy in one of the following ways: Create an OSD from a specific device on a specific host. Syntax Example Deploy OSDs on any available and unused devices. Important This command creates collocated WAL and DB devices. If you want to create non-collocated devices, do not use this command. Syntax Move the OSD hosts under the CRUSH bucket. Syntax Example Note Ensure you add the same topology nodes on all sites. Issues might arise if hosts are added only on one site. Verification Verify that all hosts are moved to the assigned data centers, by using the ceph osd tree command. | [
"hostnamectl set-hostname SHORT_NAME",
"service_type: host hostname: HOST01 addr: IP_ADDRESS01 labels: ['alertmanager', 'osd', 'installer', '_admin', 'mon', 'prometheus', 'mgr', 'grafana'] location: root: default datacenter: DC1 --- service_type: host hostname: HOST02 addr: IP_ADDRESS02 labels: ['osd', 'mon', 'mgr', 'rgw'] location: root: default datacenter: DC1 --- service_type: host hostname: HOST03 addr: IP_ADDRESS03 labels: ['osd', 'mon', 'mds'] location: root: default datacenter: DC1 --- service_type: host hostname: HOST04 addr: IP_ADDRESS04 labels: ['osd', '_admin', 'mon', 'mgr'] location: root: default datacenter: DC2 --- service_type: host hostname: HOST05 addr: IP_ADDRESS05 labels: ['osd', 'mon', 'mgr', 'rgw'] location: root: default datacenter: DC2 --- service_type: host hostname: HOST06 addr: IP_ADDRESS06 labels: ['osd', 'mon', 'mds'] location: root: default datacenter: DC2 --- service_type: host hostname: HOST07 addr: IP_ADDRESS07 labels: ['osd', '_admin', 'mon', 'mgr'] location: root: default datacenter: DC3 --- service_type: host hostname: HOST08 addr: IP_ADDRESS08 labels: ['osd', 'mon', 'mgr', 'rgw'] location: root: default datacenter: DC3 --- service_type: host hostname: HOST09 addr: IP_ADDRESS09 labels: ['osd', 'mon', 'mds'] location: root: default datacenter: DC3 --- service_type: mon service_name: mon placement: label: mon spec: crush_locations: HOST01: - datacenter=DC1 HOST02: - datacenter=DC1 HOST03: - datacenter=DC1 HOST04: - datacenter=DC2 HOST05: - datacenter=DC2 HOST06: - datacenter=DC2 HOST07: - datacenter=DC3 HOST08: - datacenter=DC3 HOST09: - datacenter=DC3 --- service_type: mgr service_name: mgr placement: label: mgr ------ service_type: osd service_id: osds placement: label: osd spec: data_devices: all: true --------- service_type: rgw service_id: rgw.rgw.1 placement: label: rgw ------------",
"cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME --mon-ip MONITOR_IP_ADDRESS --ssh-private-key PRIVATE_KEY --ssh-public-key PUBLIC_KEY --registry-url REGISTRY_URL --registry-username USER_NAME --registry-password PASSWORD",
"cephadm bootstrap --apply-spec initial-config.yaml --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1",
"cephadm shell",
"cephadm shell",
"ceph config set global public_network \" SUBNET_1 , SUBNET_2 , ...\"",
"ceph config global mon public_network \"10.0.208.0/22,10.0.212.0/22,10.0.64.0/22,10.0.56.0/22\"",
"ceph config set global cluster_network \" SUBNET_1 , SUBNET_2 , ...\"",
"ceph config set global cluster_network \"10.0.208.0/22,10.0.212.0/22,10.0.64.0/22,10.0.56.0/22\"",
"ceph config dump | grep network",
"ceph config dump | grep network",
"ceph orch restart mon",
"systemctl restart ceph- FSID_OF_CLUSTER .target",
"systemctl restart ceph-1ca9f6a8-d036-11ec-8263-fa163ee967ad.target",
"ceph osd tree",
"ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.87836 root default -3 0.29279 datacenter DC1 -2 0.09760 host host01-installer 0 hdd 0.02440 osd.0 up 1.00000 1.00000 12 hdd 0.02440 osd.12 up 1.00000 1.00000 21 hdd 0.02440 osd.21 up 1.00000 1.00000 29 hdd 0.02440 osd.29 up 1.00000 1.00000 -4 0.09760 host host02 1 hdd 0.02440 osd.1 up 1.00000 1.00000 9 hdd 0.02440 osd.9 up 1.00000 1.00000 18 hdd 0.02440 osd.18 up 1.00000 1.00000 28 hdd 0.02440 osd.28 up 1.00000 1.00000 -5 0.09760 host host03 8 hdd 0.02440 osd.8 up 1.00000 1.00000 16 hdd 0.02440 osd.16 up 1.00000 1.00000 24 hdd 0.02440 osd.24 up 1.00000 1.00000 34 hdd 0.02440 osd.34 up 1.00000 1.00000 -7 0.29279 datacenter DC2 -6 0.09760 host host04 4 hdd 0.02440 osd.4 up 1.00000 1.00000 13 hdd 0.02440 osd.13 up 1.00000 1.00000 20 hdd 0.02440 osd.20 up 1.00000 1.00000 27 hdd 0.02440 osd.27 up 1.00000 1.00000 -8 0.09760 host host05 3 hdd 0.02440 osd.3 up 1.00000 1.00000 10 hdd 0.02440 osd.10 up 1.00000 1.00000 19 hdd 0.02440 osd.19 up 1.00000 1.00000 30 hdd 0.02440 osd.30 up 1.00000 1.00000 -9 0.09760 host host06 7 hdd 0.02440 osd.7 up 1.00000 1.00000 17 hdd 0.02440 osd.17 up 1.00000 1.00000 26 hdd 0.02440 osd.26 up 1.00000 1.00000 35 hdd 0.02440 osd.35 up 1.00000 1.00000 -11 0.29279 datacenter DC3 -10 0.09760 host host07 5 hdd 0.02440 osd.5 up 1.00000 1.00000 14 hdd 0.02440 osd.14 up 1.00000 1.00000 23 hdd 0.02440 osd.23 up 1.00000 1.00000 32 hdd 0.02440 osd.32 up 1.00000 1.00000 -12 0.09760 host host08 2 hdd 0.02440 osd.2 up 1.00000 1.00000 11 hdd 0.02440 osd.11 up 1.00000 1.00000 22 hdd 0.02440 osd.22 up 1.00000 1.00000 31 hdd 0.02440 osd.31 up 1.00000 1.00000 -13 0.09760 host host09 6 hdd 0.02440 osd.6 up 1.00000 1.00000 15 hdd 0.02440 osd.15 up 1.00000 1.00000 25 hdd 0.02440 osd.25 up 1.00000 1.00000 33 hdd 0.02440 osd.33 up 1.00000 1.00000",
"ceph mon dump",
"ceph orch ls NAME PORTS RUNNING REFRESHED AGE PLACEMENT alertmanager ?:9093,9094 1/1 8m ago 6d count:1 ceph-exporter 9/9 8m ago 6d * crash 9/9 8m ago 6d * grafana ?:3000 1/1 8m ago 6d count:1 mds.cephfs 3/3 8m ago 6d label:mds mgr 6/6 8m ago 6d label:mgr mon 9/9 8m ago 5d label:mon node-exporter ?:9100 9/9 8m ago 6d * osd.all-available-devices 36 8m ago 6d label:osd prometheus ?:9095 1/1 8m ago 6d count:1 rgw.rgw.1 ?:80 3/3 8m ago 6d label:rgw",
"ceph orch ls mon --export service_type: mon service_name: mon placement: label: mon spec: crush_locations: host01-installer: - datacenter=DC1 host02: - datacenter=DC1 host03: - datacenter=DC1 host04: - datacenter=DC2 host05: - datacenter=DC2 host06: - datacenter=DC2 host07: - datacenter=DC3 host08: - datacenter=DC3 host09: - datacenter=DC3",
"ceph osd getcrushmap > COMPILED_CRUSHMAP_FILENAME crushtool -d COMPILED_CRUSHMAP_FILENAME -o DECOMPILED_CRUSHMAP_FILENAME",
"ceph osd getcrushmap > crush.map.bin crushtool -d crush.map.bin -o crush.map.txt",
"rule 3az_rule { id 1 type replicated step take default step choose firstn 3 type datacenter step chooseleaf firstn 2 type host step emit }",
"crushtool -c DECOMPILED_CRUSHMAP_FILENAME -o COMPILED_CRUSHMAP_FILENAME ceph osd setcrushmap -i COMPILED_CRUSHMAP_FILENAME",
"crushtool -c crush.map.txt -o crush2.map.bin ceph osd setcrushmap -i crush2.map.bin",
"ceph osd crush rule ls",
"ceph osd crush rule ls replicated_rule ec86_pool 3az_rule",
"ceph osd crush rule dump CRUSH_RULE",
"ceph osd crush rule dump 3az_rule { \"rule_id\": 1, \"rule_name\": \"3az_rule\", \"type\": 1, \"steps\": [ { \"op\": \"take\", \"item\": -1, \"item_name\": \"default\" }, { \"op\": \"choose_firstn\", \"num\": 3, \"type\": \"datacenter\" }, { \"op\": \"chooseleaf_firstn\", \"num\": 2, \"type\": \"host\" }, { \"op\": \"emit\" } ] }",
"ceph mon set election_strategy connectivity",
"ceph mon dump",
"ceph mon dump epoch 19 fsid b556497a-693a-11ef-b9d1-fa163e841fd7 last_changed 2024-09-03T12:47:08.419495+0000 created 2024-09-02T14:50:51.490781+0000 min_mon_release 19 (squid) election_strategy: 3 0: [v2:10.0.67.43:3300/0,v1:10.0.67.43:6789/0] mon.host01-installer; crush_location {datacenter=DC1} 1: [v2:10.0.67.20:3300/0,v1:10.0.67.20:6789/0] mon.host02; crush_location {datacenter=DC1} 2: [v2:10.0.64.242:3300/0,v1:10.0.64.242:6789/0] mon.host03; crush_location {datacenter=DC1} 3: [v2:10.0.66.17:3300/0,v1:10.0.66.17:6789/0] mon.host06; crush_location {datacenter=DC2} 4: [v2:10.0.66.228:3300/0,v1:10.0.66.228:6789/0] mon.host09; crush_location {datacenter=DC3} 5: [v2:10.0.65.125:3300/0,v1:10.0.65.125:6789/0] mon.host05; crush_location {datacenter=DC2} 6: [v2:10.0.66.252:3300/0,v1:10.0.66.252:6789/0] mon.host07; crush_location {datacenter=DC3} 7: [v2:10.0.64.145:3300/0,v1:10.0.64.145:6789/0] mon.host08; crush_location {datacenter=DC3} 8: [v2:10.0.64.125:3300/0,v1:10.0.64.125:6789/0] mon.host04; crush_location {datacenter=DC2} dumped monmap epoch 19",
"ceph osd pool stretch set _POOL_NAME_ _PEERING_CRUSH_BUCKET_COUNT_ _PEERING_CRUSH_BUCKET_TARGET_ _PEERING_CRUSH_BUCKET_BARRIER_ _CRUSH_RULE_ _SIZE_ _MIN_SIZE_ [--yes-i-really-mean-it]",
"ceph osd pool stretch set pool01 2 3 datacenter 3az_rule 6 3",
"ceph osd pool stretch show pool01 pool: pool01 pool_id: 1 is_stretch_pool: 1 peering_crush_bucket_count: 2 peering_crush_bucket_target: 3 peering_crush_bucket_barrier: 8 crush_rule: 3az_rule size: 6 min_size: 3",
"ssh-copy-id -f -i /etc/ceph/ceph.pub user@ NEWHOST",
"ssh-copy-id -f -i /etc/ceph/ceph.pub root@host11 ssh-copy-id -f -i /etc/ceph/ceph.pub root@host12",
"ceph orch daemon add osd _HOST_:_DEVICE_PATH_",
"ceph orch daemon add osd host11:/dev/sdb",
"ceph orch apply osd --all-available-devices",
"ceph osd crush move HOST datacenter= DATACENTER",
"ceph osd crush move host10 datacenter=DC1 ceph osd crush move host11 datacenter=DC2 ceph osd crush move host12 datacenter=DC3"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/administration_guide/generalized_stretch_cluster_configuration_for_three_availability_zones |
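The verification steps from the stretch cluster procedure above can be gathered into a single pass. The following shell sketch is not part of the official procedure; it assumes the example names used above (the pool pool01, the CRUSH rule 3az_rule, and datacenters DC1 to DC3) and only composes commands that already appear in the procedure.

# Run from a host that can enter the cephadm shell.
# The hostname must be the short name, not the FQDN.
hostname

# The election strategy should be 3 (connectivity) and every mon should carry a crush_location.
cephadm shell -- ceph mon dump | grep -E 'election_strategy|crush_location'

# Three datacenter buckets are expected under the default root.
cephadm shell -- ceph osd tree | grep -c datacenter

# Show the stretch values applied to the pool.
cephadm shell -- ceph osd pool stretch show pool01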
8.84. hmaccalc | 8.84. hmaccalc 8.84.1. RHBA-2014:1584 - hmaccalc bug fix update Updated hmaccalc packages that fix one bug are now available for Red Hat Enterprise Linux 6. The hmaccalc packages contain tools to calculate HMAC (Hash-based Message Authentication Code) values for files. The names and interfaces were designed to mimic those of the sha1sum, sha256sum, sha384sum and sha512sum tools provided by the coreutils package. Bug Fix BZ# 1016706 The .hmac files are used to check the kernel image at the boot time; if the check fails, the boot process is expected to be halted. Previously, the hmaccalc utility did not flag empty .hmac files as an error, allowing the system to boot even if the boot was supposed to fail. With this update, a patch has been provided to address this bug. As a result, the system is no longer allowed to boot in the described scenario. Users of hmaccalc are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/hmaccalc |
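As a rough illustration of the boot-time check described above, the hmaccalc tools can be run by hand against a kernel image. This is only a sketch: the kernel image path and the dot-prefixed .hmac file name are assumptions based on a typical Red Hat Enterprise Linux 6 layout, and the -c option is assumed to mirror the behavior of sha512sum -c, as the package description suggests.

# Compute the HMAC of the running kernel image.
sha512hmac /boot/vmlinuz-$(uname -r)

# Check it against the shipped checksum file; with the fix above, an empty
# .hmac file is reported as an error instead of passing silently.
sha512hmac -c /boot/.vmlinuz-$(uname -r).hmac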
Chapter 3. Planning the replica topology | Chapter 3. Planning the replica topology Review guidance on determining the appropriate replica topology for your use case. 3.1. Multiple replica servers as a solution for high performance and disaster recovery You can achieve continuous functionality and high-availability of Identity Management (IdM) services by creating replicas of the existing IdM servers. When you create an appropriate number of IdM replicas, you can use load balancing to distribute client requests across multiple servers to optimize performance of IdM services. With IdM, you can place additional servers in geographically dispersed data centers to reflect your enterprise organizational structure. In this way, the path between IdM clients and the nearest accessible server is shortened. In addition, having multiple servers allows spreading the load and scaling for more clients. Replicating IdM servers is also a common backup mechanism to mitigate or prevent server loss. For example, if one server fails, the remaining servers continue providing services to the domain. You can also recover the lost server by creating a new replica based on one of the remaining servers. 3.2. Introduction to IdM servers and clients The Identity Management (IdM) domain includes the following types of systems: IdM clients IdM clients are Red Hat Enterprise Linux systems enrolled with the servers and configured to use the IdM services on these servers. Clients interact with the IdM servers to access services provided by them. For example, clients use the Kerberos protocol to perform authentication and acquire tickets for enterprise single sign-on (SSO), use LDAP to get identity and policy information, and use DNS to detect where the servers and services are located and how to connect to them. IdM servers IdM servers are Red Hat Enterprise Linux systems that respond to identity, authentication, and authorization requests from IdM clients within an IdM domain. IdM servers are the central repositories for identity and policy information. They can also host any of the optional services used by domain members: Certificate authority (CA): This service is present in most IdM deployments. Key Recovery Authority (KRA) DNS Active Directory (AD) trust controller Active Directory (AD) trust agent IdM servers are also embedded IdM clients. As clients enrolled with themselves, the servers provide the same functionality as other clients. To provide services for large numbers of clients, as well as for redundancy and availability, IdM allows deployment on multiple IdM servers in a single domain. It is possible to deploy up to 60 servers. This is the maximum number of IdM servers, also called replicas, that is currently supported in the IdM domain. When creating a replica, IdM clones the configuration of the existing server. A replica shares with the initial server its core configuration, including internal information about users, systems, certificates, and configured policies. NOTE A replica and the server it was created from are functionally identical, except for the CA renewal and CRL publisher roles. Therefore, the term server and replica are used interchangeably in RHEL IdM documentation, depending on the context. However, different IdM servers can provide different services for the client, if so configured. Core components like Kerberos and LDAP are available on every server. Other services like CA, DNS, Trust Controller or Vault are optional. 
This means that different IdM servers can have distinct roles in the deployment. If your IdM topology contains an integrated CA, one server has the role of the Certificate revocation list (CRL) publisher server and one server has the role of the CA renewal server . By default, the first CA server installed fulfills these two roles, but you can assign these roles to separate servers. Warning The CA renewal server is critical for your IdM deployment because it is the only system in the domain responsible for tracking CA subsystem certificates and keys . For details about how to recover from a disaster affecting your IdM deployment, see Performing disaster recovery with Identity Management . NOTE All IdM servers (for clients, see Supported versions of RHEL for installing IdM clients ) must be running on the same major and minor version of RHEL. Do not spend more than several days applying z-stream updates or upgrading the IdM servers in your topology. For details about how to apply Z-stream fixes and upgrade your servers, see Updating IdM packages . For details about how to migrate to IdM on RHEL 9, see Migrating your IdM environment from RHEL 8 servers to RHEL 9 servers . 3.3. Replication agreements between IdM replicas When an administrator creates a replica based on an existing server, Identity Management (IdM) creates a replication agreement between the initial server and the replica. The replication agreement ensures that the data and configuration is continuously replicated between the two servers. IdM uses multiple read/write replica replication . In this configuration, all replicas joined in a replication agreement receive and provide updates, and are therefore considered suppliers and consumers. Replication agreements are always bilateral. Figure 3.1. Server and replica agreements IdM uses two types of replication agreements: Domain replication agreements replicate the identity information. Certificate replication agreements replicate the certificate information. Both replication channels are independent. Two servers can have one or both types of replication agreements configured between them. For example, when server A and server B have only domain replication agreement configured, only identity information is replicated between them, not the certificate information. 3.4. Guidelines for determining the appropriate number of IdM replicas in a topology Plan IdM topology to match your organization's requirements and ensure optimal performance and service availability. Set up at least two replicas in each data center Deploy at least two replicas in each data center to ensure that if one server fails, the replica can take over and handle requests. Set up a sufficient number of servers to serve your clients One Identity Management (IdM) server can provide services to 2000 - 3000 clients. This assumes the clients query the servers multiple times a day, but not, for example, every minute. If you expect frequent queries, plan for more servers. Set up a sufficient number of Certificate Authority (CA) replicas Only replicas with the CA role installed can replicate certificate data. If you use the IdM CA, ensure your environment has at least two CA replicas with certificate replication agreements between them. Set up a maximum of 60 replicas in a single IdM domain Red Hat supports environments with up to 60 replicas. 3.5. 
Guidelines for connecting IdM replicas in a topology Connect each replica to at least two other replicas This ensures that information is replicated not just between the initial replica and the first server you installed, but between other replicas as well. Connect a replica to a maximum of four other replicas (not a hard requirement) A large number of replication agreements per server does not add significant benefits. A receiving replica can only be updated by one other replica at a time and meanwhile, the other replication agreements are idle. More than four replication agreements per replica typically means a waste of resources. Note This recommendation applies to both certificate replication and domain replication agreements. There are two exceptions to the limit of four replication agreements per replica: You want failover paths if certain replicas are not online or responding. In larger deployments, you want additional direct links between specific nodes. Configuring a high number of replication agreements can have a negative impact on overall performance: when multiple replication agreements in the topology are sending updates, certain replicas can experience a high contention on the changelog database file between incoming updates and the outgoing updates. If you decide to use more replication agreements per replica, ensure that you do not experience replication issues and latency. However, note that large distances and high numbers of intermediate nodes can also cause latency problems. Connect the replicas in a data center with each other This ensures domain replication within the data center. Connect each data center to at least two other data centers This ensures domain replication between data centers. Connect data centers using at least a pair of replication agreements If data centers A and B have a replication agreement from A1 to B1, having a replication agreement from A2 to B2 ensures that if one of the servers is down, the replication can continue between the two data centers. 3.6. Replica topology examples You can create a reliable replica topology by using one of the following examples. Figure 3.2. Replica topology with four data centers, each with four servers that are connected with replication agreements Figure 3.3. Replica topology with three data centers, each with a different number of servers that are all interconnected through replication agreements 3.7. The hidden replica mode A hidden replica is an IdM server that has all services running and available. However, a hidden replica has no SRV records in DNS, and LDAP server roles are not enabled. Therefore, clients cannot use service discovery to detect hidden replicas. By default, when you set up a replica, the installation program automatically creates service (SRV) resource records for it in DNS. These records enable clients to auto-discover the replica and its services. When installing a replica as hidden, add the --hidden-replica parameter to the ipa-replica-install command. Hidden replicas are primarily designed for dedicated services that might disrupt clients. For example, a full backup of IdM requires shutting down all IdM services on the server. As no clients use a hidden replica, administrators can temporarily shut down the services on this host without affecting any clients. Other use cases include high-load operations on the IdM API or the LDAP server, such as a mass import or extensive queries. 
Before backing up a hidden replica, you must install all required server roles used in a cluster, especially the Certificate Authority role if the integrated CA is used. Therefore, restoring a backup from a hidden replica on a new host always results in a regular replica. Additional resources Installing an Identity Management replica Backing up and restoring IdM Demoting or promoting hidden replicas . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/planning_identity_management/planning-the-replica-topology_planning-identity-management |
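A minimal sketch of setting up and later promoting a hidden replica, as described in the hidden replica section above. The host name is a placeholder, the --setup-ca option is an assumption for deployments that use the integrated CA, and the ipa server-state command for promotion is shown only as an illustration, not as part of this planning chapter.

# On the host that should become a hidden replica (already enrolled as an IdM client):
ipa-replica-install --hidden-replica --setup-ca

# Later, to make the replica discoverable again (promote it to a regular replica):
ipa server-state replica.idm.example.com --state=enabled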
6.2. Xen | 6.2. Xen Red Hat Enterprise Linux 7 Xen HVM Guest Users can use Red Hat Enterprise Linux 7 as a guest on the Xen environment. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-virtualization-xen |
Chapter 1. Overview of Hosts in Satellite | Chapter 1. Overview of Hosts in Satellite A host is any Linux client that Red Hat Satellite manages. Hosts can be physical or virtual. Virtual hosts can be deployed on any platform supported by Red Hat Satellite, such as Amazon EC2, Google Compute Engine, KVM, libvirt, Microsoft Azure, OpenStack, Red Hat Virtualization, Rackspace Cloud Services, or VMware vSphere. Red Hat Satellite enables host management at scale, including monitoring, provisioning, remote execution, configuration management, software management, and subscription management. You can manage your hosts from the Satellite web UI or from the command line. In the Satellite web UI, you can browse all hosts recognized by Satellite Server, grouped by type: All Hosts - a list of all hosts recognized by Satellite Server. Discovered Hosts - a list of bare-metal hosts detected on the provisioning network by the Discovery plug-in. Content Hosts - a list of hosts that manage tasks related to content and subscriptions. Host Collections - a list of user-defined collections of hosts used for bulk actions such as errata installation. To search for a host, type in the Search field, and use an asterisk (*) to perform a partial string search. For example, if searching for a content host named dev-node.example.com , click the Content Hosts page and type dev-node* in the Search field. Alternatively, *node* will also find the content host dev-node.example.com. Warning Satellite Server is listed as a host itself even if it is not self-registered. Do not delete Satellite Server from the list of hosts. | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_hosts/overview_of_hosts_managing-hosts |
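The same partial-string searches shown for the Satellite web UI can usually be run from the command line as well. The following sketch assumes the hammer CLI is installed and configured; hammer is not covered by the section above, so treat this only as an illustration.

# List hosts whose name contains "dev-node" (wildcard-style match).
hammer host list --search "name ~ dev-node"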
32.11. Starting a Kickstart Installation | 32.11. Starting a Kickstart Installation Important Firstboot does not run after a system is installed from a Kickstart file unless a desktop and the X Window System were included in the installation and graphical login was enabled. Either specify a user with the user option in the Kickstart file before installing additional systems from it (refer to Section 32.4, "Kickstart Options" for details) or log into the installed system with a virtual console as root and add users with the adduser command. To begin a kickstart installation, you must boot the system from boot media you have made or the Red Hat Enterprise Linux DVD, and enter a special boot command at the boot prompt. The installation program looks for a kickstart file if the ks command line argument is passed to the kernel. DVD and local storage The linux ks= command also works if the ks.cfg file is located on a vfat or ext2 file system on local storage and you boot from the Red Hat Enterprise Linux DVD. With Driver Disk If you need to use a driver disk with kickstart, specify the dd option as well. For example, if installation requires a kickstart file on a local hard drive and also requires a driver disk, boot the system with: Boot CD-ROM If the kickstart file is on a boot CD-ROM as described in Section 32.9.1, "Creating Kickstart Boot Media" , insert the CD-ROM into the system, boot the system, and enter the following command at the boot: prompt (where ks.cfg is the name of the kickstart file): Other options to start a kickstart installation are as follows: askmethod Prompt the user to select an installation source, even if a Red Hat Enterprise Linux installation DVD is detected on the system. asknetwork Prompt for network configuration in the first stage of installation regardless of installation method. autostep Make kickstart non-interactive. Used for debugging and to generate screenshots. This option should not be used when deploying a system because it may disrupt package installation. debug Start up pdb immediately. dd Use a driver disk. dhcpclass= <class> Sends a custom DHCP vendor class identifier. ISC's dhcpcd can inspect this value using "option vendor-class-identifier". dns= <dns> Comma separated list of nameservers to use for a network installation. driverdisk Same as 'dd'. expert Turns on special features: allows partitioning of removable media prompts for a driver disk gateway= <gw> Gateway to use for a network installation. graphical Force graphical install. Required to have ftp/http use GUI. isa Prompt user for ISA devices configuration. ip= <ip> IP to use for a network installation, use 'dhcp' for DHCP. ipv6=auto , ipv6=dhcp IPv6 configuration for the device. Use auto for automatic configuration (SLAAC, SLAAC with DHCPv6), or dhcp for DHCPv6 only configuration (no router advertisements). keymap= <keymap> Keyboard layout to use. 
Valid layouts include: be-latin1 - Belgian bg_bds-utf8 - Bulgarian bg_pho-utf8 - Bulgarian (Phonetic) br-abnt2 - Brazilian (ABNT2) cf - French Canadian croat - Croatian cz-us-qwertz - Czech cz-lat2 - Czech (qwerty) de - German de-latin1 - German (latin1) de-latin1-nodeadkeys - German (latin1 without dead keys) dvorak - Dvorak dk - Danish dk-latin1 - Danish (latin1) es - Spanish et - Estonian fi - Finnish fi-latin1 - Finnish (latin1) fr - French fr-latin9 - French (latin9) fr-latin1 - French (latin1) fr-pc - French (pc) fr_CH - Swiss French fr_CH-latin1 - Swiss French (latin1) gr - Greek hu - Hungarian hu101 - Hungarian (101 key) is-latin1 - Icelandic it - Italian it-ibm - Italian (IBM) it2 - Italian (it2) jp106 - Japanese ko - Korean la-latin1 - Latin American mk-utf - Macedonian nl - Dutch no - Norwegian pl2 - Polish pt-latin1 - Portuguese ro - Romanian ru - Russian sr-cy - Serbian sr-latin - Serbian (latin) sv-latin1 - Swedish sg - Swiss German sg-latin1 - Swiss German (latin1) sk-qwerty - Slovak (qwerty) slovene - Slovenian trq - Turkish uk - United Kingdom ua-utf - Ukrainian us-acentos - U.S. International us - U.S. English The file /usr/lib/python2.6/site-packages/system_config_keyboard/keyboard_models.py on 32-bit systems or /usr/lib64/python2.6/site-packages/system_config_keyboard/keyboard_models.py on 64-bit systems also contains this list and is part of the system-config-keyboard package. ks=nfs: <server> :/ <path> The installation program looks for the kickstart file on the NFS server <server> , as file <path> . The installation program uses DHCP to configure the Ethernet card. For example, if your NFS server is server.example.com and the kickstart file is in the NFS share /mydir/ks.cfg , the correct boot command would be ks=nfs:server.example.com:/mydir/ks.cfg . ks={http|https}:// <server> / <path> The installation program looks for the kickstart file on the HTTP or HTTPS server <server> , as file <path> . The installation program uses DHCP to configure the Ethernet card. For example, if your HTTP server is server.example.com and the kickstart file is in the HTTP directory /mydir/ks.cfg , the correct boot command would be ks=http://server.example.com/mydir/ks.cfg . ks=hd: <device> :/ <file> The installation program mounts the file system on <device> (which must be vfat or ext2), and looks for the kickstart configuration file as <file> in that file system (for example, ks=hd:sda3:/mydir/ks.cfg ). ks=bd: <biosdev> :/ <path> The installation program mounts the file system on the specified partition on the specified BIOS device <biosdev> , and looks for the kickstart configuration file specified in <path> (for example, ks=bd:80p3:/mydir/ks.cfg ). Note this does not work for BIOS RAID sets. ks=file:/ <file> The installation program tries to read the file <file> from the file system; no mounts are done. This is normally used if the kickstart file is already on the initrd image. ks=cdrom:/ <path> The installation program looks for the kickstart file on CD-ROM, as file <path> . ks If ks is used alone, the installation program configures the Ethernet card to use DHCP. The kickstart file is read from NFS server specified by DHCP option server-name. The name of the kickstart file is one of the following: If DHCP is specified and the boot file begins with a / , the boot file provided by DHCP is looked for on the NFS server. If DHCP is specified and the boot file begins with something other than a / , the boot file provided by DHCP is looked for in the /kickstart directory on the NFS server. 
If DHCP did not specify a boot file, then the installation program tries to read the file /kickstart/1.2.3.4-kickstart , where 1.2.3.4 is the numeric IP address of the machine being installed. ksdevice= <device> The installation program uses this network device to connect to the network. You can specify the device in one of five ways: the device name of the interface, for example, eth0 the MAC address of the interface, for example, 00:12:34:56:78:9a the keyword link , which specifies the first interface with its link in the up state the keyword bootif , which uses the MAC address that pxelinux set in the BOOTIF variable. Set IPAPPEND 2 in your pxelinux.cfg file to have pxelinux set the BOOTIF variable. the keyword ibft , which uses the MAC address of the interface specified by iBFT For example, consider a system connected to an NFS server through the eth1 device. To perform a kickstart installation on this system using a kickstart file from the NFS server, you would use the command ks=nfs: <server> :/ <path> ksdevice=eth1 at the boot: prompt. kssendmac Adds HTTP headers to ks=http:// request that can be helpful for provisioning systems. Includes MAC address of all nics in CGI environment variables of the form: "X-RHN-Provisioning-MAC-0: eth0 01:23:45:67:89:ab". lang= <lang> Language to use for the installation. This should be a language which is valid to be used with the 'lang' kickstart command. loglevel= <level> Set the minimum level required for messages to be logged. Values for <level> are debug, info, warning, error, and critical. The default value is info. mediacheck Activates loader code to give user option of testing integrity of install source (if an ISO-based method). netmask= <nm> Netmask to use for a network installation. nofallback If GUI fails, exit. nofb Do not load the VGA16 framebuffer required for doing text-mode installation in some languages. nofirewire Do not load support for firewire devices. noipv4 Disable IPv4 networking on the device specified by the ksdevice= boot option. noipv6 Disable IPv6 networking on all network devices on the installed system, and during installation. Important During installations from a PXE server, IPv6 networking might become active before anaconda processes the Kickstart file. If so, this option will have no effect during installation. Note To disable IPv6 on the installed system, the --noipv6 kickstart option must be used on each network device, in addition to the noipv6 boot option. See the Knowledgebase article at https://access.redhat.com/solutions/1565723 for more information about disabling IPv6 system-wide. nomount Don't automatically mount any installed Linux partitions in rescue mode. nonet Do not auto-probe network devices. noparport Do not attempt to load support for parallel ports. nopass Do not pass information about the keyboard and mouse from anaconda stage 1 (the loader) to stage 2 (the installer). nopcmcia Ignore PCMCIA controllers in the system. noprobe Do not automatically probe for hardware; prompt the user to allow anaconda to probe for particular categories of hardware. noshell Do not put a shell on tty2 during install. repo=cdrom Do a DVD based installation. repo=ftp:// <path> Use <path> for an FTP installation. repo=hd: <dev> : <path> Use <path> on <dev> for a hard drive installation. repo=http:// <path> Use <path> for an HTTP installation. repo=https:// <path> Use <path> for an HTTPS installation. repo=nfs: <path> Use <path> for an NFS installation. rescue Run rescue environment. 
resolution= <mode> Run installer in mode specified, '1024x768' for example. serial Turns on serial console support. skipddc Do not probe the Data Display Channel (DDC) of the monitor. This option provides a workaround if the DDC probe causes the system to stop responding. syslog= <host> [: <port> ] Once installation is up and running, send log messages to the syslog process on <host> , and optionally, on port <port> . Requires the remote syslog process to accept connections (the -r option). text Force text mode install. Important If you select text mode for a kickstart installation, make sure that you specify choices for the partitioning, bootloader, and package selection options. These steps are automated in the text mode, and anaconda cannot prompt you for missing information. If you do not provide choices for these options, anaconda will stop the installation process. updates Prompt for storage device containing updates (bug fixes). updates=ftp:// <path> Image containing updates over FTP. updates=http:// <path> Image containing updates over HTTP. updates=https:// <path> Image containing updates over HTTPS. upgradeany Offer to upgrade any Linux installation detected on the system, regardless of the contents or the existence of the /etc/redhat-release file. vnc Enable vnc-based installation. You will need to connect to the machine using a vnc client application. vncconnect= <host> [: <port> ] Connect to the vnc client named <host> , and optionally use port <port> . Requires 'vnc' option to be specified as well. vncpassword= <password> Enable a password for the vnc connection. This will prevent someone from inadvertently connecting to the vnc-based installation. Requires 'vnc' option to be specified as well. | [
"linux ks=hd: partition :/ path /ks.cfg dd",
"linux ks=cdrom:/ks.cfg"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-kickstart2-startinginstall |
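The boot options listed above can be combined on a single boot: prompt line. The following composite examples are illustrative only; the server names, paths, and device choices are placeholders.

# Kickstart file over HTTP, first interface with link up, send MAC headers, DHCP addressing.
linux ks=http://server.example.com/mydir/ks.cfg ksdevice=link kssendmac ip=dhcp

# Kickstart file on an NFS share, text-mode install with a fixed keyboard layout.
linux ks=nfs:server.example.com:/mydir/ks.cfg ksdevice=eth0 text keymap=uk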
28.6. Removing a Replica | 28.6. Removing a Replica Deleting or demoting a replica removes the IdM replica from the server/replica topology so that it no longer processes IdM requests and it also removes the host machine itself from the IdM domain. On an IdM server, obtain a Kerberos ticket before running IdM tools. List all of the configured replication agreements for the IdM domain. Removing the replica from the topology involves deleting all the agreements between the replica and the other servers in the IdM domain and all of the data about the replica in the domain configuration. If the replica was configured with its own CA , then also use the ipa-csreplica-manage command to remove all of the replication agreements between the certificate databases for the replica. This is required if the replica itself was configured with a Dogtag Certificate System CA. It is not required if only the master server or other replicas were configured with a CA. On the replica, uninstall the replica packages. | [
"kinit admin",
"ipa-replica-manage list Directory Manager password: ipaserver.example.com: master ipaserver2.example.com: master replica.example.com: master replica2.example.com: master",
"ipa-replica-manage del replica.example.com",
"ipa-csreplica-manage del replica.example.com",
"ipa-server-install --uninstall -U"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/removing-replica |
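For convenience, the removal steps above can be read as one sequence. This is only the documented commands gathered together; replica.example.com is a placeholder, and the ipa-csreplica-manage step applies only when the replica runs its own CA.

kinit admin
ipa-replica-manage list
ipa-replica-manage del replica.example.com
ipa-csreplica-manage del replica.example.com

# Finally, on the replica host itself:
ipa-server-install --uninstall -U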
Chapter 11. Controlling LVM allocation | Chapter 11. Controlling LVM allocation By default, a volume group uses the normal allocation policy. This allocates physical extents according to common-sense rules such as not placing parallel stripes on the same physical volume. You can specify a different allocation policy ( contiguous , anywhere , or cling ) by using the --alloc argument of the vgcreate command. In general, allocation policies other than normal are required only in special cases where you need to specify unusual or nonstandard extent allocation. 11.1. Allocating extents from specified devices You can restrict the allocation from specific devices by using the device arguments at the end of the command line with the lvcreate and the lvconvert commands. You can specify the actual extent ranges for each device for more control. The command only allocates extents for the new logical volume (LV) by using the specified physical volume (PV) as arguments. It takes available extents from each PV until they run out and then takes extents from the next PV listed. If there is not enough space on all the listed PVs for the requested LV size, then the command fails. Note that the command only allocates from the named PVs. Raid LVs use sequential PVs for separate raid images or separate stripes. If the PVs are not large enough for an entire raid image, then the resulting device use is not entirely predictable. Procedure Create a volume group (VG): Where: <vg_name> is the name of the VG. <PV> are the PVs. You can allocate PV to create different volume types, such as linear or raid: Allocate extents to create a linear volume: Where: <lv_name> is the name of the LV. <lv_size> is the size of the LV. Default unit is megabytes. <vg_name> is the name of the VG. [ <PV ... > ] are the PVs. You can specify one of the PVs, all of them, or none on the command line: If you specify one PV, extents for that LV will be allocated from it. Note If the PV does not have sufficient free extents for the entire LV, then the lvcreate fails. If you specify two PVs, extents for that LV will be allocated from one of them, or a combination of both. If you do not specify any PV, extents will be allocated from one of the PVs in the VG, or any combination of all PVs in the VG. Note In these cases, LVM might not use all of the named or available PVs. If the first PV has sufficient free extents for the entire LV, then the other PV will probably not be used. However, if the first PV does not have sufficient free extents for the entire LV, then the LV might be allocated partly from the first PV and partly from the second PV. Example 11.1. Allocating extents from one PV In this example, lv1 extents will be allocated from sda . Example 11.2. Allocating extents from two PVs In this example, lv2 extents will be allocated from either sda , or sdb , or a combination of both. Example 11.3. Allocating extents without specifying PV In this example, lv3 extents will be allocated from one of the PVs in the VG, or any combination of all PVs in the VG. or Allocate extents to create a raid volume: Where: <segment_type> is the specified segment type (for example raid5 , mirror , snapshot ). <mirror_images> creates a raid1 or a mirrored LV with the specified number of images. For example, -m 1 would result in a raid1 LV with two images. <lv_name> is the name of the LV. <lv_size> is the size of the LV. Default unit is megabytes. <vg_name> is the name of the VG. <[PV ... ]> are the PVs.
The first raid image will be allocated from the first PV, the second raid image from the second PV, and so on. Example 11.4. Allocating raid images from two PVs In this example, lv4 first raid image will be allocated from sda and second image will be allocated from sdb . Example 11.5. Allocating raid images from three PVs In this example, lv5 first raid image will be allocated from sda , second image will be allocated from sdb , and third image will be allocated from sdc . Additional resources lvcreate(8) , lvconvert(8) , and lvmraid(7) man pages on your system 11.2. LVM allocation policies When an LVM operation must allocate physical extents for one or more logical volumes (LVs), the allocation proceeds as follows: The complete set of unallocated physical extents in the volume group is generated for consideration. If you supply any ranges of physical extents at the end of the command line, only unallocated physical extents within those ranges on the specified physical volumes (PVs) are considered. Each allocation policy is tried in turn, starting with the strictest policy ( contiguous ) and ending with the allocation policy specified using the --alloc option or set as the default for the particular LV or volume group (VG). For each policy, working from the lowest-numbered logical extent of the empty LV space that needs to be filled, as much space as possible is allocated, according to the restrictions imposed by the allocation policy. If more space is needed, LVM moves on to the policy. The allocation policy restrictions are as follows: The contiguous policy requires that the physical location of any logical extent is adjacent to the physical location of the immediately preceding logical extent, with the exception of the first logical extent of a LV. When a LV is striped or mirrored, the contiguous allocation restriction is applied independently to each stripe or raid image that needs space. The cling allocation policy requires that the PV used for any logical extent be added to an existing LV that is already in use by at least one logical extent earlier in that LV. An allocation policy of normal will not choose a physical extent that shares the same PV as a logical extent already allocated to a parallel LV (that is, a different stripe or raid image) at the same offset within that parallel LV. If there are sufficient free extents to satisfy an allocation request but a normal allocation policy would not use them, the anywhere allocation policy will, even if that reduces performance by placing two stripes on the same PV. You can change the allocation policy by using the vgchange command. Note Future updates can bring code changes in layout behavior according to the defined allocation policies. For example, if you supply on the command line two empty physical volumes that have an identical number of free physical extents available for allocation, LVM currently considers using each of them in the order they are listed; there is no guarantee that future releases will maintain that property. If you need a specific layout for a particular LV, build it up through a sequence of lvcreate and lvconvert steps such that the allocation policies applied to each step leave LVM no discretion over the layout. 11.3. Preventing allocation on a physical volume You can prevent allocation of physical extents on the free space of one or more physical volumes with the pvchange command. This might be necessary if there are disk errors, or if you will be removing the physical volume. 
Procedure Use the following command to disallow the allocation of physical extents on device_name : You can also allow allocation where it had previously been disallowed by using the -xy arguments of the pvchange command. Additional resources pvchange(8) man page on your system | [
"vgcreate <vg_name> <PV>",
"lvcreate -n <lv_name> -L <lv_size> <vg_name> [ <PV> ... ]",
"lvcreate -n lv1 -L1G vg /dev/sda",
"lvcreate -n lv2 L1G vg /dev/sda /dev/sdb",
"lvcreate -n lv3 -L1G vg",
"lvcreate --type <segment_type> -m <mirror_images> -n <lv_name> -L <lv_size> <vg_name> [ <PV> ... ]",
"lvcreate --type raid1 -m 1 -n lv4 -L1G vg /dev/sda /dev/sdb",
"lvcreate --type raid1 -m 2 -n lv5 -L1G vg /dev/sda /dev/sdb /dev/sdc",
"pvchange -x n /dev/sdk1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/assembly_controlling-lvm-allocation-configuring-and-managing-logical-volumes |
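Two details mentioned above but not shown in the examples are explicit extent ranges and changing a volume group's allocation policy after creation. The following sketch uses placeholder device and volume group names; the PV:first-last extent-range syntax and the vgchange --alloc option are standard LVM usage, but adapt them to your own layout.

# Allocate lv1 only from the first 1000 extents of /dev/sda, then from /dev/sdb if needed.
lvcreate -n lv1 -L1G vg /dev/sda:0-999 /dev/sdb

# Switch the volume group's default allocation policy from normal to cling.
vgchange --alloc cling vg

# Re-allow allocation on a PV that was previously disabled with pvchange -x n.
pvchange -x y /dev/sdk1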
Chapter 13. Troubleshooting builds | Chapter 13. Troubleshooting builds Use the following to troubleshoot build issues. 13.1. Resolving denial for access to resources If your request for access to resources is denied: Issue A build fails with: requested access to the resource is denied Resolution You have exceeded one of the image quotas set on your project. Check your current quota and verify the limits applied and storage in use: $ oc describe quota 13.2. Service certificate generation failure If service certificate generation fails: Issue A service certificate generation fails, and the service's service.beta.openshift.io/serving-cert-generation-error annotation contains: Example output secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 Resolution The service that generated the certificate no longer exists, or has a different serviceUID . You must force certificate regeneration by removing the old secret, and clearing the following annotations on the service: service.beta.openshift.io/serving-cert-generation-error and service.beta.openshift.io/serving-cert-generation-error-num : $ oc delete secret <secret_name> $ oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- $ oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command removing the annotation has a - after the annotation name to be removed. | [
"requested access to the resource is denied",
"oc describe quota",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/builds_using_buildconfig/troubleshooting-builds_build-configuration |
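A consolidated sketch of the two resolutions above. The project, secret, and service names are examples only; substitute the objects reported in your own error messages.

# Quota denial: inspect the quota in the project that runs the build.
oc describe quota -n my-build-project

# Failed serving-certificate generation: remove the stale secret and clear the annotations.
oc delete secret ssl-key
oc annotate service my-service service.beta.openshift.io/serving-cert-generation-error-
oc annotate service my-service service.beta.openshift.io/serving-cert-generation-error-num-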
Chapter 5. Managing storage classes | Chapter 5. Managing storage classes OpenShift cluster administrators use storage classes to describe the different types of storage that is available in their cluster. These storage types can represent different quality-of-service levels, backup policies, or other custom policies set by cluster administrators. 5.1. Configuring storage class settings As an OpenShift AI administrator, you can manage OpenShift cluster storage class settings for usage within OpenShift AI, including the display name, description, and whether users can use the storage class when creating or editing cluster storage. These settings do not impact the storage class within OpenShift. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Procedure From the OpenShift AI dashboard, click Settings Storage classes . The Storage classes page appears, displaying the storage classes for your cluster as defined in OpenShift. To enable or disable a storage class for users, on the row containing the storage class, click the toggle in the Enable column. To edit a storage class, on the row containing the storage class, click the action menu (...) and then select Edit . The Edit storage class details dialog opens. Optional: In the Display Name field, update the name for the storage class. This name is used only in OpenShift AI and does not impact the storage class within OpenShift. Optional: In the Description field, update the description for the storage class. This description is used only in OpenShift AI and does not impact the storage class within OpenShift. Click Save . Verification If you enabled a storage class, the storage class is available for selection when a user adds cluster storage to a data science project or workbench. If you disabled a storage class, the storage class is not available for selection when a user adds cluster storage to a data science project or workbench. If you edited a storage class name, the updated storage class name is displayed when a user adds cluster storage to a data science project or workbench. Additional resources Storage classes in OpenShift 5.2. Configuring the default storage class for your cluster As an OpenShift AI administrator, you can configure the default storage class for OpenShift AI to be different from the default storage class in OpenShift. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Procedure From the OpenShift AI dashboard, click Settings Storage classes . The Storage classes page appears, displaying the storage classes for your cluster as defined in OpenShift. If the storage class that you want to set as the default is not enabled, on the row containing the storage class, click the toggle in the Enable column. To set a storage class as the default for OpenShift AI, on the row containing the storage class, select Set as default . Verification When a user adds cluster storage to a data science project or workbench, the default storage class that you configured is automatically selected. Additional resources Storage classes in OpenShift 5.3. Overview of object storage endpoints To ensure correct configuration of object storage in OpenShift AI, you must format endpoints correctly for the different types of object storage supported. These instructions are for formatting endpoints for Amazon S3, MinIO, or other S3-compatible storage solutions, minimizing configuration errors and ensuring compatibility. 
Important Properly formatted endpoints enable connectivity and reduce the risk of misconfigurations. Use the appropriate endpoint format for your object storage type. Improper formatting might cause connection errors or restrict access to storage resources. 5.3.1. MinIO (On-Cluster) For on-cluster MinIO instances, use a local endpoint URL format. Ensure the following when configuring MinIO endpoints: Prefix the endpoint with http:// or https:// depending on your MinIO security setup. Include the cluster IP or hostname, followed by the port number if specified. Use a port number if your MinIO instance requires one (default is typically 9000 ). Example: Note Verify that the MinIO instance is accessible within the cluster by checking your cluster DNS settings and network configurations. 5.3.2. Amazon S3 When configuring endpoints for Amazon S3, use region-specific URLs. Amazon S3 endpoints generally follow this format: Prefix the endpoint with https:// . Format as <bucket-name>.s3.<region>.amazonaws.com , where <bucket-name> is the name of your S3 bucket, and <region> is the AWS region code (for example, us-west-1 , eu-central-1 ). Example: Note For improved security and compliance, ensure that your Amazon S3 bucket is in the correct region. 5.3.3. Other S3-Compatible Object Stores For S3-compatible storage solutions other than Amazon S3, follow the specific endpoint format required by your provider. Generally, these endpoints include the following items: The provider base URL, prefixed with https:// . The bucket name and region parameters as specified by the provider. Review the documentation from your S3-compatible provider to confirm required endpoint formats. Replace placeholder values like <bucket-name> and <region> with your specific configuration details. Warning Incorrectly formatted endpoints for S3-compatible providers might lead to access denial. Always verify the format in your storage provider documentation to ensure compatibility. 5.3.4. Verification and Troubleshooting After configuring endpoints, verify connectivity by performing a test upload or accessing the object storage directly through the OpenShift AI dashboard. For troubleshooting, check the following items: Network Accessibility : Confirm that the endpoint is reachable from your OpenShift AI cluster. Authentication : Ensure correct access credentials for each storage type. Endpoint Accuracy : Double-check the endpoint URL format for any typos or missing components. Additional resources Amazon S3 Region and Endpoint Documentation: AWS S3 Documentation | [
"http://minio-cluster.local:9000",
"https://my-bucket.s3.us-west-2.amazonaws.com"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/managing_resources/managing-storage-classes |
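The endpoint guidance in the row above can be checked from a workstation before the values are entered in OpenShift AI. The following is a minimal sketch, not taken from the product documentation: the endpoint URLs, bucket name, and credentials are placeholders, and it assumes the AWS CLI, curl, and the oc client are installed and configured.
# Confirm that the endpoint answers over HTTP(S) instead of failing to connect
curl -I https://my-bucket.s3.us-west-2.amazonaws.com
# List a bucket on an on-cluster MinIO instance through its S3-compatible API
aws s3 ls s3://my-bucket --endpoint-url http://minio-cluster.local:9000
# List the OpenShift storage classes that the OpenShift AI dashboard surfaces
oc get storageclass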
Chapter 11. Creating Kickstart files | Chapter 11. Creating Kickstart files You can create a Kickstart file using the following methods: Use the online Kickstart configuration tool. Copy the Kickstart file created as a result of a manual installation. Write the entire Kickstart file manually. Convert the Red Hat Enterprise Linux 8 Kickstart file for Red Hat Enterprise Linux 9 installation. For more information about the conversion tool, see Kickstart generator lab . In case of virtual and cloud environment, create a custom system image, using Image Builder. Some highly specific installation options can be configured only by manual editing of the Kickstart file. 11.1. Creating a Kickstart file with the Kickstart configuration tool Users with a Red Hat Customer Portal account can use the Kickstart Generator tool in the Customer Portal Labs to generate Kickstart files online. This tool will walk you through the basic configuration and enables you to download the resulting Kickstart file. Prerequisites You have a Red Hat Customer Portal account and an active Red Hat subscription. Procedure Open the Kickstart generator lab information page at https://access.redhat.com/labsinfo/kickstartconfig . Click the Go to Application button to the left of heading and wait for the page to load. Select Red Hat Enterprise Linux 9 in the drop-down menu and wait for the page to update. Describe the system to be installed using the fields in the form. You can use the links on the left side of the form to quickly navigate between sections of the form. To download the generated Kickstart file, click the red Download button at the top of the page. Your web browser saves the file. Install the pykickstart package. Run ksvalidator on your Kickstart file. Replace /path/to/kickstart.ks with the path to the Kickstart file you want to verify. The validation tool cannot guarantee the installation will be successful. It ensures only that the syntax is correct and that the file does not include deprecated options. It does not attempt to validate the %pre , %post and %packages sections of the Kickstart file. 11.2. Creating a Kickstart file by performing a manual installation The recommended approach to creating Kickstart files is to use the file created by a manual installation of Red Hat Enterprise Linux. After an installation completes, all choices made during the installation are saved into a Kickstart file named anaconda-ks.cfg , located in the /root/ directory on the installed system. You can use this file to reproduce the installation in the same way as before. Alternatively, copy this file, make any changes you need, and use the resulting configuration file for further installations. Procedure Install RHEL. For more details, see Interactively installing RHEL from installation media . During the installation, create a user with administrator privileges. Finish the installation and reboot into the installed system. Log into the system with the administrator account. Copy the file /root/anaconda-ks.cfg to a location of your choice. The file contains information about users and passwords. To display the file contents in terminal: You can copy the output and save to another file of your choice. To copy the file to another location, use the file manager. Remember to change permissions on the copy, so that the file can be read by non-root users. Install the pykickstart package. Run ksvalidator on your Kickstart file. Replace /path/to/kickstart.ks with the path to the Kickstart file you want to verify. 
Important The validation tool cannot guarantee the installation will be successful. It ensures only that the syntax is correct and that the file does not include deprecated options. It does not attempt to validate the %pre , %post and %packages sections of the Kickstart file. 11.3. Converting a Kickstart file from RHEL installation You can use the Kickstart Converter tool to convert a RHEL 7 Kickstart file for use in a RHEL 8 or 9 installation or convert a RHEL 8 Kickstart file for use in RHEL 9. For more information about the tool and how to use it to convert a RHEL Kickstart file, see https://access.redhat.com/labs/kickstartconvert/ . Procedure After you prepare your Kickstart file, install the pykickstart package. Run ksvalidator on your Kickstart file. Replace /path/to/kickstart.ks with the path to the Kickstart file you want to verify. Important The validation tool cannot guarantee the installation will be successful. It ensures only that the syntax is correct and that the file does not include deprecated options. It does not attempt to validate the %pre , %post and %packages sections of the Kickstart file. 11.4. Creating a custom image using Image Builder You can use Red Hat Image Builder to create a customized system image for virtual and cloud deployments. For more information about creating customized images by using Image Builder, see the Composing a customized RHEL system image document. | [
"dnf install pykickstart",
"ksvalidator -v RHEL9 /path/to/kickstart.ks",
"cat /root/anaconda-ks.cfg",
"dnf install pykickstart",
"ksvalidator -v RHEL9 /path/to/kickstart.ks",
"dnf install pykickstart",
"ksvalidator -v RHEL9 /path/to/kickstart.ks"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automatically_installing_rhel/creating-kickstart-files_rhel-installer |
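As a companion to the validation steps in the row above, the following is a minimal, illustrative Kickstart file written here as a shell heredoc; the language, time zone, password, and package environment are placeholder choices, not recommendations, and the environment name may differ on your system.
cat > minimal.ks << 'EOF'
# Illustrative Kickstart file - replace values before real use
lang en_US.UTF-8
keyboard us
timezone America/New_York --utc
rootpw --plaintext changeme
clearpart --all --initlabel
autopart
reboot

%packages
@^minimal-environment
%end
EOF
# Validate the file the same way as a generated anaconda-ks.cfg
ksvalidator -v RHEL9 minimal.ks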
Chapter 2. Getting started with hosted control planes | Chapter 2. Getting started with hosted control planes To get started with hosted control planes for OpenShift Container Platform, you first configure your hosted cluster on the provider that you want to use. Then, you complete a few management tasks. You can view the procedures by selecting from one of the following providers: 2.1. Bare metal Hosted control plane sizing guidance Installing the hosted control plane command line interface Distributing hosted cluster workloads Bare metal firewall and port requirements Bare metal infrastructure requirements : Review the infrastructure requirements to create a hosted cluster on bare metal. Configuring hosted control plane clusters on bare metal : Configure DNS Create a hosted cluster and verify cluster creation Scale the NodePool object for the hosted cluster Handle ingress traffic for the hosted cluster Enable node auto-scaling for the hosted cluster Configuring hosted control planes in a disconnected environment To destroy a hosted cluster on bare metal, follow the instructions in Destroying a hosted cluster on bare metal . If you want to disable the hosted control plane feature, see Disabling the hosted control plane feature . 2.2. OpenShift Virtualization Hosted control plane sizing guidance Installing the hosted control plane command line interface Distributing hosted cluster workloads Managing hosted control plane clusters on OpenShift Virtualization : Create OpenShift Container Platform clusters with worker nodes that are hosted by KubeVirt virtual machines. Configuring hosted control planes in a disconnected environment To destroy a hosted cluster is on OpenShift Virtualization, follow the instructions in Destroying a hosted cluster on OpenShift Virtualization . If you want to disable the hosted control plane feature, see Disabling the hosted control plane feature . 2.3. Amazon Web Services (AWS) Important Hosted control planes on the AWS platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . AWS infrastructure requirements : Review the infrastructure requirements to create a hosted cluster on AWS. Configuring hosted control plane clusters on AWS (Technology Preview) : The tasks to configure hosted control plane clusters on AWS include creating the AWS S3 OIDC secret, creating a routable public zone, enabling external DNS, enabling AWS PrivateLink, and deploying a hosted cluster. Deploying the SR-IOV Operator for hosted control planes : After you configure and deploy your hosting service cluster, you can create a subscription to the Single Root I/O Virtualization (SR-IOV) Operator on a hosted cluster. The SR-IOV pod runs on worker machines rather than the control plane. To destroy a hosted cluster on AWS, follow the instructions in Destroying a hosted cluster on AWS . If you want to disable the hosted control plane feature, see Disabling the hosted control plane feature . 2.4. IBM Z Important Hosted control planes on the IBM Z platform is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Installing the hosted control plane command line interface Configuring the hosting cluster on x86 bare metal for IBM Z compute nodes (Technology Preview) 2.5. IBM Power Important Hosted control planes on the IBM Power platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Installing the hosted control plane command line interface Configuring the hosting cluster on a 64-bit x86 OpenShift Container Platform cluster to create hosted control planes for IBM Power compute nodes (Technology Preview) | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/hosted_control_planes/getting-started-with-hosted-control-planes |
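The chapters linked in the row above cover hosted cluster creation and teardown in detail; as a quick orientation, hosted clusters and node pools are custom resources that can be inspected with oc. This is a sketch only: the clusters namespace and the <hosted-cluster-name> placeholder are assumptions based on the defaults used in those procedures.
# List hosted clusters across all namespaces, with their versions and availability
oc get hostedclusters -A
# List node pools and their replica counts (assumes the default "clusters" namespace)
oc get nodepools -n clusters
# Inspect the hosted control plane pods running on the management cluster
oc get pods -n clusters-<hosted-cluster-name>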
Chapter 10. Configuring bridge mappings | Chapter 10. Configuring bridge mappings In Red Hat OpenStack Platform (RHOSP), a bridge mapping associates a physical network name (an interface label) to a bridge created with the Modular Layer 2 plug-in mechanism drivers Open vSwitch (OVS) or Open Virtual Network (OVN). The RHOSP Networking service (neutron) uses bridge mappings to allow provider network traffic to reach the physical network. The topics included in this section are: Section 10.1, "Overview of bridge mappings" Section 10.2, "Traffic flow" Section 10.3, "Configuring bridge mappings" Section 10.4, "Maintaining bridge mappings for OVS" Section 10.4.1, "Cleaning up OVS patch ports manually" Section 10.4.2, "Cleaning up OVS patch ports automatically" 10.1. Overview of bridge mappings In the Red Hat OpenStack Platform (RHOSP) Networking service (neutron), you use bridge mappings to allow provider network traffic to reach the physical network. Traffic leaves the provider network from the qg-xxx interface of the router and arrives at the intermediate bridge ( br-int ). The part of the data path varies depending on which mechanism driver your deployment uses: ML2/OVS: a patch port between br-int and br-ex allows the traffic to pass through the bridge of the provider network and out to the physical network. ML2/OVN: the Networking service creates a patch port on a hypervisor only when there is a VM bound to the hypervisor and the VM requires the port. You configure the bridge mapping on the network node on which the router is scheduled. Router traffic can egress using the correct physical network, as represented by the provider network. Note The Networking service supports only one bridge for each physical network. Do not map more than one physical network to the same bridge. 10.2. Traffic flow Each external network is represented by an internal VLAN ID, which is tagged to the router qg-xxx port. When a packet reaches phy-br-ex , the br-ex port strips the VLAN tag and moves the packet to the physical interface and then to the external network. The return packet from the external network arrives on br-ex and moves to br-int using phy-br-ex <-> int-br-ex . When the packet is going through br-ex to br-int , the packet's external VLAN ID is replaced by an internal VLAN tag in br-int , and this allows qg-xxx to accept the packet. In the case of egress packets, the packet's internal VLAN tag is replaced with an external VLAN tag in br-ex (or in the external bridge that is defined in the NeutronNetworkVLANRanges parameter). 10.3. Configuring bridge mappings To modify the bridge mappings that the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) uses to connect provider network traffic with the physical network, you modify the necessary heat parameters and redeploy your overcloud. Prerequisites You must be able to access the underclod host as the stack user. You must configure bridge mappings on the network node on which the router is scheduled. You must also configure bridge mappings for your Compute nodes. Procedure Log in to the undercloud host as the stack user. Source the undercloud credentials file: Create a custom YAML environment file. Example Your environment file must contain the keywords parameter_defaults . Add the NeutronBridgeMappings heat parameter with values that are appropriate for your site after the parameter_defaults keyword. 
Example In this example, the NeutronBridgeMappings parameter associates the physical names, datacentre and tenant , the bridges br-ex and br-tenant , respectively. Note When the NeutronBridgeMappings parameter is not used, the default maps the external bridge on hosts (br-ex) to a physical name (datacentre). If you are using a flat network, add its name using the NeutronFlatNetworks parameter. Example In this example, the parameter associates physical name datacentre with bridge br-ex , and physical name tenant with bridge br-tenant." Note When the NeutronFlatNetworks parameter is not used, the default is datacentre . If you are using a VLAN network, specify the network name along with the range of VLANs it accesses by using the NeutronNetworkVLANRanges parameter. Example In this example, the NeutronNetworkVLANRanges parameter specifies the VLAN range of 1 - 1000 for the tenant network: Run the deployment command and include the core heat templates, environment files, and this new custom environment file. Perform the following steps: Using the network VLAN ranges, create the provider networks that represent the corresponding external networks. (You use the physical name when creating neutron provider networks or floating IP networks.) Connect the external networks to your project networks with router interfaces. Additional resources Updating the format of your network configuration files in the Director Installation and Usage guide Including environment files in overcloud creation in the Director Installation and Usage guide 10.4. Maintaining bridge mappings for OVS After removing any OVS bridge mappings, you must perform a subsequent cleanup to ensure that the bridge configuration is cleared of any associated patch port entries. You can perform this operation in the following ways: Manual port cleanup - requires careful removal of the superfluous patch ports. No outages of network connectivity are required. Automated port cleanup - performs an automated cleanup, but requires an outage, and requires that the necessary bridge mappings be re-added. Choose this option during scheduled maintenance windows when network connectivity outages can be tolerated. Note When OVN bridge mappings are removed, the OVN controller automatically cleans up any associated patch ports. 10.4.1. Cleaning up OVS patch ports manually After removing any OVS bridge mappings, you must also remove the associated patch ports. Prerequisites The patch ports that you are cleaning up must be Open Virtual Switch (OVS) ports. A system outage is not required to perform a manual patch port cleanup. You can identify the patch ports to cleanup by their naming convention: In br-USDexternal_bridge patch ports are named phy-<external bridge name> (for example, phy-br-ex2). In br-int patch ports are named int-<external bridge name> (for example, int-br-ex2 ). Procedure Use ovs-vsctl to remove the OVS patch ports associated with the removed bridge mapping entry: Restart neutron-openvswitch-agent : 10.4.2. Cleaning up OVS patch ports automatically After removing any OVS bridge mappings, you must also remove the associated patch ports. Note When OVN bridge mappings are removed, the OVN controller automatically cleans up any associated patch ports. Prerequisites The patch ports that you are cleaning up must be Open Virtual Switch (OVS) ports. Cleaning up patch ports automatically with the neutron-ovs-cleanup command causes a network connectivity outage, and should be performed only during a scheduled maintenance window. 
Use the flag --ovs_all_ports to remove all patch ports from br-int , cleaning up tunnel ends from br-tun , and patch ports from bridge to bridge. The neutron-ovs-cleanup command unplugs all patch ports (instances, qdhcp/qrouter, among others) from all OVS bridges. Procedure Run the neutron-ovs-cleanup command with the --ovs_all_ports flag. Important Performing this step will result in a total networking outage. Restore connectivity by redeploying the overcloud. When you rerun the openstack overcloud deploy command, your bridge mapping values are reapplied. Note After a restart, the OVS agent does not interfere with any connections that are not present in bridge_mappings. So, if you have br-int connected to br-ex2 , and br-ex2 has some flows on it, removing br-int from the bridge_mappings configuration does not disconnect the two bridges when you restart the OVS agent or the node. Additional resources Including environment files in overcloud creation in the Director Installation and Usage guide | [
"source ~/stackrc",
"vi /home/stack/templates/my_bridge_mappings.yaml",
"parameter_defaults: NeutronBridgeMappings: \"datacentre:br-ex,tenant:br-tenant\"",
"parameter_defaults: NeutronBridgeMappings: \"datacentre:br-ex,tenant:br-tenant\" NeutronFlatNetworks: \"my_flat_network\"",
"parameter_defaults: NeutronBridgeMappings: \"datacentre:br-ex,tenant:br-tenant\" NeutronNetworkVLANRanges: \"tenant:1:1000\"",
"openstack overcloud deploy --templates -e <your_environment_files> -e /home/stack/templates/my_bridge_mappings.yaml",
"ovs-vsctl del-port br-ex2 datacentre ovs-vsctl del-port br-tenant tenant",
"service neutron-openvswitch-agent restart",
"/usr/bin/neutron-ovs-cleanup --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --log-file /var/log/neutron/ovs-cleanup.log --ovs_all_ports"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/networking_guide/configuring-bridge-mappings_rhosp-network |
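The final step in the row above, creating provider networks that use the physical names from the bridge mappings, is not shown in its command listing. A hedged sketch follows, reusing the datacentre and tenant physical names from the example environment file; the network names, VLAN IDs, and subnet range are placeholders.
# External provider network on the "datacentre" physical network
openstack network create public --external --provider-network-type vlan --provider-physical-network datacentre --provider-segment 201
# Project-facing provider network on the "tenant" physical network, inside the 1-1000 VLAN range
openstack network create internal-net --provider-network-type vlan --provider-physical-network tenant --provider-segment 200
# Subnet for the project network; connect it to a router interface afterwards
openstack subnet create internal-subnet --network internal-net --subnet-range 192.168.10.0/24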
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_images/proc_providing-feedback-on-red-hat-documentation |
Appendix B. Restoring Manual Changes Overwritten by a Puppet Run | Appendix B. Restoring Manual Changes Overwritten by a Puppet Run If your manual configuration has been overwritten by a Puppet run, you can restore the files to their previous state. The following example shows you how to restore a DHCP configuration file overwritten by a Puppet run. Procedure Copy the file you intend to restore. This allows you to compare the files to check for any mandatory changes required by the upgrade. This is not common for DNS or DHCP services. Check the log files to note down the md5sum of the overwritten file. For example: Restore the overwritten file: Compare the backup file and the restored file, and edit the restored file to include any mandatory changes required by the upgrade. | [
"cp /etc/dhcp/dhcpd.conf /etc/dhcp/dhcpd.backup",
"journalctl -xe /Stage[main]/Dhcp/File[/etc/dhcp/dhcpd.conf]: Filebucketed /etc/dhcp/dhcpd.conf to puppet with sum 622d9820b8e764ab124367c68f5fa3a1",
"puppet filebucket restore --local --bucket /var/lib/puppet/clientbucket /etc/dhcp/dhcpd.conf \\ 622d9820b8e764ab124367c68f5fa3a1"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_satellite_server_in_a_connected_network_environment/restoring-manual-changes-overwritten-by-a-puppet-run_satellite |
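The final comparison step in the row above can be done with standard tools; a short sketch, assuming the backup path created at the start of that procedure:
# Confirm the restored file matches the checksum reported in the journal
md5sum /etc/dhcp/dhcpd.conf
# Compare the pre-restore backup with the restored file before reapplying manual changes
diff -u /etc/dhcp/dhcpd.backup /etc/dhcp/dhcpd.conf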
Chapter 11. Creating Kickstart files | Chapter 11. Creating Kickstart files You can create a Kickstart file using the following methods: Use the online Kickstart configuration tool. Copy the Kickstart file created as a result of a manual installation. Write the entire Kickstart file manually. Convert the Red Hat Enterprise Linux 7 Kickstart file for Red Hat Enterprise Linux 8 installation. For more information about the conversion tool, see Kickstart generator lab . In case of virtual and cloud environment, create a custom system image, using Image Builder. Some highly specific installation options can be configured only by manual editing of the Kickstart file. 11.1. Creating a Kickstart file with the Kickstart configuration tool Users with a Red Hat Customer Portal account can use the Kickstart Generator tool in the Customer Portal Labs to generate Kickstart files online. This tool will walk you through the basic configuration and enables you to download the resulting Kickstart file. Prerequisites You have a Red Hat Customer Portal account and an active Red Hat subscription. Procedure Open the Kickstart generator lab information page at https://access.redhat.com/labsinfo/kickstartconfig . Click the Go to Application button to the left of heading and wait for the page to load. Select Red Hat Enterprise Linux 8 in the drop-down menu and wait for the page to update. Describe the system to be installed using the fields in the form. You can use the links on the left side of the form to quickly navigate between sections of the form. To download the generated Kickstart file, click the red Download button at the top of the page. Your web browser saves the file. Install the pykickstart package. Run ksvalidator on your Kickstart file. Replace /path/to/kickstart.ks with the path to the Kickstart file you want to verify. The validation tool cannot guarantee the installation will be successful. It ensures only that the syntax is correct and that the file does not include deprecated options. It does not attempt to validate the %pre , %post and %packages sections of the Kickstart file. 11.2. Creating a Kickstart file by performing a manual installation The recommended approach to creating Kickstart files is to use the file created by a manual installation of Red Hat Enterprise Linux. After an installation completes, all choices made during the installation are saved into a Kickstart file named anaconda-ks.cfg , located in the /root/ directory on the installed system. You can use this file to reproduce the installation in the same way as before. Alternatively, copy this file, make any changes you need, and use the resulting configuration file for further installations. Procedure Install RHEL. For more details, see Interactively installing RHEL from installation media . During the installation, create a user with administrator privileges. Finish the installation and reboot into the installed system. Log into the system with the administrator account. Copy the file /root/anaconda-ks.cfg to a location of your choice. The file contains information about users and passwords. To display the file contents in terminal: You can copy the output and save to another file of your choice. To copy the file to another location, use the file manager. Remember to change permissions on the copy, so that the file can be read by non-root users. Install the pykickstart package. Run ksvalidator on your Kickstart file. Replace /path/to/kickstart.ks with the path to the Kickstart file you want to verify. 
Important The validation tool cannot guarantee the installation will be successful. It ensures only that the syntax is correct and that the file does not include deprecated options. It does not attempt to validate the %pre , %post and %packages sections of the Kickstart file. 11.3. Converting a Kickstart file from RHEL installation You can use the Kickstart Converter tool to convert a RHEL 7 Kickstart file for use in a RHEL 8 or 9 installation or convert a RHEL 8 Kickstart file for use in RHEL 9. For more information about the tool and how to use it to convert a RHEL Kickstart file, see https://access.redhat.com/labs/kickstartconvert/ . Procedure After you prepare your Kickstart file, install the pykickstart package. Run ksvalidator on your Kickstart file. Replace /path/to/kickstart.ks with the path to the Kickstart file you want to verify. Important The validation tool cannot guarantee the installation will be successful. It ensures only that the syntax is correct and that the file does not include deprecated options. It does not attempt to validate the %pre , %post and %packages sections of the Kickstart file. 11.4. Creating a custom image using Image Builder You can use Red Hat Image Builder to create a customized system image for virtual and cloud deployments. For more information about creating customized images by using Image Builder, see the Composing a customized RHEL system image document. | [
"yum install pykickstart",
"ksvalidator -v RHEL8 /path/to/kickstart.ks",
"cat /root/anaconda-ks.cfg",
"yum install pykickstart",
"ksvalidator -v RHEL8 /path/to/kickstart.ks",
"yum install pykickstart",
"ksvalidator -v RHEL8 /path/to/kickstart.ks"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automatically_installing_rhel/creating-kickstart-files_rhel-installer |
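When reusing /root/anaconda-ks.cfg as described in the row above, the copy, permission change, and validation can be scripted; a minimal sketch with placeholder paths:
# Copy the generated Kickstart file and make it readable by non-root users
cp /root/anaconda-ks.cfg /home/admin/rhel8-base.ks
chmod 644 /home/admin/rhel8-base.ks
# Validate the copy before using it for further installations
ksvalidator -v RHEL8 /home/admin/rhel8-base.ks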
Chapter 6. Scaling storage of VMware OpenShift Data Foundation cluster | Chapter 6. Scaling storage of VMware OpenShift Data Foundation cluster 6.1. Scaling up storage on a VMware cluster To increase the storage capacity in a dynamically created storage cluster on a VMware user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites Administrative privilege to the OpenShift Container Platform Console. A running OpenShift Data Foundation Storage Cluster. Make sure that the disk is of the same size and type as the disk used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 6.2. Scaling up a cluster created using local storage devices To scale up an OpenShift Data Foundation cluster which was created using local storage devices, you need to add a new disk to the storage node. The new disks size must be of the same size as the disks used during the deployment because OpenShift Data Foundation does not support heterogeneous disks/OSDs. For deployments having three failure domains, you can scale up the storage by adding disks in the multiples of three, with the same number of disks coming from nodes in each of the failure domains. For example, if we scale by adding six disks, two disks are taken from nodes in each of the three failure domains. 
If the number of disks is not in multiples of three, it will only consume the disk to the maximum in the multiple of three while the remaining disks remain unused. For deployments having less than three failure domains, there is a flexibility to add any number of disks. Make sure to verify that flexible scaling is enabled. For information, refer to the Knowledgebase article Verify if flexible scaling is enabled . Note Flexible scaling features get enabled at the time of deployment and cannot be enabled or disabled later on. Prerequisites Administrative privilege to the OpenShift Container Platform Console. A running OpenShift Data Foundation Storage Cluster. Make sure that the disks to be used for scaling are attached to the storage node Make sure that LocalVolumeDiscovery and LocalVolumeSet objects are created. Procedure To add capacity, you can either use a storage class that you provisioned during the deployment or any other storage class that matches the filter. In the OpenShift Web Console, click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class for which you added disks or the new storage class depending on your requirement. Available Capacity displayed is based on the local disks available in storage class. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 6.3. Scaling out storage capacity on a VMware cluster 6.3.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . 
On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 6.3.2. Adding a node to an user-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 6.3.3. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in the multiple of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3 nodes, you have the flexibility to add one node at a time in flexible scaling deployment. See Knowledgebase article Verify if flexible scaling is enabled Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. 
<Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 6.3.4. Scaling up storage capacity To scale up storage capacity: For dynamic storage devices, see Scaling up storage capacity on a cluster . For local storage devices, see Scaling up a cluster created using local storage devices | [
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/scaling_storage/scaling_storage_of_vmware_openshift_data_foundation_cluster |
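The verification steps in the row above use the web console; the same checks can be made from the command line. This is a sketch that assumes the default openshift-storage namespace and the standard Rook-Ceph OSD pod label.
# New OSD pods created by the capacity addition
oc get pods -n openshift-storage -l app=rook-ceph-osd
# Corresponding persistent volume claims; device set PVC names start with ocs-deviceset
oc get pvc -n openshift-storage | grep ocs-deviceset
# Confirm that a newly added node carries the OpenShift Data Foundation label
oc get nodes -l cluster.ocs.openshift.io/openshift-storage=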
Chapter 1. Kernel | Chapter 1. Kernel Enhanced SCSI Unit Attention Handling The kernel in Red Hat Enterprise Linux 6.6 has been enhanced to enable user space to respond to certain SCSI Unit Attention conditions received from SCSI devices via the udev event mechanism. The supported Unit Attention conditions are: 3F 03 INQUIRY DATA HAS CHANGED 2A 09 CAPACITY DATA HAS CHANGED 38 07 THIN PROVISIONING SOFT THRESHOLD REACHED 2A 01 MODE PARAMETERS CHANGED 3F 0E REPORTED LUNS DATA HAS CHANGED Because SCSI Unit Attention conditions are only reported in response to a SCSI command, no conditions are reported if no commands are actively being sent to the SCSI device. Red Hat Enterprise Linux 6.6 does not provide any default udev rules for these events, but user-supplied udev rules can be written to handle them. For example, the following rule causes a SCSI device to be rescanned if the inquiry data changes: The rules for the supported events should match on the following SDEV_UA environment strings: Note that in all cases the DEVPATH environment variable in the udev rule is the path of the device that reported the Unit Attention. Also, multipath I/O currently verifies that multiple paths to a device have some of the same attributes, such as the capacity. As a consequence, automatically rescanning a device in response to a capacity change can cause that some paths to a device have the old capacity and some paths have the new capacity. In such cases, multipath I/O stops using paths with the capacity change. Open vSwitch Kernel Module Red Hat Enterprise Linux 6.6 includes the Open vSwitch kernel module as an enabler for Red Hat's layered products. Open vSwitch is supported only in conjunction with products that contain the accompanying user-space utilities. Please note that without these required user-space utilities, Open vSwitch will not function and cannot be enabled for use. For more information, please refer to the following Knowledge Base article: https://access.redhat.com/knowledge/articles/270223 . | [
"ACTION==\"change\", SUBSYSTEM==\"scsi\", ENV{SDEV_UA}==\"INQUIRY_DATA_HAS_CHANGED\", TEST==\"rescan\", ATTR{rescan}=\"x\"",
"ENV{SDEV_UA}==\"INQUIRY_DATA_HAS_CHANGED\" ENV{SDEV_UA}==\"CAPACITY_DATA_HAS_CHANGED\" ENV{SDEV_UA}==\"THIN_PROVISIONING_SOFT_THRESHOLD_REACHED\" ENV{SDEV_UA}==\"MODE_PARAMETERS_CHANGED\" ENV{SDEV_UA}==\"REPORTED_LUNS_DATA_HAS_CHANGED\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_release_notes/kernel |
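Building on the rescan rule shown in the row above, a user-supplied rules file can also log the other supported Unit Attention conditions. The file name and the use of logger below are illustrative only; the rule syntax follows the SDEV_UA environment strings listed in that row.
cat > /etc/udev/rules.d/99-scsi-ua.rules << 'EOF'
# Log capacity changes reported by a SCSI device (%k is the kernel device name)
ACTION=="change", SUBSYSTEM=="scsi", ENV{SDEV_UA}=="CAPACITY_DATA_HAS_CHANGED", RUN+="/usr/bin/logger SCSI capacity changed on %k"
# Log thin-provisioning soft threshold warnings
ACTION=="change", SUBSYSTEM=="scsi", ENV{SDEV_UA}=="THIN_PROVISIONING_SOFT_THRESHOLD_REACHED", RUN+="/usr/bin/logger Thin provisioning threshold reached on %k"
EOF
# Reload the rules so they take effect without a reboot
udevadm control --reload-rules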
Data Security and Hardening Guide | Data Security and Hardening Guide Red Hat Ceph Storage 8 Red Hat Ceph Storage Data Security and Hardening Guide Red Hat Ceph Storage Documentation Team | [
"encrypted: true",
"ceph orch daemon rotate-key NAME",
"ceph orch daemon rotate-key mgr.ceph-key-host01 Scheduled to rotate-key mgr.ceph-key-host01 on host 'my-host-host01-installer'",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"DEFAULT_NGINX_IMAGE = 'quay.io/ceph/NGINX_IMAGE'",
"ceph config set mgr mgr/cephadm/container_image_nginx NEW_NGINX_IMAGE ceph orch redeploy mgmt-gateway",
"ceph orch apply mgmt-gateway [--placement= DESTINATION_HOST ] [--enable-auth=true]",
"ceph orch apply mgmt-gateway --placement=host01",
"touch mgmt-gateway.yaml",
"service_type: mgmt-gateway placement: hosts: - ceph-node-1 spec: port: 9443 ssl_protocols: # Optional - TLSv1.3 ssl_ciphers: # Optional - AES128-SHA - AES256-SHA - RC4-SHA ssl_certificate: | # Optional -----BEGIN CERTIFICATE----- < YOU CERT DATA HERE > -----END CERTIFICATE----- ssl_certificate_key: | -----BEGIN RSA PRIVATE KEY----- < YOU PRIV KEY DATA HERE > -----END RSA PRIVATE KEY-----",
"service_type: mgmt-gateway service_id: gateway placement: hosts: - ceph0 spec: port: 5000 ssl_protocols: - TLSv1.3 - ssl_ciphers: - AES128-SHA - AES256-SHA - ssl_certificate: | -----BEGIN CERTIFICATE----- MIIDtTCCAp2gAwIBAgIYMC4xNzc1NDQxNjEzMzc2MjMyXzxvQ7EcMA0GCSqGSIb3 DQEBCwUAMG0xCzAJBgNVBAYTAlVTMQ0wCwYDVQQIDARVdGFoMRcwFQYDVQQHDA5T [...] -----END CERTIFICATE----- ssl_certificate_key: | -----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC5jdYbjtNTAKW4 /CwQr/7wOiLGzVxChn3mmCIF3DwbL/qvTFTX2d8bDf6LjGwLYloXHscRfxszX/4h [...] -----END PRIVATE KEY-----",
"ceph orch apply -i mgmt-gateway.yaml",
"unzip rhsso-7.6.0.zip",
"cd standalone/configuration vi standalone.xml",
"./add-user-keycloak.sh -u admin",
"keytool -import -noprompt -trustcacerts -alias ca -file ../ca.cer -keystore /etc/java/java-1.8.0-openjdk/java-1.8.0-openjdk-1.8.0.272.b10-3.el8_3.x86_64/lib/security/cacert",
"./standalone.sh",
"ceph config get mgr mgr/cephadm/container_image_oauth2_proxy",
"ceph config set mgr mgr/cephadm/container_image_oauth2_proxy NEW_OAUTH2_PROXY_IMAGE ceph orch redeploy oauth2_proxy",
"ceph orch apply oauth2-proxy [--placement= DESTINATION_HOST ]",
"ceph orch apply oauth2-proxy [--placement=host01]",
"touch oauth2-proxy.yaml",
"service_type: oauth2-proxy service_id: auth-proxy placement: hosts: - ceph-node-1 spec: https_address: HTTPS_ADDRESS:PORT provider_display_name: MY OIDC PROVIDER client_id: CLIENT_ID oidc_issuer_url: OIDC ISSUER URL allowlist_domains: - HTTPS_ADDRESS:PORT client_secret: CLIENT_SECRET cookie_secret: COOKIE_SECRET ssl_certificate: | -----BEGIN CERTIFICATE----- < YOU CERT DATA HERE > -----END CERTIFICATE----- ssl_certificate_key: | -----BEGIN RSA PRIVATE KEY----- < YOU PRIV KEY DATA HERE > -----END RSA PRIVATE KEY-----",
"service_type: oauth2-proxy service_id: auth-proxy placement: hosts: - ceph0 spec: https_address: \"0.0.0.0:4180\" provider_display_name: \"My OIDC Provider\" client_id: \"your-client-id\" oidc_issuer_url: \"http://192.168.100.1:5556/realms/ceph\" allowlist_domains: - 192.168.100.1:8080 - 192.168.200.1:5000 client_secret: \"your-client-secret\" cookie_secret: \"your-cookie-secret\" ssl_certificate: | -----BEGIN CERTIFICATE----- MIIDtTCCAp2gAwIBAgIYMC4xNzc1NDQxNjEzMzc2MjMyXzxvQ7EcMA0GCSqGSIb3 DQEBCwUAMG0xCzAJBgNVBAYTAlVTMQ0wCwYDVQQIDARVdGFoMRcwFQYDVQQHDA5T [...] -----END CERTIFICATE----- ssl_certificate_key: | -----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC5jdYbjtNTAKW4 /CwQr/7wOiLGzVxChn3mmCIF3DwbL/qvTFTX2d8bDf6LjGwLYloXHscRfxszX/4h [...] -----END PRIVATE KEY-----",
"ceph orch apply -i oauth2-proxy.yaml",
"public_network = <public-network/netmask>[,<public-network/netmask>] cluster_network = <cluster-network/netmask>[,<cluster-network/netmask>]",
"systemctl enable firewalld systemctl start firewalld systemctl status firewalld",
"firewall-cmd --list-all",
"sources: services: ssh dhcpv6-client",
"getenforce Enforcing",
"setenforce 1",
"firewall-cmd --zone=<zone-name> --add-rich-rule=\"rule family=\"ipv4\" source address=\"<ip-address>/<netmask>\" port protocol=\"tcp\" port=\"<port-number>\" accept\"",
"firewall-cmd --zone=<zone-name> --add-rich-rule=\"rule family=\"ipv4\" source address=\"<ip-address>/<netmask>\" port protocol=\"tcp\" port=\"<port-number>\" accept\" --permanent",
"cat /var/log/ceph/6c58dfb8-4342-11ee-a953-fa163e843234/ceph.audit.log",
"2023-09-01T10:20:21.445990+0000 mon.host01 (mon.0) 122301 : audit [DBG] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{\"prefix\": \"config generate-minimal-conf\"}]: dispatch 2023-09-01T10:20:21.446972+0000 mon.host01 (mon.0) 122302 : audit [INF] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{\"prefix\": \"auth get\", \"entity\": \"client.admin\"}]: dispatch 2023-09-01T10:20:21.453790+0000 mon.host01 (mon.0) 122303 : audit [INF] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' 2023-09-01T10:20:21.457119+0000 mon.host01 (mon.0) 122304 : audit [DBG] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{\"prefix\": \"osd tree\", \"states\": [\"destroyed\"], \"format\": \"json\"}]: dispatch 2023-09-01T10:20:30.671816+0000 mon.host01 (mon.0) 122305 : audit [DBG] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{\"prefix\": \"osd blocklist ls\", \"format\": \"json\"}]: dispatch"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/data_security_and_hardening_guide/ceph-filesystem-sec |
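As one concrete instance of the firewall-cmd rich rules shown with placeholders in the command listing above, the Ceph monitor ports can be opened for a trusted subnet; the zone name and source subnet here are assumptions to adapt to your environment.
# Allow the Ceph monitor ports (3300 for msgr2, 6789 for msgr1) from a trusted subnet
firewall-cmd --zone=public --permanent --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" port protocol="tcp" port="3300" accept'
firewall-cmd --zone=public --permanent --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" port protocol="tcp" port="6789" accept'
firewall-cmd --reload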
Chapter 4. Advisories related to this release | Chapter 4. Advisories related to this release The following advisories have been issued for the bug fixes and CVE fixes included in this release: RHSA-2023:0190 RHSA-2023:0191 RHSA-2023:0192 RHSA-2023:0193 RHSA-2023:0194 Revised on 2024-05-03 15:36:43 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.6/openjdk-1706-advisory_openjdk
20.44. Disk I/O Throttling | 20.44. Disk I/O Throttling The virsh blkdeviotune command sets disk I/O throttling for a specified guest virtual machine. This can prevent a guest virtual machine from overutilizing shared resources and thus impacting the performance of other guest virtual machines. The following format should be used: The only required parameter is the domain name of the guest virtual machine. To list the block devices attached to a domain, run the virsh domblklist command. The --config , --live , and --current arguments function the same as in Section 20.43, "Setting Schedule Parameters" . If no limit is specified, the command queries the current I/O limit settings. Otherwise, alter the limits with the following flags: --total-bytes-sec - specifies total throughput limit in bytes per second. --read-bytes-sec - specifies read throughput limit in bytes per second. --write-bytes-sec - specifies write throughput limit in bytes per second. --total-iops-sec - specifies total I/O operations limit per second. --read-iops-sec - specifies read I/O operations limit per second. --write-iops-sec - specifies write I/O operations limit per second. For more information, see the blkdeviotune section of the virsh man page. For an example domain XML see Figure 23.27, "Devices - Hard drives, floppy disks, CD-ROMs Example" . | [
"virsh blkdeviotune domain < device > [[--config] [--live] | [--current]] [[total-bytes-sec] | [read-bytes-sec] [write-bytes-sec]] [[total-iops-sec] [read-iops-sec] [write-iops-sec]]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guest_virtual_machines_with_virsh-disk_io_throttling |
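A short usage sketch for the command described in the row above; the domain name, device target, and limit values are placeholders.
# List the block devices of the guest to find the target name (for example, vda)
virsh domblklist guest1
# Cap total throughput at 10 MB/s and total I/O operations at 100 per second on the running guest
virsh blkdeviotune guest1 vda --total-bytes-sec 10485760 --total-iops-sec 100 --live
# Query the current limits for the device
virsh blkdeviotune guest1 vda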
Chapter 12. Troubleshooting Logging | Chapter 12. Troubleshooting Logging 12.1. Viewing OpenShift Logging status You can view the status of the Red Hat OpenShift Logging Operator and for a number of OpenShift Logging components. 12.1.1. Viewing the status of the Red Hat OpenShift Logging Operator You can view the status of your Red Hat OpenShift Logging Operator. Prerequisites OpenShift Logging and Elasticsearch must be installed. Procedure Change to the openshift-logging project. USD oc project openshift-logging To view the OpenShift Logging status: Get the OpenShift Logging status: USD oc get clusterlogging instance -o yaml Example output apiVersion: logging.openshift.io/v1 kind: ClusterLogging .... status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: fluentd-2rhqp: ip-10-0-169-13.ec2.internal fluentd-6fgjh: ip-10-0-165-244.ec2.internal fluentd-6l2ff: ip-10-0-128-218.ec2.internal fluentd-54nx5: ip-10-0-139-30.ec2.internal fluentd-flpnn: ip-10-0-147-228.ec2.internal fluentd-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - fluentd-2rhqp - fluentd-54nx5 - fluentd-6fgjh - fluentd-6l2ff - fluentd-flpnn - fluentd-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1 1 In the output, the cluster status fields appear in the status stanza. 2 Information on the Fluentd pods. 3 Information on the Elasticsearch pods, including Elasticsearch cluster health, green , yellow , or red . 4 Information on the Kibana pods. 12.1.1.1. Example condition messages The following are examples of some condition messages from the Status.Nodes section of the OpenShift Logging instance. A status message similar to the following indicates a node has exceeded the configured low watermark and no shard will be allocated to this node: Example output nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: "True" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {} A status message similar to the following indicates a node has exceeded the configured high watermark and shards will be relocated to other nodes: Example output nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. 
reason: Disk Watermark High status: "True" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {} A status message similar to the following indicates the Elasticsearch node selector in the CR does not match any nodes in the cluster: Example output Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: A status message similar to the following indicates that the requested PVC could not bind to PV: Example output Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable A status message similar to the following indicates that the Fluentd pods cannot be scheduled because the node selector did not match any nodes: Example output Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready: 12.1.2. Viewing the status of OpenShift Logging components You can view the status for a number of OpenShift Logging components. Prerequisites OpenShift Logging and Elasticsearch must be installed. Procedure Change to the openshift-logging project. USD oc project openshift-logging View the status of the OpenShift Logging environment: USD oc describe deployment cluster-logging-operator Example output Name: cluster-logging-operator .... Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable .... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1---- View the status of the OpenShift Logging replica set: Get the name of a replica set: Example output USD oc get replicaset Example output NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m Get the status of the replica set: USD oc describe replicaset cluster-logging-operator-574b8987df Example output Name: cluster-logging-operator-574b8987df .... Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed .... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv---- 12.2. Viewing the status of the log store You can view the status of the OpenShift Elasticsearch Operator and for a number of Elasticsearch components. 12.2.1. 
Viewing the status of the log store You can view the status of your log store. Prerequisites OpenShift Logging and Elasticsearch must be installed. Procedure Change to the openshift-logging project. USD oc project openshift-logging To view the status: Get the name of the log store instance: USD oc get Elasticsearch Example output NAME AGE elasticsearch 5h9m Get the log store status: USD oc get Elasticsearch <Elasticsearch-instance> -o yaml For example: USD oc get Elasticsearch elasticsearch -n openshift-logging -o yaml The output includes information similar to the following: Example output status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: "" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all 1 In the output, the cluster status fields appear in the status stanza. 2 The status of the log store: The number of active primary shards. The number of active shards. The number of shards that are initializing. The number of log store data nodes. The total number of log store nodes. The number of pending tasks. The log store status: green , red , yellow . The number of unassigned shards. 3 Any status conditions, if present. The log store status indicates the reasons from the scheduler if a pod could not be placed. Any events related to the following conditions are shown: Container Waiting for both the log store and proxy containers. Container Terminated for both the log store and proxy containers. Pod unschedulable. Also, a condition is shown for a number of issues; see Example condition messages . 4 The log store nodes in the cluster, with upgradeStatus . 5 The log store client, data, and master pods in the cluster, listed under 'failed`, notReady , or ready state. 12.2.1.1. Example condition messages The following are examples of some condition messages from the Status section of the Elasticsearch instance. The following status message indicates that a node has exceeded the configured low watermark, and no shard will be allocated to this node. status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: "True" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {} The following status message indicates that a node has exceeded the configured high watermark, and shards will be relocated to other nodes. status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. 
reason: Disk Watermark High status: "True" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {} The following status message indicates that the log store node selector in the CR does not match any nodes in the cluster: status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: "True" type: Unschedulable The following status message indicates that the log store CR uses a non-existent persistent volume claim (PVC). status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable The following status message indicates that your log store cluster does not have enough nodes to support the redundancy policy. status: clusterHealth: "" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: "True" type: InvalidRedundancy This status message indicates your cluster has too many control plane nodes (also known as the master nodes): status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters The following status message indicates that Elasticsearch storage does not support the change you tried to make. For example: status: clusterHealth: green conditions: - lastTransitionTime: "2021-05-07T01:05:13Z" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored The reason and type fields specify the type of unsupported change: StorageClassNameChangeIgnored Unsupported change to the storage class name. StorageSizeChangeIgnored Unsupported change the storage size. StorageStructureChangeIgnored Unsupported change between ephemeral and persistent storage structures. Important If you try to configure the ClusterLogging custom resource (CR) to switch from ephemeral to persistent storage, the OpenShift Elasticsearch Operator creates a persistent volume claim (PVC) but does not create a persistent volume (PV). To clear the StorageStructureChangeIgnored status, you must revert the change to the ClusterLogging CR and delete the PVC. 12.2.2. Viewing the status of the log store components You can view the status for a number of the log store components. Elasticsearch indices You can view the status of the Elasticsearch indices. Get the name of an Elasticsearch pod: USD oc get pods --selector component=elasticsearch -o name Example output pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7 Get the status of the indices: USD oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices Example output Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. 
green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0 Log store pods You can view the status of the pods that host the log store. Get the name of a pod: USD oc get pods --selector component=elasticsearch -o name Example output pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7 Get the status of a pod: USD oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw The output includes the following status information: Example output .... Status: Running .... Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 .... proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 .... Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True .... Events: <none> Log storage pod deployment configuration You can view the status of the log store deployment configuration. Get the name of a deployment configuration: USD oc get deployment --selector component=elasticsearch -o name Example output deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3 Get the deployment configuration status: USD oc describe deployment elasticsearch-cdm-1gon-1 The output includes the following status information: Example output .... Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 .... Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable .... Events: <none> Log store replica set You can view the status of the log store replica set. Get the name of a replica set: USD oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d Get the status of the replica set: USD oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495 The output includes the following status information: Example output .... Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 .... Events: <none> 12.3. Understanding OpenShift Logging alerts All of the logging collector alerts are listed on the Alerting UI of the OpenShift Container Platform web console. 12.3.1. 
Viewing logging collector alerts Alerts are shown in the OpenShift Container Platform web console, on the Alerts tab of the Alerting UI. Alerts are in one of the following states: Firing . The alert condition is true for the duration of the timeout. Click the Options menu at the end of the firing alert to view more information or silence the alert. Pending The alert condition is currently true, but the timeout has not been reached. Not Firing . The alert is not currently triggered. Procedure To view OpenShift Logging and other OpenShift Container Platform alerts: In the OpenShift Container Platform console, click Monitoring Alerting . Click the Alerts tab. The alerts are listed, based on the filters selected. Additional resources For more information on the Alerting UI, see Managing alerts . 12.3.2. About logging collector alerts The following alerts are generated by the logging collector. You can view these alerts in the OpenShift Container Platform web console, on the Alerts page of the Alerting UI. Table 12.1. Fluentd Prometheus alerts Alert Message Description Severity FluentDHighErrorRate <value> of records have resulted in an error by fluentd <instance>. The number of FluentD output errors is high, by default more than 10 in the 15 minutes. Warning FluentdNodeDown Prometheus could not scrape fluentd <instance> for more than 10m. Fluentd is reporting that Prometheus could not scrape a specific Fluentd instance. Critical FluentdQueueLengthIncreasing In the last 12h, fluentd <instance> buffer queue length constantly increased more than 1. Current value is <value>. Fluentd is reporting that the queue size is increasing. Critical FluentDVeryHighErrorRate <value> of records have resulted in an error by fluentd <instance>. The number of FluentD output errors is very high, by default more than 25 in the 15 minutes. Critical 12.3.3. About Elasticsearch alerting rules You can view these alerting rules in Prometheus. Table 12.2. Alerting rules Alert Description Severity ElasticsearchClusterNotHealthy The cluster health status has been RED for at least 2 minutes. The cluster does not accept writes, shards may be missing, or the master node hasn't been elected yet. Critical ElasticsearchClusterNotHealthy The cluster health status has been YELLOW for at least 20 minutes. Some shard replicas are not allocated. Warning ElasticsearchDiskSpaceRunningLow The cluster is expected to be out of disk space within the 6 hours. Critical ElasticsearchHighFileDescriptorUsage The cluster is predicted to be out of file descriptors within the hour. Warning ElasticsearchJVMHeapUseHigh The JVM Heap usage on the specified node is high. Alert ElasticsearchNodeDiskWatermarkReached The specified node has hit the low watermark due to low free disk space. Shards can not be allocated to this node anymore. You should consider adding more disk space to the node. Info ElasticsearchNodeDiskWatermarkReached The specified node has hit the high watermark due to low free disk space. Some shards will be re-allocated to different nodes if possible. Make sure more disk space is added to the node or drop old indices allocated to this node. Warning ElasticsearchNodeDiskWatermarkReached The specified node has hit the flood watermark due to low free disk space. Every index that has a shard allocated on this node is enforced a read-only block. The index block must be manually released when the disk use falls below the high watermark. Critical ElasticsearchJVMHeapUseHigh The JVM Heap usage on the specified node is too high. 
Alert ElasticsearchWriteRequestsRejectionJumps Elasticsearch is experiencing an increase in write rejections on the specified node. This node might not be keeping up with the indexing speed. Warning AggregatedLoggingSystemCPUHigh The CPU used by the system on the specified node is too high. Alert ElasticsearchProcessCPUHigh The CPU used by Elasticsearch on the specified node is too high. Alert 12.4. Collecting logging data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information for project-level resources, cluster-level resources, and each of the OpenShift Logging components. For prompt support, supply diagnostic information for both OpenShift Container Platform and OpenShift Logging. Note Do not use the hack/logging-dump.sh script. The script is no longer supported and does not collect data. 12.4.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. For your OpenShift Logging environment, must-gather collects the following information: Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level Cluster-level resources, including nodes, roles, and role bindings at the cluster level OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer When you run oc adm must-gather , a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. 12.4.2. Prerequisites OpenShift Logging and Elasticsearch must be installed. 12.4.3. Collecting OpenShift Logging data You can use the oc adm must-gather CLI command to collect information about your OpenShift Logging environment. Procedure To collect OpenShift Logging information with must-gather : Navigate to the directory where you want to store the must-gather information. Run the oc adm must-gather command against the OpenShift Logging image: USD oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408 . Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: USD tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 Attach the compressed file to your support case on the Red Hat Customer Portal . 12.5. Troubleshooting for Critical Alerts 12.5.1. Elasticsearch Cluster Health is Red At least one primary shard and its replicas are not allocated to a node. Troubleshooting Check the Elasticsearch cluster health and verify that the cluster status is red. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health List the nodes that have joined the cluster. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/nodes?v List the Elasticsearch pods and compare them with the nodes in the command output from the step. 
oc -n openshift-logging get pods -l component=elasticsearch If some of the Elasticsearch nodes have not joined the cluster, perform the following steps. Confirm that Elasticsearch has an elected control plane node. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/master?v Review the pod logs of the elected control plane node for issues. oc logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging Review the logs of nodes that have not joined the cluster for issues. oc logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging If all the nodes have joined the cluster, perform the following steps, check if the cluster is in the process of recovering. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/recovery?active_only=true If there is no command output, the recovery process might be delayed or stalled by pending tasks. Check if there are pending tasks. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health |grep number_of_pending_tasks If there are pending tasks, monitor their status. If their status changes and indicates that the cluster is recovering, continue waiting. The recovery time varies according to the size of the cluster and other factors. Otherwise, if the status of the pending tasks does not change, this indicates that the recovery has stalled. If it seems like the recovery has stalled, check if cluster.routing.allocation.enable is set to none . oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty If cluster.routing.allocation.enable is set to none , set it to all . oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty -X PUT -d '{"persistent": {"cluster.routing.allocation.enable":"all"}}' Check which indices are still red. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v If any indices are still red, try to clear them by performing the following steps. Clear the cache. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty Increase the max allocation retries. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{"index.allocation.max_retries":10}' Delete all the scroll items. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_search/scroll/_all -X DELETE Increase the timeout. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{"index.unassigned.node_left.delayed_timeout":"10m"}' If the preceding steps do not clear the red indices, delete the indices individually. Identify the red index name. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v Delete the red index. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_red_index_name> -X DELETE If there are no red indices and the cluster status is red, check for a continuous heavy processing load on a data node. Check if the Elasticsearch JVM Heap usage is high. 
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_nodes/stats?pretty In the command output, review the node_name.jvm.mem.heap_used_percent field to determine the JVM Heap usage. Check for high CPU utilization. Additional resources Search for "Free up or increase disk space" in the Elasticsearch topic, Fix a red or yellow cluster status . 12.5.2. Elasticsearch Cluster Health is Yellow Replica shards for at least one primary shard are not allocated to nodes. Troubleshooting Increase the node count by adjusting nodeCount in the ClusterLogging CR. Additional resources About the Cluster Logging custom resource Configuring persistent storage for the log store Search for "Free up or increase disk space" in the Elasticsearch topic, Fix a red or yellow cluster status . 12.5.3. Elasticsearch Node Disk Low Watermark Reached Elasticsearch does not allocate shards to nodes that reach the low watermark . Troubleshooting Identify the node on which Elasticsearch is deployed. oc -n openshift-logging get po -o wide Check if there are unassigned shards . oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep unassigned_shards If there are unassigned shards, check the disk space on each node. for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done Check the nodes.node_name.fs field to determine the free disk space on that node. If the used disk percentage is above 85%, the node has exceeded the low watermark, and shards can no longer be allocated to this node. Try to increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster. If adding a new data node is problematic, decrease the total cluster redundancy policy. Check the current redundancyPolicy . oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}' Note If you are using a ClusterLogging CR, enter: oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy is higher than SingleRedundancy , set it to SingleRedundancy and save this change. If the preceding steps do not fix the issue, delete the old indices. Check the status of all indices on Elasticsearch. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices Identify an old index that can be deleted. Delete the index. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE Additional resources Search for "redundancyPolicy" in the "Sample ClusterLogging custom resource (CR)" in About the Cluster Logging custom resource 12.5.4. Elasticsearch Node Disk High Watermark Reached Elasticsearch attempts to relocate shards away from a node that has reached the high watermark . Troubleshooting Identify the node on which Elasticsearch is deployed. oc -n openshift-logging get po -o wide Check the disk space on each node. for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done Check if the cluster is rebalancing. 
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep relocating_shards If the command output shows relocating shards, the High Watermark has been exceeded. The default value of the High Watermark is 90%. The shards relocate to a node with low disk usage that has not crossed any watermark threshold limits. To allocate shards to a particular node, free up some space. Try to increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster. If adding a new data node is problematic, decrease the total cluster redundancy policy. Check the current redundancyPolicy . oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}' Note If you are using a ClusterLogging CR, enter: oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy is higher than SingleRedundancy , set it to SingleRedundancy and save this change. If the preceding steps do not fix the issue, delete the old indices. Check the status of all indices on Elasticsearch. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices Identify an old index that can be deleted. Delete the index. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE Additional resources Search for "redundancyPolicy" in the "Sample ClusterLogging custom resource (CR)" in About the Cluster Logging custom resource 12.5.5. Elasticsearch Node Disk Flood Watermark Reached Elasticsearch enforces a read-only index block on every index that has both of these conditions: One or more shards are allocated to the node. One or more disks exceed the flood stage . Troubleshooting Check the disk space of the Elasticsearch node. for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done Check the nodes.node_name.fs field to determine the free disk space on that node. If the used disk percentage is above 95%, it signifies that the node has crossed the flood watermark. Writing is blocked for shards allocated on this particular node. Try to increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster. If adding a new data node is problematic, decrease the total cluster redundancy policy. Check the current redundancyPolicy . oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}' Note If you are using a ClusterLogging CR, enter: oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy is higher than SingleRedundancy , set it to SingleRedundancy and save this change. If the preceding steps do not fix the issue, delete the old indices. Check the status of all indices on Elasticsearch. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices Identify an old index that can be deleted. Delete the index. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE Continue freeing up and monitoring the disk space until the used disk space drops below 90%. Then, unblock write to this particular node. 
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_all/_settings?pretty -X PUT -d '{"index.blocks.read_only_allow_delete": null}' Additional resources Search for "redundancyPolicy" in the "Sample ClusterLogging custom resource (CR)" in About the Cluster Logging custom resource 12.5.6. Elasticsearch JVM Heap Use is High The Elasticsearch node JVM Heap memory used is above 75%. Troubleshooting Consider increasing the heap size . 12.5.7. Aggregated Logging System CPU is High System CPU usage on the node is high. Troubleshooting Check the CPU of the cluster node. Consider allocating more CPU resources to the node. 12.5.8. Elasticsearch Process CPU is High Elasticsearch process CPU usage on the node is high. Troubleshooting Check the CPU of the cluster node. Consider allocating more CPU resources to the node. 12.5.9. Elasticsearch Disk Space is Running Low The Elasticsearch Cluster is predicted to be out of disk space within the 6 hours based on current disk usage. Troubleshooting Get the disk space of the Elasticsearch node. for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done In the command output, check the nodes.node_name.fs field to determine the free disk space on that node. Try to increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster. If adding a new data node is problematic, decrease the total cluster redundancy policy. Check the current redundancyPolicy . oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}' Note If you are using a ClusterLogging CR, enter: oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy is higher than SingleRedundancy , set it to SingleRedundancy and save this change. If the preceding steps do not fix the issue, delete the old indices. Check the status of all indices on Elasticsearch. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices Identify an old index that can be deleted. Delete the index. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE Additional resources Search for "redundancyPolicy" in the "Sample ClusterLogging custom resource (CR)" in About the Cluster Logging custom resource Search for "ElasticsearchDiskSpaceRunningLow" in About Elasticsearch alerting rules . Search for "Free up or increase disk space" in the Elasticsearch topic, Fix a red or yellow cluster status . 12.5.10. Elasticsearch FileDescriptor Usage is high Based on current usage trends, the predicted number of file descriptors on the node is insufficient. Troubleshooting Check and, if needed, configure the value of max_file_descriptors for each node, as described in the Elasticsearch File descriptors topic. Additional resources Search for "ElasticsearchHighFileDescriptorUsage" in About Elasticsearch alerting rules . Search for "File Descriptors In Use" in OpenShift Logging dashboards . | [
"oc project openshift-logging",
"oc get clusterlogging instance -o yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging . status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: fluentd-2rhqp: ip-10-0-169-13.ec2.internal fluentd-6fgjh: ip-10-0-165-244.ec2.internal fluentd-6l2ff: ip-10-0-128-218.ec2.internal fluentd-54nx5: ip-10-0-139-30.ec2.internal fluentd-flpnn: ip-10-0-147-228.ec2.internal fluentd-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - fluentd-2rhqp - fluentd-54nx5 - fluentd-6fgjh - fluentd-6l2ff - fluentd-flpnn - fluentd-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {}",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {}",
"Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready:",
"Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable",
"Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready:",
"oc project openshift-logging",
"oc describe deployment cluster-logging-operator",
"Name: cluster-logging-operator . Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1----",
"oc get replicaset",
"NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m",
"oc describe replicaset cluster-logging-operator-574b8987df",
"Name: cluster-logging-operator-574b8987df . Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv----",
"oc project openshift-logging",
"oc get Elasticsearch",
"NAME AGE elasticsearch 5h9m",
"oc get Elasticsearch <Elasticsearch-instance> -o yaml",
"oc get Elasticsearch elasticsearch -n openshift-logging -o yaml",
"status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: \"\" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: \"True\" type: Unschedulable",
"status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable",
"status: clusterHealth: \"\" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: \"True\" type: InvalidRedundancy",
"status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters",
"status: clusterHealth: green conditions: - lastTransitionTime: \"2021-05-07T01:05:13Z\" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices",
"Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw",
". Status: Running . Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 . Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True . Events: <none>",
"oc get deployment --selector component=elasticsearch -o name",
"deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3",
"oc describe deployment elasticsearch-cdm-1gon-1",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable . Events: <none>",
"oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d",
"oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Events: <none>",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/nodes?v",
"-n openshift-logging get pods -l component=elasticsearch",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/master?v",
"logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging",
"logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/recovery?active_only=true",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health |grep number_of_pending_tasks",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty -X PUT -d '{\"persistent\": {\"cluster.routing.allocation.enable\":\"all\"}}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.allocation.max_retries\":10}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_search/scroll/_all -X DELETE",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.unassigned.node_left.delayed_timeout\":\"10m\"}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_red_index_name> -X DELETE",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_nodes/stats?pretty",
"-n openshift-logging get po -o wide",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep unassigned_shards",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE",
"-n openshift-logging get po -o wide",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep relocating_shards",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_all/_settings?pretty -X PUT -d '{\"index.blocks.read_only_allow_delete\": null}'",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/troubleshooting-logging |
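The disk watermark procedures in this chapter repeat the same checks (cluster health, unassigned or relocating shards, and per-node disk usage) for each severity. The following helper script is only a sketch of how those checks can be combined. It assumes an oc client that is already logged in, the component=elasticsearch pod label used throughout this chapter, and the es_util wrapper available inside the elasticsearch container; the NS and ES_POD variable names are illustrative.

#!/usr/bin/env bash
# Sketch: summarize Elasticsearch cluster health and per-node disk usage
# for the openshift-logging troubleshooting procedures above.
set -euo pipefail
NS=openshift-logging

# Use the first Elasticsearch pod to run cluster-level queries through es_util.
ES_POD=$(oc -n "$NS" get po -l component=elasticsearch -o jsonpath='{.items[0].metadata.name}')

echo "== Cluster health (status, unassigned_shards, relocating_shards) =="
oc exec -n "$NS" -c elasticsearch "$ES_POD" -- es_util --query=_cluster/health?pretty

echo "== Disk usage per Elasticsearch pod (default watermarks: 85% low, 90% high, 95% flood) =="
for pod in $(oc -n "$NS" get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'); do
  echo "--- $pod"
  oc -n "$NS" exec -c elasticsearch "$pod" -- df -h /elasticsearch/persistent
done

Running the script before and after freeing disk space, adding data nodes, or lowering the redundancy policy makes it easy to confirm that unassigned_shards returns to 0 and that disk usage stays below the watermarks described above.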
Chapter 3. Spring and Spring Boot | Chapter 3. Spring and Spring Boot 3.1. Spring and Spring Boot tutorials Note These code tutorials use Data Grid Server and require at least one running instance. Run Spring examples Two simple tutorials can be run with Spring without Spring Boot: Test caching Test annotations Run Spring Boot examples USD mvn -s /path/to/maven-settings.xml spring-boot:run Displaying actuator statistics Navigate to http://localhost:8080/actuator/metrics in your browser to display a list of available metrics. Cache metrics are prefixed with "cache." Display each metric for each cache using tags. For example for the 'puts' stats in the basque-names cache: http://localhost:8080/actuator/metrics/cache.puts?tag=name:basque-names Collecting statistics with Prometheus The prometheus.yml file in this project contains a host.docker.internal binding that allows Prometheus to scrap metrics that the Spring actuator exposes. Change the YOUR_PATH value in the following command to the directory where Prometheus is running and then run: Podman USD podman run -d --name=prometheus -p 9090:9090 -v YOUR_PATH/integrations/spring-boot/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus --config.file=/etc/prometheus/prometheus.yml Tutorial link Description Spring Boot and Spring Cache remote mode Demonstrates how to use Spring Caches with Spring Boot and the Data Grid Server. Spring Boot and Spring Session remote mode Demonstrates how to use Spring Session with Spring Boot and the Data Grid Server. Spring Boot and Spring Cache embedded mode Demonstrates how to use Spring Caches with Spring Boot and Data Grid Embedded. Spring Boot and Spring Session embedded mode Demonstrates how to use Spring Session with Spring Boot and Data Grid Embedded. Spring cache embedded without Spring Boot Demonstrates how to use Spring Cache and Data Grid Embedded without Spring Boot. Data Grid documentation You can find more resources in our documentation at: Using Data Grid with Spring Data Grid Spring Boot Starter | [
"{package_exec}@spring-caching",
"{package_exec}@spring-annotations",
"mvn -s /path/to/maven-settings.xml spring-boot:run",
"podman run -d --name=prometheus -p 9090:9090 -v YOUR_PATH/integrations/spring-boot/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus --config.file=/etc/prometheus/prometheus.yml"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_code_tutorials/spring-tutorials |
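Beyond the browser URLs shown above, the actuator metrics can also be queried from the command line. The loop below is a sketch only; it assumes the tutorial application is running locally on port 8080 with the actuator metrics endpoint exposed, that jq is installed, and that the cache is named basque-names as in the example URL.

#!/usr/bin/env bash
# Sketch: read Spring Boot actuator cache metrics for a single named cache.
set -euo pipefail
BASE=http://localhost:8080/actuator/metrics
CACHE=basque-names

for metric in cache.gets cache.puts cache.evictions; do
  echo "== $metric for cache $CACHE =="
  # The name tag narrows the metric to one cache, as in the cache.puts example above.
  curl -s "$BASE/$metric?tag=name:$CACHE" | jq '.measurements'
done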
5.18. brltty | 5.18.1. RHBA-2012:1231 - brltty bug fix update Updated brltty packages that fix two bugs are now available for Red Hat Enterprise Linux 6. BRLTTY is a background process (daemon) which provides access to the Linux console (when in text mode) for a blind person using a refreshable braille display. It drives the braille display and provides complete screen review functionality. Bug Fixes BZ# 684526 Previously, building the brltty package could fail with an "unpackaged files" error caused by the OCaml bindings. This happened only if the ocaml package was pre-installed in the build root. The "--disable-caml-bindings" option has been added to the %configure macro so that the package now builds correctly. BZ# 809326 Previously, the /usr/lib/libbrlapi.so symbolic link installed by the brlapi-devel package incorrectly pointed to ../../lib/libbrlapi.so. The link has been fixed to correctly point to ../../lib/libbrlapi.so.0.5. All users of brltty are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/brltty
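The symbolic link fix from BZ# 809326 can be verified directly on an updated system. The commands below are a sketch and assume the brlapi-devel package is installed with the 32-bit library paths quoted in the erratum.

# Sketch: confirm the brlapi-devel symlink points at the versioned library.
readlink /usr/lib/libbrlapi.so       # expected after the update: ../../lib/libbrlapi.so.0.5
rpm -qf /usr/lib/libbrlapi.so        # shows the brlapi-devel package that owns the link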
26.2. Renewing Certificates | 26.2. Renewing Certificates For details on: automatic certificate renewal, see Section 26.2.1, "Renewing Certificates Automatically" manual certificate renewal, see Section 26.2.2, "Renewing CA Certificates Manually" 26.2.1. Renewing Certificates Automatically The certmonger service automatically renews the following certificates 28 days before their expiration date: CA certificate issued by the IdM CA as the root CA Subsystem and server certificates issued by the integrated IdM CA that are used by internal IdM services To automatically renew sub-CA CA certificates, they must be listed on the certmonger tracking list. To update the tracking list: Note If you are using an external CA as the root CA, you must renew the certificates manually, as described in Section 26.2.2, "Renewing CA Certificates Manually" . The certmonger service cannot automatically renew certificates signed by an external CA. For more information on how certmonger monitors certificate expiration dates, see Tracking Certificates with certmonger in the System-Level Authentication Guide . To verify that automatic renewal works as expected, examine certmonger log messages in the /var/log/messages file: After a certificate is renewed, certmonger records message like the following to indicate that the renewal operation has succeeded or failed: As the certificate nears its expiration, certmonger logs the following message: 26.2.2. Renewing CA Certificates Manually You can use the ipa-cacert-manage utility to manually renew: self-signed IdM CA certificate externally-signed IdM CA certificate The certificates renewed with the ipa-cacert-manage renew command use the same key pair and subject name as the old certificates. Renewing a certificate does not remove its version to enable certificate rollover. For details, see the ipa-cacert-manage (1) man page. 26.2.2.1. Renewing a Self-Signed IdM CA Certificate Manually Run the ipa-cacert-manage renew command. The command does not require you to specify the path to the certificate. The renewed certificate is now present in the LDAP certificate store and in the /etc/pki/pki-tomcat/alias NSS database. Run the ipa-certupdate utility on all servers and clients to update them with the information about the new certificate from LDAP. You must run ipa-certupdate on every server and client separately. Important Always run ipa-certupdate after manually installing a certificate. If you do not, the certificate will not be distributed to the other machines. To make sure the renewed certificate is properly installed, use the certutil utility to list the certificates in the database. For example: 26.2.2.2. Renewing an Externally-Signed IdM CA Certificate Manually Run the ipa-cacert-manage renew --external-ca command. The command creates the /var/lib/ipa/ca.csr CSR file. Submit the CSR to the external CA to get the renewed CA certificate issued. Run ipa-cacert-manage renew again, and this time specify the renewed CA certificate and the external CA certificate chain files using the --external-cert-file option. For example: The renewed CA certificate and the external CA certificate chain are now present in the LDAP certificate store and in the /etc/pki/pki-tomcat/alias/ NSS database. Run the ipa-certupdate utility on all servers and clients to update them with the information about the new certificate from LDAP. You must run ipa-certupdate on every server and client separately. Important Always run ipa-certupdate after manually installing a certificate. 
If you do not, the certificate will not be distributed to the other machines. To make sure the renewed certificate is properly installed, use the certutil utility to list the certificates in the database. For example: 26.2.3. Renewing Expired System Certificates When IdM is Offline If a system certificate has expired, IdM fails to start. IdM supports renewing system certificates even in this situation by using the ipa-cert-fix tool. Prerequisite Ensure that the LDAP service is running by entering the ipactl start --ignore-service-failures command on the host. Procedure 26.1. Renewing all expired system certificates on IdM servers On a CA in the IdM domain: Start the ipa-cert-fix utility to analyse the system and list expired certificates: Enter yes to start the renewal process: It can take up to one minute before ipa-cert-fix renews all expired certificates. Note If you ran the ipa-cert-fix utility on a CA host that was not the renewal master, and the utility renewed shared certificates, this host automatically becomes the new renewal master in the domain. There must be always only one renewal master in the domain to avoid inconsistencies. Optionally, verify that all services are running: On other servers in the IdM domain: Restart IdM with the --force parameter: With the --force parameter, the ipactl utility ignores individual startup failures. For example, if the server is also a CA, the pki-tomcat service fails to start. This is expected and ignored because of using the --force parameter. After the restart, verify that the certmonger service renewed the certificates: Note that it can take some time before certmonger renews the shared certificates on the replica. If the server is also a CA, the command reports CA_UNREACHABLE for the certificate the pki-tomcat service uses: To renew this certificate, use the ipa-cert-fix utility: | [
"ipa-certupdate trying https://idmserver.idm.example.com/ipa/json Forwarding 'schema' to json server 'https://idmserver.idm.example.com/ipa/json' trying https://idmserver.idm.example.com/ipa/json Forwarding 'ca_is_enabled' to json server 'https://idmserver.idm.example.com/ipa/json' Forwarding 'ca_find/1' to json server 'https://idmserver.idm.example.com/ipa/json' Systemwide CA database updated. Systemwide CA database updated. The ipa-certupdate command was successful",
"Certificate named \"NSS Certificate DB\" in token \"auditSigningCert cert-pki-ca\" in database \"/var/lib/pki-ca/alias\" renew success",
"certmonger: Certificate named \"NSS Certificate DB\" in token \"auditSigningCert cert-pki-ca\" in database \"/var/lib/pki-ca/alias\" will not be valid after 20160204065136.",
"certutil -L -d /etc/pki/pki-tomcat/alias",
"ipa-cacert-manage renew --external-cert-file= /tmp/servercert20110601.pem --external-cert-file= /tmp/cacert.pem",
"certutil -L -d /etc/pki/pki-tomcat/alias/",
"ipa-cert-fix The following certificates will be renewed: Dogtag sslserver certificate: Subject: CN=ca1.example.com,O=EXAMPLE.COM 201905222205 Serial: 13 Expires: 2019-05-12 05:55:47 Enter \"yes\" to proceed:",
"Enter \"yes\" to proceed: yes Proceeding. Renewed Dogtag sslserver certificate: Subject: CN=ca1.example.com,O=EXAMPLE.COM 201905222205 Serial: 268369925 Expires: 2021-08-14 02:19:33 Becoming renewal master. The ipa-cert-fix command was successful",
"ipactl status Directory Service: RUNNING krb5kdc Service: RUNNING kadmin Service: RUNNING httpd Service: RUNNING ipa-custodia Service: RUNNING pki-tomcatd Service: RUNNING ipa-otpd Service: RUNNING ipa: INFO: The ipactl command was successful",
"ipactl restart --force",
"getcert list | egrep '^Request|status:|subject:' Request ID '20190522120745': status: MONITORING subject: CN=IPA RA,O=EXAMPLE.COM 201905222205 Request ID '20190522120834': status: MONITORING subject: CN=Certificate Authority,O=EXAMPLE.COM 201905222205",
"Request ID '20190522120835': status: CA_UNREACHABLE subject: CN=ca2.example.com,O=EXAMPLE.COM 201905222205",
"ipa-cert-fix Dogtag sslserver certificate: Subject: CN=ca2.example.com,O=EXAMPLE.COM Serial: 3 Expires: 2019-05-11 12:07:11 Enter \"yes\" to proceed: yes Proceeding. Renewed Dogtag sslserver certificate: Subject: CN=ca2.example.com,O=EXAMPLE.COM 201905222205 Serial: 15 Expires: 2019-08-14 04:25:05 The ipa-cert-fix command was successful"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/cert-renewal |
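Before certmonger performs a renewal, you can check how close each tracked certificate is to its expiration date. A minimal sketch, assuming the certificates are tracked by certmonger as described above; the egrep pattern only trims the output and can be adjusted:
# getcert list | egrep '^Request|status:|subject:|expires:'
# certutil -L -d /etc/pki/pki-tomcat/alias
The first command shows the tracking status and expiry of every monitored certificate; the second lists the nicknames in the pki-tomcat NSS database so that an individual certificate can then be inspected with certutil -L -n <nickname>.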
25.8. Persistent Naming | 25.8. Persistent Naming Red Hat Enterprise Linux provides a number of ways to identify storage devices. It is important to use the correct option to identify each device when used in order to avoid inadvertently accessing the wrong device, particularly when installing to or reformatting drives. 25.8.1. Major and Minor Numbers of Storage Devices Storage devices managed by the sd driver are identified internally by a collection of major device numbers and their associated minor numbers. The major device numbers used for this purpose are not in a contiguous range. Each storage device is represented by a major number and a range of minor numbers, which are used to identify either the entire device or a partition within the device. There is a direct association between the major and minor numbers allocated to a device and numbers in the form of sd <letter(s)>[ number(s) ] . Whenever the sd driver detects a new device, an available major number and minor number range is allocated. Whenever a device is removed from the operating system, the major number and minor number range is freed for later reuse. The major and minor number range and associated sd names are allocated for each device when it is detected. This means that the association between the major and minor number range and associated sd names can change if the order of device detection changes. Although this is unusual with some hardware configurations (for example, with an internal SCSI controller and disks that have their SCSI target ID assigned by their physical location within a chassis), it can nevertheless occur. Examples of situations where this can happen are as follows: A disk may fail to power up or respond to the SCSI controller. This will result in it not being detected by the normal device probe. The disk will not be accessible to the system and subsequent devices will have their major and minor number range, including the associated sd names shifted down. For example, if a disk normally referred to as sdb is not detected, a disk that is normally referred to as sdc would instead appear as sdb . A SCSI controller (host bus adapter, or HBA) may fail to initialize, causing all disks connected to that HBA to not be detected. Any disks connected to subsequently probed HBAs would be assigned different major and minor number ranges, and different associated sd names. The order of driver initialization could change if different types of HBAs are present in the system. This would cause the disks connected to those HBAs to be detected in a different order. This can also occur if HBAs are moved to different PCI slots on the system. Disks connected to the system with Fibre Channel, iSCSI, or FCoE adapters might be inaccessible at the time the storage devices are probed, due to a storage array or intervening switch being powered off, for example. This could occur when a system reboots after a power failure, if the storage array takes longer to come online than the system take to boot. Although some Fibre Channel drivers support a mechanism to specify a persistent SCSI target ID to WWPN mapping, this will not cause the major and minor number ranges, and the associated sd names to be reserved, it will only provide consistent SCSI target ID numbers. These reasons make it undesirable to use the major and minor number range or the associated sd names when referring to devices, such as in the /etc/fstab file. There is the possibility that the wrong device will be mounted and data corruption could result. 
Occasionally, however, it is still necessary to refer to the sd names even when another mechanism is used (such as when errors are reported by a device). This is because the Linux kernel uses sd names (and also SCSI host/channel/target/LUN tuples) in kernel messages regarding the device. 25.8.2. World Wide Identifier (WWID) The World Wide Identifier (WWID) can be used in reliably identifying devices. It is a persistent, system-independent ID that the SCSI Standard requires from all SCSI devices. The WWID identifier is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. This identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data (page 0x83 ) or Unit Serial Number (page 0x80 ). The mappings from these WWIDs to the current /dev/sd names can be seen in the symlinks maintained in the /dev/disk/by-id/ directory. Example 25.4. WWID For example, a device with a page 0x83 identifier would have: Or, a device with a page 0x80 identifier would have: Red Hat Enterprise Linux automatically maintains the proper mapping from the WWID-based device name to a current /dev/sd name on that system. Applications can use the /dev/disk/by-id/ name to reference the data on the disk, even if the path to the device changes, and even when accessing the device from different systems. If there are multiple paths from a system to a device, DM Multipath uses the WWID to detect this. DM Multipath then presents a single "pseudo-device" in the /dev/mapper/wwid directory, such as /dev/mapper/3600508b400105df70000e00000ac0000 . The command multipath -l shows the mapping to the non-persistent identifiers: Host : Channel : Target : LUN , /dev/sd name, and the major:minor number. DM Multipath automatically maintains the proper mapping of each WWID-based device name to its corresponding /dev/sd name on the system. These names are persistent across path changes, and they are consistent when accessing the device from different systems. When the user_friendly_names feature (of DM Multipath ) is used, the WWID is mapped to a name of the form /dev/mapper/mpath n . By default, this mapping is maintained in the file /etc/multipath/bindings . These mpath n names are persistent as long as that file is maintained. Important If you use user_friendly_names , then additional steps are required to obtain consistent names in a cluster. Refer to the Consistent Multipath Device Names in a Cluster section in the DM Multipath book. In addition to these persistent names provided by the system, you can also use udev rules to implement persistent names of your own, mapped to the WWID of the storage. 25.8.3. Device Names Managed by the udev Mechanism in /dev/disk/by-* The udev mechanism consists of three major components: The kernel Generates events that are sent to user space when devices are added, removed, or changed. The udevd service Receives the events. The udev rules Specifies the action to take when the udev service receives the kernel events. This mechanism is used for all types of devices in Linux, not just for storage devices. In the case of storage devices, Red Hat Enterprise Linux contains udev rules that create symbolic links in the /dev/disk/ directory allowing storage devices to be referred to by their contents, a unique identifier, their serial number, or the hardware path used to access the device. 
/dev/disk/by-label/ Entries in this directory provide a symbolic name that refers to the storage device by a label in the contents (that is, the data) stored on the device. The blkid utility is used to read data from the device and determine a name (that is, a label) for the device. For example: Note The information is obtained from the contents (that is, the data) on the device so if the contents are copied to another device, the label will remain the same. The label can also be used to refer to the device in /etc/fstab using the following syntax: /dev/disk/by-uuid/ Entries in this directory provide a symbolic name that refers to the storage device by a unique identifier in the contents (that is, the data) stored on the device. The blkid utility is used to read data from the device and obtain a unique identifier (that is, the UUID) for the device. For example: /dev/disk/by-id/ Entries in this directory provide a symbolic name that refers to the storage device by a unique identifier (different from all other storage devices). The identifier is a property of the device but is not stored in the contents (that is, the data) on the devices. For example: The id is obtained from the world-wide ID of the device, or the device serial number. The /dev/disk/by-id/ entries may also include a partition number. For example: /dev/disk/by-path/ Entries in this directory provide a symbolic name that refers to the storage device by the hardware path used to access the device, beginning with a reference to the storage controller in the PCI hierarchy, and including the SCSI host, channel, target, and LUN numbers and, optionally, the partition number. Although these names are preferable to using major and minor numbers or sd names, caution must be used to ensure that the target numbers do not change in a Fibre Channel SAN environment (for example, through the use of persistent binding) and that the use of the names is updated if a host adapter is moved to a different PCI slot. In addition, there is the possibility that the SCSI host numbers could change if a HBA fails to probe, if drivers are loaded in a different order, or if a new HBA is installed on the system. An example of by-path listing is: The /dev/disk/by-path/ entries may also include a partition number, such as: 25.8.3.1. Limitations of the udev Device Naming Convention The following are some limitations of the udev naming convention. It is possible that the device may not be accessible at the time the query is performed because the udev mechanism may rely on the ability to query the storage device when the udev rules are processed for a udev event. This is more likely to occur with Fibre Channel, iSCSI or FCoE storage devices when the device is not located in the server chassis. The kernel may also send udev events at any time, causing the rules to be processed and possibly causing the /dev/disk/by-*/ links to be removed if the device is not accessible. There can be a delay between when the udev event is generated and when it is processed, such as when a large number of devices are detected and the user-space udevd service takes some amount of time to process the rules for each one). This could cause a delay between when the kernel detects the device and when the /dev/disk/by-*/ names are available. External programs such as blkid invoked by the rules may open the device for a brief period of time, making the device inaccessible for other uses. 25.8.3.2. 
Modifying Persistent Naming Attributes Although udev naming attributes are persistent, in that they do not change on their own across system reboots, some are also configurable. You can set custom values for the following persistent naming attributes: UUID : file system UUID LABEL : file system label Because the UUID and LABEL attributes are related to the file system, the tool you need to use depends on the file system on that partition. To change the UUID or LABEL attributes of an XFS file system, unmount the file system and then use the xfs_admin utility to change the attribute: To change the UUID or LABEL attributes of an ext4, ext3, or ext2 file system, use the tune2fs utility: Replace new_uuid with the UUID you want to set; for example, 1cdfbc07-1c90-4984-b5ec-f61943f5ea50 . Replace new_label with a label; for example, backup_data . Note Changing udev attributes happens in the background and might take a long time. The udevadm settle command waits until the change is fully registered, which ensures that your command will be able to utilize the new attribute correctly. You should also use the command after creating new devices; for example, after using the parted tool to create a partition with a custom PARTUUID or PARTLABEL attribute, or after creating a new file system. | [
"scsi-3600508b400105e210000900000490000 -> ../../sda",
"scsi-SSEAGATE_ST373453LW_3HW1RHM6 -> ../../sda",
"3600508b400105df70000e00000ac0000 dm-2 vendor,product [size=20G][features=1 queue_if_no_path][hwhandler=0][rw] \\_ round-robin 0 [prio=0][active] \\_ 5:0:1:1 sdc 8:32 [active][undef] \\_ 6:0:1:1 sdg 8:96 [active][undef] \\_ round-robin 0 [prio=0][enabled] \\_ 5:0:0:1 sdb 8:16 [active][undef] \\_ 6:0:0:1 sdf 8:80 [active][undef]",
"/dev/disk/by-label/Boot",
"LABEL=Boot",
"UUID=3e6be9de-8139-11d1-9106-a43f08d823a6",
"/dev/disk/by-id/scsi-3600508e000000000ce506dc50ab0ad05",
"/dev/disk/by-id/wwn-0x600508e000000000ce506dc50ab0ad05",
"/dev/disk/by-id/scsi-3600508e000000000ce506dc50ab0ad05-part1",
"/dev/disk/by-id/wwn-0x600508e000000000ce506dc50ab0ad05-part1",
"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:0",
"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:0-part1",
"umount /dev/ device # xfs_admin [ -U new_uuid ] [ -L new_label ] /dev/ device # udevadm settle",
"tune2fs [ -U new_uuid ] [ -L new_label ] /dev/ device # udevadm settle"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/persistent_naming |
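When writing scripts or /etc/fstab entries, it can be useful to resolve a kernel device name back to its persistent identifiers instead of copying them by hand. A minimal sketch, assuming the udev database is populated for the device; /dev/sda is only an example name:
#!/bin/bash
# Print the persistent udev symlinks and file-system identifiers for a block device.
dev=${1:-/dev/sda}
udevadm settle
udevadm info --query=symlink --name="$dev" | tr ' ' '\n' | grep '^disk/by-'
# UUID and LABEL, if the device contains a file system that blkid recognizes.
blkid -o export "$dev"
The symlinks printed by udevadm correspond to the entries under /dev/disk/by-id/, by-uuid/, by-label/, and by-path/ described above.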
30.3. Getting Started with VDO | 30.3. Getting Started with VDO 30.3.1. Introduction Virtual Data Optimizer (VDO) provides inline data reduction for Linux in the form of deduplication, compression, and thin provisioning. When you set up a VDO volume, you specify a block device on which to construct your VDO volume and the amount of logical storage you plan to present. When hosting active VMs or containers, Red Hat recommends provisioning storage at a 10:1 logical to physical ratio: that is, if you are utilizing 1 TB of physical storage, you would present it as 10 TB of logical storage. For object storage, such as the type provided by Ceph, Red Hat recommends using a 3:1 logical to physical ratio: that is, 1 TB of physical storage would present as 3 TB logical storage. In either case, you can simply put a file system on top of the logical device presented by VDO and then use it directly or as part of a distributed cloud storage architecture. This chapter describes the following use cases of VDO deployment: the direct-attached use case for virtualization servers, such as those built using Red Hat Virtualization, and the cloud storage use case for object-based distributed storage clusters, such as those built using Ceph Storage. Note VDO deployment with Ceph is currently not supported. This chapter provides examples for configuring VDO for use with a standard Linux file system that can be easily deployed for either use case; see the diagrams in Section 30.3.5, "Deployment Examples" . 30.3.2. Installing VDO VDO is deployed using the following RPM packages: vdo kmod-kvdo To install VDO, use the yum package manager to install the RPM packages: 30.3.3. Creating a VDO Volume Create a VDO volume for your block device. Note that multiple VDO volumes can be created for separate devices on the same machine. If you choose this approach, you must supply a different name and device for each instance of VDO on the system. Important Use expandable storage as the backing block device. For more information, see Section 30.2, "System Requirements" . In all the following steps, replace vdo_name with the identifier you want to use for your VDO volume; for example, vdo1 . Create the VDO volume using the VDO Manager: Replace block_device with the persistent name of the block device where you want to create the VDO volume. For example, /dev/disk/by-id/scsi-3600508b1001c264ad2af21e903ad031f . Important Use a persistent device name. If you use a non-persistent device name, then VDO might fail to start properly in the future if the device name changes. For more information on persistent names, see Section 25.8, "Persistent Naming" . Replace logical_size with the amount of logical storage that the VDO volume should present: For active VMs or container storage, use logical size that is ten times the physical size of your block device. For example, if your block device is 1 TB in size, use 10T here. For object storage, use logical size that is three times the physical size of your block device. For example, if your block device is 1 TB in size, use 3T here. If the block device is larger than 16 TiB, add the --vdoSlabSize=32G to increase the slab size on the volume to 32 GiB. Using the default slab size of 2 GiB on block devices larger than 16 TiB results in the vdo create command failing with the following error: For more information, see Section 30.1.3, "VDO Volume" . Example 30.1. 
Creating VDO for Container Storage For example, to create a VDO volume for container storage on a 1 TB block device, you might use: When a VDO volume is created, VDO adds an entry to the /etc/vdoconf.yml configuration file. The vdo.service systemd unit then uses the entry to start the volume by default. Important If a failure occurs when creating the VDO volume, remove the volume to clean up. See Section 30.4.3.1, "Removing an Unsuccessfully Created Volume" for details. Create a file system: For the XFS file system: For the ext4 file system: Mount the file system: To configure the file system to mount automatically, use either the /etc/fstab file or a systemd mount unit: If you decide to use the /etc/fstab configuration file, add one of the following lines to the file: For the XFS file system: For the ext4 file system: Alternatively, if you decide to use a systemd unit, create a systemd mount unit file with the appropriate filename. For the mount point of your VDO volume, create the /etc/systemd/system/mnt- vdo_name .mount file with the following content: An example systemd unit file is also installed at /usr/share/doc/vdo/examples/systemd/VDO.mount.example . Enable the discard feature for the file system on your VDO device. Both batch and online operations work with VDO. For information on how to set up the discard feature, see Section 2.4, "Discard Unused Blocks" . 30.3.4. Monitoring VDO Because VDO is thin provisioned, the file system and applications will only see the logical space in use and will not be aware of the actual physical space available. VDO space usage and efficiency can be monitored using the vdostats utility: When the physical storage capacity of a VDO volume is almost full, VDO reports a warning in the system log, similar to the following: Important Monitor physical space on your VDO volumes to prevent out-of-space situations. Running out of physical blocks might result in losing recently written, unacknowledged data on the VDO volume. 30.3.5. Deployment Examples The following examples illustrate how VDO might be used in KVM and other deployments. VDO Deployment with KVM To see how VDO can be deployed successfully on a KVM server configured with Direct Attached Storage, see Figure 30.2, "VDO Deployment with KVM" . Figure 30.2. VDO Deployment with KVM More Deployment Scenarios For more information on VDO deployment, see Section 30.5, "Deployment Scenarios" . | [
"yum install vdo kmod-kvdo",
"vdo create --name= vdo_name --device= block_device --vdoLogicalSize= logical_size [ --vdoSlabSize= slab_size ]",
"vdo: ERROR - vdoformat: formatVDO failed on '/dev/ device ': VDO Status: Exceeds maximum number of slabs supported",
"vdo create --name=vdo1 --device=/dev/disk/by-id/scsi-3600508b1001c264ad2af21e903ad031f --vdoLogicalSize=10T",
"mkfs.xfs -K /dev/mapper/ vdo_name",
"mkfs.ext4 -E nodiscard /dev/mapper/ vdo_name",
"mkdir -m 1777 /mnt/ vdo_name # mount /dev/mapper/ vdo_name /mnt/ vdo_name",
"/dev/mapper/ vdo_name /mnt/ vdo_name xfs defaults,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0",
"/dev/mapper/ vdo_name /mnt/ vdo_name ext4 defaults,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0",
"[Unit] Description = VDO unit file to mount file system name = vdo_name .mount Requires = vdo.service After = multi-user.target Conflicts = umount.target [Mount] What = /dev/mapper/ vdo_name Where = /mnt/ vdo_name Type = xfs [Install] WantedBy = multi-user.target",
"vdostats --human-readable Device 1K-blocks Used Available Use% Space saving% /dev/mapper/node1osd1 926.5G 21.0G 905.5G 2% 73% /dev/mapper/node1osd2 926.5G 28.2G 898.3G 3% 64%",
"Oct 2 17:13:39 system lvm[13863]: Monitoring VDO pool vdo_name. Oct 2 17:27:39 system lvm[13863]: WARNING: VDO pool vdo_name is now 80.69% full. Oct 2 17:28:19 system lvm[13863]: WARNING: VDO pool vdo_name is now 85.25% full. Oct 2 17:29:39 system lvm[13863]: WARNING: VDO pool vdo_name is now 90.64% full. Oct 2 17:30:29 system lvm[13863]: WARNING: VDO pool vdo_name is now 96.07% full."
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/vdo-quick-start |
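Because an out-of-space condition on the physical storage can cause data loss, it can help to check VDO usage from cron or a monitoring agent rather than relying only on the syslog warnings. A minimal sketch, assuming a volume named vdo1 and the column layout shown in the vdostats example above; the 80% threshold is arbitrary:
#!/bin/bash
# Warn when the physical space behind a VDO volume crosses a threshold.
volume=vdo1
threshold=80
used=$(vdostats --human-readable | awk -v v="$volume" '$1 ~ v { gsub("%", "", $5); print $5 }')
if [ "${used:-0}" -ge "$threshold" ]; then
    echo "WARNING: VDO volume $volume is ${used}% full" >&2
fi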
Chapter 252. OpenStack Glance Component | Chapter 252. OpenStack Glance Component Available as of Camel version 2.19 The openstack-glance component allows messages to be sent to an OpenStack image services. 252.1. Dependencies Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-openstack</artifactId> <version>USD{camel-version}</version> </dependency> where USD{camel-version } must be replaced by the actual version of Camel. 252.2. URI Format openstack-glance://hosturl[?options] You can append query options to the URI in the following format ?options=value&option2=value&... 252.3. URI Options The OpenStack Glance component has no options. The OpenStack Glance endpoint is configured using URI syntax: with the following path and query parameters: 252.3.1. Path Parameters (1 parameters): Name Description Default Type host Required OpenStack host url String 252.3.2. Query Parameters (8 parameters): Name Description Default Type apiVersion (producer) OpenStack API version V3 String config (producer) OpenStack configuration Config domain (producer) Authentication domain default String operation (producer) The operation to do String password (producer) Required OpenStack password String project (producer) Required The project ID String username (producer) Required OpenStack username String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 252.4. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.openstack-glance.enabled Enable openstack-glance component true Boolean camel.component.openstack-glance.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 252.5. Usage Operation Description reserve Reserve image. create Create new image. update Update image. upload Upload image. get Get the image. getAll Get all image. delete Delete the image. 252.5.1. Message headers evaluated by the Glance producer Header Type Description operation String The operation to perform. ID String ID of the flavor. name String The flavor name. diskFormat org.openstack4j.model.image.DiskFormat The number of flavor VCPU. containerFormat org.openstack4j.model.image.ContainerFormat Size of RAM. owner String Image owner. isPublic Boolean Is public. minRam Long Minimum ram. minDisk Long Minimum disk. size Long Size. checksum String Checksum. properties Map Image properties. 252.6. See Also Configuring Camel Component Endpoint Getting Started openstack Component | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-openstack</artifactId> <version>USD{camel-version}</version> </dependency>",
"openstack-glance://hosturl[?options]",
"openstack-glance:host"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/openstack-glance-component |
12.3.2. Deleting a Storage Pool Using virt-manager | 12.3.2. Deleting a Storage Pool Using virt-manager This procedure demonstrates how to delete a storage pool. To avoid any issues with other guest virtual machines that use the same pool, first stop the storage pool and release any resources in use by it: select the storage pool you want to stop and click the red X icon at the bottom of the Storage window. Figure 12.11. Stop Icon Then delete the storage pool by clicking the Trash can icon. This icon is only enabled once the storage pool has been stopped. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/del-stor-pool-dir
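The same cleanup can be done from the command line when virt-manager is not available. A minimal sketch, assuming a pool named guest_images; substitute the name of your storage pool:
# virsh pool-destroy guest_images    # stops (deactivates) the pool, like the red X icon
# virsh pool-undefine guest_images   # removes the persistent pool definition
Note that virsh pool-destroy only deactivates the pool; it does not delete the data in the underlying directory or device.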
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Provide as much detail as possible so that your request can be addressed. Prerequisites You have a Red Hat account. You are logged in to your Red Hat account. Procedure To provide your feedback, click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide more details about the issue or enhancement in the Description text box. If your Red Hat user name does not automatically appear in the Reporter text box, enter it. Scroll to the bottom of the page and then click the Create button. A documentation issue is created and routed to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/configuring_cloud_integrations_for_red_hat_services/proc-providing-feedback-on-redhat-documentation |
Installing and viewing plugins in Red Hat Developer Hub | Installing and viewing plugins in Red Hat Developer Hub Red Hat Developer Hub 1.4 Red Hat Customer Content Services | [
"kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic' disabled: false pluginConfig: catalog: providers: github: organization: \"USD{GITHUB_ORG}\" schedule: frequency: { minutes: 1 } timeout: { minutes: 1 } initialDelay: { seconds: 100 }",
"apiVersion: rhdh.redhat.com/v1alpha3 kind: Backstage metadata: name: my-rhdh spec: application: dynamicPluginsConfigMapName: dynamic-plugins-rhdh",
"[ { \"name\": \"backstage-plugin-catalog-backend-module-github-dynamic\", \"version\": \"0.5.2\", \"platform\": \"node\", \"role\": \"backend-plugin-module\" }, { \"name\": \"backstage-plugin-techdocs\", \"version\": \"1.10.0\", \"role\": \"frontend-plugin\", \"platform\": \"web\" }, { \"name\": \"backstage-plugin-techdocs-backend-dynamic\", \"version\": \"1.9.5\", \"platform\": \"node\", \"role\": \"backend-plugin\" }, ]",
"global: dynamic: plugins: - package: <alocal package-spec used by npm pack> - package: <external package-spec used by npm pack> integrity: sha512-<some hash> pluginConfig:",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.default.yaml> disabled: true",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false",
"apiVersion: v1 kind: Secret metadata: name: <release_name> -dynamic-plugins-npmrc 1 type: Opaque stringData: .npmrc: | registry=<registry-url> //<registry-url>:_authToken=<auth-token>",
"npx @janus-idp/cli@latest export-dynamic-plugin --shared-package '!/@backstage/plugin-notifications/' --embed-package @backstage/plugin-notifications-backend",
"npx @janus-idp/cli@latest export-dynamic",
"\"scalprum\": { \"name\": \"<package_name>\", // The Webpack container name matches the NPM package name, with \"@\" replaced by \".\" and \"/\" removed. \"exposedModules\": { \"PluginRoot\": \"./src/index.ts\" // The default module name is \"PluginRoot\" and doesn't need explicit specification in the app-config.yaml file. } }",
"\"scalprum\": { \"name\": \"custom-package-name\", \"exposedModules\": { \"FooModuleName\": \"./src/foo.ts\", \"BarModuleName\": \"./src/bar.ts\" // Define multiple modules here, with each exposed as a separate entry point in the Webpack container. } }",
"// For a static plugin export const EntityTechdocsContent = () => {...} // For a dynamic plugin export const DynamicEntityTechdocsContent = { element: EntityTechdocsContent, staticJSXContent: ( <TechDocsAddons> <ReportIssue /> </TechDocsAddons> ), };",
"npx @janus-idp/cli@latest package export-dynamic-plugin",
"npx @janus-idp/cli@latest package package-dynamic-plugins --tag quay.io/example/image:v0.0.1",
"push quay.io/example/image:v0.0.1",
"docker push quay.io/example/image:v0.0.1",
"npm pack",
"npm pack --json | head -n 10",
"plugins: - package: https://example.com/backstage-plugin-myplugin-1.0.0.tgz integrity: sha512-<hash>",
"npm pack --pack-destination ~/test/dynamic-plugins-root/",
"project my-rhdh-project new-build httpd --name=plugin-registry --binary start-build plugin-registry --from-dir=dynamic-plugins-root --wait new-app --image-stream=plugin-registry",
"plugins: - package: http://plugin-registry:8080/backstage-plugin-myplugin-1.9.6.tgz",
"npm publish --registry <npm_registry_url>",
"{ \"publishConfig\": { \"registry\": \"<npm_registry_url>\" } }",
"plugins: - disabled: false package: oci://quay.io/example/image:v0.0.1!backstage-plugin-myplugin",
"plugins: - disabled: false package: oci://quay.io/example/image@sha256:28036abec4dffc714394e4ee433f16a59493db8017795049c831be41c02eb5dc!backstage-plugin-myplugin",
"plugins: - disabled: false package: https://example.com/backstage-plugin-myplugin-1.0.0.tgz integrity: sha512-9WlbgEdadJNeQxdn1973r5E4kNFvnT9GjLD627GWgrhCaxjCmxqdNW08cj+Bf47mwAtZMt1Ttyo+ZhDRDj9PoA==",
"npm view --registry <registry-url> <npm package>@<version> dist.integrity",
"plugins: - disabled: false package: @example/[email protected] integrity: sha512-9WlbgEdadJNeQxdn1973r5E4kNFvnT9GjLD627GWgrhCaxjCmxqdNW08cj+Bf47mwAtZMt1Ttyo+ZhDRDj9PoA==",
"registry=<registry-url> //<registry-url>:_authToken=<auth-token>",
"apiVersion: v1 kind: Secret metadata: name: <release_name> -dynamic-plugins-npmrc 1 type: Opaque stringData: .npmrc: | registry=<registry-url> //<registry-url>:_authToken=<auth-token>",
"git clone https://github.com/backstage/community-plugins cd community-plugins/workspaces/todo yarn install",
"cd todo-backend npx @janus-idp/cli@latest package export-dynamic-plugin",
"Building main package executing yarn build ✔ Packing main package to dist-dynamic/package.json Customizing main package in dist-dynamic/package.json for dynamic loading moving @backstage/backend-common to peerDependencies moving @backstage/backend-openapi-utils to peerDependencies moving @backstage/backend-plugin-api to peerDependencies moving @backstage/catalog-client to peerDependencies moving @backstage/catalog-model to peerDependencies moving @backstage/config to peerDependencies moving @backstage/errors to peerDependencies moving @backstage/integration to peerDependencies moving @backstage/plugin-catalog-node to peerDependencies Installing private dependencies of the main package executing yarn install --no-immutable ✔ Validating private dependencies Validating plugin entry points Saving self-contained config schema in /Users/user/Code/community-plugins/workspaces/todo/plugins/todo-backend/dist-dynamic/dist/configSchema.json",
"cd ../todo npx @janus-idp/cli@latest package export-dynamic-plugin",
"No scalprum config. Using default dynamic UI configuration: { \"name\": \"backstage-community.plugin-todo\", \"exposedModules\": { \"PluginRoot\": \"./src/index.ts\" } } If you wish to change the defaults, add \"scalprum\" configuration to plugin \"package.json\" file, or use the '--scalprum-config' option to specify an external config. Packing main package to dist-dynamic/package.json Customizing main package in dist-dynamic/package.json for dynamic loading Generating dynamic frontend plugin assets in /Users/user/Code/community-plugins/workspaces/todo/plugins/todo/dist-dynamic/dist-scalprum 263.46 kB dist-scalprum/static/1417.d5271413.chunk.js 250 B dist-scalprum/static/react-syntax-highlighter_languages_highlight_plaintext.0b7d6592.chunk.js Saving self-contained config schema in /Users/user/Code/community-plugins/workspaces/todo/plugins/todo/dist-dynamic/dist-scalprum/configSchema.json",
"cd ../.. npx @janus-idp/cli@latest package package-dynamic-plugins --tag quay.io/user/backstage-community-plugin-todo:v0.1.1",
"executing podman --version ✔ Using existing 'dist-dynamic' directory at plugins/todo Using existing 'dist-dynamic' directory at plugins/todo-backend Copying 'plugins/todo/dist-dynamic' to '/var/folders/5c/67drc33d0018j6qgtzqpcsbw0000gn/T/package-dynamic-pluginsmcP4mU/backstage-community-plugin-todo No plugin configuration found at undefined create this file as needed if this plugin requires configuration Copying 'plugins/todo-backend/dist-dynamic' to '/var/folders/5c/67drc33d0018j6qgtzqpcsbw0000gn/T/package-dynamic-pluginsmcP4mU/backstage-community-plugin-todo-backend-dynamic No plugin configuration found at undefined create this file as needed if this plugin requires configuration Writing plugin registry metadata to '/var/folders/5c/67drc33d0018j6qgtzqpcsbw0000gn/T/package-dynamic-pluginsmcP4mU/index.json' Creating image using podman executing echo \"from scratch COPY . . \" | podman build --annotation com.redhat.rhdh.plugins='[{\"backstage-community-plugin-todo\":{\"name\":\"@backstage-community/plugin-todo\",\"version\":\"0.2.40\",\"description\":\"A Backstage plugin that lets you browse TODO comments in your source code\",\"backstage\":{\"role\":\"frontend-plugin\",\"pluginId\":\"todo\",\"pluginPackages\":[\"@backstage-community/plugin-todo\",\"@backstage-community/plugin-todo-backend\"]},\"homepage\":\"https://backstage.io\",\"repository\":{\"type\":\"git\",\"url\":\"https://github.com/backstage/community-plugins\",\"directory\":\"workspaces/todo/plugins/todo\"},\"license\":\"Apache-2.0\"}},{\"backstage-community-plugin-todo-backend-dynamic\":{\"name\":\"@backstage-community/plugin-todo-backend\",\"version\":\"0.3.19\",\"description\":\"A Backstage backend plugin that lets you browse TODO comments in your source code\",\"backstage\":{\"role\":\"backend-plugin\",\"pluginId\":\"todo\",\"pluginPackages\":[\"@backstage-community/plugin-todo\",\"@backstage-community/plugin-todo-backend\"]},\"homepage\":\"https://backstage.io\",\"repository\":{\"type\":\"git\",\"url\":\"https://github.com/backstage/community-plugins\",\"directory\":\"workspaces/todo/plugins/todo-backend\"},\"license\":\"Apache-2.0\"}}]' -t 'quay.io/user/backstage-community-plugin-todo:v0.1.1' -f - . ✔ Successfully built image quay.io/user/backstage-community-plugin-todo:v0.1.1 with following plugins: backstage-community-plugin-todo backstage-community-plugin-todo-backend-dynamic Here is an example dynamic-plugins.yaml for these plugins: plugins: - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo disabled: false - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo-backend-dynamic disabled: false",
"podman push quay.io/user/backstage-community-plugin-todo:v0.1.1",
"Getting image source signatures Copying blob sha256:86a372c456ae6a7a305cd464d194aaf03660932efd53691998ab3403f87cacb5 Copying config sha256:3b7f074856ecfbba95a77fa87cfad341e8a30c7069447de8144aea0edfcb603e Writing manifest to image destination",
"packages: - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo pluginConfig: dynamicPlugins: frontend: backstage-community.plugin-todo: mountPoints: - mountPoint: entity.page.todo/cards importName: EntityTodoContent entityTabs: - path: /todo title: Todo mountPoint: entity.page.todo - package: oci://quay.io/user/backstage-community-plugin-todo:v0.1.1!backstage-community-plugin-todo-backend-dynamic disabled: false",
"plugins: - disabled: false package: ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html-single/installing_and_viewing_plugins_in_red_hat_developer_hub/rhdh-installing-rhdh-plugins_title-plugins-rhdh-about |
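When a dynamic plugin is loaded from a tarball rather than an NPM registry, the npm view command shown above cannot supply the integrity value, so it has to be computed from the archive itself. A minimal sketch, assuming the tarball produced by npm pack is in the current directory and that the standard SRI sha512 format is expected; the filename is only an example:
# Compute an SRI-style sha512 integrity string for a plugin tarball.
tarball=backstage-plugin-myplugin-1.0.0.tgz
printf 'sha512-%s\n' "$(openssl dgst -sha512 -binary "$tarball" | openssl base64 -A)"
The resulting string can be placed in the integrity field of the corresponding package entry in dynamic-plugins.yaml.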
Chapter 11. Configuring your Logging deployment | Chapter 11. Configuring your Logging deployment 11.1. Configuring CPU and memory limits for logging components You can configure both the CPU and memory limits for each of the logging components as needed. 11.1.1. Configuring CPU and memory limits The logging components allow for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd 1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value. 2 3 Specify the CPU and memory limits and requests for the log visualizer as needed. 4 Specify the CPU and memory limits and requests for the log collector as needed. 11.2. Configuring systemd-journald and Fluentd Because Fluentd reads from the journal, and the journal default settings are very low, journal entries can be lost because the journal cannot keep up with the logging rate from system services. We recommend setting RateLimitIntervalSec=30s and RateLimitBurst=10000 (or even higher if necessary) to prevent the journal from losing entries. 11.2.1. Configuring systemd-journald for OpenShift Logging As you scale up your project, the default logging environment might need some adjustments. For example, if you are missing logs, you might have to increase the rate limits for journald. You can adjust the number of messages to retain for a specified period of time to ensure that OpenShift Logging does not use excessive resources without dropping logs. You can also determine if you want the logs compressed, how long to retain logs, how or if the logs are stored, and other settings. Procedure Create a Butane config file, 40-worker-custom-journald.bu , that includes an /etc/systemd/journald.conf file with the required settings. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.16.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: "worker" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10 1 Set the permissions for the journald.conf file. It is recommended to set 0644 permissions. 2 Specify whether you want logs compressed before they are written to the file system. Specify yes to compress the message or no to not compress. The default is yes . 3 Configure whether to forward log messages. Defaults to no for each. Specify: ForwardToConsole to forward logs to the system console. 
ForwardToKMsg to forward logs to the kernel log buffer. ForwardToSyslog to forward to a syslog daemon. ForwardToWall to forward messages as wall messages to all logged-in users. 4 Specify the maximum time to store journal entries. Enter a number to specify seconds. Or include a unit: "year", "month", "week", "day", "h" or "m". Enter 0 to disable. The default is 1month . 5 Configure rate limiting. If more logs are received than what is specified in RateLimitBurst during the time interval defined by RateLimitIntervalSec , all further messages within the interval are dropped until the interval is over. It is recommended to set RateLimitIntervalSec=30s and RateLimitBurst=10000 , which are the defaults. 6 Specify how logs are stored. The default is persistent : volatile to store logs in memory in /run/log/journal/ . These logs are lost after rebooting. persistent to store logs to disk in /var/log/journal/ . systemd creates the directory if it does not exist. auto to store logs in /var/log/journal/ if the directory exists. If it does not exist, systemd temporarily stores logs in /run/systemd/journal . none to not store logs. systemd drops all logs. 7 Specify the timeout before synchronizing journal files to disk for ERR , WARNING , NOTICE , INFO , and DEBUG logs. systemd immediately syncs after receiving a CRIT , ALERT , or EMERG log. The default is 1s . 8 Specify the maximum size the journal can use. The default is 8G . 9 Specify how much disk space systemd must leave free. The default is 20% . 10 Specify the maximum size for individual journal files stored persistently in /var/log/journal . The default is 10M . Note If you are removing the rate limit, you might see increased CPU utilization on the system logging daemons as it processes any messages that would have previously been throttled. For more information on systemd settings, see https://www.freedesktop.org/software/systemd/man/journald.conf.html . The default settings listed on that page might not apply to OpenShift Container Platform. Use Butane to generate a MachineConfig object file, 40-worker-custom-journald.yaml , containing the configuration to be delivered to the nodes: USD butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml Apply the machine config. For example: USD oc apply -f 40-worker-custom-journald.yaml The controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Monitor the status of the rollout of the new rendered configuration to each node: USD oc describe machineconfigpool/worker Example output Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool ... Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e | [
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd",
"variant: openshift version: 4.16.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: \"worker\" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10",
"butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml",
"oc apply -f 40-worker-custom-journald.yaml",
"oc describe machineconfigpool/worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/logging/configuring-your-logging-deployment |
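Once the machine config pool reports that all nodes are updated, you can spot-check a node to confirm that the rendered journald.conf contains the values from the Butane file. A minimal sketch, assuming cluster-admin access; worker-0 is a placeholder node name:
# oc debug node/worker-0 -- chroot /host grep -E 'RateLimit|Storage|SystemMaxUse' /etc/systemd/journald.conf
If the settings are missing, check that the MachineConfig was created with the worker role label and that the pool has finished rolling out.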
Chapter 1. Using Kustomize manifests to deploy applications | Chapter 1. Using Kustomize manifests to deploy applications You can use the kustomize configuration management tool with application manifests to deploy applications. Read through the following procedures for an example of how Kustomize works in MicroShift. 1.1. How Kustomize works with manifests to deploy applications The kustomize configuration management tool is integrated with MicroShift. You can use Kustomize and the OpenShift CLI ( oc ) together to apply customizations to your application manifests and deploy those applications to a MicroShift cluster. A kustomization.yaml file is a specification of resources plus customizations. Kustomize uses a kustomization.yaml file to load a resource, such as an application, then applies any changes you want to that application manifest and produces a copy of the manifest with the changes overlaid. Using a manifest copy with an overlay keeps the original configuration file for your application intact, while enabling you to deploy iterations and customizations of your applications efficiently. You can then deploy the application in your MicroShift cluster with an oc command. Note At each system start, MicroShift deletes the manifests found in the delete subdirectories and then applies the manifest files found in the manifest directories to the cluster. 1.1.1. How MicroShift uses manifests At every start, MicroShift searches the following manifest directories for Kustomize manifest files: /etc/microshift/manifests /etc/microshift/manifests.d/* /usr/lib/microshift/ /usr/lib/microshift/manifests.d/* MicroShift automatically runs the equivalent of the kubectl apply -k command to apply the manifests to the cluster if any of the following file types exists in the searched directories: kustomization.yaml kustomization.yml Kustomization This automatic loading from multiple directories means you can manage MicroShift workloads with the flexibility of having different workloads run independently of each other. Table 1.1. MicroShift manifest directories Location Intent /etc/microshift/manifests Read-write location for configuration management systems or development. /etc/microshift/manifests.d/* Read-write location for configuration management systems or development. /usr/lib/microshift/manifests Read-only location for embedding configuration manifests on OSTree-based systems. /usr/lib/microshift/manifestsd./* Read-only location for embedding configuration manifests on OSTree-based systems. 1.2. Override the list of manifest paths You can override the list of default manifest paths by using a new single path, or by using a new glob pattern for multiple files. Use the following procedure to customize your manifest paths. Procedure Override the list of default paths by inserting your own values and running one of the following commands: Set manifests.kustomizePaths to <"/opt/alternate/path"> in the configuration file for a single path. Set kustomizePaths to ,"/opt/alternative/path.d/*". in the configuration file for a glob pattern. manifests: kustomizePaths: - <location> 1 1 Set each location entry to an exact path by using "/opt/alternate/path" or a glob pattern by using "/opt/alternative/path.d/*" . To disable loading manifests, set the configuration option to an empty list. manifests: kustomizePaths: [] Note The configuration file overrides the defaults entirely. If the kustomizePaths value is set, only the values in the configuration file are used. 
Setting the value to an empty list disables manifest loading. Additional resources Deleting or updating Kustomize manifest resources 1.3. Using manifests example This example demonstrates automatic deployment of a BusyBox container using kustomize manifests in the /etc/microshift/manifests directory. Procedure Create the BusyBox manifest files by running the following commands: Define the directory location: USD MANIFEST_DIR=/etc/microshift/manifests Make the directory: USD sudo mkdir -p USD{MANIFEST_DIR} Place the YAML file in the directory: sudo tee USD{MANIFEST_DIR}/busybox.yaml &>/dev/null <<EOF apiVersion: v1 kind: Namespace metadata: name: busybox --- apiVersion: apps/v1 kind: Deployment metadata: name: busybox namespace: busybox-deployment spec: selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: containers: - name: busybox image: BUSYBOX_IMAGE command: [ "/bin/sh", "-c", "while true ; do date; sleep 3600; done;" ] EOF , create the kustomize manifest files by running the following commands: Place the YAML file in the directory: sudo tee USD{MANIFEST_DIR}/kustomization.yaml &>/dev/null <<EOF apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization namespace: busybox resources: - busybox.yaml images: - name: BUSYBOX_IMAGE newName: busybox:1.35 EOF Restart MicroShift to apply the manifests by running the following command: USD sudo systemctl restart microshift Apply the manifests and start the busybox pod by running the following command: USD oc get pods -n busybox | [
"manifests: kustomizePaths: - <location> 1",
"manifests: kustomizePaths: []",
"MANIFEST_DIR=/etc/microshift/manifests",
"sudo mkdir -p USD{MANIFEST_DIR}",
"sudo tee USD{MANIFEST_DIR}/busybox.yaml &>/dev/null <<EOF apiVersion: v1 kind: Namespace metadata: name: busybox --- apiVersion: apps/v1 kind: Deployment metadata: name: busybox namespace: busybox-deployment spec: selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: containers: - name: busybox image: BUSYBOX_IMAGE command: [ \"/bin/sh\", \"-c\", \"while true ; do date; sleep 3600; done;\" ] EOF",
"sudo tee USD{MANIFEST_DIR}/kustomization.yaml &>/dev/null <<EOF apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization namespace: busybox resources: - busybox.yaml images: - name: BUSYBOX_IMAGE newName: busybox:1.35 EOF",
"sudo systemctl restart microshift",
"oc get pods -n busybox"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/running_applications/applications-with-microshift |
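Before restarting MicroShift, it can be worth rendering the manifests locally to confirm that the kustomization produces the resources you expect. A minimal sketch, assuming an oc build with kustomize support is installed on the host; the path matches the example above:
# oc kustomize /etc/microshift/manifests
# oc apply -k /etc/microshift/manifests --dry-run=client
The first command prints the fully rendered YAML, including the image override; the client-side dry run shows what would be applied without creating anything.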
Appendix E. Editors | Appendix E. Editors E.1. Editors Editors are the UI components designed to assist editing your models and to maintain the state for a given model or resource in your workspace. When editing a model, the model will be opened in a Model Editor. Editing a property value, for instance, will require an open editor prior to actually changing the property. Any number of editors can be open at once, but only one can be active at a time. The main menu bar and toolbar for Teiid Designer may contain operations that are applicable to the active editor (and removed when editor becomes inactive). Tabs in the editor area indicate the names of models that are currently open for editing. An asterisk (*) indicates that an editor has unsaved changes. Figure E.1. Editor Tabs By default, editors are stacked in the editors area, but you can choose to tile them vertically, and or horizontally in order to view multiple models simultaneously. Figure E.2. Viewing Multiple Editors The Teiid Designer provides main editor views for XMI models and VDBs. The Model Editor contains sub-editors which provide different views of the data or parts of data within an XMI model. These sub-editors, specific to model types are listed below. Diagram Editor - All models except XML Schema models. Table Editor - All models. Simple Datatypes Editor - XML Schema models only. Semantics Editor - XML Schema models only. Source Editor - XML Schema models only. The VDB Editor is a single page editor containing panels for editing description, model contents and data roles. In addition to general Editors for models, there are detailed editors designed for editing specific model object types. These object editors include: Transformation Editor - Manages Transformation SQL for Relational View Base Tables, Procedures and XML Web Service Operations. Choice Editor - Manages properties and criteria for XML choice elements in XML Document View models. Input Editor - Manages Input Set parameters used between Mapping Classes in XML Document View models. Recursion Editor - Manages recursion properties for recursive XML Elements in XML Document View models. Operation Editor - Manages SQL and Input Variables for Web Service Operations. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/appe-editors |
18.12.10.8. ICMP | 18.12.10.8. ICMP Protocol ID: icmp Note: The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 18.10. ICMP protocol types (attribute name, datatype, definition):
srcmacaddr (MAC_ADDR): MAC address of the sender
srcmacmask (MAC_MASK): Mask applied to the MAC address of the sender
dstmacaddr (MAC_ADDR): MAC address of the destination
dstmacmask (MAC_MASK): Mask applied to the MAC address of the destination
srcipaddr (IP_ADDR): Source IP address
srcipmask (IP_MASK): Mask applied to the source IP address
dstipaddr (IP_ADDR): Destination IP address
dstipmask (IP_MASK): Mask applied to the destination IP address
srcipfrom (IP_ADDR): Start of range of source IP address
srcipto (IP_ADDR): End of range of source IP address
dstipfrom (IP_ADDR): Start of range of destination IP address
dstipto (IP_ADDR): End of range of destination IP address
type (UINT16): ICMP type
code (UINT16): ICMP code
comment (STRING): Text string up to 256 characters
state (STRING): Comma-separated list of NEW, ESTABLISHED, RELATED, INVALID, or NONE
ipset (STRING): The name of an IPSet managed outside of libvirt
ipsetflags (IPSETFLAGS): Flags for the IPSet; requires the ipset attribute | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-icmp
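As an illustration of how these attributes are combined in practice, the following defines a filter that accepts ICMP echo requests from a single address and drops all other inbound ICMP traffic. A minimal sketch, assuming the usual libvirt nwfilter XML layout; the filter name and address are placeholders:
# cat > /tmp/allow-ping.xml <<'EOF'
<filter name='allow-ping' chain='root'>
  <rule action='accept' direction='in' priority='500'>
    <icmp srcipaddr='192.168.122.1' type='8'/>
  </rule>
  <rule action='drop' direction='in' priority='1000'>
    <icmp/>
  </rule>
</filter>
EOF
# virsh nwfilter-define /tmp/allow-ping.xml
The type='8' value uses the type attribute from the table above (8 is an ICMP echo request); the comment, state, and ipset attributes can be added to the icmp element in the same way.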
4.199. nss | 4.199. nss 4.199.1. RHBA-2011:1838 - nss bug fix update Updated nss packages that fix one bug are now available for Red Hat Enterprise Linux 6. Network Security Services (NSS) is a set of libraries designed to support the cross-platform development of security-enabled client and server applications. Bug Fix BZ# 766056 Recent changes to NSS re-introduced a problem where applications could not use multiple SSL client certificates in the same process. Therefore, any attempt to run commands that worked with multiple SSL client certificates, such as the "yum repolist" command, resulted in a re-negotiation handshake failure. With this update, a revised patch correcting this problem has been applied to NSS, and using multiple SSL client certificates in the same process is now possible again. All users of nss are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/nss |
5.68. firefox | 5.68. firefox 5.68.1. RHSA-2013:0271 - Critical: firefox security update Updated firefox packages that fix several security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Firefox is an open source web browser. XULRunner provides the XUL Runtime environment for Mozilla Firefox. Security Fixes CVE-2013-0775 , CVE-2013-0780 , CVE-2013-0782 , CVE-2013-0783 Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. CVE-2013-0776 It was found that, after canceling a proxy server's authentication prompt, the address bar continued to show the requested site's address. An attacker could use this flaw to conduct phishing attacks by tricking a user into believing they are viewing a trusted site. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Nils, Abhishek Arya, Olli Pettay, Christoph Diehl, Gary Kwong, Jesse Ruderman, Andrew McCreight, Joe Drew, Wayne Mery, and Michal Zalewski as the original reporters of these issues. For technical details regarding these flaws, refer to the Mozilla security advisories for Firefox 17.0.3 ESR: http://www.mozilla.org/security/known-vulnerabilities/firefoxESR.html Note that due to a Kerberos credentials change, the following configuration steps may be required when using Firefox 17.0.3 ESR with the Enterprise Identity Management (IPA) web interface: https://access.redhat.com/site/solutions/294303 Important Firefox 17 is not completely backwards-compatible with all Mozilla add-ons and Firefox plug-ins that worked with Firefox 10.0. Firefox 17 checks compatibility on first-launch, and, depending on the individual configuration and the installed add-ons and plug-ins, may disable said Add-ons and plug-ins, or attempt to check for updates and upgrade them. Add-ons and plug-ins may have to be manually updated. All Firefox users should upgrade to these updated packages, which contain Firefox version 17.0.3 ESR, which corrects these issues. After installing the update, Firefox must be restarted for the changes to take effect. 5.68.2. RHSA-2012:1088 - Critical: firefox security update Updated firefox packages that fix multiple security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE links associated with each description below. Mozilla Firefox is an open source web browser. XULRunner provides the XUL Runtime environment for Mozilla Firefox. Security Fixes CVE-2012-1948 , CVE-2012-1951 , CVE-2012-1952 , CVE-2012-1953 , CVE-2012-1954 , CVE-2012-1958 , CVE-2012-1962 , CVE-2012-1967 A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. CVE-2012-1959 A malicious web page could bypass same-compartment security wrappers (SCSW) and execute arbitrary code with chrome privileges. 
CVE-2012-1966 A flaw in the context menu functionality in Firefox could allow a malicious website to bypass intended restrictions and allow a cross-site scripting attack. CVE-2012-1950 A page different to that in the address bar could be displayed when dragging and dropping to the address bar, possibly making it easier for a malicious site or user to perform a phishing attack. CVE-2012-1955 A flaw in the way Firefox called history.forward and history.back could allow an attacker to conceal a malicious URL, possibly tricking a user into believing they are viewing a trusted site. CVE-2012-1957 A flaw in a parser utility class used by Firefox to parse feeds (such as RSS) could allow an attacker to execute arbitrary JavaScript with the privileges of the user running Firefox. This issue could have affected other browser components or add-ons that assume the class returns sanitized input. CVE-2012-1961 A flaw in the way Firefox handled X-Frame-Options headers could allow a malicious website to perform a clickjacking attack. CVE-2012-1963 A flaw in the way Content Security Policy (CSP) reports were generated by Firefox could allow a malicious web page to steal a victim's OAuth 2.0 access tokens and OpenID credentials. CVE-2012-1964 A flaw in the way Firefox handled certificate warnings could allow a man-in-the-middle attacker to create a crafted warning, possibly tricking a user into accepting an arbitrary certificate as trusted. CVE-2012-1965 A flaw in the way Firefox handled feed:javascript URLs could allow output filtering to be bypassed, possibly leading to a cross-site scripting attack. The nss update RHBA-2012:0337 for Red Hat Enterprise Linux 5 and 6 introduced a mitigation for the CVE-2011-3389 flaw. For compatibility reasons, it remains disabled by default in the nss packages. This update makes Firefox enable the mitigation by default. It can be disabled by setting the NSS_SSL_CBC_RANDOM_IV environment variable to 0 before launching Firefox. (BZ# 838879 ) For technical details regarding these flaws, refer to the Mozilla security advisories for Firefox 10.0.6 ESR. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Benoit Jacob, Jesse Ruderman, Christian Holler, Bill McCloskey, Abhishek Arya, Arthur Gerkis, Bill Keese, moz_bug_r_a4, Bobby Holley, Code Audit Labs, Mariusz Mlynski, Mario Heiderich, Frederic Buclin, Karthikeyan Bhargavan, Matt McCutchen, Mario Gomes, and Soroush Dalili as the original reporters of these issues. All Firefox users should upgrade to these updated packages, which contain Firefox version 10.0.6 ESR, which corrects these issues. After installing the update, Firefox must be restarted for the changes to take effect. 5.68.3. RHSA-2012:1210 - Critical: firefox security update Updated firefox packages that fix multiple security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Firefox is an open source web browser. XULRunner provides the XUL Runtime environment for Mozilla Firefox. 
Security Fixes CVE-2012-1970 , CVE-2012-1972 , CVE-2012-1973 , CVE-2012-1974 , CVE-2012-1975 , CVE-2012-1976 , CVE-2012-3956 , CVE-2012-3957 , CVE-2012-3958 , CVE-2012-3959 , CVE-2012-3960 , CVE-2012-3961 , CVE-2012-3962 , CVE-2012-3963 , CVE-2012-3964 A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. CVE-2012-3969 , CVE-2012-3970 A web page containing a malicious Scalable Vector Graphics (SVG) image file could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. CVE-2012-3967 , CVE-2012-3968 Two flaws were found in the way Firefox rendered certain images using WebGL. A web page containing malicious content could cause Firefox to crash or, under certain conditions, possibly execute arbitrary code with the privileges of the user running Firefox. CVE-2012-3966 A flaw was found in the way Firefox decoded embedded bitmap images in Icon Format (ICO) files. A web page containing a malicious ICO file could cause Firefox to crash or, under certain conditions, possibly execute arbitrary code with the privileges of the user running Firefox. CVE-2012-3980 A flaw was found in the way the "eval" command was handled by the Firefox Web Console. Running "eval" in the Web Console while viewing a web page containing malicious content could possibly cause Firefox to execute arbitrary code with the privileges of the user running Firefox. CVE-2012-3972 An out-of-bounds memory read flaw was found in the way Firefox used the format-number feature of XSLT (Extensible Stylesheet Language Transformations). A web page containing malicious content could possibly cause an information leak, or cause Firefox to crash. CVE-2012-3976 It was found that the SSL certificate information for a previously visited site could be displayed in the address bar while the main window displayed a new page. This could lead to phishing attacks as attackers could use this flaw to trick users into believing they are viewing a trusted site. CVE-2012-3978 A flaw was found in the location object implementation in Firefox. Malicious content could use this flaw to possibly allow restricted content to be loaded. For technical details regarding these flaws, refer to the Mozilla security advisories for Firefox 10.0.7 ESR. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Gary Kwong, Christian Holler, Jesse Ruderman, John Schoenick, Vladimir Vukicevic, Daniel Holbert, Abhishek Arya, Frederic Hoguin, miaubiz, Arthur Gerkis, Nicolas Gregoire, Mark Poticha, moz_bug_r_a4, and Colby Russell as the original reporters of these issues. All Firefox users should upgrade to these updated packages, which contain Firefox version 10.0.7 ESR, which corrects these issues. After installing the update, Firefox must be restarted for the changes to take effect. 5.68.4. RHSA-2012:1407 - Critical: firefox security update Updated firefox packages that fix multiple security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Firefox is an open source web browser. XULRunner provides the XUL Runtime environment for Mozilla Firefox. 
Security Fix CVE-2012-4194 , CVE-2012-4195 , CVE-2012-4196 Multiple flaws were found in the location object implementation in Firefox. Malicious content could be used to perform cross-site scripting attacks, bypass the same-origin policy, or cause Firefox to execute arbitrary code. For technical details regarding these flaws, refer to the Mozilla security advisories for Firefox 10.0.10 ESR: http://www.mozilla.org/security/known-vulnerabilities/firefoxESR.html Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Mariusz Mlynski, moz_bug_r_a4, and Antoine Delignat-Lavaud as the original reporters of these issues. All Firefox users should upgrade to these updated packages, which contain Firefox version 10.0.10 ESR, which corrects these issues. After installing the update, Firefox must be restarted for the changes to take effect. 5.68.5. RHSA-2013:0144 - Critical: firefox security update Updated firefox packages that fix several security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Firefox is an open source web browser. XULRunner provides the XUL Runtime environment for Mozilla Firefox. Security Fixes CVE-2013-0744 , CVE-2013-0746 , CVE-2013-0750 , CVE-2013-0753 , CVE-2013-0754 , CVE-2013-0762 , CVE-2013-0766 , CVE-2013-0767 , CVE-2013-0769 Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. CVE-2013-0758 A flaw was found in the way Chrome Object Wrappers were implemented. Malicious content could be used to cause Firefox to execute arbitrary code via plug-ins installed in Firefox. CVE-2013-0759 A flaw in the way Firefox displayed URL values in the address bar could allow a malicious site or user to perform a phishing attack. CVE-2013-0748 An information disclosure flaw was found in the way certain JavaScript functions were implemented in Firefox. An attacker could use this flaw to bypass Address Space Layout Randomization (ASLR) and other security restrictions. For technical details regarding these flaws, refer to the Mozilla security advisories for Firefox 10.0.12 ESR: http://www.mozilla.org/security/known-vulnerabilities/firefoxESR.html Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Atte Kettunen, Boris Zbarsky, pa_kt, regenrecht, Abhishek Arya, Christoph Diehl, Christian Holler, Mats Palmgren, Chiaki Ishikawa, Mariusz Mlynski, Masato Kinugawa, and Jesse Ruderman as the original reporters of these issues. All Firefox users should upgrade to these updated packages, which contain Firefox version 10.0.12 ESR, which corrects these issues. After installing the update, Firefox must be restarted for the changes to take effect. 5.68.6. RHSA-2012:1350 - Critical: firefox security and bug fix update Updated firefox packages that fix several security issues and one bug are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. 
Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Firefox is an open source web browser. XULRunner provides the XUL Runtime environment for Mozilla Firefox. Security Fixes CVE-2012-3982 , CVE-2012-3988 , CVE-2012-3990 , CVE-2012-3995 , CVE-2012-4179 , CVE-2012-4180 , CVE-2012-4181 , CVE-2012-4182 , CVE-2012-4183 , CVE-2012-4185 , CVE-2012-4186 , CVE-2012-4187 , CVE-2012-4188 Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. CVE-2012-3986 , CVE-2012-3991 Two flaws in Firefox could allow a malicious website to bypass intended restrictions, possibly leading to information disclosure, or Firefox executing arbitrary code. Note that the information disclosure issue could possibly be combined with other flaws to achieve arbitrary code execution. CVE-2012-1956 , CVE-2012-3992 , CVE-2012-3994 Multiple flaws were found in the location object implementation in Firefox. Malicious content could be used to perform cross-site scripting attacks, script injection, or spoofing attacks. CVE-2012-3993 , CVE-2012-4184 Two flaws were found in the way Chrome Object Wrappers were implemented. Malicious content could be used to perform cross-site scripting attacks or cause Firefox to execute arbitrary code. For technical details regarding these flaws, refer to the Mozilla security advisories for Firefox 10.0.8 ESR. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Christian Holler, Jesse Ruderman, Soroush Dalili, miaubiz, Abhishek Arya, Atte Kettunen, Johnny Stenback, Alice White, moz_bug_r_a4, and Mariusz Mlynski as the original reporters of these issues. Bug Fix BZ# 809571 , BZ# 816234 In certain environments, storing personal Firefox configuration files (~/.mozilla/) on an NFS share, such as when your home directory is on a NFS share, led to Firefox functioning incorrectly, for example, navigation buttons not working as expected, and bookmarks not saving. This update adds a new configuration option, storage.nfs_filesystem, that can be used to resolve this issue. If you experience this issue: Start Firefox. Type "about:config" (without quotes) into the URL bar and press the Enter key. If prompted with "This might void your warranty!", click the "I'll be careful, I promise!" button. Right-click in the Preference Name list. In the menu that opens, select New -> Boolean. Type "storage.nfs_filesystem" (without quotes) for the preference name and then click the OK button. Select "true" for the boolean value and then press the OK button. All Firefox users should upgrade to these updated packages, which contain Firefox version 10.0.8 ESR, which corrects these issues. After installing the update, Firefox must be restarted for the changes to take effect. 5.68.7. RHSA-2012:1482 - Critical: firefox security update Updated firefox packages that fix several security issues are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Firefox is an open source web browser. 
XULRunner provides the XUL Runtime environment for Mozilla Firefox. Security Fixes CVE-2012-4214 , CVE-2012-4215 , CVE-2012-4216 , CVE-2012-5829 , CVE-2012-5830 , CVE-2012-5833 , CVE-2012-5835 , CVE-2012-5839 , CVE-2012-5840 , CVE-2012-5842 Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. CVE-2012-4202 A buffer overflow flaw was found in the way Firefox handled GIF (Graphics Interchange Format) images. A web page containing a malicious GIF image could cause Firefox to crash or, possibly, execute arbitrary code with the privileges of the user running Firefox. CVE-2012-4210 A flaw was found in the way the Style Inspector tool in Firefox handled certain Cascading Style Sheets (CSS). Running the tool (Tools -> Web Developer -> Inspect) on malicious CSS could result in the execution of HTML and CSS content with chrome privileges. CVE-2012-4207 A flaw was found in the way Firefox decoded the HZ-GB-2312 character encoding. A web page containing malicious content could cause Firefox to run JavaScript code with the permissions of a different website. CVE-2012-4209 A flaw was found in the location object implementation in Firefox. Malicious content could possibly use this flaw to allow restricted content to be loaded by plug-ins. CVE-2012-5841 A flaw was found in the way cross-origin wrappers were implemented. Malicious content could use this flaw to perform cross-site scripting attacks. CVE-2012-4201 A flaw was found in the evalInSandbox implementation in Firefox. Malicious content could use this flaw to perform cross-site scripting attacks. For technical details regarding these flaws, refer to the Mozilla security advisories for Firefox 10.0.11 ESR: http://www.mozilla.org/security/known-vulnerabilities/firefoxESR.html Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Abhishek Arya, miaubiz, Jesse Ruderman, Andrew McCreight, Bob Clary, Kyle Huey, Atte Kettunen, Mariusz Mlynski, Masato Kinugawa, Bobby Holley, and moz_bug_r_a4 as the original reporters of these issues. All Firefox users should upgrade to these updated packages, which contain Firefox version 10.0.11 ESR, which corrects these issues. After installing the update, Firefox must be restarted for the changes to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/firefox |
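Two of the workarounds mentioned in the advisories above can also be applied without the browser UI. The sketch below is an illustration only: the profile directory name is a placeholder, and the preference name (storage.nfs_filesystem) and environment variable (NSS_SSL_CBC_RANDOM_IV) are the ones stated in RHSA-2012:1350 and RHSA-2012:1088 respectively.

# Launch Firefox with the CVE-2011-3389 mitigation disabled, only if needed
# for compatibility (see RHSA-2012:1088).
NSS_SSL_CBC_RANDOM_IV=0 firefox &

# Set storage.nfs_filesystem without about:config by adding a user preference
# to the profile's user.js file (replace PROFILE_DIR with your profile directory).
echo 'user_pref("storage.nfs_filesystem", true);' >> ~/.mozilla/firefox/PROFILE_DIR/user.js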
Chapter 5. Restricting hosts' access to content | Chapter 5. Restricting hosts' access to content Satellite offers multiple options for restricting host access to content. To give hosts access to a specific subset of the content managed by Satellite, you can use the following strategies. Red Hat recommends to consider implementing the strategies in the order listed here: Content views and lifecycle environments Use content views and lifecycle environments, incorporating content view filters as needed. For more information about content views, see Chapter 7, Managing content views . For more information about lifecycle environments, see Chapter 6, Managing application lifecycles . Content overrides By default, content hosted by Satellite can be either enabled or disabled. In custom products, repositories are always disabled by default, while Red Hat products can be either enabled or disabled by default depending on the specific repository. Enabling a repository gives the host access to the repository packages or other content, allowing hosts to download and install the available content. If a repository is disabled, the host is not able to access the repository content. A content override provides you with the option to override the default enablement value of either Enabled or Disabled for any repository. You can add content overrides to hosts or activation keys. For more information about adding content overrides to hosts, see Enabling and Disabling Repositories on Hosts in Managing hosts . For more information about adding content overrides to activation keys, see Section 9.7, "Enabling and disabling repositories on activation key" . Composite content views You can use composite content views to combine and give hosts access to the content from multiple content views. For more information about composite content views, see Section 7.9, "Creating a composite content view" . Architecture and OS version restrictions In custom products, you can set restrictions on the architecture and OS versions for yum repositories on which the product will be available. For example, if you restrict a custom repository to Red Hat Enterprise Linux 8 , it is only available on hosts running Red Hat Enterprise Linux 8. Architecture and OS version restrictions hold the highest priority among all other strategies. They cannot be overridden or invalidated by content overrides, changes to content views, or changes to lifecycle environments. For this reason, it is recommended to consider the other strategies mentioned before using architecture or OS version restrictions. Red Hat repositories set architecture and OS version restrictions automatically. Release version Certain Red Hat repositories, such as the Red Hat Enterprise Linux dot release repositories, include a Release version in their repository metadata. The release version is then compared with the release version specified in the System purpose properties of the host. Access to content may be limited or restricted based on this comparison. For more information about setting system purpose attributes, see Creating a Host in Red Hat Satellite in Managing hosts . Incorporating all strategies A particular package or repository is available to a host only if all of the following are true: The repository is included in the host's content view and lifecycle environment. The host's content view has been published after the repository was added to it. The repository has not been filtered out by a content view filter. 
The repository is enabled by default or overridden to Enabled using a content override. The repository has no architecture or OS version restrictions or it has architecture or OS version restrictions that match the host. For certain Red Hat repositories either no release version is set or the release version matches that of the host. Using activation keys Using activation keys can simplify the workflow for some of these strategies. You can use activation keys to perform the following actions: Assign hosts to content views and lifecycle environments. Add content overrides to hosts. Set system purpose attributes on hosts, including release version. Activation keys only affect hosts during registration. If a host is already registered, the above attributes can be changed individually for each host or through content host bulk actions. For more information, see Managing Activation Keys in Managing content . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_content/restricting_hosts_access_to_content_content-management |
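The activation key workflow described above can be scripted with the hammer CLI. This is a rough sketch rather than an excerpt from this guide: the organization, key, content view, lifecycle environment, and repository label are invented placeholders, and option names should be confirmed with hammer activation-key --help on your Satellite version.

# Create an activation key that assigns a content view and lifecycle environment.
hammer activation-key create \
  --organization "Example Org" \
  --name "rhel8-prod-key" \
  --content-view "rhel8-cv" \
  --lifecycle-environment "Production"

# Override a repository to Enabled for hosts that register with this key.
hammer activation-key content-override \
  --organization "Example Org" \
  --name "rhel8-prod-key" \
  --content-label "example_custom_product_example_repo" \
  --value 1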
Chapter 7. Setting the Number of Directory Server Threads | Chapter 7. Setting the Number of Directory Server Threads The number of threads Directory Server uses to handle simultaneous connections affects the performance of the server. For example, if all threads are busy handling time-consuming tasks (such as add operations), new incoming connections are queued until a free thread can process the request. If the server has a low number of CPU threads, configuring a higher number of threads can increase performance. However, on a server with many CPU threads, setting too high a value does not further increase performance. By default, Directory Server calculates the number of threads automatically. This number is based on the hardware resources of the server when the instance starts. Note Red Hat recommends using the auto-tuning settings. Do not set the number of threads manually. 7.1. Automatic Thread Tuning If you enable automatic thread tuning, Directory Server uses the following optimized number of threads:

Number of CPU threads    Number of Directory Server threads
1                        16
2                        16
4                        24
8                        32
16                       48
32                       64
64                       96
128                      192
256                      384
512                      512 [a]
1024                     512 [a]
2048                     512 [a]
[a] The recommended maximum number of threads is applied.

7.1.1. Enabling Automatic Thread Tuning Using the Command Line Directory Server can automatically set the number of threads based on the available hardware threads. To enable this feature: Enable automatic setting of the number of threads: Restart the Directory Server instance: Important If you enabled the automatic setting of the number of threads, the nsslapd-threadnumber parameter shows the calculated number of threads while Directory Server is running. 7.1.2. Enabling Automatic Thread Tuning Using the Web Console Directory Server can automatically set the number of threads based on the available hardware threads. To enable this feature: Open the Directory Server user interface in the web console. For details, see the Logging Into Directory Server Using the Web Console section in the Red Hat Directory Server Administration Guide. Select the instance. Open the Server Settings menu, and select Tuning & Limits. Set the Number Of Worker Threads field to -1. Click Save. Click the Actions button, and select Restart Instance. Important If you enabled the automatic setting, the Number Of Worker Threads field shows the calculated number of threads while Directory Server is running. | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-threadnumber=\"-1\"",
"dsctl instance_name restart"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/ds-threads |
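To confirm which value the auto-tuner chose after the restart, the calculated thread count can be read back from the nsslapd-threadnumber attribute. A possible sketch, assuming the same instance URL and bind DN as in the commands above:

# Number of CPU threads available to the server.
nproc

# Thread count Directory Server calculated at startup; with automatic tuning
# enabled (-1), this shows the effective value from the table above.
dsconf -D "cn=Directory Manager" ldap://server.example.com config get nsslapd-threadnumber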
Chapter 74. Condition schema reference | Chapter 74. Condition schema reference Used in: KafkaBridgeStatus, KafkaConnectorStatus, KafkaConnectStatus, KafkaMirrorMaker2Status, KafkaMirrorMakerStatus, KafkaNodePoolStatus, KafkaRebalanceStatus, KafkaStatus, KafkaTopicStatus, KafkaUserStatus, StrimziPodSetStatus

Property (type): Description
type (string): The unique identifier of a condition, used to distinguish it from other conditions in the resource.
status (string): The status of the condition, either True, False, or Unknown.
lastTransitionTime (string): Last time the condition of a type changed from one status to another. The required format is 'yyyy-MM-ddTHH:mm:ssZ', in the UTC time zone.
reason (string): The reason for the condition's last transition (a single word in CamelCase).
message (string): Human-readable message indicating details about the condition's last transition. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-condition-reference
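Conditions of this shape are usually read from a resource's status rather than set by hand. As an illustration only (the resource kind and the name my-cluster are assumptions, not part of the schema above), the Ready condition of a Kafka custom resource could be inspected like this:

# Print the status field of the Ready condition from .status.conditions.
kubectl get kafka my-cluster -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

# Block until the Ready condition reports True, or the timeout expires.
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s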
Chapter 6. MachineConfigPool [machineconfiguration.openshift.io/v1] | Chapter 6. MachineConfigPool [machineconfiguration.openshift.io/v1] Description MachineConfigPool describes a pool of MachineConfigs. Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object MachineConfigPoolSpec is the spec for MachineConfigPool resource. status object MachineConfigPoolStatus is the status for MachineConfigPool resource. 6.1.1. .spec Description MachineConfigPoolSpec is the spec for MachineConfigPool resource. Type object Property Type Description configuration object The targeted MachineConfig object for the machine config pool. machineConfigSelector object machineConfigSelector specifies a label selector for MachineConfigs. Refer https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ on how label and selectors work. maxUnavailable integer-or-string maxUnavailable defines either an integer number or percentage of nodes in the corresponding pool that can go Unavailable during an update. This includes nodes Unavailable for any reason, including user initiated cordons, failing nodes, etc. The default value is 1. A value larger than 1 will mean multiple nodes going unavailable during the update, which may affect your workload stress on the remaining nodes. You cannot set this value to 0 to stop updates (it will default back to 1); to stop updates, use the 'paused' property instead. Drain will respect Pod Disruption Budgets (PDBs) such as etcd quorum guards, even if maxUnavailable is greater than one. nodeSelector object nodeSelector specifies a label selector for Machines paused boolean paused specifies whether or not changes to this machine config pool should be stopped. This includes generating new desiredMachineConfig and update of machines. 6.1.2. .spec.configuration Description The targeted MachineConfig object for the machine config pool. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency source array source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . source[] object ObjectReference contains enough information to let you inspect or modify the referred object. uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 6.1.3. .spec.configuration.source Description source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . Type array 6.1.4. .spec.configuration.source[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 6.1.5. .spec.machineConfigSelector Description machineConfigSelector specifies a label selector for MachineConfigs. Refer https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ on how label and selectors work. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". 
The requirements are ANDed. 6.1.6. .spec.machineConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.7. .spec.machineConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.8. .spec.nodeSelector Description nodeSelector specifies a label selector for Machines Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.9. .spec.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.10. .spec.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.11. .status Description MachineConfigPoolStatus is the status for MachineConfigPool resource. Type object Property Type Description conditions array conditions represents the latest available observations of current state. conditions[] object MachineConfigPoolCondition contains condition information for an MachineConfigPool. configuration object configuration represents the current MachineConfig object for the machine config pool. degradedMachineCount integer degradedMachineCount represents the total number of machines marked degraded (or unreconcilable). A node is marked degraded if applying a configuration failed.. machineCount integer machineCount represents the total number of machines in the machine config pool. observedGeneration integer observedGeneration represents the generation observed by the controller. readyMachineCount integer readyMachineCount represents the total number of ready machines targeted by the pool. unavailableMachineCount integer unavailableMachineCount represents the total number of unavailable (non-ready) machines targeted by the pool. 
A node is marked unavailable if it is in updating state or NodeReady condition is false. updatedMachineCount integer updatedMachineCount represents the total number of machines targeted by the pool that have the CurrentMachineConfig as their config. 6.1.12. .status.conditions Description conditions represents the latest available observations of current state. Type array 6.1.13. .status.conditions[] Description MachineConfigPoolCondition contains condition information for an MachineConfigPool. Type object Property Type Description lastTransitionTime `` lastTransitionTime is the timestamp corresponding to the last status change of this condition. message string message is a human readable description of the details of the last transition, complementing reason. reason string reason is a brief machine readable explanation for the condition's last transition. status string status of the condition, one of ('True', 'False', 'Unknown'). type string type of the condition, currently ('Done', 'Updating', 'Failed'). 6.1.14. .status.configuration Description configuration represents the current MachineConfig object for the machine config pool. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency source array source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . source[] object ObjectReference contains enough information to let you inspect or modify the referred object. uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 6.1.15. .status.configuration.source Description source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . Type array 6.1.16. .status.configuration.source[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. 
For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 6.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/machineconfigpools DELETE : delete collection of MachineConfigPool GET : list objects of kind MachineConfigPool POST : create a MachineConfigPool /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name} DELETE : delete a MachineConfigPool GET : read the specified MachineConfigPool PATCH : partially update the specified MachineConfigPool PUT : replace the specified MachineConfigPool /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name}/status GET : read status of the specified MachineConfigPool PATCH : partially update status of the specified MachineConfigPool PUT : replace status of the specified MachineConfigPool 6.2.1. /apis/machineconfiguration.openshift.io/v1/machineconfigpools Table 6.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of MachineConfigPool Table 6.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineConfigPool Table 6.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.5. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPoolList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineConfigPool Table 6.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body MachineConfigPool schema Table 6.8. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 202 - Accepted MachineConfigPool schema 401 - Unauthorized Empty 6.2.2. /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name} Table 6.9. Global path parameters Parameter Type Description name string name of the MachineConfigPool Table 6.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a MachineConfigPool Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.12. Body parameters Parameter Type Description body DeleteOptions schema Table 6.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineConfigPool Table 6.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.15. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineConfigPool Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.17. Body parameters Parameter Type Description body Patch schema Table 6.18. 
HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineConfigPool Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body MachineConfigPool schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 401 - Unauthorized Empty 6.2.3. /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name}/status Table 6.22. Global path parameters Parameter Type Description name string name of the MachineConfigPool Table 6.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified MachineConfigPool Table 6.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.25. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineConfigPool Table 6.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.27. Body parameters Parameter Type Description body Patch schema Table 6.28. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineConfigPool Table 6.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.30. Body parameters Parameter Type Description body MachineConfigPool schema Table 6.31. 
HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/machine_apis/machineconfigpool-machineconfiguration-openshift-io-v1 |
8.97. ipvsadm | 8.97. ipvsadm 8.97.1. RHBA-2014:1511 - ipvsadm bug fix update Updated ipvsadm packages that fix one bug are now available for Red Hat Enterprise Linux 6. The ipvsadm packages provide the ipsvadm tool to administer the IP Virtual Server services offered by the Linux kernel. Bug Fix BZ# 1099687 Previously, the ipvsadm tool did not handle printing of existing sync daemons correctly under certain circumstances. Consequently, the "ipvsadm --list --daemon" command did not report the existence of a backup sync daemon when only a backup sync daemon was running on a node. A patch has been applied to address this bug, and "ipvsadm --list --daemon" now correctly shows the backup sync daemon even if it is the only daemon running. Users of ipvsadm are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/ipvsadm |
Chapter 2. Power monitoring overview | Chapter 2. Power monitoring overview Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.1. About power monitoring You can use power monitoring for Red Hat OpenShift to monitor the power usage and identify power-consuming containers running in an OpenShift Container Platform cluster. Power monitoring collects and exports energy-related system statistics from various components, such as CPU and DRAM. It provides granular power consumption data for Kubernetes pods, namespaces, and nodes. Warning Power monitoring Technology Preview works only in bare-metal deployments. Most public cloud vendors do not expose Kernel Power Management Subsystems to virtual machines. 2.2. Power monitoring architecture Power monitoring is made up of the following major components: The Power monitoring Operator For administrators, the Power monitoring Operator streamlines the monitoring of power usage for workloads by simplifying the deployment and management of Kepler in an OpenShift Container Platform cluster. The setup and configuration for the Power monitoring Operator are simplified by adding a Kepler custom resource definition (CRD). The Operator also manages operations, such as upgrading, removing, configuring, and redeploying Kepler. Kepler Kepler is a key component of power monitoring. It is responsible for monitoring the power usage of containers running in OpenShift Container Platform. It generates metrics related to the power usage of both nodes and containers. 2.3. Kepler hardware and virtualization support Kepler is the key component of power monitoring that collects real-time power consumption data from a node through one of the following methods: Kernel Power Management Subsystem (preferred) rapl-sysfs : This requires access to the /sys/class/powercap/intel-rapl host file. rapl-msr : This requires access to the /dev/cpu/*/msr host file. The estimator power source Without access to the kernel's power cap subsystem, Kepler uses a machine learning model to estimate the power usage of the CPU on the node. Warning The estimator feature is experimental, not supported, and should not be relied upon. You can identify the power estimation method for a node by using the Power Monitoring / Overview dashboard. 2.4. Additional resources Power monitoring dashboards overview | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/power_monitoring/power-monitoring-overview |
Chapter 68. Configuring certificates issued by ADCS for smart card authentication in IdM | Chapter 68. Configuring certificates issued by ADCS for smart card authentication in IdM To configure smart card authentication in IdM for users whose certificates are issued by Active Directory (AD) certificate services: Your deployment is based on cross-forest trust between Identity Management (IdM) and Active Directory (AD). You want to allow smart card authentication for users whose accounts are stored in AD. Certificates are created and stored in Active Directory Certificate Services (ADCS). For an overview of smart card authentication, see Understanding smart card authentication . Configuration is accomplished in the following steps: Copying CA and user certificates from Active Directory to the IdM server and client Configuring the IdM server and clients for smart card authentication using ADCS certificates Converting a PFX (PKCS#12) file to be able to store the certificate and private key into the smart card Configuring timeouts in the sssd.conf file Creating certificate mapping rules for smart card authentication Prerequisites Identity Management (IdM) and Active Directory (AD) trust is installed For details, see Installing trust between IdM and AD . Active Directory Certificate Services (ADCS) is installed and certificates for users are generated 68.1. Windows Server settings required for trust configuration and certificate usage You must configure the following on the Windows Server: Active Directory Certificate Services (ADCS) is installed Certificate Authority is created Optional: If you are using Certificate Authority Web Enrollment, the Internet Information Services (IIS) must be configured Export the certificate: Key must have 2048 bits or more Include a private key You will need a certificate in the following format: Personal Information Exchange - PKCS #12(.PFX) Enable certificate privacy 68.2. Copying certificates from Active Directory using sftp To be able to use smart card authetication, you need to copy the following certificate files: A root CA certificate in the CER format: adcs-winserver-ca.cer on your IdM server. A user certificate with a private key in the PFX format: aduser1.pfx on an IdM client. Note This procedure expects SSH access is allowed. If SSH is unavailable the user must copy the file from the AD Server to the IdM server and client. Procedure Connect from the IdM server and copy the adcs-winserver-ca.cer root certificate to the IdM server: Connect from the IdM client and copy the aduser1.pfx user certificate to the client: Now the CA certificate is stored in the IdM server and the user certificates is stored on the client machine. 68.3. Configuring the IdM server and clients for smart card authentication using ADCS certificates You must configure the IdM (Identity Management) server and clients to be able to use smart card authentication in the IdM environment. IdM includes the ipa-advise scripts which makes all necessary changes: Install necessary packages Configure IdM server and clients Copy the CA certificates into the expected locations You can run ipa-advise on your IdM server. Follow this procedure to configure your server and clients for smart card authentication: On an IdM server: Preparing the ipa-advise script to configure your IdM server for smart card authentication. On an IdM server: Preparing the ipa-advise script to configure your IdM client for smart card authentication. 
On an IdM server: Applying the the ipa-advise server script on the IdM server using the AD certificate. Moving the client script to the IdM client machine. On an IdM client: Applying the the ipa-advise client script on the IdM client using the AD certificate. Prerequisites The certificate has been copied to the IdM server. Obtain the Kerberos ticket. Log in as a user with administration rights. Procedure On the IdM server, use the ipa-advise script for configuring a client: On the IdM server, use the ipa-advise script for configuring a server: On the IdM server, execute the script: It configures the IdM Apache HTTP Server. It enables Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) on the Key Distribution Center (KDC). It configures the IdM Web UI to accept smart card authorization requests. Copy the sc_client.sh script to the client system: Copy the Windows certificate to the client system: On the client system, run the client script: The CA certificate is installed in the correct format on the IdM server and client systems and step is to copy the user certificates onto the smart card itself. 68.4. Converting the PFX file Before you store the PFX (PKCS#12) file into the smart card, you must: Convert the file to the PEM format Extract the private key and the certificate to two different files Prerequisites The PFX file is copied into the IdM client machine. Procedure On the IdM client, into the PEM format: Extract the key into the separate file: Extract the public certificate into the separate file: At this point, you can store the aduser1.key and aduser1.crt into the smart card. 68.5. Installing tools for managing and using smart cards Prerequisites The gnutls-utils package is installed. The opensc package is installed. The pcscd service is running. Before you can configure your smart card, you must install the corresponding tools, which can generate certificates and start the pscd service. Procedure Install the opensc and gnutls-utils packages: Start the pcscd service. Verification Verify that the pcscd service is up and running 68.6. Preparing your smart card and uploading your certificates and keys to your smart card Follow this procedure to configure your smart card with the pkcs15-init tool, which helps you to configure: Erasing your smart card Setting new PINs and optional PIN Unblocking Keys (PUKs) Creating a new slot on the smart card Storing the certificate, private key, and public key in the slot If required, locking the smart card settings as certain smart cards require this type of finalization Note The pkcs15-init tool may not work with all smart cards. You must use the tools that work with the smart card you are using. Prerequisites The opensc package, which includes the pkcs15-init tool, is installed. For more details, see Installing tools for managing and using smart cards . The card is inserted in the reader and connected to the computer. You have a private key, a public key, and a certificate to store on the smart card. In this procedure, testuser.key , testuserpublic.key , and testuser.crt are the names used for the private key, public key, and the certificate. You have your current smart card user PIN and Security Officer PIN (SO-PIN). Procedure Erase your smart card and authenticate yourself with your PIN: The card has been erased. Initialize your smart card, set your user PIN and PUK, and your Security Officer PIN and PUK: The pcks15-init tool creates a new slot on the smart card. 
Set a label and the authentication ID for the slot: The label is set to a human-readable value, in this case, testuser . The auth-id must be two hexadecimal values, in this case it is set to 01 . Store and label the private key in the new slot on the smart card: Note The value you specify for --id must be the same when storing your private key and storing your certificate in the step. Specifying your own value for --id is recommended as otherwise a more complicated value is calculated by the tool. Store and label the certificate in the new slot on the smart card: Optional: Store and label the public key in the new slot on the smart card: Note If the public key corresponds to a private key or certificate, specify the same ID as the ID of the private key or certificate. Optional: Certain smart cards require you to finalize the card by locking the settings: At this stage, your smart card includes the certificate, private key, and public key in the newly created slot. You have also created your user PIN and PUK and the Security Officer PIN and PUK. 68.7. Configuring timeouts in sssd.conf Authentication with a smart card certificate might take longer than the default timeouts used by SSSD. Time out expiration can be caused by: Slow reader A forwarding form a physical device into a virtual environment Too many certificates stored on the smart card Slow response from the OCSP (Online Certificate Status Protocol) responder if OCSP is used to verify the certificates In this case you can prolong the following timeouts in the sssd.conf file, for example, to 60 seconds: p11_child_timeout krb5_auth_timeout Prerequisites You must be logged in as root. Procedure Open the sssd.conf file: Change the value of p11_child_timeout : Change the value of krb5_auth_timeout : Save the settings. Now, the interaction with the smart card is allowed to run for 1 minute (60 seconds) before authentication will fail with a timeout. 68.8. Creating certificate mapping rules for smart card authentication If you want to use one certificate for a user who has accounts in AD (Active Directory) and in IdM (Identity Management), you can create a certificate mapping rule on the IdM server. After creating such a rule, the user is able to authenticate with their smart card in both domains. For details about certificate mapping rules, see Certificate mapping rules for configuring authentication . | [
"root@idmserver ~]# sftp [email protected] [email protected]'s password: Connected to [email protected]. sftp> cd <Path to certificates> sftp> ls adcs-winserver-ca.cer aduser1.pfx sftp> sftp> get adcs-winserver-ca.cer Fetching <Path to certificates>/adcs-winserver-ca.cer to adcs-winserver-ca.cer <Path to certificates>/adcs-winserver-ca.cer 100% 1254 15KB/s 00:00 sftp quit",
"sftp [email protected] [email protected]'s password: Connected to [email protected]. sftp> cd /<Path to certificates> sftp> get aduser1.pfx Fetching <Path to certificates>/aduser1.pfx to aduser1.pfx <Path to certificates>/aduser1.pfx 100% 1254 15KB/s 00:00 sftp quit",
"ipa-advise config-client-for-smart-card-auth > sc_client.sh",
"ipa-advise config-server-for-smart-card-auth > sc_server.sh",
"sh -x sc_server.sh adcs-winserver-ca.cer",
"scp sc_client.sh [email protected]:/root Password: sc_client.sh 100% 2857 1.6MB/s 00:00",
"scp adcs-winserver-ca.cer [email protected]:/root Password: adcs-winserver-ca.cer 100% 1254 952.0KB/s 00:00",
"sh -x sc_client.sh adcs-winserver-ca.cer",
"openssl pkcs12 -in aduser1.pfx -out aduser1_cert_only.pem -clcerts -nodes Enter Import Password:",
"openssl pkcs12 -in adduser1.pfx -nocerts -out adduser1.pem > aduser1.key",
"openssl pkcs12 -in adduser1.pfx -clcerts -nokeys -out aduser1_cert_only.pem > aduser1.crt",
"yum -y install opensc gnutls-utils",
"systemctl start pcscd",
"systemctl status pcscd",
"pkcs15-init --erase-card --use-default-transport-keys Using reader with a card: Reader name PIN [Security Officer PIN] required. Please enter PIN [Security Officer PIN]:",
"pkcs15-init --create-pkcs15 --use-default-transport-keys \\ --pin 963214 --puk 321478 --so-pin 65498714 --so-puk 784123 Using reader with a card: Reader name",
"pkcs15-init --store-pin --label testuser \\ --auth-id 01 --so-pin 65498714 --pin 963214 --puk 321478 Using reader with a card: Reader name",
"pkcs15-init --store-private-key testuser.key --label testuser_key \\ --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name",
"pkcs15-init --store-certificate testuser.crt --label testuser_crt \\ --auth-id 01 --id 01 --format pem --pin 963214 Using reader with a card: Reader name",
"pkcs15-init --store-public-key testuserpublic.key --label testuserpublic_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name",
"pkcs15-init -F",
"vim /etc/sssd/sssd.conf",
"[pam] p11_child_timeout = 60",
"[domain/IDM.EXAMPLE.COM] krb5_auth_timeout = 60"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/configuring-certificates-issued-by-adcs-for-smart-card-authentication-in-idm_configuring-and-managing-idm |
Preface | Preface As an OpenShift AI administrator, you can create, delete, and manage permissions for model registries in OpenShift AI. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/managing_model_registries/pr01 |
Chapter 1. Red Hat OpenShift Cluster Manager | Chapter 1. Red Hat OpenShift Cluster Manager Red Hat OpenShift Cluster Manager is a managed service where you can install, modify, operate, and upgrade your Red Hat OpenShift clusters. This service allows you to work with all of your organization's clusters from a single dashboard. OpenShift Cluster Manager guides you to install OpenShift Container Platform, Red Hat OpenShift Service on AWS (ROSA), and OpenShift Dedicated clusters. It is also responsible for managing both OpenShift Container Platform clusters after self-installation as well as your ROSA and OpenShift Dedicated clusters. You can use OpenShift Cluster Manager to do the following actions: Create clusters View cluster details and metrics Manage your clusters with tasks such as scaling, changing node labels, networking, authentication Manage access control Monitor clusters Schedule upgrades For more information about OpenShift Cluster Manager, see the entire OpenShift Cluster Manager documentation . 1.1. Accessing Red Hat OpenShift Cluster Manager You can access OpenShift Cluster Manager with your configured OpenShift account. Prerequisites You have an account that is part of an OpenShift organization. If you are creating a cluster, your organization has specified quota. Procedure Log in to OpenShift Cluster Manager using your login credentials. 1.2. General actions On the top right of the cluster page, there are some actions that a user can perform on the entire cluster: Open console launches a web console so that the cluster owner can issue commands to the cluster. Actions drop-down menu allows the cluster owner to rename the display name of the cluster, change the amount of load balancers and persistent storage on the cluster, if applicable, manually set the node count, and delete the cluster. Refresh icon forces a refresh of the cluster. 1.3. Cluster tabs Selecting an active, installed cluster shows tabs associated with that cluster. The following tabs display after the cluster's installation completes: Overview Access control Add-ons Cluster history Networking Machine pools Support Settings 1.3.1. Overview tab The Overview tab provides information about how the cluster was configured: Cluster ID is the unique identification for the created cluster. This ID can be used when issuing commands to the cluster from the command line. Domain prefix is the prefix that is used throughout the cluster. The default value is the cluster's name. Type shows the OpenShift version that the cluster is using. Region is the server region. Availability shows which type of availability zone that the cluster uses, either single or multizone. Version is the OpenShift version that is installed on the cluster. If there is an update available, you can update from this field. Created at shows the date and time that the cluster was created. Owner identifies who created the cluster and has owner rights. Delete Protection: <status> shows whether or not the cluster's delete protection is enabled. Status displays the current status of the cluster. Total vCPU shows the total available virtual CPU for this cluster. Total memory shows the total available memory for this cluster. Infrastructure AWS account displays the AWS account that is responsible for cluster creation and maintenance. Additional encryption field shows any applicable additional encryption options. Nodes shows the actual and desired nodes on the cluster. These numbers might not match due to cluster scaling. 
Cluster autoscaling field shows whether or not you have enabled autoscaling on the cluster. Instance Metadata Service (IMDS) field shows your selected instance metadata service for the cluster. Network field shows the address and prefixes for network connectivity. OIDC configuration field shows the Open ID Connect configuration for the cluster. Resource usage section of the tab displays the resources in use with a graph. Advisor recommendations section gives insight in relation to security, performance, availability, and stability. This section requires the use of remote health functionality. See Using Insights to identify issues with the cluster in the Additional resources section. Additional resources Using Insights to identify issues with your cluster 1.3.2. Access control tab The Access control tab allows the cluster owner to set up an identity provider, grant elevated permissions, and grant roles to other users. Prerequisites You must be the cluster owner or have the correct permissions to grant roles on the cluster. Procedure Select the Grant role button. Enter the Red Hat account login for the user that you wish to grant a role on the cluster. Select the Grant role button on the dialog box. The dialog box closes, and the selected user shows the "Cluster Editor" access. 1.3.3. Add-ons tab The Add-ons tab displays all of the optional add-ons that can be added to the cluster. Select the desired add-on, and then select Install below the description for the add-on that displays. 1.3.4. Cluster history tab The Cluster history tab shows every change to the cluster from creation onward for each version. You can specify date ranges for your cluster history and use filters to search based on the description of the notification, the severity of the notification, the type of notification, and which role logged it. You may download your cluster history as a JSON or CSV file. 1.3.5. Networking tab The Networking tab provides a control plane API endpoint as well as the default application router. Both the control plane API endpoint and the default application router can be made private by selecting the respective box below label. If applicable, you can also find your virtual private cloud (VPC) details on this tab. Select the Edit application ingress button to edit the existing application ingress. You can change your application ingress to private or public by checking or unchecking the "Make router private" checkbox. Important For Security Token Service (STS) installations, these options cannot be changed. STS installations also do not allow you to change privacy nor allow you to add an additional router. 1.3.6. Machine pools tab The Machine pools tab allows the cluster owner to create new machine pools if there is enough available quota, or edit an existing machine pool. Selecting the > Edit option opens the "Edit machine pool" dialog. In this dialog, you can change the node count per availability zone, edit node labels and taints, and view any associated AWS security groups. Select the Edit cluster autoscaling button to specify your autoscaling strategy. 1.3.7. Support tab In the Support tab, you can add notification contacts for individuals that should receive cluster notifications. The username or email address that you provide must relate to a user account in the Red Hat organization where the cluster is deployed. For the steps to add a notification contact, see Adding cluster notification contacts . 
Also from this tab, you can open a support case to request technical support for your cluster. 1.3.8. Settings tab The Settings tab provides a few options for the cluster owner: Monitoring , which is enabled by default, allows for reporting done on user-defined actions. See Understanding the monitoring stack . Update strategy allows you to determine if the cluster automatically updates on a certain day of the week at a specified time or if all updates are scheduled manually. Node draining sets the duration that protected workloads are respected during updates. When this duration has passed, the node is forcibly removed. Update status shows the current version and if there are any updates available. 1.4. Additional resources For the complete documentation for OpenShift Cluster Manager, see OpenShift Cluster Manager documentation . For steps to add cluster notification contacts, see Adding cluster notification contacts | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/red_hat_openshift_cluster_manager/ocm-overview |
Chapter 15. Replacing the web server and LDAP server certificates if they have expired in the whole IdM deployment | Chapter 15. Replacing the web server and LDAP server certificates if they have expired in the whole IdM deployment Identity Management (IdM) uses the following service certificates: The LDAP (or Directory ) server certificate The web (or httpd ) server certificate The PKINIT certificate In an IdM deployment without a CA, certmonger does not by default track IdM service certificates or notify of their expiration. If the IdM system administrator does not manually set up notifications for these certificates, or configure certmonger to track them, the certificates will expire without notice. Follow this procedure to manually replace expired certificates for the httpd and LDAP services running on the server.idm.example.com IdM server. Note The HTTP and LDAP service certificates have different keypairs and subject names on different IdM servers. Therefore, you must renew the certificates on each IdM server individually. Prerequisites The HTTP and LDAP certificates have expired on all IdM replicas in the topology. If not, see Replacing the web server and LDAP server certificates if they have not yet expired on an IdM replica . You have root access to the IdM server and replicas. You know the Directory Manager password. You have created backups of the following directories and files: /etc/dirsrv/slapd- IDM-EXAMPLE-COM / /etc/httpd/alias /var/lib/certmonger /var/lib/ipa/certs/ Procedure Optional: Perform a backup of /var/lib/ipa/private and /var/lib/ipa/passwds . If you are not using the same CA to sign the new certificates or if the already installed CA certificate is no longer valid, update the information about the external CA in your local database with a file that contains a valid CA certificate chain of the external CA. The file is accepted in PEM and DER certificate, PKCS#7 certificate chain, PKCS#8 and raw private key and PKCS#12 formats. Install the certificates available in ca_certificate_chain_file.crt as additional CA certificates into IdM: Update the local IdM certificate databases with certificates from ca_certicate_chain_file.crt : Request the certificates for httpd and LDAP: Create a certificate signing request (CSR) for the Apache web server running on your IdM instances to your third party CA using the OpenSSL utility. The creation of a new private key is optional. If you still have the original private key, you can use the -in option with the openssl req command to specify the input file name to read the request from: If you want to create a new key: Create a certificate signing request (CSR) for the LDAP server running on your IdM instances to your third party CA using the OpenSSL utility: Submit the CSRs, /tmp/http.csr and tmp/ldap.csr , to the external CA, and obtain a certificate for httpd and a certificate for LDAP. The process differs depending on the service to be used as the external CA. Install the certificate for httpd : Install the LDAP certificate into an NSS database: Optional: List the available certificates: The default certificate nickname is Server-Cert , but it is possible that a different name was applied. 
Remove the old invalid certificate from the NSS database ( NSSDB ) by using the certificate nickname from the step: Create a PKCS12 file to ease the import process into NSSDB : Install the created PKCS#12 file into the NSSDB : Check that the new certificate has been successfully imported: Restart the httpd service: Restart the Directory service: Perform all the steps on all your IdM replicas. This is a prerequisite for establishing TLS connections between the replicas. Enroll the new certificates to LDAP storage: Replace the Apache web server's old private key and certificate with the new key and the newly-signed certificate: In the command above: The -w option specifies that you are installing a certificate into the web server. The --pin option specifies the password protecting the private key. When prompted, enter the Directory Manager password. Replace the LDAP server's old private key and certificate with the new key and the newly-signed certificate: In the command above: The -d option specifies that you are installing a certificate into the LDAP server. The --pin option specifies the password protecting the private key. When prompted, enter the Directory Manager password. Restart the httpd service: Restart the Directory service: Execute the commands from the step on all the other affected replicas. Additional resources man ipa-server-certinstall(1) How do I manually renew Identity Management (IPA) certificates on RHEL 8 after they have expired? (CA-less IPA) (Red Hat Knowledgebase) Converting certificate formats to work with IdM | [
"ipa-cacert-manage install ca_certificate_chain_file.crt",
"ipa-certupdate",
"openssl req -new -nodes -in /var/lib/ipa/private/httpd.key -out /tmp/http.csr -addext 'subjectAltName = DNS:_server.idm.example.com_, otherName:1.3.6.1.4.1.311.20.2.3;UTF8:HTTP/ [email protected] ' -subj '/O= IDM.EXAMPLE.COM/CN=server.idm.example.com '",
"openssl req -new -newkey rsa:2048 -nodes -keyout /var/lib/ipa/private/httpd.key -out /tmp/http.csr -addext 'subjectAltName = DNS: server.idm.example.com , otherName:1.3.6.1.4.1.311.20.2.3;UTF8:HTTP/ [email protected] ' -subj '/O= IDM.EXAMPLE.COM /CN= server.idm.example.com '",
"openssl req -new -newkey rsa:2048 -nodes -keyout ~/ldap.key -out /tmp/ldap.csr -addext 'subjectAltName = DNS: server.idm.example.com , otherName:1.3.6.1.4.1.311.20.2.3;UTF8:ldap/ [email protected] ' -subj '/O= IDM.EXAMPLE.COM /CN= server.idm.example.com '",
"cp /path/to/httpd.crt /var/lib/ipa/certs/",
"certutil -d /etc/dirsrv/slapd- IDM-EXAMPLE-COM / -L Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI Server-Cert u,u,u",
"certutil -D -d /etc/dirsrv/slapd- IDM-EXAMPLE-COM / -n 'Server-Cert' -f /etc/dirsrv/slapd- IDM-EXAMPLE-COM /pwdfile.txt",
"openssl pkcs12 -export -in ldap.crt -inkey ldap.key -out ldap.p12 -name Server-Cert",
"pk12util -i ldap.p12 -d /etc/dirsrv/slapd- IDM-EXAMPLE-COM / -k /etc/dirsrv/slapd- IDM-EXAMPLE-COM /pwdfile.txt",
"certutil -L -d /etc/dirsrv/slapd- IDM-EXAMPLE-COM /",
"systemctl restart httpd.service",
"systemctl restart dirsrv@ IDM-EXAMPLE-COM .service",
"ipa-server-certinstall -w --pin=password /var/lib/ipa/private/httpd.key /var/lib/ipa/certs/httpd.crt",
"ipa-server-certinstall -d --pin=password /etc/dirsrv/slapd- IDM-EXAMPLE-COM /ldap.key /path/to/ldap.crt",
"systemctl restart httpd.service",
"systemctl restart dirsrv@ IDM-EXAMPLE-COM .service"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_certificates_in_idm/proc_replacing-the-web-server-and-ldap-server-certificates-if-they-have-expired-in-the-whole-idm-deployment_working-with-idm-certificates |
4.283. sblim-gather | 4.283. sblim-gather 4.283.1. RHBA-2011:1593 - sblim-gather bug fix update Updated sblim-gather packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The sblim-gather package (Standards Based Linux Instrumentation for Manageability Performance Data Gatherer Base) contains agents and control programs for gathering and providing performance data and CIM (Common Information Model) Providers. The sblim-gather package has been upgraded to upstream version 2.2.3, which provides a number of bug fixes over the version. (BZ# 633991 ) Bug Fixes BZ# 712043 Previously, CIM Metrics providers specific to IBM System z were missing from the sblim-gather package, preventing proper functionality of the package on that architecture. This update ensures that the CIM Metrics providers are now properly included in the IBM System z packages, with the result that full functionality is now provided. BZ# 713174 The sblim-gather-provider package is DSP1053 compliant and advertises this via the Linux_MetricRegisteredProfile class under the root/interop namespace. Prior to this update, the registration of this class and provider was missing from the package, preventing communication with the class via CIM object managers. This bug has been fixed, and now the appropriate provider for the Linux_MetricRegisteredProfile class is properly registered under the root/interop namespace. BZ# 626769 Previously, the sblim-gather init script was incorrectly placed in the /etc/init.d directory, causing difficulties during installation of the package. With this update, the init script is correctly placed in the /etc/rc.d/init.d directory, thus fixing this bug. BZ# 627919 Previously, the sblim-gather init script exit status codes were incorrect in two scenarios: when restarting a service as a non-privileged user and when passing an invalid argument. This bug has been fixed, and all exit status codes of the sblim-gather init script are now correct. All users of sblim-gather are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/sblim-gather |
Chapter 5. Creating new bricks and configuration | Chapter 5. Creating new bricks and configuration 5.1. Host replacement prerequisites Determine which node to use as the Ansible controller node (the node from which all Ansible playbooks are executed). Red Hat recommends using a healthy node in the same cluster as the failed node as the Ansible controller node. Stop brick processes and unmount file systems on the failed host, to avoid file system inconsistency issues. Check which operating system is running on your hyperconverged hosts by running the following command: Reinstall the same operating system on the failed hyperconverged host. 5.2. Preparing the cluster for host replacement Verify host state in the Administrator Portal. Log in to the Red Hat Virtualization Administrator Portal. The host is listed as NonResponsive in the Administrator Portal. Virtual machines that previously ran on this host are in the Unknown state. Click Compute Hosts and click the Action menu (...). Click Confirm host has been rebooted and confirm the operation. Verify that the virtual machines are now listed with a state of Down . Update the SSH fingerprint for the failed node. Log in to the Ansible controller node as the root user. Remove the existing SSH fingerprint for the failed node. Copy the public key from the Ansible controller node to the freshly installed node. Verify that you can log in to all hosts in the cluster, including the Ansible controller node, using key-based SSH authentication without a password. Test access using all network addresses. The following example assumes that the Ansible controller node is host1 . Use ssh-copy-id to copy the public key to any host you cannot log into without a password using this method. 5.3. Creating the node_prep_inventory.yml file Define the replacement node in the node_prep_inventory.yml file. Procedure Familiarize yourself with your Gluster configuration. The configuration that you define in your inventory file must match the existing Gluster volume configuration. Use gluster volume info to check where your bricks should be mounted for each Gluster volume, for example: Back up the node_prep_inventory.yml file. Edit the node_prep_inventory.yml file to define your node preparation. See Appendix B, Understanding the node_prep_inventory.yml file for more information about this inventory file and its parameters. 5.4. Creating the node_replace_inventory.yml file Define your cluster hosts by creating a node_replacement_inventory.yml file. Procedure Back up the node_replace_inventory.yml file. Edit the node_replace_inventory.yml file to define your cluster. See Appendix C, Understanding the node_replace_inventory.yml file for more information about this inventory file and its parameters. 5.5. Executing the replace_node.yml playbook file The replace_node.yml playbook reconfigures a Red Hat Hyperconverged Infrastructure for Virtualization cluster to use a new node after an existing cluster node has failed. Procedure Execute the playbook. 5.6. Finalizing host replacement After you have replaced a failed host with a new host, follow these steps to ensure that the cluster is connected to the new host and properly activated. Procedure Activate the host. Log in to the Red Hat Virtualization Administrator Portal. Click Compute Hosts and observe that the replacement host is listed with a state of Maintenance . Select the host and click Management Activate . Wait for the host to reach the Up state. Attach the gluster network to the host. 
Click Compute Hosts and select the host. Click Network Interfaces Setup Host Networks . Drag and drop the newly created network to the correct interface. Ensure that the Verify connectivity between Host and Engine checkbox is checked. Ensure that the Save network configuration checkbox is checked. Click OK to save. Verify the health of the network. Click the Network Interfaces tab and check the state of the host's network. If the network interface enters an "Out of sync" state or does not have an IP Address, click Management Refresh Capabilities . 5.7. Verifying healing in progress After replacing a failed host with a new host, verify that your storage is healing as expected. Procedure Verify that healing is in progress. Run the following command on any hyperconverged host: The output shows a summary of healing activity on each brick in each volume, for example: Depending on brick size, volumes can take a long time to heal. You can still run and migrate virtual machines using this node while the underlying storage heals. | [
"pkill glusterfsd umount /gluster_bricks/{engine,vmstore,data}",
"nodectl info",
"sed -i `/ failed-host-frontend.example.com /d` /root/.ssh/known_hosts sed -i `/ failed-host-backend.example.com /d` /root/.ssh/known_hosts",
"ssh-copy-id root@ new-host-backend.example.com ssh-copy-id root@ new-host-frontend.example.com",
"ssh root@ host1-backend.example.com ssh root@ host1-frontend.example.com ssh root@ host2-backend.example.com ssh root@ host2-frontend.example.com ssh root@ new-host-backend.example.com ssh root@ new-host-frontend.example.com",
"ssh-copy-id root@ host-frontend.example.com ssh-copy-id root@ host-backend.example.com",
"gluster volume info engine | grep -i brick Number of Bricks: 1 x 3 = 3 Bricks: Brick1: host1.example.com:/gluster_bricks/engine/engine Brick2: host2.example.com:/gluster_bricks/engine/engine Brick3: host3.example.com:/gluster_bricks/engine/engine",
"cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment cp node_prep_inventory.yml node_prep_inventory.yml.bk",
"cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment cp node_replace_inventory.yml node_replace_inventory.yml.bk",
"cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/ ansible-playbook -i node_prep_inventory.yml -i node_replace_inventory.yml tasks/replace_node.yml",
"for vol in `gluster volume list`; do gluster volume heal USDvol info summary; done",
"Brick brick1 Status: Connected Total Number of entries: 3 Number of entries in heal pending: 2 Number of entries in split-brain: 1 Number of entries possibly healing: 0"
] | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/replacing_failed_hosts/replacing-hosts_same-fqdn-new-disks |
Updating Red Hat Satellite | Updating Red Hat Satellite Red Hat Satellite 6.16 Update Satellite Server and Capsule to a new minor release Red Hat Satellite Documentation Team [email protected] | [
"subscription-manager repos --enable satellite-maintenance-6.16-for-rhel-9-x86_64-rpms",
"subscription-manager repos --enable satellite-maintenance-6.16-for-rhel-8-x86_64-rpms",
"satellite-maintain update check",
"satellite-maintain update run",
"dnf needs-restarting --reboothint",
"reboot",
"dnf install 'dnf-command(reposync)'",
"[rhel-8-for-x86_64-baseos-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) baseurl= https://satellite.example.com /pulp/content/ My_Organization /Library/content/dist/rhel8/8/x86_64/baseos/os enabled = 1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [rhel-8-for-x86_64-appstream-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) baseurl= https://satellite.example.com /pulp/content/ My_Organization /Library/content/dist/rhel8/8/x86_64/appstream/os enabled = 1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [satellite-6.16-for-rhel-8-x86_64-rpms] name=Red Hat Satellite 6.16 for RHEL 8 RPMs x86_64 baseurl= https://satellite.example.com /pulp/content/ My_Organization /Library/content/dist/layered/rhel8/x86_64/satellite/6.16/os enabled = 1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [satellite-maintenance-6.16-for-rhel-8-x86_64-rpms] name=Red Hat Satellite Maintenance 6.16 for RHEL 8 RPMs x86_64 baseurl= https://satellite.example.com /pulp/content/ My_Organization /Library/content/dist/layered/rhel8/x86_64/sat-maintenance/6.16/os enabled = 1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1",
"hammer organization list",
"dnf reposync --delete --disableplugin=foreman-protector --download-metadata --repoid rhel-8-for-x86_64-appstream-rpms --repoid rhel-8-for-x86_64-baseos-rpms --repoid satellite-maintenance-6.16-for-rhel-8-x86_64-rpms --repoid satellite-6.16-for-rhel-8-x86_64-rpms -n -p ~/Satellite-repos",
"tar czf Satellite-repos.tgz -C ~ Satellite-repos",
"tar zxf Satellite-repos.tgz -C /root",
"[rhel-8-for-x86_64-baseos-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) baseurl=file:///root/Satellite-repos/rhel-8-for-x86_64-baseos-rpms enabled = 1 [rhel-8-for-x86_64-appstream-rpms] name=Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) baseurl=file:///root/Satellite-repos/rhel-8-for-x86_64-appstream-rpms enabled = 1 [satellite-6.16-for-rhel-8-x86_64-rpms] name=Red Hat Satellite 6 for RHEL 8 Server RPMs x86_64 baseurl=file:///root/Satellite-repos/satellite-6.16-for-rhel-8-x86_64-rpms enabled = 1 [satellite-maintenance-6.16-for-rhel-8-x86_64-rpms] name=Red Hat Satellite Maintenance 6 for RHEL 8 Server RPMs x86_64 baseurl=file:///root/Satellite-repos/satellite-maintenance-6.16-for-rhel-8-x86_64-rpms enabled = 1",
"satellite-maintain update check --whitelist=\"check-upstream-repository,repositories-validate\"",
"satellite-maintain update run --whitelist=\"check-upstream-repository,repositories-setup,repositories-validate\"",
"dnf needs-restarting --reboothint",
"reboot",
"[rhel-9-for-x86_64-baseos-rpms] name=Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) baseurl= https://satellite.example.com /pulp/content/ My_Organization /Library/content/dist/rhel9/9/x86_64/baseos/os enabled = 1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [rhel-9-for-x86_64-appstream-rpms] name=Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) baseurl= https://satellite.example.com /pulp/content/ My_Organization /Library/content/dist/rhel9/9/x86_64/appstream/os enabled = 1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [satellite-6.16-for-rhel-9-x86_64-rpms] name=Red Hat Satellite 6.16 for RHEL 9 RPMs x86_64 baseurl= https://satellite.example.com /pulp/content/ My_Organization /Library/content/dist/layered/rhel9/x86_64/satellite/6.16/os enabled = 1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1 [satellite-maintenance-6.16-for-rhel-9-x86_64-rpms] name=Red Hat Satellite Maintenance 6.16 for RHEL 9 RPMs x86_64 baseurl= https://satellite.example.com /pulp/content/ My_Organization /Library/content/dist/layered/rhel9/x86_64/sat-maintenance/6.16/os enabled = 1 sslclientcert = /etc/pki/katello/certs/org-debug-cert.pem sslclientkey = /etc/pki/katello/certs/org-debug-cert.pem sslcacert = /etc/pki/katello/certs/katello-server-ca.crt sslverify = 1",
"hammer organization list",
"dnf reposync --delete --disableplugin=foreman-protector --download-metadata --repoid rhel-9-for-x86_64-appstream-rpms --repoid rhel-9-for-x86_64-baseos-rpms --repoid satellite-maintenance-6.16-for-rhel-9-x86_64-rpms --repoid satellite-6.16-for-rhel-9-x86_64-rpms -n -p ~/Satellite-repos",
"tar czf Satellite-repos.tgz -C ~ Satellite-repos",
"tar zxf Satellite-repos.tgz -C /root",
"[rhel-9-for-x86_64-baseos-rpms] name=Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) baseurl=file:///root/Satellite-repos/rhel-9-for-x86_64-baseos-rpms enabled = 1 [rhel-9-for-x86_64-appstream-rpms] name=Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) baseurl=file:///root/Satellite-repos/rhel-9-for-x86_64-appstream-rpms enabled = 1 [satellite-6.16-for-rhel-9-x86_64-rpms] name=Red Hat Satellite 6 for RHEL 9 Server RPMs x86_64 baseurl=file:///root/Satellite-repos/satellite-6.16-for-rhel-9-x86_64-rpms enabled = 1 [satellite-maintenance-6.16-for-rhel-9-x86_64-rpms] name=Red Hat Satellite Maintenance 6 for RHEL 9 Server RPMs x86_64 baseurl=file:///root/Satellite-repos/satellite-maintenance-6.16-for-rhel-9-x86_64-rpms enabled = 1",
"satellite-maintain update check --whitelist=\"check-upstream-repository,repositories-validate\"",
"satellite-maintain update run --whitelist=\"check-upstream-repository,repositories-setup,repositories-validate\"",
"dnf needs-restarting --reboothint",
"reboot",
"subscription-manager repos --enable satellite-maintenance-6.16-for-rhel-9-x86_64-rpms",
"subscription-manager repos --enable satellite-maintenance-6.16-for-rhel-8-x86_64-rpms",
"satellite-maintain update check",
"satellite-maintain update run",
"dnf needs-restarting --reboothint",
"reboot"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html-single/updating_red_hat_satellite/index |
Packaging and distributing software | Packaging and distributing software Red Hat Enterprise Linux 9 Packaging software by using the RPM package management system Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/packaging_and_distributing_software/index |
Chapter 15. Network Booting with libvirt | Chapter 15. Network Booting with libvirt Guest virtual machines can be booted with PXE enabled. PXE allows guest virtual machines to boot and load their configuration off the network itself. This section demonstrates some basic configuration steps to configure PXE guests with libvirt. This section does not cover the creation of boot images or PXE servers. It is used to explain how to configure libvirt, in a private or bridged network, to boot a guest virtual machine with PXE booting enabled. Warning These procedures are provided only as an example. Ensure that you have sufficient backups before proceeding. 15.1. Preparing the Boot Server To perform the steps in this chapter you will need: A PXE Server (DHCP and TFTP) - This can be a libvirt internal server, manually-configured dhcpd and tftpd, dnsmasq, a server configured by Cobbler, or some other server. Boot images - for example, PXELINUX configured manually or by Cobbler. 15.1.1. Setting up a PXE Boot Server on a Private libvirt Network This example uses the default network. Perform the following steps: Procedure 15.1. Configuring the PXE boot server Place the PXE boot images and configuration in /var/lib/tftp . Run the following commands: Edit the <ip> element in the configuration file for the default network to include the appropriate address, network mask, DHCP address range, and boot file, where BOOT_FILENAME represents the file name you are using to boot the guest virtual machine. Boot the guest using PXE (refer to Section 15.2, "Booting a Guest Using PXE" ). | [
"virsh net-destroy default virsh net-edit default",
"<ip address='192.168.122.1' netmask='255.255.255.0'> <tftp root='/var/lib/tftp' /> <dhcp> <range start='192.168.122.2' end='192.168.122.254' /> <bootp file=' BOOT_FILENAME ' /> </dhcp> </ip>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/chap-virtualization_host_configuration_and_guest_installation_guide-libvirt_network_booting |
Chapter 4. Sharding clusters across Argo CD Application Controller replicas

You can shard clusters across multiple Argo CD Application Controller replicas if the controller is managing too many clusters and uses too much memory.

4.1. Enabling the round-robin sharding algorithm

Important
The round-robin sharding algorithm is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

By default, the Argo CD Application Controller uses the non-uniform legacy hash-based sharding algorithm to assign clusters to shards. This can result in uneven cluster distribution. You can enable the round-robin sharding algorithm to achieve more equal cluster distribution across all shards.

Using the round-robin sharding algorithm in Red Hat OpenShift GitOps provides the following benefits:

Ensure more balanced workload distribution
Prevent shards from being overloaded or underutilized
Optimize the efficiency of computing resources
Reduce the risk of bottlenecks
Improve overall performance and reliability of the Argo CD system

The introduction of alternative sharding algorithms allows for further customization based on specific use cases. You can select the algorithm that best aligns with your deployment needs, which results in greater flexibility and adaptability in diverse operational scenarios.

Tip
To leverage the benefits of alternative sharding algorithms in GitOps, it is crucial to enable sharding during deployment.

4.1.1. Enabling the round-robin sharding algorithm in the web console

You can enable the round-robin sharding algorithm by using the OpenShift Container Platform web console.

Prerequisites

You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
You have access to the OpenShift Container Platform web console.
You have access to the cluster with cluster-admin privileges.

Procedure

In the Administrator perspective of the web console, go to Operators → Installed Operators.
Click Red Hat OpenShift GitOps from the installed operators and go to the Argo CD tab.
Click the Argo CD instance where you want to enable the round-robin sharding algorithm, for example, openshift-gitops.
Click the YAML tab and edit the YAML file as shown in the following example:

Example Argo CD instance with round-robin sharding algorithm enabled

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  controller:
    sharding:
      enabled: true 1
      replicas: 3 2
    env: 3
    - name: ARGOCD_CONTROLLER_SHARDING_ALGORITHM
      value: round-robin
    logLevel: debug 4

1 Set the sharding.enabled parameter to true to enable sharding.
2 Set the number of replicas to the wanted value, for example, 3.
3 Set the sharding algorithm to round-robin.
4 Set the log level to debug so that you can verify to which shard each cluster is attached.

Click Save. A success notification alert, openshift-gitops has been updated to version <version>, appears.
Note
If you edit the default openshift-gitops instance, the Managed resource dialog box is displayed. Click Save again to confirm the changes.

Verify that the sharding is enabled with round-robin as the sharding algorithm by performing the following steps:

Go to Workloads → StatefulSets.
Select the namespace where you installed the Argo CD instance from the Project drop-down list.
Click <instance_name>-application-controller, for example, openshift-gitops-application-controller, and go to the Pods tab.
Observe the number of created application controller pods. It should correspond with the number of set replicas.
Click on the controller pod you want to examine and go to the Logs tab to view the pod logs.

Example controller pod logs snippet

time="2023-12-13T09:05:34Z" level=info msg="ArgoCD Application Controller is starting" built="2023-12-01T19:21:49Z" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=openshift-gitops version=v2.9.2+c5ea5c4
time="2023-12-13T09:05:34Z" level=info msg="Processing clusters from shard 1"
time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin" 1
time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin"
time="2023-12-13T09:05:34Z" level=info msg="appResyncPeriod=3m0s, appHardResyncPeriod=0s"

1 Look for the "Using filter function: round-robin" message.

In the log Search field, search for processed by shard to verify that the cluster distribution across shards is even, as shown in the following example.

Important
Ensure that you set the log level to debug to observe these logs.

Example controller pod logs snippet

time="2023-12-13T09:05:34Z" level=debug msg="ClustersList has 3 items"
time="2023-12-13T09:05:34Z" level=debug msg="Adding cluster with id= and name=in-cluster to cluster's map"
time="2023-12-13T09:05:34Z" level=debug msg="Adding cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 and name=in-cluster2 to cluster's map"
time="2023-12-13T09:05:34Z" level=debug msg="Adding cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w and name=in-cluster3 to cluster's map"
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id= will be processed by shard 0" 1
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1" 2
time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2" 3

1 2 3 In this example, 3 clusters are attached consecutively to shard 0, shard 1, and shard 2.

Note
If the number of clusters "C" is a multiple of the number of shard replicas "R", then each shard must have the same number of assigned clusters "N", which is equal to "C" divided by "R". The example shows 3 clusters and 3 replicas; therefore, each shard has 1 cluster assigned.

4.1.2. Enabling the round-robin sharding algorithm by using the CLI

You can enable the round-robin sharding algorithm by using the command-line interface.

Prerequisites

You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster.
You have access to the cluster with cluster-admin privileges.
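Before you run the patch commands in the following procedure, you can optionally inspect the sharding settings that are already defined on the Argo CD instance. The following commands are a minimal sketch that reuses the <argocd_instance> and <namespace> placeholders from this section; if the corresponding fields have never been set, the output is empty:

$ oc get argocd <argocd_instance> -n <namespace> -o jsonpath='{.spec.controller.sharding}'

$ oc get argocd <argocd_instance> -n <namespace> -o jsonpath='{.spec.controller.env}'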
Procedure Enable sharding and set the number of replicas to the wanted value by running the following command: USD oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"sharding":{"enabled":true,"replicas":<value>}}}}' --type=merge Example output argocd.argoproj.io/<argocd_instance> patched Configure the sharding algorithm to round-robin by running the following command: USD oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"env":[{"name":"ARGOCD_CONTROLLER_SHARDING_ALGORITHM","value":"round-robin"}]}}}' --type=merge Example output argocd.argoproj.io/<argocd_instance> patched Verify that the number of Argo CD Application Controller pods corresponds with the number of set replicas by running the following command: USD oc get pods -l app.kubernetes.io/name=<argocd_instance>-application-controller -n <namespace> Example output NAME READY STATUS RESTARTS AGE <argocd_instance>-application-controller-0 1/1 Running 0 11s <argocd_instance>-application-controller-1 1/1 Running 0 32s <argocd_instance>-application-controller-2 1/1 Running 0 22s Verify that the sharding is enabled with round-robin as the sharding algorithm by running the following command: USD oc logs <argocd_application_controller_pod> -n <namespace> Example output snippet time="2023-12-13T09:05:34Z" level=info msg="ArgoCD Application Controller is starting" built="2023-12-01T19:21:49Z" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=<namespace> version=v2.9.2+c5ea5c4 time="2023-12-13T09:05:34Z" level=info msg="Processing clusters from shard 1" time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin" 1 time="2023-12-13T09:05:34Z" level=info msg="Using filter function: round-robin" time="2023-12-13T09:05:34Z" level=info msg="appResyncPeriod=3m0s, appHardResyncPeriod=0s" 1 Look for the "Using filter function: round-robin" message. Verify that the cluster distribution across shards is even by performing the following steps: Set the log level to debug by running the following command: USD oc patch argocd <argocd_instance> -n <namespace> --patch='{"spec":{"controller":{"logLevel":"debug"}}}' --type=merge Example output argocd.argoproj.io/<argocd_instance> patched View the logs and search for processed by shard to observe to which shard each cluster is attached by running the following command: USD oc logs <argocd_application_controller_pod> -n <namespace> | grep "processed by shard" Example output snippet time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id= will be processed by shard 0" 1 time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1" 2 time="2023-12-13T09:05:34Z" level=debug msg="Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2" 3 1 2 3 In this example, 3 clusters are attached consecutively to shard 0, shard 1, and shard 2. Note If the number of clusters "C" is a multiple of the number of shard replicas "R", then each shard must have the same number of assigned clusters "N", which is equal to "C" divided by "R". The example shows 3 clusters and 3 replicas; therefore, each shard has 1 cluster assigned. 4.2. Enabling dynamic scaling of shards of the Argo CD Application Controller Important Dynamic scaling of shards is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. 
Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . By default, the Argo CD Application Controller assigns clusters to shards indefinitely. If you are using the round-robin sharding algorithm, this static assignment can result in uneven distribution of shards, particularly when replicas are added or removed. You can enable dynamic scaling of shards to automatically adjust the number of shards based on the number of clusters managed by the Argo CD Application Controller at a given time. This ensures that shards are well-balanced and optimizes the use of compute resources. Note After you enable dynamic scaling, you cannot manually modify the shard count. The system automatically adjusts the number of shards based on the number of clusters managed by the Argo CD Application Controller at a given time. 4.2.1. Enabling dynamic scaling of shards in the web console You can enable dynamic scaling of shards by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. Procedure In the Administrator perspective of the OpenShift Container Platform web console, go to Operators Installed Operators . From the list of Installed Operators , select the Red Hat OpenShift GitOps Operator, and then click the ArgoCD tab. Select the Argo CD instance name for which you want to enable dynamic scaling of shards, for example, openshift-gitops . Click the YAML tab, and then edit and configure the spec.controller.sharding properties as follows: Example Argo CD YAML file with dynamic scaling enabled apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: openshift-gitops namespace: openshift-gitops spec: controller: sharding: dynamicScalingEnabled: true 1 minShards: 1 2 maxShards: 3 3 clustersPerShard: 1 4 1 Set dynamicScalingEnabled to true to enable dynamic scaling. 2 Set minShards to the minimum number of shards that you want to have. The value must be set to 1 or greater. 3 Set maxShards to the maximum number of shards that you want to have. The value must be greater than the value of minShards . 4 Set clustersPerShard to the number of clusters that you want to have per shard. The value must be set to 1 or greater. Click Save . A success notification alert, openshift-gitops has been updated to version <version> , appears. Note If you edit the default openshift-gitops instance, the Managed resource dialog box is displayed. Click Save again to confirm the changes. Verification Verify that sharding is enabled by checking the number of pods in the namespace: Go to Workloads StatefulSets . Select the namespace where the Argo CD instance is deployed from the Project drop-down list, for example, openshift-gitops . Click the name of the StatefulSet object that has the name of the Argo CD instance, for example openshift-gitops-application-controller . Click the Pods tab, and then verify that the number of pods is equal to or greater than the value of minShards that you have set in the Argo CD YAML file. 4.2.2.
Enabling dynamic scaling of shards by using the CLI You can enable dynamic scaling of shards by using the OpenShift CLI ( oc ). Prerequisites You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. You have access to the cluster with cluster-admin privileges. Procedure Log in to the cluster by using the oc tool as a user with cluster-admin privileges. Enable dynamic scaling by running the following command: USD oc patch argocd <argocd_instance> -n <namespace> --type=merge --patch='{"spec":{"controller":{"sharding":{"dynamicScalingEnabled":true,"minShards":<value>,"maxShards":<value>,"clustersPerShard":<value>}}}}' Example command USD oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch='{"spec":{"controller":{"sharding":{"dynamicScalingEnabled":true,"minShards":1,"maxShards":3,"clustersPerShard":1}}}}' 1 1 The example command enables dynamic scaling for the openshift-gitops Argo CD instance in the openshift-gitops namespace, and sets the minimum number of shards to 1 , the maximum number of shards to 3 , and the number of clusters per shard to 1 . The values of minShards and clustersPerShard must be set to 1 or greater. The value of maxShards must be equal to or greater than the value of minShards . Example output argocd.argoproj.io/openshift-gitops patched Verification Check the spec.controller.sharding properties of the Argo CD instance: USD oc get argocd <argocd_instance> -n <namespace> -o jsonpath='{.spec.controller.sharding}' Example command USD oc get argocd openshift-gitops -n openshift-gitops -o jsonpath='{.spec.controller.sharding}' Example output when dynamic scaling of shards is enabled {"dynamicScalingEnabled":true,"minShards":1,"maxShards":3,"clustersPerShard":1} Optional: Verify that dynamic scaling is enabled by checking the configured spec.controller.sharding properties in the configuration YAML file of the Argo CD instance in the OpenShift Container Platform web console. Check the number of Argo CD Application Controller pods: USD oc get pods -n <namespace> -l app.kubernetes.io/name=<argocd_instance>-application-controller Example command USD oc get pods -n openshift-gitops -l app.kubernetes.io/name=openshift-gitops-application-controller Example output NAME READY STATUS RESTARTS AGE openshift-gitops-application-controller-0 1/1 Running 0 2m 1 1 The number of Argo CD Application Controller pods must be greater than or equal to the value of minShards . Additional resources Argo CD custom resource properties Automatically scaling pods with the horizontal pod autoscaler | [
"apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: openshift-gitops namespace: openshift-gitops spec: controller: sharding: enabled: true 1 replicas: 3 2 env: 3 - name: ARGOCD_CONTROLLER_SHARDING_ALGORITHM value: round-robin logLevel: debug 4",
"time=\"2023-12-13T09:05:34Z\" level=info msg=\"ArgoCD Application Controller is starting\" built=\"2023-12-01T19:21:49Z\" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=openshift-gitops version=v2.9.2+c5ea5c4 time=\"2023-12-13T09:05:34Z\" level=info msg=\"Processing clusters from shard 1\" time=\"2023-12-13T09:05:34Z\" level=info msg=\"Using filter function: round-robin\" 1 time=\"2023-12-13T09:05:34Z\" level=info msg=\"Using filter function: round-robin\" time=\"2023-12-13T09:05:34Z\" level=info msg=\"appResyncPeriod=3m0s, appHardResyncPeriod=0s\"",
"time=\"2023-12-13T09:05:34Z\" level=debug msg=\"ClustersList has 3 items\" time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Adding cluster with id= and name=in-cluster to cluster's map\" time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Adding cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 and name=in-cluster2 to cluster's map\" time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Adding cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w and name=in-cluster3 to cluster's map\" time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id= will be processed by shard 0\" 1 time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1\" 2 time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2\" 3",
"oc patch argocd <argocd_instance> -n <namespace> --patch='{\"spec\":{\"controller\":{\"sharding\":{\"enabled\":true,\"replicas\":<value>}}}}' --type=merge",
"argocd.argoproj.io/<argocd_instance> patched",
"oc patch argocd <argocd_instance> -n <namespace> --patch='{\"spec\":{\"controller\":{\"env\":[{\"name\":\"ARGOCD_CONTROLLER_SHARDING_ALGORITHM\",\"value\":\"round-robin\"}]}}}' --type=merge",
"argocd.argoproj.io/<argocd_instance> patched",
"oc get pods -l app.kubernetes.io/name=<argocd_instance>-application-controller -n <namespace>",
"NAME READY STATUS RESTARTS AGE <argocd_instance>-application-controller-0 1/1 Running 0 11s <argocd_instance>-application-controller-1 1/1 Running 0 32s <argocd_instance>-application-controller-2 1/1 Running 0 22s",
"oc logs <argocd_application_controller_pod> -n <namespace>",
"time=\"2023-12-13T09:05:34Z\" level=info msg=\"ArgoCD Application Controller is starting\" built=\"2023-12-01T19:21:49Z\" commit=a3vd5c3df52943a6fff6c0rg181fth3248976299 namespace=<namespace> version=v2.9.2+c5ea5c4 time=\"2023-12-13T09:05:34Z\" level=info msg=\"Processing clusters from shard 1\" time=\"2023-12-13T09:05:34Z\" level=info msg=\"Using filter function: round-robin\" 1 time=\"2023-12-13T09:05:34Z\" level=info msg=\"Using filter function: round-robin\" time=\"2023-12-13T09:05:34Z\" level=info msg=\"appResyncPeriod=3m0s, appHardResyncPeriod=0s\"",
"oc patch argocd <argocd_instance> -n <namespace> --patch='{\"spec\":{\"controller\":{\"logLevel\":\"debug\"}}}' --type=merge",
"argocd.argoproj.io/<argocd_instance> patched",
"oc logs <argocd_application_controller_pod> -n <namespace> | grep \"processed by shard\"",
"time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id= will be processed by shard 0\" 1 time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id=068d8b26-6rhi-4w23-jrf6-wjjfyw833n23 will be processed by shard 1\" 2 time=\"2023-12-13T09:05:34Z\" level=debug msg=\"Cluster with id=836d8b53-96k4-f68r-8wq0-sh72j22kl90w will be processed by shard 2\" 3",
"apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: openshift-gitops namespace: openshift-gitops spec: controller: sharding: dynamicScalingEnabled: true 1 minShards: 1 2 maxShards: 3 3 clustersPerShard: 1 4",
"oc patch argocd <argocd_instance> -n <namespace> --type=merge --patch='{\"spec\":{\"controller\":{\"sharding\":{\"dynamicScalingEnabled\":true,\"minShards\":<value>,\"maxShards\":<value>,\"clustersPerShard\":<value>}}}}'",
"oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch='{\"spec\":{\"controller\":{\"sharding\":{\"dynamicScalingEnabled\":true,\"minShards\":1,\"maxShards\":3,\"clustersPerShard\":1}}}}' 1",
"argocd.argoproj.io/openshift-gitops patched",
"oc get argocd <argocd_instance> -n <namespace> -o jsonpath='{.spec.controller.sharding}'",
"oc get argocd openshift-gitops -n openshift-gitops -o jsonpath='{.spec.controller.sharding}'",
"{\"dynamicScalingEnabled\":true,\"minShards\":1,\"maxShards\":3,\"clustersPerShard\":1}",
"oc get pods -n <namespace> -l app.kubernetes.io/name=<argocd_instance>-application-controller",
"oc get pods -n openshift-gitops -l app.kubernetes.io/name=openshift-gitops-application-controller",
"NAME READY STATUS RESTARTS AGE openshift-gitops-application-controller-0 1/1 Running 0 2m 1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/declarative_cluster_configuration/sharding-clusters-across-argo-cd-application-controller-replicas |
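A small addendum to the verification steps in the sharding procedures above: because each cluster assignment is logged as "processed by shard <number>", the per-shard counts can be tallied in a single line. This is only a convenience sketch built from the commands already shown, not part of the documented procedure, and the pod and namespace names are placeholders:
oc logs <argocd_application_controller_pod> -n <namespace> | grep -o "processed by shard [0-9]*" | sort | uniq -c
An even spread of counts across the shard numbers indicates that the round-robin algorithm is distributing clusters as expected.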
Preface | Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in four LTS versions: OpenJDK 8u, OpenJDK 11u, OpenJDK 17u, and OpenJDK 21u. Binary files for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 Operating Systems including Red Hat Enterprise Linux and Ubuntu. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.26/pr01 |
6.2. Suspending a Virtual Machine | 6.2. Suspending a Virtual Machine Suspending a virtual machine is equivalent to placing that virtual machine into Hibernate mode. Suspending a Virtual Machine Click Compute Virtual Machines and select a running virtual machine. Click Suspend or right-click the virtual machine and select Suspend from the pop-up menu. The Status of the virtual machine changes to Suspended . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/suspending_a_virtual_machine
1.2. Default Cgroup Hierarchies | 1.2. Default Cgroup Hierarchies By default, systemd automatically creates a hierarchy of slice , scope and service units to provide a unified structure for the cgroup tree. With the systemctl command, you can further modify this structure by creating custom slices, as shown in Section 2.1, "Creating Control Groups" . Also, systemd automatically mounts hierarchies for important kernel resource controllers (see Available Controllers in Red Hat Enterprise Linux 7 ) in the /sys/fs/cgroup/ directory. Warning The deprecated cgconfig tool from the libcgroup package is available to mount and handle hierarchies for controllers not yet supported by systemd (most notably the net-prio controller). Never use libcgroup tools to modify the default hierarchies mounted by systemd since it would lead to unexpected behavior. The libcgroup library will be removed in future versions of Red Hat Enterprise Linux. For more information on how to use cgconfig , see Chapter 3, Using libcgroup Tools . Systemd Unit Types All processes running on the system are child processes of the systemd init process. Systemd provides three unit types that are used for the purpose of resource control (for a complete list of systemd 's unit types, see the chapter called Managing Services with systemd in Red Hat Enterprise Linux 7 System Administrator's Guide ): Service - A process or a group of processes, which systemd started based on a unit configuration file. Services encapsulate the specified processes so that they can be started and stopped as one set. Services are named in the following way: name . service Where name stands for the name of the service. Scope - A group of externally created processes. Scopes encapsulate processes that are started and stopped by arbitrary processes through the fork() function and then registered by systemd at runtime. For instance, user sessions, containers, and virtual machines are treated as scopes. Scopes are named as follows: name . scope Here, name stands for the name of the scope. Slice - A group of hierarchically organized units. Slices do not contain processes; they organize a hierarchy in which scopes and services are placed. The actual processes are contained in scopes or in services. In this hierarchical tree, every name of a slice unit corresponds to the path to a location in the hierarchy. The dash (" - ") character acts as a separator of the path components. For example, if the name of a slice looks as follows: parent - name . slice it means that a slice called parent - name . slice is a subslice of the parent . slice . This slice can have its own subslice named parent - name - name2 . slice , and so on. There is one root slice denoted as: -.slice Service, scope, and slice units directly map to objects in the cgroup tree. When these units are activated, they map directly to cgroup paths built from the unit names. For example, the ex.service residing in the test-waldo.slice is mapped to the cgroup test.slice/test-waldo.slice/ex.service/ . Services, scopes, and slices are created manually by the system administrator or dynamically by programs. By default, the operating system defines a number of built-in services that are necessary to run the system. Also, there are four slices created by default: -.slice - the root slice; system.slice - the default place for all system services; user.slice - the default place for all user sessions; machine.slice - the default place for all virtual machines and Linux containers.
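As a quick illustration of how these defaults can be extended, and of how unit names map to cgroup paths, a transient unit can be started inside a custom slice with the systemd-run command. The following is only a sketch; the unit name ( toptest ) and slice name ( test ) are arbitrary examples:
systemd-run --unit=toptest --slice=test top -b
systemctl status toptest.service
The second command should show the transient toptest.service unit running under test.slice, which should correspond to a test.slice/toptest.service/ path in the systemd hierarchy under /sys/fs/cgroup/systemd/.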
Note that all user sessions are automatically placed in a separate scope unit, as well as virtual machines and container processes. Furthermore, all users are assigned an implicit subslice. Besides the above default configuration, the system administrator can define new slices and assign services and scopes to them. The following tree is a simplified example of a cgroup tree. This output was generated with the systemd-cgls command described in Section 2.4, "Obtaining Information about Control Groups" : As you can see, services and scopes contain processes and are placed in slices that do not contain processes of their own. The only exception is PID 1 that is located in the special systemd slice marked as -.slice . Also note that -.slice is not shown as it is implicitly identified with the root of the entire tree. Service and slice units can be configured with persistent unit files as described in Section 2.3.2, "Modifying Unit Files" , or created dynamically at runtime by API calls to PID 1 (see the section called "Online Documentation" for API reference). Scope units can be created only dynamically. Units created dynamically with API calls are transient and exist only during runtime. Transient units are released automatically as soon as they finish, get deactivated, or the system is rebooted. | [
"├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 20 ├─user.slice │ └─user-1000.slice │ └─session-1.scope │ ├─11459 gdm-session-worker [pam/gdm-password] │ ├─11471 gnome-session --session gnome-classic │ ├─11479 dbus-launch --sh-syntax --exit-with-session │ ├─11480 /bin/dbus-daemon --fork --print-pid 4 --print-address 6 --session │ │ └─system.slice ├─systemd-journald.service │ └─422 /usr/lib/systemd/systemd-journald ├─bluetooth.service │ └─11691 /usr/sbin/bluetoothd -n ├─systemd-localed.service │ └─5328 /usr/lib/systemd/systemd-localed ├─colord.service │ └─5001 /usr/libexec/colord ├─sshd.service │ └─1191 /usr/sbin/sshd -D │"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/sec-Default_Cgroup_Hierarchies |
2.3. Graphical Interface | 2.3. Graphical Interface Important pkiconsole is being deprecated. The Certificate System console, pkiconsole , is a graphical interface that is designed for users with the Administrator role privilege to manage the subsystem itself. This includes adding users, configuring logs, managing profiles and plug-ins, and the internal database, among many other functions. This utility communicates with the Certificate System server via TLS using client-authentication and can be used to manage the server remotely. 2.3.1. pkiconsole Initialization To use the pkiconsole interface for the first time, specify a new password and use the following command: This command creates a new client NSS database in the ~/.redhat-idm-console/ directory. To import the CA certificate into the PKI client NSS database, see the Importing a certificate into an NSS Database section in the Red Hat Certificate System Planning, Installation, and Deployment Guide . To request a new client certificate, see Chapter 5, Requesting, Enrolling, and Managing Certificates . Execute the following command to extract the admin client certificate from the .p12 file: Validate and import the admin client certificate as described in the Managing Certificate/Key Crypto Token section in the Red Hat Certificate System Planning, Installation, and Deployment Guide : Important Make sure all intermediate certificates and the root CA certificate have been imported before importing the CA admin client certificate. To import an existing client certificate and its key into the client NSS database: Verify the client certificate with the following command: 2.3.2. Using pkiconsole for CA, OCSP, KRA, and TKS Subsystems The Java console is used by four subsystems: the CA, OCSP, KRA, and TKS. The console is accessed using a locally-installed pkiconsole utility. It can access any subsystem because the command requires the host name, the subsystem's administrative TLS port, and the specific subsystem type. If DNS is not configured, you can use an IPv4 or IPv6 address to connect to the console. For example: This opens a console, as in Figure 2.1, "Certificate System Console" . Figure 2.1. Certificate System Console The Configuration tab controls all of the setup for the subsystem, as the name implies. The choices available in this tab are different depending on which subsystem type the instance is; the CA has the most options since it has additional configuration for jobs, notifications, and certificate enrollment authentication. All subsystems have four basic options: Users and groups Access control lists Log configuration Subsystem certificates (meaning the certificates issued to the subsystem for use, for example, in the security domain or audit signing) The Status tab shows the logs maintained by the subsystem. | [
"pki -c password -d ~/.redhat-idm-console client-init",
"openssl pkcs12 -in file -clcerts -nodes -nokeys -out file.crt",
"PKICertImport -d ~/.redhat-idm-console -n \" nickname \" -t \",,\" -a -i file.crt -u C",
"pki -c password -d ~/.redhat-idm-console pkcs12-import --pkcs12-file file --pkcs12-password pkcs12-password",
"certutil -V -u C -n \" nickname \" -d ~/.redhat-idm-console",
"pkiconsole https://server.example.com: admin_port/subsystem_type",
"https://192.0.2.1:8443/ca https://[2001:DB8::1111]:8443/ca"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/graphical-interface |
4.365. yum | 4.365. yum 4.365.1. RHBA-2011:1702 - yum bug fix and enhancement update An updated yum package that fixes several bugs and adds three enhancements is now available for Red Hat Enterprise Linux 6. Yum is a command line utility that allows a user to check for and automatically download and install updated RPM packages. It automatically obtains and downloads dependencies, prompting the user for permission as necessary. Bug Fixes BZ# 661962 When uninstalling a package, the "yum remove" command may have previously reported success even when the package could not be removed due to an error in the %pre scriptlet. With this update, this error has been fixed, and when yum fails to remove a package, it no longer claims that it succeeded. BZ# 697885 When running the "yum -v repolist" command, the version of the yum utility may have incorrectly displayed a duplicate "Repo-baseurl" line for a repository with no mirrors. This update applies a patch that corrects this error, and the output of the "yum -v repolist" command no longer contains duplicate lines. BZ# 704600 Previously, an attempt to install a package that was larger than 4 GB on a 32-bit architecture caused yum to terminate unexpectedly with a traceback. With this update, the underlying source code has been adapted to work around this problem, and packages larger than 4 GB can now be installed as expected. BZ# 707358 Prior to this update, running a yum command with the "--installroot" command line option caused it to report the following warning: This update adapts the underlying source code not to display this warning when the "--installroot" option is in use, resolving this issue. BZ# 727574 Under certain circumstances, an attempt to use the RepoStorage API may have failed with an AttributeError. With this update, this error has been fixed, and the RepoStorage API can now be used as expected. BZ# 727586 Previously, the repodiff utility used a stale metadata cache in subsequent runs. When two repodiff commands were executed in succession, the second run reused cached data from the first. This bug has been fixed and repodiff now properly validates the metadata if a connection cannot be established or the cached data are about to be reused. BZ# 728253 Prior to this update, when the "yum -q history addon-info last saved_tx" command was used to store transaction data in a file, an attempt to supply this file to the "yum load-transaction" command in order to repeat the transaction failed with an error, because the output contained extra lines. This update corrects the underlying source code to make sure the "yum -q history addon-info last saved_tx" command produces valid output, and adapts "yum load-transaction" to accept older version of the output as well. BZ# 733391 In very rare cases, the yum utility may have incorrectly kept using old updateinfo, pkgtags, and groups metadata. When this happened, users may have been unaware of available updates for up to 6 hours. This update applies a patch that prevents yum from using outdated metadata, resolving this issue. Enhancements BZ# 662243 The "yum history" command has been adapted to store and display yumdb and rpmdb information, such as from which repository was a particular package installed. BZ# 728526 The "yum update" command can now be used to update a package to a specific version. BZ# 694401 The yum utility no longer asks the user to report a bug when the dependency solver (depsolve) encounters an error. 
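For example, the version-specific update added in BZ#728526 accepts an explicit name-version-release string on the command line; the package name and version below are hypothetical and only illustrate the general form:
yum update bash-4.1.2-15.el6
Likewise, the saved-transaction workflow repaired in BZ#728253 can be exercised by redirecting the saved transaction data to a file and replaying it; the file name is an arbitrary example:
yum -q history addon-info last saved_tx > saved_tx.txt
yum load-transaction saved_tx.txt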
All users of yum are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. 4.365.2. RHBA-2012:0386 - yum bug fix update An updated yum package that fixes one bug is now available for Red Hat Enterprise Linux 6. Yum is a command line utility that allows the user to check for updates and automatically download and install updated RPM packages. Yum automatically obtains and downloads dependencies, prompting the user for permission as necessary. Bug Fix BZ# 795455 The anacron scheduler starts the yum-cron utility with the "nice" value of 10. This caused Yum's RPM transactions to run at very low priority level. Also, any updated service inherited this "nice" value, which influenced the system behavior. This update adds the "reset_nice" configuration option, which allows Yum to reset the "nice" value to 0 before running an RPM transaction. With this option set, Yum's RPM transactions run at normal priority level so that updated services are restarted with normal priority as expected. All users of yum are advised to upgrade to this updated package, which fixes this bug. | [
"Ignored option -c (probably due to merging -yc != -y -c)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/yum |
Chapter 69. Consul Component | Chapter 69. Consul Component Available as of Camel version 2.18 The Consul component is a component for integrating your application with Consul. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-consul</artifactId> <version>USD{camel-version}</version> </dependency> 69.1. URI format consul://domain?[options] You can append query options to the URI in the following format: 69.2. Options The Consul component supports 9 options, which are listed below. Name Description Default Type url (common) The Consul agent URL String datacenter (common) The data center String sslContextParameters (common) SSL configuration using an org.apache.camel.util.jsse.SSLContextParameters instance. SSLContextParameters useGlobalSslContext Parameters (security) Enable usage of global SSL context parameters. false boolean aclToken (common) Sets the ACL token to be used with Consul String userName (common) Sets the username to be used for basic authentication String password (common) Sets the password to be used for basic authentication String configuration (advanced) Sets the common configuration shared among endpoints ConsulConfiguration resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Consul endpoint is configured using URI syntax: with the following path and query parameters: 69.2.1. Path Parameters (1 parameters): Name Description Default Type apiEndpoint Required The API endpoint String 69.2.2. Query Parameters (4 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 69.3. Spring Boot Auto-Configuration The component supports 90 options, which are listed below. Name Description Default Type camel.component.consul.acl-token Sets the ACL token to be used with Consul String camel.component.consul.cluster.service.acl-token String camel.component.consul.cluster.service.attributes Custom service attributes. Map camel.component.consul.cluster.service.block-seconds Integer camel.component.consul.cluster.service.connect-timeout-millis Long camel.component.consul.cluster.service.consistency-mode ConsistencyMode camel.component.consul.cluster.service.datacenter String camel.component.consul.cluster.service.enabled Sets if the consul cluster service should be enabled or not, default is false. 
false Boolean camel.component.consul.cluster.service.first-index BigInteger camel.component.consul.cluster.service.id Cluster Service ID String camel.component.consul.cluster.service.near-node String camel.component.consul.cluster.service.node-meta List camel.component.consul.cluster.service.order Service lookup order/priority. Integer camel.component.consul.cluster.service.password String camel.component.consul.cluster.service.ping-instance Boolean camel.component.consul.cluster.service.read-timeout-millis Long camel.component.consul.cluster.service.recursive Boolean camel.component.consul.cluster.service.root-path String camel.component.consul.cluster.service.session-lock-delay Integer camel.component.consul.cluster.service.session-refresh-interval Integer camel.component.consul.cluster.service.session-ttl Integer camel.component.consul.cluster.service.ssl-context-parameters SSLContextParameters camel.component.consul.cluster.service.tags Set camel.component.consul.cluster.service.url String camel.component.consul.cluster.service.user-name String camel.component.consul.cluster.service.write-timeout-millis Long camel.component.consul.configuration.acl-token Sets the ACL token to be used with Consul String camel.component.consul.configuration.action The default action. Can be overridden by CamelConsulAction String camel.component.consul.configuration.block-seconds The second to wait for a watch event, default 10 seconds Integer camel.component.consul.configuration.connect-timeout-millis Connect timeout for OkHttpClient Long camel.component.consul.configuration.consistency-mode The consistencyMode used for queries, default ConsistencyMode.DEFAULT ConsistencyMode camel.component.consul.configuration.consul-client Reference to a com.orbitz.consul.Consul in the registry. Consul camel.component.consul.configuration.datacenter The data center String camel.component.consul.configuration.first-index The first index for watch for, default 0 BigInteger camel.component.consul.configuration.key The default key. Can be overridden by CamelConsulKey String camel.component.consul.configuration.near-node The near node to use for queries. String camel.component.consul.configuration.node-meta The note meta-data to use for queries. List camel.component.consul.configuration.password Sets the password to be used for basic authentication String camel.component.consul.configuration.ping-instance Configure if the AgentClient should attempt a ping before returning the Consul instance Boolean camel.component.consul.configuration.read-timeout-millis Read timeout for OkHttpClient Long camel.component.consul.configuration.recursive Recursively watch, default false Boolean camel.component.consul.configuration.ssl-context-parameters SSL configuration using an org.apache.camel.util.jsse.SSLContextParameters instance. SSLContextParameters camel.component.consul.configuration.tags Set tags. You can separate multiple tags by comma. Set camel.component.consul.configuration.url The Consul agent URL String camel.component.consul.configuration.user-name Sets the username to be used for basic authentication String camel.component.consul.configuration.value-as-string Default to transform values retrieved from Consul i.e. on KV endpoint to string. 
Boolean camel.component.consul.configuration.write-timeout-millis Write timeout for OkHttpClient Long camel.component.consul.datacenter The data center String camel.component.consul.enabled Enable consul component true Boolean camel.component.consul.health.check.repository.checks Define the checks to include. List camel.component.consul.health.check.repository.configurations Health check configurations. Map camel.component.consul.health.check.repository.enabled Boolean camel.component.consul.health.check.repository.failure-threshold Integer camel.component.consul.health.check.repository.interval String camel.component.consul.password Sets the password to be used for basic authentication String camel.component.consul.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.consul.service-registry.acl-token String camel.component.consul.service-registry.attributes Custom service attributes. Map camel.component.consul.service-registry.block-seconds Integer camel.component.consul.service-registry.check-interval Integer camel.component.consul.service-registry.check-ttl Integer camel.component.consul.service-registry.connect-timeout-millis Long camel.component.consul.service-registry.consistency-mode ConsistencyMode camel.component.consul.service-registry.datacenter String camel.component.consul.service-registry.deregister-after Integer camel.component.consul.service-registry.deregister-services-on-stop Boolean camel.component.consul.service-registry.enabled Sets if the consul service registry should be enabled or not, default is false. false Boolean camel.component.consul.service-registry.first-index BigInteger camel.component.consul.service-registry.id Service Registry ID String camel.component.consul.service-registry.near-node String camel.component.consul.service-registry.node-meta List camel.component.consul.service-registry.order Service lookup order/priority. Integer camel.component.consul.service-registry.override-service-host Boolean camel.component.consul.service-registry.password String camel.component.consul.service-registry.ping-instance Boolean camel.component.consul.service-registry.read-timeout-millis Long camel.component.consul.service-registry.recursive Boolean camel.component.consul.service-registry.service-host String camel.component.consul.service-registry.ssl-context-parameters SSLContextParameters camel.component.consul.service-registry.tags Set camel.component.consul.service-registry.url String camel.component.consul.service-registry.user-name String camel.component.consul.service-registry.write-timeout-millis Long camel.component.consul.ssl-context-parameters SSL configuration using an org.apache.camel.util.jsse.SSLContextParameters instance. The option is a org.apache.camel.util.jsse.SSLContextParameters type. String camel.component.consul.url The Consul agent URL String camel.component.consul.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.consul.user-name Sets the username to be used for basic authentication String camel.component.consul.cluster.service.dc String camel.component.consul.configuration.dc The data center @deprecated replaced by {@link #setDatacenter(String)} ()} String camel.component.consul.service-registry.dc String 69.4. 
Headers Name Type Description CamelConsulAction String The Producer action CamelConsulKey String The Key on which the action should applied CamelConsulEventId String The event id (consumer only) CamelConsulEventName String The event name (consumer only) CamelConsulEventLTime Long The event LTime CamelConsulNodeFilter String The Node filter CamelConsulTagFilter String The tag filter CamelConsulSessionFilter String The session filter CamelConsulVersion int The data version CamelConsulFlags Long Flags associated with a value CamelConsulCreateIndex Long The internal index value that represents when the entry was created CamelConsulLockIndex Long The number of times this key has successfully been acquired in a lock CamelConsulModifyIndex Long The last index that modified this key CamelConsulOptions Object Options associated to the request CamelConsulResult boolean true if the response has a result CamelConsulSession String The session id CamelConsulValueAsString boolean To transform values retrieved from Consul i.e. on KV endpoint to string. | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-consul</artifactId> <version>USD{camel-version}</version> </dependency>",
"consul://domain?[options]",
"?option=value&option=value&",
"consul:apiEndpoint"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/consul-component |
Chapter 6. Upgrade Quay Bridge Operator | Chapter 6. Upgrade Quay Bridge Operator To upgrade the Quay Bridge Operator (QBO), change the Channel Subscription update channel in the Subscription tab to the desired channel. When upgrading QBO from version 3.5 to 3.7, a number of extra steps are required: You need to create a new QuayIntegration custom resource. This can be completed in the Web Console or from the command line. upgrade-quay-integration.yaml - apiVersion: quay.redhat.com/v1 kind: QuayIntegration metadata: name: example-quayintegration-new spec: clusterID: openshift 1 credentialsSecret: name: quay-integration namespace: openshift-operators insecureRegistry: false quayHostname: https://registry-quay-quay35.router-default.apps.cluster.openshift.com 1 Make sure that the clusterID matches the value for the existing QuayIntegration resource. Create the new QuayIntegration custom resource: USD oc create -f upgrade-quay-integration.yaml Delete the old QuayIntegration custom resource. Delete the old mutatingwebhookconfigurations : USD oc delete mutatingwebhookconfigurations.admissionregistration.k8s.io quay-bridge-operator | [
"- apiVersion: quay.redhat.com/v1 kind: QuayIntegration metadata: name: example-quayintegration-new spec: clusterID: openshift 1 credentialsSecret: name: quay-integration namespace: openshift-operators insecureRegistry: false quayHostname: https://registry-quay-quay35.router-default.apps.cluster.openshift.com",
"oc create -f upgrade-quay-integration.yaml",
"oc delete mutatingwebhookconfigurations.admissionregistration.k8s.io quay-bridge-operator"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/upgrade_red_hat_quay/qbo-operator-upgrade |
8.3.4. Customizing Security Profiles | 8.3.4. Customizing Security Profiles After selecting the security profile that suits your security policy, you can further adjust it by clicking the Customize button. This will open the new Customization window that allows you to modify the currently selected XCCDF profile without actually changing the respective XCCDF file. Figure 8.4. Customizing the Selected Security Profile The Customization window contains a complete set of XCCDF elements relevant to the selected security profile with detailed information about each element and its functionality. You can enable or disable these elements by selecting or de-selecting the respective check boxes in the main field of this window. The Customization window also supports undo and redo functionality; you can undo or redo your selections by clicking the respective arrow icon in the top left corner of the window. You can also change variables that will later be used for evaluation. Find the desired item in the Customization window, navigate to the right part and use the Modify value field. Figure 8.5. Setting a value for the selected item in the Customization window After you have finished your profile customizations, confirm the changes by clicking the Confirm Customization button. Your changes are now in the memory and do not persist if SCAP Workbench is closed or certain changes, such as selecting a new SCAP content or choosing another customization option, are made. To store your changes, click the Save Customization button in the SCAP Workbench window. This action allows you to save your changes to the security profile as an XCCDF customization file in the chosen directory. Note that this customization file can be further selected with other profiles. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-customizing_security_profiles-scap_workbench |
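A follow-up note on reusing the saved customization: the XCCDF customization (tailoring) file produced by SCAP Workbench can also be consumed by the oscap command line scanner, provided the installed OpenSCAP version supports tailoring files. The file names, profile ID, and content path below are placeholders rather than values taken from this guide:
oscap xccdf eval --tailoring-file my-customization.xml --profile <customized_profile_ID> --results results.xml <scap_content_file>.xml
This evaluates the given SCAP content with your customized profile selections applied.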
Chapter 7. Logging in to the Identity Management Web UI using one time passwords | Chapter 7. Logging in to the Identity Management Web UI using one time passwords Access to IdM Web UI can be secured using several methods. The basic one is password authentication. To increase the security of password authentication, you can add a second step and require automatically generated one-time passwords (OTPs). The most common usage is to combine password connected with the user account and a time limited one time password generated by a hardware or software token. The following sections help you to: Understand how the OTP authentication works in IdM. Configure OTP authentication on the IdM server. Configure a RADIUS server for OTP validation in IdM. Create OTP tokens and synchronize them with the FreeOTP app in your phone. Authenticate to the IdM Web UI with the combination of user password and one time password. Re-synchronize tokens in the Web UI. Retrieve an IdM ticket-granting ticket as an OTP or RADIUS user Enforce OTP usage for all LDAP clients 7.1. Prerequisites Accessing the IdM Web UI in a web browser 7.2. One time password (OTP) authentication in Identity Management One-time passwords bring an additional step to your authentication security. The authentication uses your password + an automatically generated one time password. To generate one time passwords, you can use a hardware or software token. IdM supports both software and hardware tokens. Identity Management supports the following two standard OTP mechanisms: The HMAC-Based One-Time Password (HOTP) algorithm is based on a counter. HMAC stands for Hashed Message Authentication Code. The Time-Based One-Time Password (TOTP) algorithm is an extension of HOTP to support time-based moving factor. Important IdM does not support OTP logins for Active Directory trust users. 7.3. Enabling the one-time password in the Web UI Identity Management (IdM) administrators can enable two-factor authentication (2FA) for IdM users either globally or individually. The user enters the one-time password (OTP) after their regular password on the command line or in the dedicated field in the Web UI login dialog, with no space between these passwords. Enabling 2FA is not the same as enforcing it. If you use logins based on LDAP-binds, IdM users can still authenticate by entering a password only. However, if you use krb5 -based logins, the 2FA is enforced. Note that there is an option to enforce 2FA for LDAP-binds by enforcing OTP usage for all LDAP clients. For more information, see Enforcing OTP usage for all LDAP clients . In a future release, Red Hat plans to provide a configuration option for administrators to select one of the following: Allow users to set their own tokens. In this case, LDAP-binds are still not going to enforce 2FA though krb5 -based logins are. Not allow users to set their own tokens. In this case, 2FA is going to be enforced in both LDAP-binds and krb5 -based logins. Complete this procedure to use the IdM Web UI to enable 2FA for the individual example.user IdM user. Prerequisites Administration privileges Procedure Log in to the IdM Web UI with IdM admin privileges. Open the Identity Users Active users tab. Select example.user to open the user settings. In the User authentication types , select Two factor authentication (password + OTP) . Click Save . At this point, the OTP authentication is enabled for the IdM user. Now you or example.user must assign a new token ID to the example.user account. 7.4. 
Configuring a RADIUS server for OTP validation in IdM To enable the migration of a large deployment from a proprietary one-time password (OTP) solution to the Identity Management (IdM)-native OTP solution, IdM offers a way to offload OTP validation to a third-party RADIUS server for a subset of users. The administrator creates a set of RADIUS proxies where each proxy can only reference a single RADIUS server. If more than one server needs to be addressed, it is recommended to create a virtual IP solution that points to multiple RADIUS servers. Such a solution must be built outside of RHEL IdM with the help of the keepalived daemon, for example. The administrator then assigns one of these proxy sets to a user. As long as the user has a RADIUS proxy set assigned, IdM bypasses all other authentication mechanisms. Note IdM does not provide any token management or synchronization support for tokens in the third-party system. Complete the procedure to configure a RADIUS server for OTP validation and to add a user to the proxy server: Prerequisites The radius user authentication method is enabled. See Enabling the one-time password in the Web UI for details. Procedure Add a RADIUS proxy: The command prompts you to enter the required information. The configuration of the RADIUS proxy requires the use of a common secret between the client and the server to wrap credentials. Specify this secret in the --secret parameter. Assign a user to the added proxy: If required, configure the user name to be sent to RADIUS: As a result, the RADIUS proxy server starts to process the user OTP authentication. When the user is ready to be migrated to the IdM native OTP system, you can simply remove the RADIUS proxy assignment for the user. 7.4.1. Changing the timeout value of a KDC when running a RADIUS server in a slow network In certain situations, such as running a RADIUS proxy in a slow network, the Identity Management (IdM) Kerberos Key Distribution Center (KDC) closes the connection before the RADIUS server responds because the connection timed out while waiting for the user to enter the token. To change the timeout settings of the KDC: Change the value of the timeout parameter in the [otp] section in the /var/kerberos/krb5kdc/kdc.conf file. For example, to set the timeout to 120 seconds: Restart the krb5kdc service: Additional resources How to configure FreeRADIUS authentication in FIPS mode (Red Hat Knowledgebase) 7.5. Adding OTP tokens in the Web UI The following section helps you to add a token in the IdM Web UI and to your software token generator. Prerequisites Active user account on the IdM server. Administrator has enabled OTP for the particular user account in the IdM Web UI. A software device generating OTP tokens, for example FreeOTP. Procedure Log in to the IdM Web UI with your user name and password. To create the token in your mobile phone, open the Authentication OTP Tokens tab. Click Add . In the Add OTP token dialog box, leave everything unfilled and click Add . At this stage, the IdM server creates a token with default parameters at the server and opens a page with a QR code. Copy the QR code into your mobile phone. Click OK to close the QR code. Now you can generate one time passwords and log in with them to the IdM Web UI. 7.6. Logging into the Web UI with a one time password Follow this procedure to log in to the IdM Web UI for the first time using a one time password (OTP).
Prerequisites OTP configuration enabled on the Identity Management server for the user account you are using for the OTP authentication. Administrators as well as users themselves can enable OTP. To enable the OTP configuration, see Enabling the one time password in the Web UI . A hardware or software device generating OTP tokens configured. Procedure In the Identity Management login screen, enter your user name or a user name of the IdM server administrator account. Add the password for the user name entered above. Generate a one time password on your device. Enter the one time password right after the password (without space). Click Log in . If the authentication fails, synchronize OTP tokens. If your CA uses a self-signed certificate, the browser issues a warning. Check the certificate and accept the security exception to proceed with the login. If the IdM Web UI does not open, verify the DNS configuration of your Identity Management server. After successful login, the IdM Web UI appears. 7.7. Synchronizing OTP tokens using the Web UI If the login with OTP (One Time Password) fails, OTP tokens are not synchronized correctly. The following text describes token re-synchronization. Prerequisites A login screen opened. A device generating OTP tokens configured. Procedure On the IdM Web UI login screen, click Sync OTP Token . In the login screen, enter your username and the Identity Management password. Generate one time password and enter it in the First OTP field. Generate another one time password and enter it in the Second OTP field. Optional: Enter the token ID. Click Sync OTP Token . After the successful synchronization, you can log in to the IdM server. 7.8. Changing expired passwords Administrators of Identity Management can enforce you having to change your password at the login. It means that you cannot successfully log in to the IdM Web UI until you change the password. Password expiration can happen during your first login to the Web UI. If the expiration password dialog appears, follow the instructions in the procedure. Prerequisites A login screen opened. Active account to the IdM server. Procedure In the password expiration login screen, enter the user name. Add the password for the user name entered above. In the OTP field, generate a one time password, if you use the one time password authentication. If you do not have enabled the OTP authentication, leave the field empty. Enter the new password twice for verification. Click Reset Password . After the successful password change, the usual login dialog displays. Log in with the new password. 7.9. Retrieving an IdM ticket-granting ticket as an OTP or RADIUS user To retrieve a Kerberos ticket-granting ticket (TGT) as an OTP user, request an anonymous Kerberos ticket and enable Flexible Authentication via Secure Tunneling (FAST) channel to provide a secure connection between the Kerberos client and Kerberos Distribution Center (KDC). Prerequisites Your IdM client and IdM servers use RHEL 8.7 or later. Your IdM client and IdM servers use SSSD 2.7.0 or later. You have enabled OTP for the required user account. Procedure Initialize the credentials cache by running the following command: Note that this command creates the armor.ccache file that you need to point to whenever you request a new Kerberos ticket. Request a Kerberos ticket by running the command: Verification Display your Kerberos ticket information: The pa_type = 141 indicates OTP/RADIUS authentication. 7.10. 
Enforcing OTP usage for all LDAP clients In RHEL IdM, you can set the default behavior for LDAP server authentication of user accounts with two-factor (OTP) authentication configured. If OTP is enforced, LDAP clients cannot authenticate against an LDAP server using single-factor authentication (a password) for users that have associated OTP tokens. RHEL IdM already enforces this method through the Kerberos backend by using a special LDAP control with OID 2.16.840.1.113730.3.8.10.7 without any data. Procedure To enforce OTP usage for all LDAP clients, use the following command: To change back to the OTP behavior for all LDAP clients, use the following command: | [
"ipa radiusproxy-add proxy_name --secret secret",
"ipa user-mod radiususer --radius=proxy_name",
"ipa user-mod radiususer --radius-username=radius_user",
"[otp] DEFAULT = { timeout = 120 }",
"systemctl restart krb5kdc",
"kinit -n @IDM.EXAMPLE.COM -c FILE:armor.ccache",
"kinit -T FILE:armor.ccache <username>@IDM.EXAMPLE.COM Enter your OTP Token Value.",
"klist -C Ticket cache: KCM:0:58420 Default principal: <username>@IDM.EXAMPLE.COM Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 141",
"ipa config-mod --addattr ipaconfigstring=EnforceLDAPOTP",
"ipa config-mod --delattr ipaconfigstring=EnforceLDAPOTP"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/logging-in-to-the-ipa-web-ui-using-one-time-passwords_configuring-and-managing-idm |
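The ipa commands quoted above can be combined into a short end-to-end check. The following sketch is a minimal illustration, not part of the original procedure: the user name example.user and the token description are hypothetical placeholders, and it assumes you already hold an administrator Kerberos ticket on an enrolled IdM client.

kinit admin
# Require OTP as an authentication type for the hypothetical user
ipa user-mod example.user --user-auth-type=otp
# Add a software TOTP token owned by that user (the description is arbitrary)
ipa otptoken-add --type=totp --owner=example.user --desc="laptop authenticator"
# After running the EnforceLDAPOTP config-mod command, confirm the setting is present
ipa config-show --all | grep -i enforceldapotp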
Chapter 5. OLMConfig [operators.coreos.com/v1] | Chapter 5. OLMConfig [operators.coreos.com/v1] Description OLMConfig is a resource responsible for configuring OLM. Type object Required metadata 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OLMConfigSpec is the spec for an OLMConfig resource. status object OLMConfigStatus is the status for an OLMConfig resource. 5.1.1. .spec Description OLMConfigSpec is the spec for an OLMConfig resource. Type object Property Type Description features object Features contains the list of configurable OLM features. 5.1.2. .spec.features Description Features contains the list of configurable OLM features. Type object Property Type Description disableCopiedCSVs boolean DisableCopiedCSVs is used to disable OLM's "Copied CSV" feature for operators installed at the cluster scope, where a cluster scoped operator is one that has been installed in an OperatorGroup that targets all namespaces. When reenabled, OLM will recreate the "Copied CSVs" for each cluster scoped operator. 5.1.3. .status Description OLMConfigStatus is the status for an OLMConfig resource. Type object Property Type Description conditions array conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 5.1.4. .status.conditions Description Type array 5.1.5. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. 
If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 5.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1/olmconfigs DELETE : delete collection of OLMConfig GET : list objects of kind OLMConfig POST : create an OLMConfig /apis/operators.coreos.com/v1/olmconfigs/{name} DELETE : delete an OLMConfig GET : read the specified OLMConfig PATCH : partially update the specified OLMConfig PUT : replace the specified OLMConfig /apis/operators.coreos.com/v1/olmconfigs/{name}/status GET : read status of the specified OLMConfig PATCH : partially update status of the specified OLMConfig PUT : replace status of the specified OLMConfig 5.2.1. /apis/operators.coreos.com/v1/olmconfigs Table 5.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OLMConfig Table 5.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OLMConfig Table 5.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.5. HTTP responses HTTP code Reponse body 200 - OK OLMConfigList schema 401 - Unauthorized Empty HTTP method POST Description create an OLMConfig Table 5.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.7. Body parameters Parameter Type Description body OLMConfig schema Table 5.8. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 202 - Accepted OLMConfig schema 401 - Unauthorized Empty 5.2.2. /apis/operators.coreos.com/v1/olmconfigs/{name} Table 5.9. Global path parameters Parameter Type Description name string name of the OLMConfig Table 5.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OLMConfig Table 5.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. 
Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.12. Body parameters Parameter Type Description body DeleteOptions schema Table 5.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OLMConfig Table 5.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.15. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OLMConfig Table 5.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.17. Body parameters Parameter Type Description body Patch schema Table 5.18. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OLMConfig Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body OLMConfig schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 401 - Unauthorized Empty 5.2.3. /apis/operators.coreos.com/v1/olmconfigs/{name}/status Table 5.22. Global path parameters Parameter Type Description name string name of the OLMConfig Table 5.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified OLMConfig Table 5.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.25. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OLMConfig Table 5.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.27. Body parameters Parameter Type Description body Patch schema Table 5.28. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OLMConfig Table 5.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.30. Body parameters Parameter Type Description body OLMConfig schema Table 5.31. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operatorhub_apis/olmconfig-operators-coreos-com-v1 |
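As a quick illustration of the spec described above, the cluster-scoped OLMConfig singleton is conventionally named cluster, and its only configurable feature in this resource is disableCopiedCSVs. The commands below are a minimal sketch rather than part of this reference; they assume cluster-admin access and use generic oc verbs against the endpoints listed above:

# Toggle the Copied CSV feature off via a merge patch to the spec
oc patch olmconfig cluster --type=merge -p '{"spec":{"features":{"disableCopiedCSVs":true}}}'
# Inspect the resulting status conditions reported by OLM
oc get olmconfig cluster -o yaml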
1.2. A Detailed Look at the Boot Process | 1.2. A Detailed Look at the Boot Process The beginning of the boot process varies depending on the hardware platform being used. However, once the kernel is found and loaded by the boot loader, the default boot process is identical across all architectures. This chapter focuses primarily on the x86 architecture. 1.2.1. The BIOS When an x86 computer is booted, the processor looks at the end of system memory for the Basic Input/Output System or BIOS program and runs it. The BIOS controls not only the first step of the boot process, but also provides the lowest level interface to peripheral devices. For this reason it is written into read-only, permanent memory and is always available for use. Other platforms use different programs to perform low-level tasks roughly equivalent to those of the BIOS on an x86 system. For instance, Itanium-based computers use the Extensible Firmware Interface ( EFI ) Shell . Once loaded, the BIOS tests the system, looks for and checks peripherals, and then locates a valid device with which to boot the system. Usually, it checks any diskette drives and CD-ROM drives present for bootable media, then, failing that, looks to the system's hard drives. In most cases, the order of the drives searched while booting is controlled with a setting in the BIOS, and it looks on the master IDE device on the primary IDE bus. The BIOS then loads into memory whatever program is residing in the first sector of this device, called the Master Boot Record or MBR . The MBR is only 512 bytes in size and contains machine code instructions for booting the machine, called a boot loader, along with the partition table. Once the BIOS finds and loads the boot loader program into memory, it yields control of the boot process to it. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-boot-init-shutdown-process |
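To make the MBR description above concrete, you can inspect the first sector of a disk directly. This is an illustrative sketch rather than part of the original text: it assumes root privileges and that the first hard drive is /dev/sda, which may differ on your system.

# Copy the 512-byte MBR to a temporary file
dd if=/dev/sda of=/tmp/mbr.bin bs=512 count=1
# Identify the boot loader and partition table embedded in it
file /tmp/mbr.bin
# The last two bytes of a valid MBR are the boot signature 55 aa
hexdump -C -s 510 -n 2 /tmp/mbr.bin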
Chapter 5. Authenticating Camel K against Kafka | Chapter 5. Authenticating Camel K against Kafka You can authenticate Camel K against Apache Kafka. The following example demonstrates how to set up a Kafka topic and use it in a simple Producer/Consumer pattern Integration. 5.1. Setting up Kafka To set up Kafka, you must: Install the required OpenShift operators Create a Kafka instance Create a Kafka topic Use the Red Hat product mentioned below to set up Kafka: Red Hat Advanced Message Queuing (AMQ) Streams - A self-managed Apache Kafka offering. AMQ Streams is based on open source Strimzi and is included as part of Red Hat Integration . AMQ Streams is a distributed and scalable streaming platform based on Apache Kafka that includes a publish/subscribe messaging broker. Kafka Connect provides a framework to integrate Kafka-based systems with external systems. Using Kafka Connect, you can configure source and sink connectors to stream data from external systems into and out of a Kafka broker. 5.1.1. Setting up Kafka by using AMQ Streams AMQ Streams simplifies the process of running Apache Kafka in an OpenShift cluster. 5.1.1.1. Preparing your OpenShift cluster for AMQ Streams To use Camel K or Kamelets and Red Hat AMQ Streams, you must install the following operators and tools: Red Hat Integration - AMQ Streams operator - Manages the communication between your OpenShift cluster and AMQ Streams for Apache Kafka instances. Red Hat Integration - Camel K operator - Installs and manages Camel K - a lightweight integration framework that runs natively in the cloud on OpenShift. Camel K CLI tool - Allows you to access all Camel K features. Prerequisites You are familiar with Apache Kafka concepts. You can access an OpenShift 4.6 (or later) cluster with the correct access level, the ability to create projects and install operators, and the ability to install the OpenShift CLI and the Camel K CLI on your local system. You installed the OpenShift CLI tool ( oc ) so that you can interact with the OpenShift cluster at the command line. Procedure To set up Kafka by using AMQ Streams: Log in to your OpenShift cluster's web console. Create or open a project in which you plan to create your integration, for example my-camel-k-kafka . Install the Camel K operator and Camel K CLI as described in Installing Camel K . Install the AMQ Streams operator: From any project, select Operators > OperatorHub . In the Filter by Keyword field, type AMQ Streams . Click the Red Hat Integration - AMQ Streams card and then click Install . The Install Operator page opens. Accept the defaults and then click Install . Select Operators > Installed Operators to verify that the Camel K and AMQ Streams operators are installed. Next steps Setting up a Kafka topic with AMQ Streams 5.1.1.2. Setting up a Kafka topic with AMQ Streams A Kafka topic provides a destination for the storage of data in a Kafka instance. You must set up a Kafka topic before you can send data to it. Prerequisites You can access an OpenShift cluster. You installed the Red Hat Integration - Camel K and Red Hat Integration - AMQ Streams operators as described in Preparing your OpenShift cluster . You installed the OpenShift CLI ( oc ) and the Camel K CLI ( kamel ). Procedure To set up a Kafka topic by using AMQ Streams: Log in to your OpenShift cluster's web console. Select Projects and then click the project in which you installed the Red Hat Integration - AMQ Streams operator. For example, click the my-camel-k-kafka project.
Select Operators > Installed Operators and then click Red Hat Integration - AMQ Streams . Create a Kafka cluster: Under Kafka , click Create instance . Type a name for the cluster, for example kafka-test . Accept the other defaults and then click Create . The process to create the Kafka instance might take a few minutes to complete. When the status is ready, continue to the next step. Create a Kafka topic: Select Operators > Installed Operators and then click Red Hat Integration - AMQ Streams . Under Kafka Topic , click Create Kafka Topic . Type a name for the topic, for example test-topic . Accept the other defaults and then click Create . 5.1.2. Setting up Kafka by using OpenShift Streams To use OpenShift Streams for Apache Kafka, you must be logged in to your Red Hat account. 5.1.2.1. Preparing your OpenShift cluster for OpenShift Streams To use this managed cloud service, you must install the following operators and tools: OpenShift Application Services (RHOAS) CLI - Allows you to manage your application services from a terminal. Red Hat Integration - Camel K operator - Installs and manages Camel K - a lightweight integration framework that runs natively in the cloud on OpenShift. Camel K CLI tool - Allows you to access all Camel K features. Prerequisites You are familiar with Apache Kafka concepts. You can access an OpenShift 4.6 (or later) cluster with the correct access level, the ability to create projects and install operators, and the ability to install the OpenShift and Apache Camel K CLI on your local system. You installed the OpenShift CLI tool ( oc ) so that you can interact with the OpenShift cluster at the command line. Procedure Log in to your OpenShift web console with a cluster admin account. Create the OpenShift project for your Camel K or Kamelets application. Select Home > Projects . Click Create Project . Type the name of the project, for example my-camel-k-kafka , then click Create . Download and install the RHOAS CLI as described in Getting started with the rhoas CLI . Install the Camel K operator and Camel K CLI as described in Installing Camel K . To verify that the Red Hat Integration - Camel K operator is installed, click Operators > Installed Operators . Next step Setting up a Kafka topic with RHOAS 5.1.2.2. Setting up a Kafka topic with RHOAS Kafka organizes messages around topics . Each topic has a name. Applications send messages to topics and retrieve messages from topics. A Kafka topic provides a destination for the storage of data in a Kafka instance. You must set up a Kafka topic before you can send data to it. Prerequisites You can access an OpenShift cluster with the correct access level, the ability to create projects and install operators, and the ability to install the OpenShift and the Camel K CLI on your local system. You installed the OpenShift CLI ( oc ) , the Camel K CLI ( kamel ) , and RHOAS CLI ( rhoas ) tools as described in Preparing your OpenShift cluster . You installed the Red Hat Integration - Camel K operator as described in Preparing your OpenShift cluster . You are logged in to the Red Hat Cloud site . Procedure To set up a Kafka topic: From the command line, log in to your OpenShift cluster. Open your project, for example: oc project my-camel-k-kafka Verify that the Camel K operator is installed in your project: oc get csv The result lists the Red Hat Camel K operator and indicates that it is in the Succeeded phase.
Prepare and connect a Kafka instance to RHOAS: Log in to the RHOAS CLI by using this command: rhoas login Create a Kafka instance, for example kafka-test : rhoas kafka create kafka-test The process to create the Kafka instance might take a few minutes to complete. To check the status of your Kafka instance: rhoas status You can also view the status in the web console: https://cloud.redhat.com/application-services/streams/kafkas/ When the status is ready , continue to the next step. Create a new Kafka topic: rhoas kafka topic create --name test-topic Connect your Kafka instance (cluster) with the OpenShift Application Services instance: rhoas cluster connect Follow the script instructions for obtaining a credential token. You should see output similar to the following: Next step Obtaining Kafka credentials 5.1.2.3. Obtaining Kafka credentials To connect your applications or services to a Kafka instance, you must first obtain the following Kafka credentials: Obtain the bootstrap URL. Create a service account with credentials (username and password). For OpenShift Streams, the authentication protocol is SASL_SSL. Prerequisites You have created a Kafka instance, and it has a ready status. You have created a Kafka topic. Procedure Obtain the Kafka Broker URL (Bootstrap URL): rhoas status This command returns output similar to the following: To obtain a username and password, create a service account by using the following syntax: rhoas service-account create --name "<account-name>" --file-format json Note When creating a service account, you can choose the file format and location to save the credentials. For more information, type rhoas service-account create --help For example: rhoas service-account create --name "my-service-acct" --file-format json The service account is created and saved to a JSON file. To verify your service account credentials, view the credentials.json file: cat credentials.json This command returns output similar to the following: Grant permission for sending and receiving messages to or from the Kafka topic. Use the following command, where clientID is the value provided in the credentials.json file (from Step 3). For example: 5.1.2.4. Creating a secret by using the SASL/Plain authentication method You can create a secret with the credentials that you obtained (Kafka bootstrap URL, service account ID, and service account secret). Procedure Edit the application.properties file and add the Kafka credentials. application.properties file Run the following command to create a secret that contains the sensitive properties in the application.properties file: You use this secret when you run a Camel K integration. See Also The Camel K Kafka Basic Quickstart 5.1.2.5. Creating a secret by using the SASL/OAUTHBearer authentication method You can create a secret with the credentials that you obtained (Kafka bootstrap URL, service account ID, and service account secret). Procedure Edit the application-oauth.properties file and add the Kafka credentials. application-oauth.properties file Run the following command to create a secret that contains the sensitive properties in the application-oauth.properties file: You use this secret when you run a Camel K integration. See Also The Camel K Kafka Basic Quickstart 5.2. Running a Kafka integration Running a producer integration Create a sample producer integration. This fills the topic with a message every 10 seconds. Sample SaslSSLKafkaProducer.java Then run the producer integration.
The producer will create a new message and push it into the topic and log some information. Running a consumer integration Create a consumer integration. Sample SaslSSLKafkaConsumer.java Open another shell and run the consumer integration using the command: A consumer will start logging the events found in the topic: | [
"Token Secret \"rh-cloud-services-accesstoken-cli\" created successfully Service Account Secret \"rh-cloud-services-service-account\" created successfully KafkaConnection resource \"kafka-test\" has been created KafkaConnection successfully installed on your cluster.",
"Kafka --------------------------------------------------------------- ID: 1ptdfZRHmLKwqW6A3YKM2MawgDh Name: my-kafka Status: ready Bootstrap URL: my-kafka--ptdfzrhmlkwqw-a-ykm-mawgdh.kafka.devshift.org:443",
"{\"clientID\":\"srvc-acct-eb575691-b94a-41f1-ab97-50ade0cd1094\", \"password\":\"facf3df1-3c8d-4253-aa87-8c95ca5e1225\"}",
"rhoas kafka acl grant-access --producer --consumer --service-account USDCLIENT_ID --topic test-topic --group all",
"rhoas kafka acl grant-access --producer --consumer --service-account srvc-acct-eb575691-b94a-41f1-ab97-50ade0cd1094 --topic test-topic --group all",
"camel.component.kafka.brokers = <YOUR-KAFKA-BOOTSTRAP-URL-HERE> camel.component.kafka.security-protocol = SASL_SSL camel.component.kafka.sasl-mechanism = PLAIN camel.component.kafka.sasl-jaas-config=org.apache.kafka.common.security.plain.PlainLoginModule required username='<YOUR-SERVICE-ACCOUNT-ID-HERE>' password='<YOUR-SERVICE-ACCOUNT-SECRET-HERE>'; consumer.topic=<TOPIC-NAME> producer.topic=<TOPIC-NAME>",
"create secret generic kafka-props --from-file application.properties",
"camel.component.kafka.brokers = <YOUR-KAFKA-BOOTSTRAP-URL-HERE> camel.component.kafka.security-protocol = SASL_SSL camel.component.kafka.sasl-mechanism = OAUTHBEARER camel.component.kafka.sasl-jaas-config = org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id='<YOUR-SERVICE-ACCOUNT-ID-HERE>' oauth.client.secret='<YOUR-SERVICE-ACCOUNT-SECRET-HERE>' oauth.token.endpoint.uri=\"https://identity.api.openshift.com/auth/realms/rhoas/protocol/openid-connect/token\" ; camel.component.kafka.additional-properties[sasl.login.callback.handler.class]=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler consumer.topic=<TOPIC-NAME> producer.topic=<TOPIC-NAME>",
"create secret generic kafka-props --from-file application-oauth.properties",
"// kamel run --secret kafka-props SaslSSLKafkaProducer.java --dev // camel-k: language=java dependency=mvn:org.apache.camel.quarkus:camel-quarkus-kafka dependency=mvn:io.strimzi:kafka-oauth-client:0.7.1.redhat-00003 import org.apache.camel.builder.RouteBuilder; import org.apache.camel.component.kafka.KafkaConstants; public class SaslSSLKafkaProducer extends RouteBuilder { @Override public void configure() throws Exception { log.info(\"About to start route: Timer -> Kafka \"); from(\"timer:foo\") .routeId(\"FromTimer2Kafka\") .setBody() .simple(\"Message #USD{exchangeProperty.CamelTimerCounter}\") .to(\"kafka:{{producer.topic}}\") .log(\"Message correctly sent to the topic!\"); } }",
"kamel run --secret kafka-props SaslSSLKafkaProducer.java --dev",
"[2] 2021-05-06 08:48:11,854 INFO [FromTimer2Kafka] (Camel (camel-1) thread #1 - KafkaProducer[test]) Message correctly sent to the topic! [2] 2021-05-06 08:48:11,854 INFO [FromTimer2Kafka] (Camel (camel-1) thread #3 - KafkaProducer[test]) Message correctly sent to the topic! [2] 2021-05-06 08:48:11,973 INFO [FromTimer2Kafka] (Camel (camel-1) thread #5 - KafkaProducer[test]) Message correctly sent to the topic! [2] 2021-05-06 08:48:12,970 INFO [FromTimer2Kafka] (Camel (camel-1) thread #7 - KafkaProducer[test]) Message correctly sent to the topic! [2] 2021-05-06 08:48:13,970 INFO [FromTimer2Kafka] (Camel (camel-1) thread #9 - KafkaProducer[test]) Message correctly sent to the topic!",
"// kamel run --secret kafka-props SaslSSLKafkaConsumer.java --dev // camel-k: language=java dependency=mvn:org.apache.camel.quarkus:camel-quarkus-kafka dependency=mvn:io.strimzi:kafka-oauth-client:0.7.1.redhat-00003 import org.apache.camel.builder.RouteBuilder; public class SaslSSLKafkaConsumer extends RouteBuilder { @Override public void configure() throws Exception { log.info(\"About to start route: Kafka -> Log \"); from(\"kafka:{{consumer.topic}}\") .routeId(\"FromKafka2Log\") .log(\"USD{body}\"); } }",
"kamel run --secret kafka-props SaslSSLKafkaConsumer.java --dev",
"[1] 2021-05-06 08:51:08,991 INFO [FromKafka2Log] (Camel (camel-1) thread #0 - KafkaConsumer[test]) Message #8 [1] 2021-05-06 08:51:10,065 INFO [FromKafka2Log] (Camel (camel-1) thread #0 - KafkaConsumer[test]) Message #9 [1] 2021-05-06 08:51:10,991 INFO [FromKafka2Log] (Camel (camel-1) thread #0 - KafkaConsumer[test]) Message #10 [1] 2021-05-06 08:51:11,991 INFO [FromKafka2Log] (Camel (camel-1) thread #0 - KafkaConsumer[test]) Message #11"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/developing_and_managing_integrations_using_camel_k/authenticate-camel-k-against-kafka |
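Once both integrations from the example above are running, you can observe and clean them up with the Camel K and OpenShift CLIs. The commands below are a hedged sketch rather than part of the quickstart: they assume the integration names sasl-ssl-kafka-producer and sasl-ssl-kafka-consumer, which Camel K normally derives from the Java file names, and the kafka-props secret created earlier.

# List the integrations and their phase
kamel get
# Follow the logs of the consumer integration
kamel logs sasl-ssl-kafka-consumer
# Remove the integrations and the secret when you are done
kamel delete sasl-ssl-kafka-producer sasl-ssl-kafka-consumer
oc delete secret kafka-props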
Chapter 8. DNS [config.openshift.io/v1] | Chapter 8. DNS [config.openshift.io/v1] Description DNS holds cluster-wide information about DNS. The canonical name is cluster Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 8.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description baseDomain string baseDomain is the base domain of the cluster. All managed DNS records will be sub-domains of this base. For example, given the base domain openshift.example.com , an API server DNS record may be created for cluster-api.openshift.example.com . Once set, this field cannot be changed. platform object platform holds configuration specific to the underlying infrastructure provider for DNS. When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. privateZone object privateZone is the location where all the DNS records that are only available internally to the cluster exist. If this field is nil, no private records should be created. Once set, this field cannot be changed. publicZone object publicZone is the location where all the DNS records that are publicly accessible to the internet exist. If this field is nil, no public records should be created. Once set, this field cannot be changed. 8.1.2. .spec.platform Description platform holds configuration specific to the underlying infrastructure provider for DNS. When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. Type object Required type Property Type Description aws object aws contains DNS configuration specific to the Amazon Web Services cloud provider. type string type is the underlying infrastructure provider for the cluster. Allowed values: "", "AWS". Individual components may not support all platforms, and must handle unrecognized platforms with best-effort defaults. 8.1.3. .spec.platform.aws Description aws contains DNS configuration specific to the Amazon Web Services cloud provider. Type object Property Type Description privateZoneIAMRole string privateZoneIAMRole contains the ARN of an IAM role that should be assumed when performing operations on the cluster's private hosted zone specified in the cluster DNS config. When left empty, no role should be assumed. 8.1.4. 
.spec.privateZone Description privateZone is the location where all the DNS records that are only available internally to the cluster exist. If this field is nil, no private records should be created. Once set, this field cannot be changed. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 8.1.5. .spec.publicZone Description publicZone is the location where all the DNS records that are publicly accessible to the internet exist. If this field is nil, no public records should be created. Once set, this field cannot be changed. Type object Property Type Description id string id is the identifier that can be used to find the DNS hosted zone. on AWS zone can be fetched using ID as id in [1] on Azure zone can be fetched using ID as a pre-determined name in [2], on GCP zone can be fetched using ID as a pre-determined name in [3]. [1]: https://docs.aws.amazon.com/cli/latest/reference/route53/get-hosted-zone.html#options [2]: https://docs.microsoft.com/en-us/cli/azure/network/dns/zone?view=azure-cli-latest#az-network-dns-zone-show [3]: https://cloud.google.com/dns/docs/reference/v1/managedZones/get tags object (string) tags can be used to query the DNS hosted zone. on AWS, resourcegroupstaggingapi [1] can be used to fetch a zone using Tags as tag-filters, [1]: https://docs.aws.amazon.com/cli/latest/reference/resourcegroupstaggingapi/get-resources.html#options 8.1.6. .status Description status holds observed values from the cluster. They may not be overridden. Type object 8.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/dnses DELETE : delete collection of DNS GET : list objects of kind DNS POST : create a DNS /apis/config.openshift.io/v1/dnses/{name} DELETE : delete a DNS GET : read the specified DNS PATCH : partially update the specified DNS PUT : replace the specified DNS /apis/config.openshift.io/v1/dnses/{name}/status GET : read status of the specified DNS PATCH : partially update status of the specified DNS PUT : replace status of the specified DNS 8.2.1. /apis/config.openshift.io/v1/dnses Table 8.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of DNS Table 8.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. 
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . 
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind DNS Table 8.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.5. HTTP responses HTTP code Response body 200 - OK DNSList schema 401 - Unauthorized Empty HTTP method POST Description create a DNS Table 8.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.7. Body parameters Parameter Type Description body DNS schema Table 8.8. HTTP responses HTTP code Response body 200 - OK DNS schema 201 - Created DNS schema 202 - Accepted DNS schema 401 - Unauthorized Empty 8.2.2. /apis/config.openshift.io/v1/dnses/{name} Table 8.9. Global path parameters Parameter Type Description name string name of the DNS Table 8.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a DNS Table 8.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both.
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.12. Body parameters Parameter Type Description body DeleteOptions schema Table 8.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DNS Table 8.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.15. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DNS Table 8.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.17. Body parameters Parameter Type Description body Patch schema Table 8.18. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DNS Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.20. Body parameters Parameter Type Description body DNS schema Table 8.21. HTTP responses HTTP code Response body 200 - OK DNS schema 201 - Created DNS schema 401 - Unauthorized Empty 8.2.3. /apis/config.openshift.io/v1/dnses/{name}/status Table 8.22. Global path parameters Parameter Type Description name string name of the DNS Table 8.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified DNS Table 8.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.25. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DNS Table 8.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.27. Body parameters Parameter Type Description body Patch schema Table 8.28. HTTP responses HTTP code Response body 200 - OK DNS schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DNS Table 8.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.30. Body parameters Parameter Type Description body DNS schema Table 8.31. HTTP responses HTTP code Response body 200 - OK DNS schema 201 - Created DNS schema 401 - Unauthorized Empty
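The following curl sketch illustrates how these endpoints are typically called. It is an assumption-laden example rather than part of the API reference: it assumes the cluster-scoped DNS object is named cluster (the usual name for cluster configuration resources), that a bearer token with sufficient permissions is stored in $TOKEN, and that the API server URL is stored in $APISERVER; adjust these placeholders for your environment.
# Read the DNS object named "cluster" (GET /apis/config.openshift.io/v1/dnses/{name})
curl -k -H "Authorization: Bearer $TOKEN" "$APISERVER/apis/config.openshift.io/v1/dnses/cluster"
# List DNS objects one page at a time (GET /apis/config.openshift.io/v1/dnses with the limit query parameter)
curl -k -H "Authorization: Bearer $TOKEN" "$APISERVER/apis/config.openshift.io/v1/dnses?limit=1"
# Apply a JSON merge patch to the "cluster" object; the annotation used here is only a placeholder change
curl -k -X PATCH -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/merge-patch+json" -d '{"metadata":{"annotations":{"example.com/note":"test"}}}' "$APISERVER/apis/config.openshift.io/v1/dnses/cluster"
The same object can also be inspected with the OpenShift CLI, for example oc get dns.config.openshift.io cluster -o yaml.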
Chapter 8. Synchronizing content between Satellite Servers | Chapter 8. Synchronizing content between Satellite Servers In a Satellite setup with multiple Satellite Servers, you can use Inter-Satellite Synchronization (ISS) to synchronize content from one upstream server to one or more downstream servers. There are two possible ISS configurations of Satellite, depending on how you deployed your infrastructure. Configure your Satellite for ISS as appropriate for your scenario. For more information, see Inter-Satellite Synchronization scenarios in Installing Satellite Server in a disconnected network environment . To change the Pulp export path, see Hammer content export fails with "Path '/the/path' is not an allowed export path" in the Red Hat Knowledgebase . 8.1. Content synchronization by using export and import There are multiple approaches for synchronizing content by using the export and import workflow: You employ the upstream Satellite Server as a content store, which means that you sync the whole Library rather than content view versions. This approach offers the simplest export/import workflow. In such case, you can manage the content view versions downstream. For more information, see Section 8.1.1, "Using an upstream Satellite Server as a content store" . You use the upstream Satellite Server to sync content view versions. This approach offers more control over what content is synced between Satellite Servers. For more information, see Section 8.1.2, "Using an upstream Satellite Server to synchronize content view versions" . You sync a single repository. This can be useful if you use the content-view syncing approach, but you want to sync an additional repository without adding it to an existing content view. For more information, see Section 8.1.3, "Synchronizing a single repository" . Note Synchronizing content by using export and import requires the same major, minor, and patch version of Satellite on both the downstream and upstream Satellite Servers. When you are unable to match upstream and downstream Satellite versions, you can use: Syncable exports and imports. Inter-Satellite Synchronization (ISS) with your upstream Satellite connected to the Internet and your downstream Satellite connected to the upstream Satellite. 8.1.1. Using an upstream Satellite Server as a content store In this scenario, you use the upstream Satellite Server as a content store for updates rather than to manage content. You use the downstream Satellite Server to manage content for all infrastructure behind the isolated network. You export the Library content from the upstream Satellite Server and import it into the downstream Satellite Server. On the upstream Satellite Server Ensure that repositories are using the Immediate download policy in one of the following ways: For existing repositories using On Demand , change their download policy on the repository details page to Immediate . For new repositories, ensure that the Default Red Hat Repository download policy setting is set to Immediate before enabling Red Hat repositories, and that the Default download policy is set to Immediate for custom repositories. For more information, see Section 4.9, "Download policies overview" . Enable the content that you want to synchronize. For more information, see Section 4.6, "Enabling Red Hat repositories" . If you want to sync custom content, first create a custom Product and synchronize Product repositories . 
Synchronize the enabled content: On the first export, perform a complete Library export so that all the synchronized content is exported. This generates content archives that you can later import into one or more downstream Satellite Servers. For more information on performing a complete Library export, see Section 8.3, "Exporting the Library environment" . Export all future updates on the upstream Satellite Server incrementally. This generates leaner content archives that contain only a recent set of updates. For example, if you enable and synchronize a new repository, the exported content archive contains content only from the newly enabled repository. For more information on performing an incremental Library export, see Section 8.6, "Exporting the Library environment incrementally" . On the downstream Satellite Server Bring the content exported from the upstream Satellite Server over to the hard disk. Place it inside a directory under /var/lib/pulp/imports . Import the content to an organization using the procedure outlined in Section 8.15, "Importing into the Library environment" . You can then manage content using content views or lifecycle environments as you require. 8.1.2. Using an upstream Satellite Server to synchronize content view versions In this scenario, you use the upstream Satellite Server not only as a content store, but also to synchronize content for all infrastructure behind the isolated network. You curate updates coming from the CDN into content views and lifecycle environments. Once you promote content to a designated lifecycle environment, you can export the content from the upstream Satellite Server and import it into the downstream Satellite Server. On the upstream Satellite Server Ensure that repositories are using the Immediate download policy in one of the following ways: For existing repositories using On Demand , change their download policy on the repository details page to Immediate . For new repositories, ensure that the Default Red Hat Repository download policy setting is set to Immediate before enabling Red Hat repositories, and that the Default download policy is set to Immediate for custom repositories. For more information, see Section 4.9, "Download policies overview" . Enable the content that you want to synchronize. For more information, see Section 4.6, "Enabling Red Hat repositories" . If you want to sync custom content, first create a custom Product and synchronize Product repositories . Synchronize the enabled content: For the first export, perform a complete version export on the content view version that you want to export. For more information see, Section 8.7, "Exporting a content view version" . This generates content archives that you can import into one or more downstream Satellite Servers. Export all future updates in the connected Satellite Servers incrementally. This generates leaner content archives that contain changes only from the recent set of updates. For example, if your content view has a new repository, this exported content archive contains only the latest changes. For more information, see Section 8.9, "Exporting a content view version incrementally" . When you have new content, republish the content views that include this content before exporting the increment. For more information, see Chapter 7, Managing content views . This creates a new content view version with the appropriate content to export. On the downstream Satellite Server Bring the content exported from the upstream Satellite Server over to the hard disk. 
Place it inside a directory under /var/lib/pulp/imports . Import the content to the organization that you want. For more information, see Section 8.17, "Importing a content view version" . This will create a content view version from the exported content archives and then import content appropriately. 8.1.3. Synchronizing a single repository In this scenario, you export and import a single repository. On the upstream Satellite Server Ensure that the repository is using the Immediate download policy in one of the following ways: For existing repositories using On Demand , change their download policy on the repository details page to Immediate . For new repositories, ensure that the Default Red Hat Repository download policy setting is set to Immediate before enabling Red Hat repositories, and that the Default download policy is set to Immediate for custom repositories. For more information, see Section 4.9, "Download policies overview" . Enable the content that you want to synchronize. For more information, see Section 4.6, "Enabling Red Hat repositories" . If you want to sync custom content, first create a custom Product and synchronize Product repositories . Synchronize the enabled content: On the first export, perform a complete repository export so that all the synchronized content is exported. This generates content archives that you can later import into one or more downstream Satellite Servers. For more information on performing a complete repository export, see Section 8.10, "Exporting a repository" . Export all future updates on the upstream Satellite Server incrementally. This generates leaner content archives that contain only a recent set of updates. For more information on performing an incremental repository export, see Section 8.12, "Exporting a repository incrementally" . On the downstream Satellite Server Bring the content exported from the upstream Satellite Server over to the hard disk. Place it inside a directory under /var/lib/pulp/imports . Import the content to an organization. See Section 8.19, "Importing a repository" . You can then manage content using content views or lifecycle environments as you require. 8.2. Synchronizing a custom repository When using Inter-Satellite Synchronization Network Sync, Red Hat repositories are configured automatically, but custom repositories are not. Use this procedure to synchronize content from a custom repository on a connected Satellite Server to a disconnected Satellite Server through Inter-Satellite Synchronization (ISS) Network Sync. Follow the procedure for the connected Satellite Server before completing the procedure for the disconnected Satellite Server. Connected Satellite Server In the Satellite web UI, navigate to Content > Products . Click on the custom product. Click on the custom repository. Copy the Published At: URL. Continue with the procedure on disconnected Satellite Server. Disconnected Satellite Server Download the katello-server-ca.crt file from the connected Satellite Server: Create an SSL Content Credential with the contents of katello-server-ca.crt . For more information on creating an SSL Content Credential, see Section 4.3, "Importing custom SSL certificates" . In the Satellite web UI, navigate to Content > Products . Create your custom product with the following: Upstream URL : Paste the link that you copied earlier. SSL CA Cert : Select the SSL certificate that was transferred from your connected Satellite Server. 
For more information on creating a custom product, see Section 4.4, "Creating a custom product" . After completing these steps, the custom repository is properly configured on the disconnected Satellite Server. 8.3. Exporting the Library environment You can export contents of all Yum repositories in the Library environment of an organization to an archive file from Satellite Server and use this archive file to create the same repositories in another Satellite Server or in another Satellite Server organization. The exported archive file contains the following data: A JSON file containing content view version metadata. An archive file containing all the repositories from the Library environment of the organization. Satellite Server exports only RPM, kickstart files, and Docker content included in the Library environment. Prerequisites Ensure that the export directory has free storage space to accommodate the export. Ensure that the /var/lib/pulp/exports directory has free storage space equivalent to the size of the repositories being exported for temporary files created during the export process. Ensure that you set download policy to Immediate for all repositories within the Library lifecycle environment you export. For more information, see Section 4.9, "Download policies overview" . Ensure that you synchronize Products that you export to the required date. Procedure Use the organization name or ID to export. Verify that the archive containing the exported version of a content view is located in the export directory: You need all three files, the tar.gz , the toc.json , and the metadata.json file to be able to import. A new content view Export-Library is created in the organization. This content view contains all the repositories belonging to this organization. A new version of this content view is published and exported automatically. Export with chunking In many cases, the exported archive content may be several gigabytes in size. If you want to split it into smaller sizes or chunks, you can use the --chunk-size-gb flag directly in the export command to handle this. In the following example, you can see how to specify --chunk-size-gb=2 to split the archives into 2 GB chunks. 8.4. Exporting the Library environment in a syncable format You can export contents of all yum repositories, Kickstart repositories and file repositories in the Library environment of an organization to a syncable format that you can use to create your custom CDN and synchronize the content from the custom CDN over HTTP/HTTPS. You can then serve the generated content on a local web server and synchronize it on the importing Satellite Server or in another Satellite Server organization. You can use the generated content to create the same repository in another Satellite Server or in another Satellite Server organization by using content import. On import of the exported archive, a regular content view is created or updated on your importing Satellite Server. For more information, see Section 8.17, "Importing a content view version" . You can export the following content in the syncable format from Satellite Server: Yum repositories Kickstart repositories File repositories You cannot export Ansible, Deb, or Docker content. The export contains directories with the packages, listing files, and metadata of the repository in Yum format that can be used to synchronize in the importing Satellite Server.
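The procedure below uses the hammer content-export command with the --format=syncable option. As a minimal sketch, with My_Organization as a placeholder organization name, a complete syncable Library export looks like this:
# Export the whole Library of the organization in syncable (Yum) format
hammer content-export complete library --organization="My_Organization" --format=syncable
The generated directory tree can then be copied to a web server and served as a custom CDN, as described above.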
Prerequisites Ensure that you set the download policy to Immediate for all repositories within the Library lifecycle environment you export. For more information, see Section 4.9, "Download policies overview" . Ensure that you synchronize products you export to the required date. Ensure that the user exporting the content has the Content Exporter role. Procedure Use the organization name or ID to export: Optional: Verify that the exported content is located in the export directory: 8.5. Importing syncable exports Procedure Use the organization name or ID to import syncable exports: Note Syncable exports must be located in one of your ALLOWED_IMPORT_PATHS as specified in /etc/pulp/settings.py . By default, this includes /var/lib/pulp/imports . 8.6. Exporting the Library environment incrementally Exporting Library content can be a very expensive operation in terms of system resources. Organizations that have multiple Red Hat Enterprise Linux trees can occupy several gigabytes of space on Satellite Server. In such cases, you can create an incremental export which contains only pieces of content that have changed since the last export. Incremental exports typically result in smaller archive files than the full exports. The example below shows incremental export of all repositories in the organization's Library. Procedure Create an incremental export: If you want to create a syncable export, add --format=syncable . By default, Satellite creates an importable export. Next steps Optional: View the exported data: 8.7. Exporting a content view version You can export a version of a content view to an archive file from Satellite Server and use this archive file to create the same content view version on another Satellite Server or on another Satellite Server organization. Satellite exports composite content views as normal content views. The composite nature is not retained. On importing the exported archive, a regular content view is created or updated on your downstream Satellite Server. The exported archive file contains the following data: A JSON file containing content view version metadata An archive file containing all the repositories included into the content view version You can only export Yum repositories, Kickstart files, and Docker content added to a version of a content view. Satellite does not export the following content: Content view definitions and metadata, such as package filters. Prerequisites To export a content view, ensure that the Satellite Server where you want to export meets the following conditions: Ensure that the export directory has free storage space to accommodate the export. Ensure that the /var/lib/pulp/exports directory has free storage space equivalent to the size of the repositories being exported for temporary files created during the export process. Ensure that you set download policy to Immediate for all repositories within the content view you export. For more information, see Section 4.9, "Download policies overview" . Ensure that you synchronize Products that you export to the required date. Ensure that the user exporting the content has the Content Exporter role. To export a content view version List versions of the content view that are available for export: Export a content view version Get the version number of the desired version. The following example targets version 1.0 for export.
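A sketch of the corresponding hammer calls, where Content_View_Name and My_Organization are placeholders to replace with your own names:
# List the content view versions that are available for export
hammer content-view version list --content-view="Content_View_Name" --organization="My_Organization"
# Export version 1.0 of the content view as an importable archive
hammer content-export complete version --content-view="Content_View_Name" --version=1.0 --organization="My_Organization"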
Verify that the archive containing the exported version of a content view is located in the export directory: You require all three files, for example, the tar.gz archive file, the toc.json and metadata.json to import the content successfully. Export with chunking In many cases, the exported archive content can be several gigabytes in size. You might want to split it into smaller sizes or chunks. You can use the --chunk-size-gb option with the hammer content-export command to handle this. The following example uses --chunk-size-gb=2 to split the archives into 2 GB chunks. 8.8. Exporting a content view version in a syncable format You can export a version of a content view to a syncable format that you can use to create your custom CDN. After you have exported the content view, you can do either of the following: Synchronize the content from your custom CDN over HTTP/HTTPS. Import the content using hammer content-import . Note that this requires both the Export and Import servers to run Satellite 6.15. You can then serve the generated content using a local web server on the importing Satellite Server or in another Satellite Server organization. You cannot directly import Syncable Format exports. Instead, on the importing Satellite Server you must: Copy the generated content to an HTTP/HTTPS web server that is accessible to the importing Satellite Server. Update your CDN configuration to Custom CDN . Set the CDN URL to point to the web server. Optional: Set an SSL/TLS CA Credential if the web server requires it. Enable the repository. Synchronize the repository. You can export the following content in a syncable format from Satellite Server: Yum repositories Kickstart repositories File repositories You cannot export Ansible, Deb, or Docker content. The export contains directories with the packages, listing files, and metadata of the repository in Yum format that can be used to synchronize in the importing Satellite Server. Prerequisites Ensure that you set the download policy to Immediate for all repositories within the content view you export. For more information, see Section 4.9, "Download policies overview" . Ensure that you synchronize products you export to the required date. Ensure that the user exporting the content has the Content Exporter role. To export a content view version List versions of the content view that are available for export: Procedure Get the version number of the desired version. The following example targets version 1.0 for export: Optional: Verify that the exported content is located in the export directory: 8.9. Exporting a content view version incrementally Exporting complete content view versions can be a very expensive operation in terms of system resources. Content view versions that have multiple Red Hat Enterprise Linux trees can occupy several gigabytes of space on Satellite Server. In such cases, you can create an incremental export which contains only pieces of content that have changed since the last export. Incremental exports typically result in smaller archive files than the full exports. Procedure Create an incremental export: If you want to create a syncable export, add --format=syncable . By default, Satellite creates an importable export. Next steps Optional: View the exported content view: You can import your exported content view version into Satellite Server. For more information, see Section 8.17, "Importing a content view version" . 8.10.
Exporting a repository You can export the content of a repository in the Library environment of an organization from Satellite Server. You can use this archive file to create the same repository in another Satellite Server or in another Satellite Server organization. You can export the following content from Satellite Server: Ansible repositories Kickstart repositories Yum repositories File repositories Docker content The export contains the following data: Two JSON files containing repository metadata. One or more archive files containing the contents of the repository from the Library environment of the organization. You need all the files, tar.gz , toc.json and metadata.json , to be able to import. Prerequisites Ensure that the export directory has enough free storage space to accommodate the export. Ensure that the /var/lib/pulp/exports directory has enough free storage space equivalent to the size of all repositories that you want to export. Ensure that you set download policy to Immediate for the repository within the Library lifecycle environment you export. For more information, see Section 4.9, "Download policies overview" . Ensure that you synchronize products that you export to the required date. Procedure Export a repository: Note The size of the exported archive depends on the number and size of the packages within the repository. If you want to split the exported archive into chunks, export your repository using the --chunk-size-gb argument to limit the size by an integer value in gigabytes, for example --chunk-size-gb=2 . Optional: Verify that the exported archive is located in the export directory: 8.11. Exporting a repository in a syncable format You can export the content of a repository in the Library environment of an organization to a syncable format that you can use to create your custom CDN and synchronize the content from the custom CDN over HTTP/HTTPS. You can then serve the generated content using a local web server on the importing Satellite Server or in another Satellite Server organization. You cannot directly import Syncable Format exports. Instead, on the importing Satellite Server you must: Copy the generated content to an HTTP/HTTPS web server that is accessible to the importing Satellite Server. Update your CDN configuration to Custom CDN . Set the CDN URL to point to the web server. Optional: Set an SSL/TLS CA Credential if the web server requires it. Enable the repository. Synchronize the repository. You can export the following content in a syncable format from Satellite Server: Yum repositories Kickstart repositories File repositories You cannot export Ansible, Deb, or Docker content. The export contains directories with the packages, listing files, and metadata of the repository in Yum format that can be used to synchronize in the importing Satellite Server. Prerequisites Ensure that you set the download policy to Immediate for the repository within the Library lifecycle environment you export. For more information, see Section 4.9, "Download policies overview" . Procedure Export a repository using the repository name or ID: Optional: Verify that the exported content is located in the export directory: 8.12. Exporting a repository incrementally Exporting a repository can be a very expensive operation in terms of system resources. A typical Red Hat Enterprise Linux tree may occupy several gigabytes of space on Satellite Server. In such cases, you can use Incremental Export to export only pieces of content that changed since the previous export.
Incremental exports typically result in smaller archive files than the full exports. The example below shows incremental export of a repository in the Library lifecycle environment. Procedure Create an incremental export: Optional: View the exported data: 8.13. Exporting a repository incrementally in a syncable format Exporting a repository can be a very expensive operation in terms of system resources. A typical Red Hat Enterprise Linux tree may occupy several gigabytes of space on Satellite Server. In such cases, you can use Incremental Export to export only pieces of content that changed since the export. Incremental exports typically result in smaller archive files than full exports. The procedure below shows an incremental export of a repository in the Library lifecycle environment. Procedure Create an incremental export: Optional: View the exported data: 8.14. Keeping track of your exports Satellite keeps records of all exports. Each time you export content on the upstream Satellite Server, the export is recorded and maintained for future querying. You can use the records to organize and manage your exports, which is useful especially when exporting incrementally. When exporting content from the upstream Satellite Server for several downstream Satellite Servers, you can also keep track of content exported for specific servers. This helps you track which content was exported and to where. Use the --destination-server argument during export to indicate the target server. This option is available for all content-export operations. Tracking destinations of Library exports Specify the destination server when exporting the Library: Tracking destinations of content view exports Specify the destination server when exporting a content view version: Querying export records List content exports using the following command: 8.15. Importing into the Library environment You can import exported Library content into the Library lifecycle environment of an organization on another Satellite Server. For more information about exporting contents from the Library environment, see Section 8.3, "Exporting the Library environment" . Prerequisites The exported files must be in a directory under /var/lib/pulp/imports . If there are any Red Hat repositories in the exported content, the importing organization's manifest must contain subscriptions for the products contained within the export. The user importing the content must have the Content Importer Role. Procedure Copy the exported files to a subdirectory of /var/lib/pulp/imports on Satellite Server where you want to import. Set the ownership of the import directory and its contents to pulp:pulp . Verify that the ownership is set correctly: Identify the Organization that you wish to import into. To import the Library content to Satellite Server, enter the following command: Note you must enter the full path /var/lib/pulp/imports/ My_Exported_Library_Dir . Relative paths do not work. To verify that you imported the Library content, check the contents of the Product and Repositories. A new content view called Import-Library is created in the target organization. This content view is used to facilitate the Library content import. By default, this content view is not shown in the Satellite web UI. Import-Library is not meant to be assigned directly to hosts. Instead, assign your hosts to Default Organization View or another content view as you would normally. The importing Satellite Server extracts the /var/lib/pulp/imports directory to /var/lib/pulp/ . 
You can empty the /var/lib/pulp/imports directory after a successful import. 8.16. Importing into the Library environment from a web server You can import exported Library content directly from a web server into the Library lifecycle environment of an organization on another Satellite Server. For more information about exporting contents from the Library environment, see Section 8.3, "Exporting the Library environment" . Prerequisites The exported files must be in a syncable format. The exported files must be accessible through HTTP/HTTPS. If there are any Red Hat repositories in the exported content, the importing organization's manifest must contain subscriptions for the products contained within the export. The user importing the content view version must have the Content Importer role. Procedure Identify the Organization that you wish to import into. To import the Library content to Satellite Server, enter the following command: A new content view called Import-Library is created in the target organization. This content view is used to facilitate the Library content import. By default, this content view is not shown in the Satellite web UI. Import-Library is not meant to be assigned directly to hosts. Instead, assign your hosts to Default Organization View or another content view. 8.17. Importing a content view version You can import an exported content view version to create a version with the same content in an organization on another Satellite Server. For more information about exporting a content view version, see Section 8.7, "Exporting a content view version" . When you import a content view version, it has the same major and minor version numbers and contains the same repositories with the same packages and errata. Custom repositories, products and content views are automatically created if they do not exist in the importing organization. Prerequisites The exported files must be in a directory under /var/lib/pulp/imports . If there are any Red Hat repositories in the exported content, the importing organization's manifest must contain subscriptions for the products contained within the export. The user importing the content view version must have the Content Importer Role. Procedure Copy the exported files to a subdirectory of /var/lib/pulp/imports on Satellite Server where you want to import. Set the ownership of the import directory and its contents to pulp:pulp . Verify that the ownership is set correctly: To import the content view version to Satellite Server, enter the following command: Note that you must enter the full path /var/lib/pulp/imports/ My_Exported_Version_Dir . Relative paths do not work. To verify that you imported the content view version successfully, list content view versions for your organization: The importing Satellite Server extracts the /var/lib/pulp/imports directory to /var/lib/pulp/ . You can empty the /var/lib/pulp/imports directory after a successful import. 8.18. Importing a content view version from a web server You can import an exported content view version directly from a web server to create a version with the same content in an organization on another Satellite Server. For more information about exporting a content view version, see Section 8.7, "Exporting a content view version" . When you import a content view version, it has the same major and minor version numbers and contains the same repositories with the same packages and errata. 
Custom repositories, products, and content views are automatically created if they do not exist in the importing organization. Prerequisites The exported files must be in a syncable format. The exported files must be accessible through HTTP/HTTPS. If there are any Red Hat repositories in the exported content, the importing organization's manifest must contain subscriptions for the products contained within the export. The user importing the content view version must have the Content Importer role. Procedure Import the content view version into Satellite Server: 8.19. Importing a repository You can import an exported repository into an organization on another Satellite Server. For more information about exporting content of a repository, see Section 8.10, "Exporting a repository" . Prerequisites The export files must be in a directory under /var/lib/pulp/imports . If the export contains any Red Hat repositories, the manifest of the importing organization must contain subscriptions for the products contained within the export. The user importing the content must have the Content Importer Role. Procedure Copy the exported files to a subdirectory of /var/lib/pulp/imports on Satellite Server where you want to import. Set the ownership of the import directory and its contents to pulp:pulp . Verify that the ownership is set correctly: Identify the Organization that you wish to import into. To import the repository content to Satellite Server, enter the following command: Note that you must enter the full path /var/lib/pulp/imports/ My_Exported_Repo_Dir . Relative paths do not work. To verify that you imported the repository, check the contents of the product and repository. The importing Satellite Server extracts the /var/lib/pulp/imports directory to /var/lib/pulp/ . You can empty the /var/lib/pulp/imports directory after a successful import. 8.20. Importing a repository from a web server You can import an exported repository directly from a web server into an organization on another Satellite Server. For more information about exporting the content of a repository, see Section 8.10, "Exporting a repository" . Prerequisites The exported files must be in a syncable format. The exported files must be accessible through HTTP/HTTPS. If the export contains any Red Hat repositories, the manifest of the importing organization must contain subscriptions for the products contained within the export. The user importing the content view version must have the Content Importer Role. Procedure Select the organization into which you want to import. To import the repository to Satellite Server, enter the following command: 8.21. Exporting and importing content using Hammer CLI cheat sheet Table 8.1. 
Export Intent Command Fully export an Organization's Library hammer content-export complete library --organization=" My_Organization " Incrementally export an Organization's Library (assuming you have exported something previously) hammer content-export incremental library --organization=" My_Organization " Fully export a content view version hammer content-export complete version --content-view=" My_Content_View " --version=1.0 --organization=" My_Organization " Export a content view version promoted to the Dev Environment hammer content-export complete version --content-view=" My_Content_View " --organization=" My_Organization " --lifecycle-environment="Dev" Export a content view in smaller chunks (2-GB slabs) hammer content-export complete version --content-view=" My_Content_View " --version=1.0 --organization=" My_Organization " --chunk-size-gb=2 Incrementally export a content view version (assuming you have exported something previously) hammer content-export incremental version --content-view=" My_Content_View " --version=2.0 --organization=" My_Organization " Fully export a Repository hammer content-export complete repository --product=" My_Product " --name=" My_Repository " --organization=" My_Organization " Incrementally export a Repository (assuming you have exported something previously) hammer content-export incremental repository --product=" My_Product " --name=" My_Repository " --organization=" My_Organization " List exports hammer content-export list --content-view=" My_Content_View " --organization=" My_Organization " Table 8.2. Import Intent Command Import into an Organization's Library hammer content-import library --organization=" My_Organization " --path="/var/lib/pulp/imports/ My_Exported_Library_Dir " Import to a content view version hammer content-import version --organization=" My_Organization " --path="/var/lib/pulp/imports/ My_Exported_Version_Dir " Import a Repository hammer content-import repository --organization=" My_Organization " --path="/var/lib/pulp/imports/ My_Exported_Repo_Dir " | [
"curl http://satellite.example.com/pub/katello-server-ca.crt",
"hammer content-export complete library --organization=\" My_Organization \"",
"ls -lh /var/lib/pulp/exports/ My_Organization /Export-Library/1.0/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 03:35 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 03:35 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 1 pulp pulp 443 Mar 2 03:35 metadata.json",
"hammer content-export complete library --chunk-size-gb=2 --organization=\" My_Organization \" Generated /var/lib/pulp/exports/ My_Organization /Export-Library/2.0/2021-03-02T04-01-25-00-00/metadata.json ls -lh /var/lib/pulp/exports/ My_Organization /Export-Library/2.0/2021-03-02T04-01-25-00-00/",
"hammer content-export complete library --organization=\" My_Organization \" --format=syncable",
"du -sh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2021-03-02T03-35-24-00-00",
"hammer content-import library --organization=\" My_Organization \" --path=\" My_Path_To_Syncable_Export \"",
"hammer content-export incremental library --organization=\" My_Organization \"",
"find /var/lib/pulp/exports/ My_Organization /Export-Library/",
"hammer content-view version list --content-view=\" My_Content_View \" --organization=\" My_Organization \" ---|----------|---------|-------------|----------------------- ID | NAME | VERSION | DESCRIPTION | LIFECYCLE ENVIRONMENTS ---|----------|---------|-------------|----------------------- 5 | view 3.0 | 3.0 | | Library 4 | view 2.0 | 2.0 | | 3 | view 1.0 | 1.0 | | ---|----------|---------|-------------|----------------------",
"hammer content-export complete version --content-view=\" Content_View_Name \" --version=1.0 --organization=\" My_Organization \"",
"ls -lh /var/lib/pulp/exports/ My_Organization / Content_View_Name /1.0/2021-02-25T18-59-26-00-00/",
"hammer content-export complete version --chunk-size-gb=2 --content-view=\" Content_View_Name \" --organization=\" My_Organization \" --version=1.0 ls -lh /var/lib/pulp/exports/ My_Organization /view/1.0/2021-02-25T21-15-22-00-00/",
"hammer content-view version list --content-view=\" My_Content_View \" --organization=\" My_Organization \"",
"hammer content-export complete version --content-view=\" Content_View_Name \" --version=1.0 --organization=\" My_Organization \" --format=syncable",
"ls -lh /var/lib/pulp/exports/ My_Organization / My_Content_View_Name /1.0/2021-02-25T18-59-26-00-00/",
"hammer content-export incremental version --content-view=\" My_Content_View \" --organization=\" My_Organization \" --version=\" My_Content_View_Version \"",
"find /var/lib/pulp/exports/ My_Organization / My_Exported_Content_View / My_Content_View_Version /",
"hammer content-export complete repository --name=\" My_Repository \" --product=\" My_Product \" --organization=\" My_Organization \"",
"ls -lh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2022-09-02T03-35-24-00-00/",
"hammer content-export complete repository --organization=\" My_Organization \" --product=\" My_Product \" --name=\" My_Repository \" --format=syncable",
"du -sh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2021-03-02T03-35-24-00-00",
"hammer content-export incremental repository --name=\" My_Repository \" --organization=\" My_Organization \" --product=\" My_Product \"",
"ls -lh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /3.0/2021-03-02T03-35-24-00-00/ total 172K -rw-r--r--. 1 pulp pulp 20M Mar 2 04:22 export-436882d8-de5a-48e9-a30a-17169318f908-20210302_0422.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:22 export-436882d8-de5a-48e9-a30a-17169318f908-20210302_0422-toc.json -rw-r--r--. 1 root root 492 Mar 2 04:22 metadata.json",
"hammer content-export incremental repository --format=syncable --name=\" My_Repository \" --organization=\" My_Organization \" --product=\" My_Product \"",
"find /var/lib/pulp/exports/Default_Organization/ My_Product /2.0/2023-03-09T10-55-48-05-00/ -name \"*.rpm\"",
"hammer content-export complete library --destination-server= My_Downstream_Server_1 --organization=\" My_Organization \" --version=1.0",
"hammer content-export complete version --content-view=\" Content_View_Name \" --destination-server= My_Downstream_Server_1 --organization=\" My_Organization \" --version=1.0",
"hammer content-export list --organization=\" My_Organization \"",
"chown -R pulp:pulp /var/lib/pulp/imports/2021-03-02T03-35-24-00-00",
"ls -lh /var/lib/pulp/imports/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 1 pulp pulp 443 Mar 2 04:29 metadata.json",
"hammer content-import library --organization=\" My_Organization \" --path=/var/lib/pulp/imports/2021-03-02T03-35-24-00-00",
"hammer content-import library --organization=\" My_Organization \" --path=http:// server.example.com /pub/exports/2021-02-25T21-15-22-00-00/",
"chown -R pulp:pulp /var/lib/pulp/imports/2021-02-25T21-15-22-00-00/",
"ls -lh /var/lib/pulp/imports/2021-02-25T21-15-22-00-00/",
"hammer content-import version --organization= My_Organization --path=/var/lib/pulp/imports/2021-02-25T21-15-22-00-00/",
"hammer content-view version list --organization-id= My_Organization_ID",
"hammer content-import version --organization= My_Organization --path=http:// server.example.com /pub/exports/2021-02-25T21-15-22-00-00/",
"chown -R pulp:pulp /var/lib/pulp/imports/2021-03-02T03-35-24-00-00",
"ls -lh /var/lib/pulp/imports/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 1 pulp pulp 443 Mar 2 04:29 metadata.json",
"hammer content-import repository --organization=\" My_Organization \" --path=/var/lib/pulp/imports/ 2021-03-02T03-35-24-00-00",
"hammer content-import repository --organization=\" My_Organization \" --path=http:// server.example.com /pub/exports/2021-02-25T21-15-22-00-00/"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_content/synchronizing_content_between_servers_content-management |
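Taken together, the export and import commands listed above form a two-step workflow: export on the connected Satellite Server, transfer the archive, then import on the disconnected server. A minimal sketch of that flow follows, assuming a content view named My_Content_View in My_Organization, a syncable export, and that the timestamped directory under /var/lib/pulp/imports is a placeholder for wherever the copied archive actually lands.

# On the connected (upstream) Satellite Server: export version 1.0 of the content view.
hammer content-export complete version \
    --content-view="My_Content_View" \
    --version=1.0 \
    --organization="My_Organization" \
    --format=syncable

# Copy the export directory to the disconnected server (transport is site-specific),
# then make sure the pulp user owns it before importing.
chown -R pulp:pulp /var/lib/pulp/imports/2021-02-25T21-15-22-00-00/

# On the disconnected (downstream) server: import the content view version.
hammer content-import version \
    --organization="My_Organization" \
    --path=/var/lib/pulp/imports/2021-02-25T21-15-22-00-00/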
Chapter 6. Troubleshooting Red Hat Quay components | Chapter 6. Troubleshooting Red Hat Quay components This document focuses on troubleshooting specific components within Red Hat Quay, providing targeted guidance for resolving issues that might arise. Designed for system administrators, operators, and developers, this resource aims to help diagnose and troubleshoot problems related to individual components of Red Hat Quay. In addition to the following procedures, Red Hat Quay components can also be troubleshot by running Red Hat Quay in debug mode, obtaining log information, obtaining configuration information, and performing health checks on endpoints. By using the following procedures, you are able to troubleshoot common component issues. Afterwards, you can search for solutions on the Red Hat Knowledgebase , or file a support ticket with the Red Hat Support team. 6.1. Troubleshooting the Red Hat Quay database The PostgreSQL database used for Red Hat Quay store various types of information related to container images and their management. Some of the key pieces of information that the PostgreSQL database stores includes: Image Metadata . The database stores metadata associated with container images, such as image names, versions, creation timestamps, and the user or organization that owns the image. This information allows for easy identification and organization of container images within the registry. Image Tags . Red Hat Quay allows users to assign tags to container images, enabling convenient labeling and versioning. The PostgreSQL database maintains the mapping between image tags and their corresponding image manifests, allowing users to retrieve specific versions of container images based on the provided tags. Image Layers . Container images are composed of multiple layers, which are stored as individual objects. The database records information about these layers, including their order, checksums, and sizes. This data is crucial for efficient storage and retrieval of container images. User and Organization Data . Red Hat Quay supports user and organization management, allowing users to authenticate and manage access to container images. The PostgreSQL database stores user and organization information, including usernames, email addresses, authentication tokens, and access permissions. Repository Information . Red Hat Quay organizes container images into repositories, which act as logical units for grouping related images. The database maintains repository data, including names, descriptions, visibility settings, and access control information, enabling users to manage and share their repositories effectively. Event Logs . Red Hat Quay tracks various events and activities related to image management and repository operations. These event logs, including image pushes, pulls, deletions, and repository modifications, are stored in the PostgreSQL database, providing an audit trail and allowing administrators to monitor and analyze system activities. The content in this section covers the following procedures: Checking the type of deployment : Determine if the database is deployed as a container on a virtual machine or as a pod on OpenShift Container Platform. Checking the container or pod status : Verify the status of the database pod or container using specific commands based on the deployment type. Examining the database container or pod logs : Access and examine the logs of the database pod or container, including commands for different deployment types. 
Checking the connectivity between Red Hat Quay and the database pod : Check the connectivity between Red Hat Quay and the database pod using relevant commands. Checking the database configuration : Check the database configuration at various levels (OpenShift Container Platform or PostgreSQL level) based on the deployment type. Checking resource allocation : Monitor resource allocation for the Red Hat Quay deployment, including disk usage and other resource usage. Interacting with the Red Hat Quay database : Learn how to interact with the PostgreSQL database, including commands to access and query databases. 6.1.1. Troubleshooting Red Hat Quay database issues Use the following procedures to troubleshoot the PostgreSQL database. 6.1.1.1. Interacting with the Red Hat Quay database Use the following procedure to interact with the PostgreSQL database. Warning Interacting with the PostgreSQL database is potentially destructive. It is highly recommended that you perform the following procedure with the help of a Red Hat Quay Support Specialist. Note Interacting with the PostgreSQL database can also be used to troubleshoot authorization and authentication issues. Procedure Exec into the Red Hat Quay database. Enter the following command to exec into the Red Hat Quay database pod on OpenShift Container Platform: USD oc exec -it <quay_database_pod> -- psql Enter the following command to exec into the Red Hat Quay database on a standalone deployment: USD sudo podman exec -it <quay_container_name> /bin/bash Enter the PostgreSQL shell. Warning Interacting with the PostgreSQL database is potentially destructive. It is highly recommended that you perform the following procedure with the help of a Red Hat Quay Support Specialist. If you are using the Red Hat Quay Operator, enter the following command to enter the PostgreSQL shell: USD oc rsh <quay_pod_name> psql -U your_username -d your_database_name If you are on a standalone Red Hat Quay deployment, enter the following command to enter the PostgreSQL shell: bash-4.4USD psql -U your_username -d your_database_name 6.1.1.2. Troubleshooting crashloopbackoff states Use the following procedure to troubleshoot crashloopbackoff states. Procedure If your container or pod is in a crashloopbackoff state, you can enter the following commands. Enter the following command to scale down the Red Hat Quay Operator: USD oc scale deployment/quay-operator.v3.8.z --replicas=0 Example output deployment.apps/quay-operator.v3.8.z scaled Enter the following command to scale down the Red Hat Quay database: USD oc scale deployment/<quay_database> --replicas=0 Example output deployment.apps/<quay_database> scaled Enter the following command to edit the Red Hat Quay database: Warning Interacting with the PostgreSQL database is potentially destructive. It is highly recommended that you perform the following procedure with the help of a Red Hat Quay Support Specialist. USD oc edit deployment <quay_database> ...
template: metadata: creationTimestamp: null labels: quay-component: <quay_database> quay-operator/quayregistry: quay-operator.v3.8.z spec: containers: - env: - name: POSTGRESQL_USER value: postgres - name: POSTGRESQL_DATABASE value: postgres - name: POSTGRESQL_PASSWORD value: postgres - name: POSTGRESQL_ADMIN_PASSWORD value: postgres - name: POSTGRESQL_MAX_CONNECTIONS value: "1000" image: registry.redhat.io/rhel8/postgresql-10@sha256:a52ad402458ec8ef3f275972c6ebed05ad64398f884404b9bb8e3010c5c95291 imagePullPolicy: IfNotPresent name: postgres command: ["/bin/bash", "-c", "sleep 86400"] 1 ... 1 Add this line in the same indentation. Example output deployment.apps/<quay_database> edited Execute the following command inside your <quay_database> : USD oc exec -it <quay_database> -- cat /var/lib/pgsql/data/userdata/postgresql/logs/* /path/to/desired_directory_on_host 6.1.1.3. Checking the connectivity between Red Hat Quay and the database pod Use the following procedure to check the connectivity between Red Hat Quay and the database pod. Procedure Check the connectivity between Red Hat Quay and the database pod. If you are using the Red Hat Quay Operator on OpenShift Container Platform, enter the following command: USD oc exec -it <quay_pod_name> -- curl -v telnet://<database_pod_name>:5432 If you are using a standalone deployment of Red Hat Quay, enter the following command: USD podman exec -it <quay_container_name> curl -v telnet://<database_container_name>:5432 6.1.1.4. Checking resource allocation Use the following procedure to check resource allocation. Procedure Obtain a list of running containers. Monitor disk usage of your Red Hat Quay deployment. If you are using the Red Hat Quay Operator on OpenShift Container Platform, enter the following command: USD oc exec -it <quay_database_pod_name> -- df -ah If you are using a standalone deployment of Red Hat Quay, enter the following command: USD podman exec -it <quay_database_container_name> df -ah Monitor other resource usage. Enter the following command to check resource allocation on a Red Hat Quay Operator deployment: USD oc adm top pods Enter the following command to check the status of a specific pod on a standalone deployment of Red Hat Quay: USD podman pod stats <pod_name> Enter the following command to check the status of a specific container on a standalone deployment of Red Hat Quay: USD podman stats <container_name> The following information is returned: CPU % . The percentage of CPU usage by the container since the last measurement. This value represents the container's share of the available CPU resources. MEM USAGE / LIMIT . The current memory usage of the container followed by its memory limit. The values are displayed in the format current_usage / memory_limit . For example, 300.4MiB / 7.795GiB indicates that the container is currently using 300.4 megabytes of memory out of a limit of 7.795 gigabytes. MEM % . The percentage of memory usage by the container in relation to its memory limit. NET I/O . The network I/O (input/output) statistics of the container. It displays the amount of data transmitted and received by the container over the network. The values are displayed in the format: transmitted_bytes / received_bytes . BLOCK I/O . The block I/O (input/output) statistics of the container. It represents the amount of data read from and written to the block devices (for example, disks) used by the container. The values are displayed in the format read_bytes / written_bytes . 6.1.2.
Resetting superuser passwords on Red Hat Quay standalone deployments Use the following procedure to reset a superuser's password. Prerequisites You have created a Red Hat Quay superuser. You have installed Python 3.9. You have installed the pip package manager for Python. You have installed the bcrypt package for pip . Procedure Generate a secure, hashed password using the bcrypt package in Python 3.9 by entering the following command: USD python3.9 -c 'import bcrypt; print(bcrypt.hashpw(b"newpass1234", bcrypt.gensalt(12)).decode("utf-8"))' Example output USD2bUSD12USDT8pkgtOoys3G5ut7FV1She6vXlYgU.6TeoGmbbAVQtN8X8ch4knKm Enter the following command to show the container ID of your Red Hat Quay container registry: USD sudo podman ps -a Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 70560beda7aa registry.redhat.io/rhel8/redis-5:1 run-redis 2 hours ago Up 2 hours ago 0.0.0.0:6379->6379/tcp redis 8012f4491d10 registry.redhat.io/quay/quay-rhel8:v3.8.2 registry 3 minutes ago Up 8 seconds ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay 8b35b493ac05 registry.redhat.io/rhel8/postgresql-10:1 run-postgresql 39 seconds ago Up 39 seconds ago 0.0.0.0:5432->5432/tcp postgresql-quay Execute an interactive shell for the postgresql container image by entering the following command: USD sudo podman exec -it 8b35b493ac05 /bin/bash Re-enter the quay PostgreSQL database server, specifying the database, username, and host address: bash-4.4USD psql -d quay -U quayuser -h 192.168.1.28 -W Update the password_hash of the superuser admin who lost their password: quay=> UPDATE public.user SET password_hash = 'USD2bUSD12USDT8pkgtOoys3G5ut7FV1She6vXlYgU.6TeoGmbbAVQtN8X8ch4knKm' where username = 'quayadmin'; Example output UPDATE 1 Enter the following to command to ensure that the password_hash has been updated: quay=> select * from public.user; Example output id | uuid | username | password_hash | email | verified | stripe_id | organization | robot | invoice_email | invalid_login_attempts | last_invalid_login |removed_tag_expiration_s | enabled | invoice_email_address | company | family_name | given_name | location | maximum_queued_builds_count | creation_date | last_accessed ----+--------------------------------------+-----------+--------------------------------------------------------------+-----------------------+--- -------+-----------+--------------+-------+---------------+------------------------+----------------------------+--------------------------+------ ---+-----------------------+---------+-------------+------------+----------+-----------------------------+----------------------------+----------- 1 | 73f04ef6-19ba-41d3-b14d-f2f1eed94a4a | quayadmin | USD2bUSD12USDT8pkgtOoys3G5ut7FV1She6vXlYgU.6TeoGmbbAVQtN8X8ch4knKm | [email protected] | t | | f | f | f | 0 | 2023-02-23 07:54:39.116485 | 1209600 | t | | | | | | | 2023-02-23 07:54:39.116492 Log in to your Red Hat Quay deployment using the new password: USD sudo podman login -u quayadmin -p newpass1234 http://quay-server.example.com --tls-verify=false Example output Login Succeeded! Additional resources For more information, see Resetting Superuser Password for Quay . 6.1.3. Resetting superuser passwords on the Red Hat Quay Operator Prerequisites You have created a Red Hat Quay superuser. You have installed Python 3.9. You have installed the pip package manager for Python. You have installed the bcrypt package for pip . Procedure Log in to your Red Hat Quay deployment. 
On the OpenShift Container Platform UI, navigate to Workloads Secrets . Select the namespace for your Red Hat Quay deployment, for example, Project quay . Locate and store the PostgreSQL database credentials. Generate a secure, hashed password using the bcrypt package in Python 3.9 by entering the following command: USD python3.9 -c 'import bcrypt; print(bcrypt.hashpw(b"newpass1234", bcrypt.gensalt(12)).decode("utf-8"))' Example output USD2bUSD12USDzoilcTG6XQeAoVuDuIZH0..UpvQEZcKh3V6puksQJaUQupHgJ4.4y On the CLI, log in to the database, for example: USD oc rsh quayuser-quay-quay-database-669c8998f-v9qsl Enter the following command to open a connection to the quay PostgreSQL database server, specifying the database, username, and host address: sh-4.4USD psql -U quayuser-quay-quay-database -d quayuser-quay-quay-database -W Enter the following command to connect to the default database for the current user: quay=> \c Update the password_hash of the superuser admin who lost their password: quay=> UPDATE public.user SET password_hash = 'USD2bUSD12USDzoilcTG6XQeAoVuDuIZH0..UpvQEZcKh3V6puksQJaUQupHgJ4.4y' where username = 'quayadmin'; Enter the following to command to ensure that the password_hash has been updated: quay=> select * from public.user; Example output id | uuid | username | password_hash | email | verified | stripe_id | organization | robot | invoice_email | invalid_login_attempts | last_invalid_login |removed_tag_expiration_s | enabled | invoice_email_address | company | family_name | given_name | location | maximum_queued_builds_count | creation_date | last_accessed ----+--------------------------------------+-----------+--------------------------------------------------------------+-----------------------+--- -------+-----------+--------------+-------+---------------+------------------------+----------------------------+--------------------------+------ ---+-----------------------+---------+-------------+------------+----------+-----------------------------+----------------------------+----------- 1 | 73f04ef6-19ba-41d3-b14d-f2f1eed94a4a | quayadmin | USD2bUSD12USDzoilcTG6XQeAoVuDuIZH0..UpvQEZcKh3V6puksQJaUQupHgJ4.4y | [email protected] | t | | f | f | f | 0 | 2023-02-23 07:54:39.116485 | 1209600 | t | | | | | | | 2023-02-23 07:54:39.116492 Navigate to your Red Hat Quay UI on OpenShift Container Platform and log in using the new credentials. 6.2. Troubleshooting Red Hat Quay authentication Authentication and authorization is crucial for secure access to Red Hat Quay. Together, they safeguard sensitive container images, verify user identities, enforce access controls, facilitate auditing and accountability, and enable seamless integration with external identity providers. By prioritizing authentication, organizations can bolster the overall security and integrity of their container registry environment. The following authentication methods are supported by Red Hat Quay: Username and password . Users can authentication by providing their username and password, which are validated against the user database configured in Red Hat Quay. This traditional method requires users to enter their credentials to gain access. OAuth . Red Hat Quay supports OAuth authentication, which allows users to authenticate using their credentials from third party services like Google, GitHub, or Keycloak. OAuth enables a seamless and federated login experience, eliminating the need for separate account creation and simplifying user management. OIDC . 
OpenID Connect enables single sign-on (SSO) capabilities and integration with enterprise identity providers. With OpenID Connect, users can authenticate using their existing organizational credentials, providing a unified authentication experience across various systems and applications. Token-based authentication . Users can obtain unique tokens that grant access to specific resources within Red Hat Quay. Tokens can be obtained through various means, such as OAuth or by generating API tokens within the Red Hat Quay user interface. Token-based authentication is often used for automated or programmatic access to the registry. External identity provider . Red Hat Quay can integrate with external identity providers, such as LDAP or AzureAD, for authentication purposes. This integration allows organizations to use their existing identity management infrastructure, enabling centralized user authentication and reducing the need for separate user databases. 6.2.1. Troubleshooting Red Hat Quay authentication and authorization issues for specific users Use the following procedure to troubleshoot authentication and authorization issues for specific users. Procedure Exec into the Red Hat Quay pod or container. For more information, see "Interacting with the Red Hat Quay database". Enter the following command to show all users for external authentication: quay=# select * from federatedlogin; Example output id | user_id | service_id | service_ident | metadata_json ----+---------+------------+---------------------------------------------+------------------------------------------- 1 | 1 | 3 | testuser0 | {} 2 | 1 | 8 | PK7Zpg2Yu2AnfUKG15hKNXqOXirqUog6G-oE7OgzSWc | {"service_username": "live.com#testuser0"} 3 | 2 | 3 | testuser1 | {} 4 | 2 | 4 | 110875797246250333431 | {"service_username": "testuser1"} 5 | 3 | 3 | testuser2 | {} 6 | 3 | 1 | 26310880 | {"service_username": "testuser2"} (6 rows) Verify that the users are inserted into the user table: quay=# select username, email from "user"; Example output username | email -----------+---------------------- testuser0 | [email protected] testuser1 | [email protected] testuser2 | [email protected] (3 rows) 6.3. Troubleshooting Red Hat Quay object storage Object storage is a type of data storage architecture that manages data as discrete units called objects . Unlike traditional file systems that organize data into hierarchical directories and files, object storage treats data as independent entities with unique identifiers. Each object contains the data itself, along with metadata that describes the object and enables efficient retrieval. Red Hat Quay uses object storage as the underlying storage mechanism for storing and managing container images. It stores container images as individual objects. Each container image is treated as an object, with its own unique identifier and associated metadata. 6.3.1. Troubleshooting Red Hat Quay object storage issues Use the following options to troubleshoot Red Hat Quay object storage issues. Procedure Enter the following command to see what object storage is used: USD oc get quayregistry quay-registry-name -o yaml Ensure that the object storage you are using is officially supported by Red Hat Quay by checking the tested integrations page. Enable debug mode. For more information, see "Running Red Hat Quay in debug mode". Check your object storage configuration in your config.yaml file. Ensure that it is accurate and matches the settings provided by your object storage provider. 
You can check information like access credentials, endpoint URLs, bucket and container names, and other relevant configuration parameters. Ensure that Red Hat Quay has network connectivity to the object storage endpoint. Check the network configurations to ensure that there are no restrictions blocking the communication between Red Hat Quay and the object storage endpoint. If FEATURE_STORAGE_PROXY is enabled in your config.yaml file, check to see if its download URL is accessible. This can be found in the Red Hat Quay debug logs. For example: USD curl -vvv "https://QUAY_HOSTNAME/_storage_proxy/dhaWZKRjlyO......Kuhc=/https/quay.hostname.com/quay-test/datastorage/registry/sha256/0e/0e1d17a1687fa270ba4f52a85c0f0e7958e13d3ded5123c3851a8031a9e55681?AWSAccessKeyId=xxxx&Signature=xxxxxx4%3D&Expires=1676066703" Try access the object storage service outside of Red Hat Quay to determine if the issue is specific to your deployment, or the underlying object storage. You can use command line tools like aws , gsutil , or s3cmd provided by the object storage provider to perform basic operations like listing buckets, containers, or uploading and downloading objects. This might help you isolate the problem. 6.4. Geo-replication Note Currently, the geo-replication feature is not supported on IBM Power and IBM Z. Geo-replication allows multiple, geographically distributed Red Hat Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Red Hat Quay setup. Image data is asynchronously replicated in the background with transparent failover and redirect for clients. Deployments of Red Hat Quay with geo-replication is supported on standalone and Operator deployments. 6.4.1. Troubleshooting geo-replication for Red Hat Quay Use the following sections to troubleshoot geo-replication for Red Hat Quay. 6.4.1.1. Checking data replication in backend buckets Use the following procedure to ensure that your data is properly replicated in all backend buckets. Prerequisites You have installed the aws CLI. Procedure Enter the following command to ensure that your data is replicated in all backend buckets: USD aws --profile quay_prod_s3 --endpoint=http://10.0.x.x:port s3 ls ocp-quay --recursive --human-readable --summarize Example output Total Objects: 17996 Total Size: 514.4 GiB 6.4.1.2. Checking the status of your backend storage Use the following resources to check the status of your backend storage. Amazon Web Service Storage (AWS) . Check the AWS S3 service health status on the AWS Service Health Dashboard . Validate your access to S3 by listing objects in a known bucket using the aws CLI or SDKs. Google Cloud Storage (GCS) . Check the Google Cloud Status Dashboard for the status of the GCS service. Verify your access to GCS by listing objects in a known bucket using the Google Cloud SDK or GCS client libraries. NooBaa . Check the NooBaa management console or administrative interface for any health or status indicators. Ensure that the NooBaa services and related components are running and accessible. Verify access to NooBaa by listing objects in a known bucket using the NooBaa CLI or SDK. Red Hat OpenShift Data Foundation . Check the OpenShift Container Platform Console or management interface for the status of the Red Hat OpenShift Data Foundation components. Verify the availability of Red Hat OpenShift Data Foundation S3 interface and services. 
Ensure that the Red Hat OpenShift Data Foundation services are running and accessible. Validate access to Red Hat OpenShift Data Foundation S3 by listing objects in a known bucket using the appropriate S3-compatible SDK or CLI. Ceph . Check the status of Ceph services, including Ceph monitors, OSDs, and RGWs. Validate that the Ceph cluster is healthy and operational. Verify access to Ceph object storage by listing objects in a known bucket using the appropriate Ceph object storage API or CLI. Azure Blob Storage . Check the Azure Status Dashboard to see the health status of the Azure Blob Storage service. Validate your access to Azure Blob Storage by listing containers or objects using the Azure CLI or Azure SDKs. OpenStack Swift . Check the OpenStack Status page to verify the status of the OpenStack Swift service. Ensure that the Swift services, like the proxy server, container servers, object servers, are running and accessible. Validate your access to Swift by listing containers or objects using the appropriate Swift CLI or SDK. After checking the status of your backend storage, ensure that all Red Hat Quay instances have access to all s3 storage backends. 6.5. Repository mirroring Red Hat Quay repository mirroring lets you mirror images from external container registries, or another local registry, into your Red Hat Quay cluster. Using repository mirroring, you can synchronize images to Red Hat Quay based on repository names and tags. From your Red Hat Quay cluster with repository mirroring enabled, you can perform the following: Choose a repository from an external registry to mirror Add credentials to access the external registry Identify specific container image repository names and tags to sync Set intervals at which a repository is synced Check the current state of synchronization To use the mirroring functionality, you need to perform the following actions: Enable repository mirroring in the Red Hat Quay configuration file Run a repository mirroring worker Create mirrored repositories All repository mirroring configurations can be performed using the configuration tool UI or by the Red Hat Quay API. 6.5.1. Troubleshooting repository mirroring Use the following sections to troubleshoot repository mirroring for Red Hat Quay. 6.5.1.1. Verifying authentication and permissions Ensure that the authentication credentials used for mirroring have the necessary permissions and access rights on both the source and destination Red Hat Quay instances. On the Red Hat Quay UI, check the following settings: The access control settings. Ensure that the user or service account performing the mirroring operation has the required privileges. The permissions of your robot account on the Red Hat Quay registry. 6.6. Clair security scanner 6.6.1. Troubleshooting Clair issue Use the following procedures to troubleshoot Clair. 6.6.1.1. Verifying image compatibility If you are using Clair, ensure that the images you are trying to scan are supported by Clair. Clair has certain requirements and does not support all image formats or configurations. For more information, see Clair vulnerability databases . 6.6.1.2. Allowlisting Clair updaters If you are using Clair behind a proxy configuration, you must allowlist the updaters in your proxy or firewall configuration. For more information about updater URLs, see Clair updater URLs . 6.6.1.3. Updating Clair scanner and its dependencies Ensure that you are using the latest version of Clair security scanner. 
Outdated versions might lack support for newer image formats, or might have known issues. Use the following procedure to check your version of Clair. Note You can also check the Clair logs for errors reported by the updaters microservice. By default, Clair updates the vulnerability database every 30 minutes. Procedure Check your version of Clair. If you are running Clair on the Red Hat Quay Operator, enter the following command: USD oc logs clair-pod If you are running a standalone deployment of Red Hat Quay and using a Clair container, enter the following command: USD podman logs clair-container Example output "level":"info", "component":"main", "version":"v4.5.1", 6.6.1.4. Enabling debug mode for Clair By default, debug mode for Clair is disabled. You can enable debug mode for Clair by updating your Clair config.yaml file. Use the following procedure to enable debug mode for Clair. Procedure Enable debug mode for Clair. If you are running Clair on the Red Hat Quay Operator, enter the following command: USD oc exec -it clair-pod-name -- cat /clair/config.yaml If you are running a standalone deployment of Red Hat Quay and using a Clair container, enter the following command: USD podman exec -it clair-container-name cat /clair/config.yaml Update your Clair config.yaml file to enable debugging: http_listen_addr: :8081 introspection_addr: :8088 log_level: debug 6.6.1.5. Checking Clair configuration Check your Clair config.yaml file to ensure that there are no misconfigurations or inconsistencies that could lead to issues. For more information, see Clair configuration overview . 6.6.1.6. Inspecting image metadata In some cases, you might receive an Unsupported message. This might indicate that the scanner is unable to extract the necessary metadata from the image. Check if the image metadata is properly formatted and accessible. Additional resources For more information, see Troubleshooting Clair . | [
"oc exec -it <quay_database_pod> -- psql",
"sudo podman exec -it <quay_container_name> /bin/bash",
"oc rsh <quay_pod_name> psql -U your_username -d your_database_name",
"bash-4.4USD psql -U your_username -d your_database_name",
"oc scale deployment/quay-operator.v3.8.z --replicas=0",
"deployment.apps/quay-operator.v3.8.z scaled",
"oc scale deployment/<quay_database> --replicas=0",
"deployment.apps/<quay_database> scaled",
"oc edit deployment <quay_database>",
"template: metadata: creationTimestamp: null labels: quay-component: <quay_database> quay-operator/quayregistry: quay-operator.v3.8.z spec: containers: - env: - name: POSTGRESQL_USER value: postgres - name: POSTGRESQL_DATABASE value: postgres - name: POSTGRESQL_PASSWORD value: postgres - name: POSTGRESQL_ADMIN_PASSWORD value: postgres - name: POSTGRESQL_MAX_CONNECTIONS value: \"1000\" image: registry.redhat.io/rhel8/postgresql-10@sha256:a52ad402458ec8ef3f275972c6ebed05ad64398f884404b9bb8e3010c5c95291 imagePullPolicy: IfNotPresent name: postgres command: [\"/bin/bash\", \"-c\", \"sleep 86400\"] 1",
"deployment.apps/<quay_database> edited",
"oc exec -it <quay_database> -- cat /var/lib/pgsql/data/userdata/postgresql/logs/* /path/to/desired_directory_on_host",
"oc exec -it _quay_pod_name_ -- curl -v telnet://<database_pod_name>:5432",
"podman exec -it <quay_container_name >curl -v telnet://<database_container_name>:5432",
"oc exec -it <quay_database_pod_name> -- df -ah",
"podman exec -it <quay_database_conatiner_name> df -ah",
"oc adm top pods",
"podman pod stats <pod_name>",
"podman stats <container_name>",
"python3.9 -c 'import bcrypt; print(bcrypt.hashpw(b\"newpass1234\", bcrypt.gensalt(12)).decode(\"utf-8\"))'",
"USD2bUSD12USDT8pkgtOoys3G5ut7FV1She6vXlYgU.6TeoGmbbAVQtN8X8ch4knKm",
"sudo podman ps -a",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 70560beda7aa registry.redhat.io/rhel8/redis-5:1 run-redis 2 hours ago Up 2 hours ago 0.0.0.0:6379->6379/tcp redis 8012f4491d10 registry.redhat.io/quay/quay-rhel8:v3.8.2 registry 3 minutes ago Up 8 seconds ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay 8b35b493ac05 registry.redhat.io/rhel8/postgresql-10:1 run-postgresql 39 seconds ago Up 39 seconds ago 0.0.0.0:5432->5432/tcp postgresql-quay",
"sudo podman exec -it 8b35b493ac05 /bin/bash",
"bash-4.4USD psql -d quay -U quayuser -h 192.168.1.28 -W",
"quay=> UPDATE public.user SET password_hash = 'USD2bUSD12USDT8pkgtOoys3G5ut7FV1She6vXlYgU.6TeoGmbbAVQtN8X8ch4knKm' where username = 'quayadmin';",
"UPDATE 1",
"quay=> select * from public.user;",
"id | uuid | username | password_hash | email | verified | stripe_id | organization | robot | invoice_email | invalid_login_attempts | last_invalid_login |removed_tag_expiration_s | enabled | invoice_email_address | company | family_name | given_name | location | maximum_queued_builds_count | creation_date | last_accessed ----+--------------------------------------+-----------+--------------------------------------------------------------+-----------------------+--- -------+-----------+--------------+-------+---------------+------------------------+----------------------------+--------------------------+------ ---+-----------------------+---------+-------------+------------+----------+-----------------------------+----------------------------+----------- 1 | 73f04ef6-19ba-41d3-b14d-f2f1eed94a4a | quayadmin | USD2bUSD12USDT8pkgtOoys3G5ut7FV1She6vXlYgU.6TeoGmbbAVQtN8X8ch4knKm | [email protected] | t | | f | f | f | 0 | 2023-02-23 07:54:39.116485 | 1209600 | t | | | | | | | 2023-02-23 07:54:39.116492",
"sudo podman login -u quayadmin -p newpass1234 http://quay-server.example.com --tls-verify=false",
"Login Succeeded!",
"python3.9 -c 'import bcrypt; print(bcrypt.hashpw(b\"newpass1234\", bcrypt.gensalt(12)).decode(\"utf-8\"))'",
"USD2bUSD12USDzoilcTG6XQeAoVuDuIZH0..UpvQEZcKh3V6puksQJaUQupHgJ4.4y",
"oc rsh quayuser-quay-quay-database-669c8998f-v9qsl",
"sh-4.4USD psql -U quayuser-quay-quay-database -d quayuser-quay-quay-database -W",
"quay=> \\c",
"quay=> UPDATE public.user SET password_hash = 'USD2bUSD12USDzoilcTG6XQeAoVuDuIZH0..UpvQEZcKh3V6puksQJaUQupHgJ4.4y' where username = 'quayadmin';",
"quay=> select * from public.user;",
"id | uuid | username | password_hash | email | verified | stripe_id | organization | robot | invoice_email | invalid_login_attempts | last_invalid_login |removed_tag_expiration_s | enabled | invoice_email_address | company | family_name | given_name | location | maximum_queued_builds_count | creation_date | last_accessed ----+--------------------------------------+-----------+--------------------------------------------------------------+-----------------------+--- -------+-----------+--------------+-------+---------------+------------------------+----------------------------+--------------------------+------ ---+-----------------------+---------+-------------+------------+----------+-----------------------------+----------------------------+----------- 1 | 73f04ef6-19ba-41d3-b14d-f2f1eed94a4a | quayadmin | USD2bUSD12USDzoilcTG6XQeAoVuDuIZH0..UpvQEZcKh3V6puksQJaUQupHgJ4.4y | [email protected] | t | | f | f | f | 0 | 2023-02-23 07:54:39.116485 | 1209600 | t | | | | | | | 2023-02-23 07:54:39.116492",
"quay=# select * from federatedlogin;",
"id | user_id | service_id | service_ident | metadata_json ----+---------+------------+---------------------------------------------+------------------------------------------- 1 | 1 | 3 | testuser0 | {} 2 | 1 | 8 | PK7Zpg2Yu2AnfUKG15hKNXqOXirqUog6G-oE7OgzSWc | {\"service_username\": \"live.com#testuser0\"} 3 | 2 | 3 | testuser1 | {} 4 | 2 | 4 | 110875797246250333431 | {\"service_username\": \"testuser1\"} 5 | 3 | 3 | testuser2 | {} 6 | 3 | 1 | 26310880 | {\"service_username\": \"testuser2\"} (6 rows)",
"quay=# select username, email from \"user\";",
"username | email -----------+---------------------- testuser0 | [email protected] testuser1 | [email protected] testuser2 | [email protected] (3 rows)",
"oc get quayregistry quay-registry-name -o yaml",
"curl -vvv \"https://QUAY_HOSTNAME/_storage_proxy/dhaWZKRjlyO......Kuhc=/https/quay.hostname.com/quay-test/datastorage/registry/sha256/0e/0e1d17a1687fa270ba4f52a85c0f0e7958e13d3ded5123c3851a8031a9e55681?AWSAccessKeyId=xxxx&Signature=xxxxxx4%3D&Expires=1676066703\"",
"aws --profile quay_prod_s3 --endpoint=http://10.0.x.x:port s3 ls ocp-quay --recursive --human-readable --summarize",
"Total Objects: 17996 Total Size: 514.4 GiB",
"oc logs clair-pod",
"podman logs clair-container",
"\"level\":\"info\", \"component\":\"main\", \"version\":\"v4.5.1\",",
"oc exec -it clair-pod-name -- cat /clair/config.yaml",
"podman exec -it clair-container-name cat /clair/config.yaml",
"http_listen_addr: :8081 introspection_addr: :8088 log_level: debug"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/troubleshooting_red_hat_quay/troubleshooting-components |
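The database checks described in the chapter above can be collected into a single read-only pass. The sketch below is an illustration only, assuming an Operator deployment; the pod name, database name, and user are placeholders to be taken from the registry's database secret, and it runs only the SELECT statements already shown in the chapter.

# Placeholders: take the real values from the <quay_registry>-quay-database secret.
DB_POD=<quay_database_pod_name>
DB_USER=<quay_database_user>
DB_NAME=<quay_database_name>

# List local users and their email addresses.
oc exec "$DB_POD" -- psql -U "$DB_USER" -d "$DB_NAME" -c 'select username, email from "user";'

# List identities created through external authentication providers.
oc exec "$DB_POD" -- psql -U "$DB_USER" -d "$DB_NAME" -c 'select * from federatedlogin;'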
Chapter 1. Overview of authentication and authorization | Chapter 1. Overview of authentication and authorization 1.1. About authentication in OpenShift Container Platform To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an OpenShift Container Platform cluster, users must first authenticate to the OpenShift Container Platform API in some way. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API. Note If you do not present a valid access token or certificate, your request is unauthenticated and you receive an HTTP 401 error. An administrator can configure authentication through the following tasks: Configuring an identity provider: You can define any supported identity provider in OpenShift Container Platform and add it to your cluster. Configuring the internal OAuth server : The OpenShift Container Platform control plane includes a built-in OAuth server that determines the user's identity from the configured identity provider and creates an access token. You can configure the token duration and inactivity timeout. Note Users can view and manage OAuth tokens owned by them . Registering an OAuth client: OpenShift Container Platform includes several default OAuth clients . You can register and configure additional OAuth clients . Note When users send a request for an OAuth token, they must specify either a default or custom OAuth client that receives and uses the token. Managing cloud provider credentials using the Cloud Credentials Operator : Cluster components use cloud provider credentials to get permissions required to perform cluster-related tasks. Impersonating a system admin user: You can grant cluster administrator permissions to a user by impersonating a system admin user . 1.2. About authorization in OpenShift Container Platform Authorization involves determining whether the identified user has permissions to perform the requested action. Administrators can define permissions and assign them to users using the RBAC objects, such as rules, roles, and bindings . To understand how authorization works in OpenShift Container Platform, see Evaluating authorization . You can also control access to an OpenShift Container Platform cluster through projects and namespaces . Along with controlling user access to a cluster, you can also control the actions a pod can perform and the resources it can access using security context constraints (SCCs) . You can manage authorization for OpenShift Container Platform through the following tasks: Viewing local and cluster roles and bindings. Creating a local role and assigning it to a user or group. Creating a cluster role and assigning it to a user or group: OpenShift Container Platform includes a set of default cluster roles . You can create additional cluster roles and add them to a user or group . Creating a cluster-admin user: By default, your cluster has only one cluster administrator called kubeadmin . You can create another cluster administrator . Before creating a cluster administrator, ensure that you have configured an identity provider. Note After creating the cluster admin user, delete the existing kubeadmin user to improve cluster security. Creating service accounts: Service accounts provide a flexible way to control API access without sharing a regular user's credentials. 
A user can create and use a service account in applications and also as an OAuth client . Scoping tokens : A scoped token is a token that identifies as a specific user who can perform only specific operations. You can create scoped tokens to delegate some of your permissions to another user or a service account. Syncing LDAP groups: You can manage user groups in one place by syncing the groups stored in an LDAP server with the OpenShift Container Platform user groups. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/authentication_and_authorization/overview-of-authentication-authorization |
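Most of the authorization tasks listed above reduce to a handful of oc commands. The following sketch is illustrative only; the project, role, user, and service account names are examples rather than values taken from this overview.

# Create a namespaced role that can read pods in an example project.
oc create role pod-reader --verb=get,list,watch --resource=pods -n example-project

# Bind the local role to an example user.
oc adm policy add-role-to-user pod-reader example-user --role-namespace=example-project -n example-project

# Create a service account for programmatic access to the API.
oc create serviceaccount example-bot -n example-project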
Chapter 5. Reference | Chapter 5. Reference 5.1. Artifact repository mirrors A repository in Maven holds build artifacts and dependencies of various types (all the project jars, library jar, plugins or any other project specific artifacts). It also specifies locations from where to download artifacts from, while performing the S2I build. Besides using central repositories, it is a common practice for organizations to deploy a local custom repository (mirror). Benefits of using a mirror are: Availability of a synchronized mirror, which is geographically closer and faster. Ability to have greater control over the repository content. Possibility to share artifacts across different teams (developers, CI), without the need to rely on public servers and repositories. Improved build times. Often, a repository manager can serve as local cache to a mirror. Assuming that the repository manager is already deployed and reachable externally at http://10.0.0.1:8080/repository/internal/ , the S2I build can then use this manager by supplying the MAVEN_MIRROR_URL environment variable to the build configuration of the application using the following procedure: Procedure Identify the name of the build configuration to apply MAVEN_MIRROR_URL variable against. USD oc get bc -o name buildconfig/sso Update build configuration of sso with a MAVEN_MIRROR_URL environment variable. USD oc set env bc/sso \ -e MAVEN_MIRROR_URL="http://10.0.0.1:8080/repository/internal/" buildconfig "sso" updated Verify the setting. USD oc set env bc/sso --list # buildconfigs sso MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/ Schedule new build of the application. Note During application build, you will notice that Maven dependencies are pulled from the repository manager, instead of the default public repositories. Also, after the build is finished, you will see that the mirror is filled with all the dependencies that were retrieved and used during the build. 5.2. Environment variables 5.2.1. Information environment variables The following information environment variables are designed to convey information about the image and should not be modified by the user: Table 5.1. Information Environment Variables Variable Name Description Example Value AB_JOLOKIA_AUTH_OPENSHIFT - true AB_JOLOKIA_HTTPS - true AB_JOLOKIA_PASSWORD_RANDOM - true JBOSS_IMAGE_NAME Image name, same as "name" label. rh-sso-7/sso76-openshift-rhel8 JBOSS_IMAGE_VERSION Image version, same as "version" label. 7.6 JBOSS_MODULES_SYSTEM_PKGS - org.jboss.logmanager,jdk.nashorn.api 5.2.2. Configuration environment variables Configuration environment variables are designed to conveniently adjust the image without requiring a rebuild, and should be set by the user as desired. Table 5.2. Configuration Environment Variables Variable Name Description Example Value AB_JOLOKIA_AUTH_OPENSHIFT Switch on client authentication for OpenShift TLS communication. The value of this parameter can be a relative distinguished name which must be contained in a presented client's certificate. Enabling this parameter will automatically switch Jolokia into https communication mode. The default CA cert is set to /var/run/secrets/kubernetes.io/serviceaccount/ca.crt . true AB_JOLOKIA_CONFIG If set uses this file (including path) as Jolokia JVM agent properties (as described in Jolokia's reference manual ). If not set, the /opt/jolokia/etc/jolokia.properties file will be created using the settings as defined in this document, otherwise the rest of the settings in this document are ignored. 
/opt/jolokia/custom.properties AB_JOLOKIA_DISCOVERY_ENABLED Enable Jolokia discovery. Defaults to false . true AB_JOLOKIA_HOST Host address to bind to. Defaults to 0.0.0.0 . 127.0.0.1 AB_JOLOKIA_HTTPS Switch on secure communication with https. By default self-signed server certificates are generated if no serverCert configuration is given in AB_JOLOKIA_OPTS . NOTE: If the values is set to an empty string, https is turned off . If the value is set to a non empty string, https is turned on . true AB_JOLOKIA_ID Agent ID to use (USDHOSTNAME by default, which is the container id). openjdk-app-1-xqlsj AB_JOLOKIA_OFF If set disables activation of Jolokia (i.e. echos an empty value). By default, Jolokia is enabled. NOTE: If the values is set to an empty string, https is turned off . If the value is set to a non empty string, https is turned on . true AB_JOLOKIA_OPTS Additional options to be appended to the agent configuration. They should be given in the format "key=value, key=value, ...<200b> " backlog=20 AB_JOLOKIA_PASSWORD Password for basic authentication. By default authentication is switched off. mypassword AB_JOLOKIA_PASSWORD_RANDOM If set, a random value is generated for AB_JOLOKIA_PASSWORD , and it is saved in the /opt/jolokia/etc/jolokia.pw file. true AB_JOLOKIA_PORT Port to use (Default: 8778 ). 5432 AB_JOLOKIA_USER User for basic authentication. Defaults to jolokia . myusername CONTAINER_CORE_LIMIT A calculated core limit as described in CFS Bandwidth Control. 2 GC_ADAPTIVE_SIZE_POLICY_WEIGHT The weighting given to the current Garbage Collection (GC) time versus GC times. 90 GC_MAX_HEAP_FREE_RATIO Maximum percentage of heap free after GC to avoid shrinking. 40 GC_MAX_METASPACE_SIZE The maximum metaspace size. 100 GC_TIME_RATIO_MIN_HEAP_FREE_RATIO Minimum percentage of heap free after GC to avoid expansion. 20 GC_TIME_RATIO Specifies the ratio of the time spent outside the garbage collection (for example, the time spent for application execution) to the time spent in the garbage collection. 4 JAVA_DIAGNOSTICS Set this to get some diagnostics information to standard out when things are happening. true JAVA_INITIAL_MEM_RATIO This is used to calculate a default initial heap memory based the maximal heap memory. The default is 100 which means 100% of the maximal heap is used for the initial heap size. You can skip this mechanism by setting this value to 0 in which case no -Xms option is added. 100 JAVA_MAX_MEM_RATIO It is used to calculate a default maximal heap memory based on a containers restriction. If used in a Docker container without any memory constraints for the container then this option has no effect. If there is a memory constraint then -Xmx is set to a ratio of the container available memory as set here. The default is 50 which means 50% of the available memory is used as an upper boundary. You can skip this mechanism by setting this value to 0 in which case no -Xmx option is added. 40 JAVA_OPTS_APPEND Server startup options. -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp MQ_SIMPLE_DEFAULT_PHYSICAL_DESTINATION For backwards compatability, set to true to use MyQueue and MyTopic as physical destination name defaults instead of queue/MyQueue and topic/MyTopic . false OPENSHIFT_KUBE_PING_LABELS Clustering labels selector. app=sso-app OPENSHIFT_KUBE_PING_NAMESPACE Clustering project namespace. 
myproject SCRIPT_DEBUG If set to true , ensurses that the bash scripts are executed with the -x option, printing the commands and their arguments as they are executed. true SSO_ADMIN_PASSWORD Password of the administrator account for the master realm of the Red Hat Single Sign-On server. Required. If no value is specified, it is auto generated and displayed as an OpenShift Instructional message when the template is instantiated. adm-password SSO_ADMIN_USERNAME Username of the administrator account for the master realm of the Red Hat Single Sign-On server. Required. If no value is specified, it is auto generated and displayed as an OpenShift Instructional message when the template is instantiated. admin SSO_HOSTNAME Custom hostname for the Red Hat Single Sign-On server. Not set by default . If not set, the request hostname SPI provider, which uses the request headers to determine the hostname of the Red Hat Single Sign-On server is used. If set, the fixed hostname SPI provider, with the hostname of the Red Hat Single Sign-On server set to the provided variable value, is used. See dedicated Customizing Hostname for the Red Hat Single Sign-On Server section for additional steps to be performed, when SSO_HOSTNAME variable is set. rh-sso-server.openshift.example.com SSO_REALM Name of the realm to be created in the Red Hat Single Sign-On server if this environment variable is provided. demo SSO_SERVICE_PASSWORD The password for the Red Hat Single Sign-On service user. mgmt-password SSO_SERVICE_USERNAME The username used to access the Red Hat Single Sign-On service. This is used by clients to create the application client(s) within the specified Red Hat Single Sign-On realm. This user is created if this environment variable is provided. sso-mgmtuser SSO_TRUSTSTORE The name of the truststore file within the secret. truststore.jks SSO_TRUSTSTORE_DIR Truststore directory. /etc/sso-secret-volume SSO_TRUSTSTORE_PASSWORD The password for the truststore and certificate. mykeystorepass SSO_TRUSTSTORE_SECRET The name of the secret containing the truststore file. Used for sso-truststore-volume volume. truststore-secret Available application templates for Red Hat Single Sign-On for OpenShift can combine the aforementioned configuration variables with common OpenShift variables (for example APPLICATION_NAME or SOURCE_REPOSITORY_URL ), product specific variables (e.g. HORNETQ_CLUSTER_PASSWORD ), or configuration variables typical to database images (e.g. POSTGRESQL_MAX_CONNECTIONS ) yet. All of these different types of configuration variables can be adjusted as desired to achieve the deployed Red Hat Single Sign-On-enabled application will align with the intended use case as much as possible. The list of configuration variables, available for each category of application templates for Red Hat Single Sign-On-enabled applications, is described below. 5.2.3. Template variables for all Red Hat Single Sign-On images Table 5.3. Configuration Variables Available For All Red Hat Single Sign-On Images Variable Description APPLICATION_NAME The name for the application. DB_MAX_POOL_SIZE Sets xa-pool/max-pool-size for the configured datasource. DB_TX_ISOLATION Sets transaction-isolation for the configured datasource. DB_USERNAME Database user name. HOSTNAME_HTTP Custom hostname for http service route. Leave blank for default hostname, e.g.: <application-name>.<project>.<default-domain-suffix> . HOSTNAME_HTTPS Custom hostname for https service route. 
Leave blank for default hostname, e.g.: <application-name>.<project>.<default-domain-suffix> . HTTPS_KEYSTORE The name of the keystore file within the secret. If defined along with HTTPS_PASSWORD and HTTPS_NAME , enable HTTPS and set the SSL certificate key file to a relative path under USDJBOSS_HOME/standalone/configuration . HTTPS_KEYSTORE_TYPE The type of the keystore file (JKS or JCEKS). HTTPS_NAME The name associated with the server certificate (e.g. jboss ). If defined along with HTTPS_PASSWORD and HTTPS_KEYSTORE , enable HTTPS and set the SSL name. HTTPS_PASSWORD The password for the keystore and certificate (e.g. mykeystorepass ). If defined along with HTTPS_NAME and HTTPS_KEYSTORE , enable HTTPS and set the SSL key password. HTTPS_SECRET The name of the secret containing the keystore file. IMAGE_STREAM_NAMESPACE Namespace in which the ImageStreams for Red Hat Middleware images are installed. These ImageStreams are normally installed in the openshift namespace. You should only need to modify this if you've installed the ImageStreams in a different namespace/project. JGROUPS_CLUSTER_PASSWORD JGroups cluster password. JGROUPS_ENCRYPT_KEYSTORE The name of the keystore file within the secret. JGROUPS_ENCRYPT_NAME The name associated with the server certificate (e.g. secret-key ). JGROUPS_ENCRYPT_PASSWORD The password for the keystore and certificate (e.g. password ). JGROUPS_ENCRYPT_SECRET The name of the secret containing the keystore file. SSO_ADMIN_USERNAME Username of the administrator account for the master realm of the Red Hat Single Sign-On server. Required. If no value is specified, it is auto generated and displayed as an OpenShift instructional message when the template is instantiated. SSO_ADMIN_PASSWORD Password of the administrator account for the master realm of the Red Hat Single Sign-On server. Required. If no value is specified, it is auto generated and displayed as an OpenShift instructional message when the template is instantiated. SSO_REALM Name of the realm to be created in the Red Hat Single Sign-On server if this environment variable is provided. SSO_SERVICE_USERNAME The username used to access the Red Hat Single Sign-On service. This is used by clients to create the application client(s) within the specified Red Hat Single Sign-On realm. This user is created if this environment variable is provided. SSO_SERVICE_PASSWORD The password for the Red Hat Single Sign-On service user. SSO_TRUSTSTORE The name of the truststore file within the secret. SSO_TRUSTSTORE_SECRET The name of the secret containing the truststore file. Used for sso-truststore-volume volume. SSO_TRUSTSTORE_PASSWORD The password for the truststore and certificate. 5.2.4. Template variables specific to sso76-ocp3-postgresql , sso76-ocp4-postgresql , sso76-ocp3-postgresql-persistent , sso76-ocp4-postgresql-persistent , sso76-ocp3-x509-postgresql-persistent , and sso76-ocp4-x509-postgresql-persistent Table 5.4. Configuration Variables Specific To Red Hat Single Sign-On-enabled PostgreSQL Applications With Ephemeral Or Persistent Storage Variable Description DB_USERNAME Database user name. DB_PASSWORD Database user password. DB_JNDI Database JNDI name used by application to resolve the datasource, e.g. java:/jboss/datasources/postgresql POSTGRESQL_MAX_CONNECTIONS The maximum number of client connections allowed. This also sets the maximum number of prepared transactions. POSTGRESQL_SHARED_BUFFERS Configures how much memory is dedicated to PostgreSQL for caching data. 5.2.5. 
Template variables for general eap64 and eap71 S2I images Table 5.5. Configuration Variables For EAP 6.4 and EAP 7 Applications Built Via S2I Variable Description APPLICATION_NAME The name for the application. ARTIFACT_DIR Artifacts directory. AUTO_DEPLOY_EXPLODED Controls whether exploded deployment content should be automatically deployed. CONTEXT_DIR Path within Git project to build; empty for root project directory. GENERIC_WEBHOOK_SECRET Generic build trigger secret. GITHUB_WEBHOOK_SECRET GitHub trigger secret. HORNETQ_CLUSTER_PASSWORD HornetQ cluster administrator password. HORNETQ_QUEUES Queue names. HORNETQ_TOPICS Topic names. HOSTNAME_HTTP Custom host name for http service route. Leave blank for default host name, e.g.: <application-name>.<project>.<default-domain-suffix> . HOSTNAME_HTTPS Custom host name for https service route. Leave blank for default host name, e.g.: <application-name>.<project>.<default-domain-suffix> . HTTPS_KEYSTORE_TYPE The type of the keystore file (JKS or JCEKS). HTTPS_KEYSTORE The name of the keystore file within the secret. If defined along with HTTPS_PASSWORD and HTTPS_NAME , enable HTTPS and set the SSL certificate key file to a relative path under USDJBOSS_HOME/standalone/configuration . HTTPS_NAME The name associated with the server certificate (e.g. jboss ). If defined along with HTTPS_PASSWORD and HTTPS_KEYSTORE , enable HTTPS and set the SSL name. HTTPS_PASSWORD The password for the keystore and certificate (e.g. mykeystorepass ). If defined along with HTTPS_NAME and HTTPS_KEYSTORE , enable HTTPS and set the SSL key password. HTTPS_SECRET The name of the secret containing the keystore file. IMAGE_STREAM_NAMESPACE Namespace in which the ImageStreams for Red Hat Middleware images are installed. These ImageStreams are normally installed in the openshift namespace. You should only need to modify this if you've installed the ImageStreams in a different namespace/project. JGROUPS_CLUSTER_PASSWORD JGroups cluster password. JGROUPS_ENCRYPT_KEYSTORE The name of the keystore file within the secret. JGROUPS_ENCRYPT_NAME The name associated with the server certificate (e.g. secret-key ). JGROUPS_ENCRYPT_PASSWORD The password for the keystore and certificate (e.g. password ). JGROUPS_ENCRYPT_SECRET The name of the secret containing the keystore file. SOURCE_REPOSITORY_REF Git branch/tag reference. SOURCE_REPOSITORY_URL Git source URI for application. 5.2.6. Template variables specific to eap64-sso-s2i and eap71-sso-s2i for automatic client registration Table 5.6. Configuration Variables For EAP 6.4 and EAP 7 Red Hat Single Sign-On-enabled Applications Built Via S2I Variable Description SSO_URL Red Hat Single Sign-On server location. SSO_REALM Name of the realm to be created in the Red Hat Single Sign-On server if this environment variable is provided. SSO_USERNAME The username used to access the Red Hat Single Sign-On service. This is used to create the application client(s) within the specified Red Hat Single Sign-On realm. This should match the SSO_SERVICE_USERNAME specified through one of the sso76- templates. SSO_PASSWORD The password for the Red Hat Single Sign-On service user. SSO_PUBLIC_KEY Red Hat Single Sign-On public key. Public key is recommended to be passed into the template to avoid man-in-the-middle security attacks. SSO_SECRET The Red Hat Single Sign-On client secret for confidential access. SSO_SERVICE_URL Red Hat Single Sign-On service location. SSO_TRUSTSTORE_SECRET The name of the secret containing the truststore file. 
Used for sso-truststore-volume volume. SSO_TRUSTSTORE The name of the truststore file within the secret. SSO_TRUSTSTORE_PASSWORD The password for the truststore and certificate. SSO_BEARER_ONLY Red Hat Single Sign-On client access type. SSO_DISABLE_SSL_CERTIFICATE_VALIDATION If true SSL communication between EAP and the Red Hat Single Sign-On Server is insecure (i.e. certificate validation is disabled with curl) SSO_ENABLE_CORS Enable CORS for Red Hat Single Sign-On applications. 5.2.7. Template variables specific to eap64-sso-s2i and eap71-sso-s2i for automatic client registration with SAML clients Table 5.7. Configuration Variables For EAP 6.4 and EAP 7 Red Hat Single Sign-On-enabled Applications Built Via S2I Using SAML Protocol Variable Description SSO_SAML_CERTIFICATE_NAME The name associated with the server certificate. SSO_SAML_KEYSTORE_PASSWORD The password for the keystore and certificate. SSO_SAML_KEYSTORE The name of the keystore file within the secret. SSO_SAML_KEYSTORE_SECRET The name of the secret containing the keystore file. SSO_SAML_LOGOUT_PAGE Red Hat Single Sign-On logout page for SAML applications. 5.3. Exposed ports Port Number Description 8443 HTTPS 8778 Jolokia monitoring | [
"oc get bc -o name buildconfig/sso",
"oc set env bc/sso -e MAVEN_MIRROR_URL=\"http://10.0.0.1:8080/repository/internal/\" buildconfig \"sso\" updated",
"oc set env bc/sso --list buildconfigs sso MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/"
] | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/red_hat_single_sign-on_for_openshift/reference |
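The template variables above are supplied as -p parameters when a template is instantiated. The following is a minimal sketch, not taken from this guide: the project name and all parameter values are illustrative placeholders, and any parameter left unset falls back to the behaviour described in the tables (for example, SSO_ADMIN_USERNAME and SSO_ADMIN_PASSWORD are auto generated and displayed as an OpenShift instructional message when not provided).

$ oc new-project sso-app-demo
$ oc new-app --template=sso76-ocp4-x509-postgresql-persistent \
    -p SSO_ADMIN_USERNAME=admin \
    -p SSO_ADMIN_PASSWORD=adminPassword123 \
    -p SSO_REALM=demo \
    -p DB_USERNAME=ssodb \
    -p DB_PASSWORD=ssodbPassword123

If the ImageStreams were installed in a namespace other than openshift, also pass -p IMAGE_STREAM_NAMESPACE=<your-namespace>, as described for the IMAGE_STREAM_NAMESPACE variable above.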
Chapter 2. Generating a new dataset with SDG | Chapter 2. Generating a new dataset with SDG After customizing your taxonomy tree, you can generate a synthetic dataset using the Synthetic Data Generation (SDG) process on Red Hat Enterprise Linux AI. SDG is a process that creates an artificially generated dataset that mimics real data based on provided examples. SDG uses a YAML file containing question-and-answer pairs as input data. With these examples, SDG utilizes the mixtral-8x7b-instruct-v0-1 LLM as a teacher model to generate similar question-and-answer pairs. In the SDG pipeline, many questions are generated and scored based on quality, where the mixtral-8x7b-instruct-v0-1 model assesses the quality of these questions. The pipeline then selects the highest-scoring questions, generates corresponding answers, and includes these pairs in the synthetic dataset. 2.1. Creating a synthetic dataset using your examples You can use your examples and run the SDG process to create a synthetic dataset. Prerequisites You installed RHEL AI with the bootable container image. You created a custom qna.yaml file with knowledge data. You downloaded the mixtral-8x7b-instruct-v0-1 teacher model for SDG. You downloaded the skills-adapter-v3:1.2 and knowledge-adapter-v3:1.2 LoRA layered skills and knowledge adapter. You have root user access on your machine. Procedure To generate a new synthetic dataset, based on your custom taxonomy with knowledge, run the following command: USD ilab data generate This command runs SDG with mixtral-8x7B-instruct as the teacher model Note You can use the --enable-serving-output flag when running the ilab data generate command to display the vLLM startup logs. At the start of the SDG process, vLLM attempts to start a server. Example output of vLLM attempting to start a server Starting a temporary vLLM server at http://127.0.0.1:47825/v1 INFO 2024-08-22 17:01:09,461 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 1/120 INFO 2024-08-22 17:01:14,213 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 2/120 INFO 2024-08-22 17:01:19,142 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 3/120 Once vLLM connects, the SDG process starts creating synthetic data from your examples. Example output of vLLM connecting and SDG generating INFO 2024-08-22 15:16:38,933 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:49311/v1, this might take a moment... Attempt: 30/120 INFO 2024-08-22 15:16:43,497 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:49311/v1, this might take a moment... Attempt: 31/120 INFO 2024-08-22 15:16:45,949 instructlab.model.backends.backends:487: vLLM engine successfully started at http://127.0.0.1:49311/v1 Generating synthetic data using '/usr/share/instructlab/sdg/pipelines/agentic' pipeline, '/var/home/cloud-user/.cache/instructlab/models/mixtral-8x7b-instruct-v0-1' model, '/var/home/cloud-user/.local/share/instructlab/taxonomy' taxonomy, against http://127.0.0.1:49311/v1 server INFO 2024-08-22 15:16:46,594 instructlab.sdg:375: Synthesizing new instructions. If you aren't satisfied with the generated instructions, interrupt training (Ctrl-C) and try adjusting your YAML files. 
Adding more examples may help. The SDG process completes when the CLI displays the location of your new data set. Example output of a successful SDG run INFO 2024-09-15 17:12:46,548 instructlab.sdg.datamixing:200: Mixed Dataset saved to /home/example-user/.local/share/instructlab/datasets/skills_train_msgs_2024-08-16T16_50_11.jsonl INFO 2024-09-15 17:12:46,549 instructlab.sdg:438: Generation took 1355.74s Note This process can be time consuming depending on your hardware specifications. Verify the files are created by running the following command: USD ls ~/.local/share/instructlab/datasets/ Example output knowledge_recipe_2024-09-15T20_54_21.yaml skills_recipe_2024-09-15T20_54_21.yaml knowledge_train_msgs_2024-09-15T20_54_21.jsonl skills_train_msgs_2024-09-15T20_54_21.jsonl messages_granite-7b-lab-Q4_K_M_2024-09-15T20_54_21.jsonl node_datasets_2024-09-15T15_12_12/ Important Make a note of your most recent knowledge_train_msgs.jsonl and skills_train_msgs.jsonl file. You need to specify this file during multi-phase training. Each JSONL has the time stamp on the file, for example knowledge_train_msgs_2024-08-08T20_04_28.jsonl , use the most recent file when training. Optional: You can view output of SDG by navigating to the ~/.local/share/datasets/ directory and opening the JSONL file. USD cat ~/.local/share/datasets/<jsonl-dataset> Example output of a SDG JSONL file {"messages":[{"content":"I am, Red Hat\u00ae Instruct Model based on Granite 7B, an AI language model developed by Red Hat and IBM Research, based on the Granite-7b-base language model. My primary function is to be a chat assistant.","role":"system"},{"content":"<|user|>\n### Deep-sky objects\n\nThe constellation does not lie on the [galactic\nplane](galactic_plane \"wikilink\") of the Milky Way, and there are no\nprominent star clusters. [NGC 625](NGC_625 \"wikilink\") is a dwarf\n[irregular galaxy](irregular_galaxy \"wikilink\") of apparent magnitude\n11.0 and lying some 12.7 million light years distant. Only 24000 light\nyears in diameter, it is an outlying member of the [Sculptor\nGroup](Sculptor_Group \"wikilink\"). NGC 625 is thought to have been\ninvolved in a collision and is experiencing a burst of [active star\nformation](Active_galactic_nucleus \"wikilink\"). [NGC\n37](NGC_37 \"wikilink\") is a [lenticular\ngalaxy](lenticular_galaxy \"wikilink\") of apparent magnitude 14.66. It is\napproximately 42 [kiloparsecs](kiloparsecs \"wikilink\") (137,000\n[light-years](light-years \"wikilink\")) in diameter and about 12.9\nbillion years old. [Robert's Quartet](Robert's_Quartet \"wikilink\")\n(composed of the irregular galaxy [NGC 87](NGC_87 \"wikilink\"), and three\nspiral galaxies [NGC 88](NGC_88 \"wikilink\"), [NGC 89](NGC_89 \"wikilink\")\nand [NGC 92](NGC_92 \"wikilink\")) is a group of four galaxies located\naround 160 million light-years away which are in the process of\ncolliding and merging. They are within a circle of radius of 1.6 arcmin,\ncorresponding to about 75,000 light-years. Located in the galaxy ESO\n243-49 is [HLX-1](HLX-1 \"wikilink\"), an [intermediate-mass black\nhole](intermediate-mass_black_hole \"wikilink\")the first one of its kind\nidentified. It is thought to be a remnant of a dwarf galaxy that was\nabsorbed in a [collision](Interacting_galaxy \"wikilink\") with ESO\n243-49. 
Before its discovery, this class of black hole was only\nhypothesized.\n\nLying within the bounds of the constellation is the gigantic [Phoenix\ncluster](Phoenix_cluster \"wikilink\"), which is around 7.3 million light\nyears wide and 5.7 billion light years away, making it one of the most\nmassive [galaxy clusters](galaxy_cluster \"wikilink\"). It was first\ndiscovered in 2010, and the central galaxy is producing an estimated 740\nnew stars a year. Larger still is [El\nGordo](El_Gordo_(galaxy_cluster) \"wikilink\"), or officially ACT-CL\nJ0102-4915, whose discovery was announced in 2012. Located around\n7.2 billion light years away, it is composed of two subclusters in the\nprocess of colliding, resulting in the spewing out of hot gas, seen in\nX-rays and infrared images.\n\n### Meteor showers\n\nPhoenix is the [radiant](radiant_(meteor_shower) \"wikilink\") of two\nannual [meteor showers](meteor_shower \"wikilink\"). The\n[Phoenicids](Phoenicids \"wikilink\"), also known as the December\nPhoenicids, were first observed on 3 December 1887. The shower was\nparticularly intense in December 1956, and is thought related to the\nbreakup of the [short-period comet](short-period_comet \"wikilink\")\n[289P\/Blanpain](289P\/Blanpain \"wikilink\"). It peaks around 45 December,\nthough is not seen every year. A very minor meteor shower peaks\naround July 14 with around one meteor an hour, though meteors can be\nseen anytime from July 3 to 18; this shower is referred to as the July\nPhoenicids.\n\nHow many light years wide is the Phoenix cluster?\n<|assistant|>\n' 'The Phoenix cluster is around 7.3 million light years wide.'","role":"pretraining"}],"metadata":"{\"sdg_document\": \"### Deep-sky objects\\n\\nThe constellation does not lie on the [galactic\\nplane](galactic_plane \\\"wikilink\\\") of the Milky Way, and there are no\\nprominent star clusters. [NGC 625](NGC_625 \\\"wikilink\\\") is a dwarf\\n[irregular galaxy](irregular_galaxy \\\"wikilink\\\") of apparent magnitude\\n11.0 and lying some 12.7 million light years distant. Only 24000 light\\nyears in diameter, it is an outlying member of the [Sculptor\\nGroup](Sculptor_Group \\\"wikilink\\\"). NGC 625 is thought to have been\\ninvolved in a collision and is experiencing a burst of [active star\\nformation](Active_galactic_nucleus \\\"wikilink\\\"). [NGC\\n37](NGC_37 \\\"wikilink\\\") is a [lenticular\\ngalaxy](lenticular_galaxy \\\"wikilink\\\") of apparent magnitude 14.66. It is\\napproximately 42 [kiloparsecs](kiloparsecs \\\"wikilink\\\") (137,000\\n[light-years](light-years \\\"wikilink\\\")) in diameter and about 12.9\\nbillion years old. [Robert's Quartet](Robert's_Quartet \\\"wikilink\\\")\\n(composed of the irregular galaxy [NGC 87](NGC_87 \\\"wikilink\\\"), and three\\nspiral galaxies [NGC 88](NGC_88 \\\"wikilink\\\"), [NGC 89](NGC_89 \\\"wikilink\\\")\\nand [NGC 92](NGC_92 \\\"wikilink\\\")) is a group of four galaxies located\\naround 160 million light-years away which are in the process of\\ncolliding and merging. They are within a circle of radius of 1.6 arcmin,\\ncorresponding to about 75,000 light-years. Located in the galaxy ESO\\n243-49 is [HLX-1](HLX-1 \\\"wikilink\\\"), an [intermediate-mass black\\nhole](intermediate-mass_black_hole \\\"wikilink\\\")\the first one of its kind\\nidentified. It is thought to be a remnant of a dwarf galaxy that was\\nabsorbed in a [collision](Interacting_galaxy \\\"wikilink\\\") with ESO\\n243-49. 
Before its discovery, this class of black hole was only\\nhypothesized.\\n\\nLying within the bounds of the constellation is the gigantic [Phoenix\\ncluster](Phoenix_cluster \\\"wikilink\\\"), which is around 7.3 million light\\nyears wide and 5.7 billion light years away, making it one of the most\\nmassive [galaxy clusters](galaxy_cluster \\\"wikilink\\\"). It was first\\ndiscovered in 2010, and the central galaxy is producing an estimated 740\\nnew stars a year. Larger still is [El\\nGordo](El_Gordo_(galaxy_cluster) \\\"wikilink\\\"), or officially ACT-CL\\nJ0102-4915, whose discovery was announced in 2012. Located around\\n7.2 billion light years away, it is composed of two subclusters in the\\nprocess of colliding, resulting in the spewing out of hot gas, seen in\\nX-rays and infrared images.\\n\\n### Meteor showers\\n\\nPhoenix is the [radiant](radiant_(meteor_shower) \\\"wikilink\\\") of two\\nannual [meteor showers](meteor_shower \\\"wikilink\\\"). The\\n[Phoenicids](Phoenicids \\\"wikilink\\\"), also known as the December\\nPhoenicids, were first observed on 3 December 1887. The shower was\\nparticularly intense in December 1956, and is thought related to the\\nbreakup of the [short-period comet](short-period_comet \\\"wikilink\\\")\\n[289P\/Blanpain](289P\/Blanpain \\\"wikilink\\\"). It peaks around 4\5 December,\\nthough is not seen every year. A very minor meteor shower peaks\\naround July 14 with around one meteor an hour, though meteors can be\\nseen anytime from July 3 to 18; this shower is referred to as the July\\nPhoenicids.\", \"domain\": \"astronomy\", \"dataset\": \"document_knowledge_qa\"}","id":"1df7c219-a062-4511-8bae-f55c88927dc1"} | [
"ilab data generate",
"Starting a temporary vLLM server at http://127.0.0.1:47825/v1 INFO 2024-08-22 17:01:09,461 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 1/120 INFO 2024-08-22 17:01:14,213 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 2/120 INFO 2024-08-22 17:01:19,142 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 3/120",
"INFO 2024-08-22 15:16:38,933 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:49311/v1, this might take a moment... Attempt: 30/120 INFO 2024-08-22 15:16:43,497 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:49311/v1, this might take a moment... Attempt: 31/120 INFO 2024-08-22 15:16:45,949 instructlab.model.backends.backends:487: vLLM engine successfully started at http://127.0.0.1:49311/v1 Generating synthetic data using '/usr/share/instructlab/sdg/pipelines/agentic' pipeline, '/var/home/cloud-user/.cache/instructlab/models/mixtral-8x7b-instruct-v0-1' model, '/var/home/cloud-user/.local/share/instructlab/taxonomy' taxonomy, against http://127.0.0.1:49311/v1 server INFO 2024-08-22 15:16:46,594 instructlab.sdg:375: Synthesizing new instructions. If you aren't satisfied with the generated instructions, interrupt training (Ctrl-C) and try adjusting your YAML files. Adding more examples may help.",
"INFO 2024-09-15 17:12:46,548 instructlab.sdg.datamixing:200: Mixed Dataset saved to /home/example-user/.local/share/instructlab/datasets/skills_train_msgs_2024-08-16T16_50_11.jsonl INFO 2024-09-15 17:12:46,549 instructlab.sdg:438: Generation took 1355.74s",
"ls ~/.local/share/instructlab/datasets/",
"knowledge_recipe_2024-09-15T20_54_21.yaml skills_recipe_2024-09-15T20_54_21.yaml knowledge_train_msgs_2024-09-15T20_54_21.jsonl skills_train_msgs_2024-09-15T20_54_21.jsonl messages_granite-7b-lab-Q4_K_M_2024-09-15T20_54_21.jsonl node_datasets_2024-09-15T15_12_12/",
"cat ~/.local/share/datasets/<jsonl-dataset>",
"{\"messages\":[{\"content\":\"I am, Red Hat\\u00ae Instruct Model based on Granite 7B, an AI language model developed by Red Hat and IBM Research, based on the Granite-7b-base language model. My primary function is to be a chat assistant.\",\"role\":\"system\"},{\"content\":\"<|user|>\\n### Deep-sky objects\\n\\nThe constellation does not lie on the [galactic\\nplane](galactic_plane \\\"wikilink\\\") of the Milky Way, and there are no\\nprominent star clusters. [NGC 625](NGC_625 \\\"wikilink\\\") is a dwarf\\n[irregular galaxy](irregular_galaxy \\\"wikilink\\\") of apparent magnitude\\n11.0 and lying some 12.7 million light years distant. Only 24000 light\\nyears in diameter, it is an outlying member of the [Sculptor\\nGroup](Sculptor_Group \\\"wikilink\\\"). NGC 625 is thought to have been\\ninvolved in a collision and is experiencing a burst of [active star\\nformation](Active_galactic_nucleus \\\"wikilink\\\"). [NGC\\n37](NGC_37 \\\"wikilink\\\") is a [lenticular\\ngalaxy](lenticular_galaxy \\\"wikilink\\\") of apparent magnitude 14.66. It is\\napproximately 42 [kiloparsecs](kiloparsecs \\\"wikilink\\\") (137,000\\n[light-years](light-years \\\"wikilink\\\")) in diameter and about 12.9\\nbillion years old. [Robert's Quartet](Robert's_Quartet \\\"wikilink\\\")\\n(composed of the irregular galaxy [NGC 87](NGC_87 \\\"wikilink\\\"), and three\\nspiral galaxies [NGC 88](NGC_88 \\\"wikilink\\\"), [NGC 89](NGC_89 \\\"wikilink\\\")\\nand [NGC 92](NGC_92 \\\"wikilink\\\")) is a group of four galaxies located\\naround 160 million light-years away which are in the process of\\ncolliding and merging. They are within a circle of radius of 1.6 arcmin,\\ncorresponding to about 75,000 light-years. Located in the galaxy ESO\\n243-49 is [HLX-1](HLX-1 \\\"wikilink\\\"), an [intermediate-mass black\\nhole](intermediate-mass_black_hole \\\"wikilink\\\")the first one of its kind\\nidentified. It is thought to be a remnant of a dwarf galaxy that was\\nabsorbed in a [collision](Interacting_galaxy \\\"wikilink\\\") with ESO\\n243-49. Before its discovery, this class of black hole was only\\nhypothesized.\\n\\nLying within the bounds of the constellation is the gigantic [Phoenix\\ncluster](Phoenix_cluster \\\"wikilink\\\"), which is around 7.3 million light\\nyears wide and 5.7 billion light years away, making it one of the most\\nmassive [galaxy clusters](galaxy_cluster \\\"wikilink\\\"). It was first\\ndiscovered in 2010, and the central galaxy is producing an estimated 740\\nnew stars a year. Larger still is [El\\nGordo](El_Gordo_(galaxy_cluster) \\\"wikilink\\\"), or officially ACT-CL\\nJ0102-4915, whose discovery was announced in 2012. Located around\\n7.2 billion light years away, it is composed of two subclusters in the\\nprocess of colliding, resulting in the spewing out of hot gas, seen in\\nX-rays and infrared images.\\n\\n### Meteor showers\\n\\nPhoenix is the [radiant](radiant_(meteor_shower) \\\"wikilink\\\") of two\\nannual [meteor showers](meteor_shower \\\"wikilink\\\"). The\\n[Phoenicids](Phoenicids \\\"wikilink\\\"), also known as the December\\nPhoenicids, were first observed on 3 December 1887. The shower was\\nparticularly intense in December 1956, and is thought related to the\\nbreakup of the [short-period comet](short-period_comet \\\"wikilink\\\")\\n[289P\\/Blanpain](289P\\/Blanpain \\\"wikilink\\\"). It peaks around 45 December,\\nthough is not seen every year. 
A very minor meteor shower peaks\\naround July 14 with around one meteor an hour, though meteors can be\\nseen anytime from July 3 to 18; this shower is referred to as the July\\nPhoenicids.\\n\\nHow many light years wide is the Phoenix cluster?\\n<|assistant|>\\n' 'The Phoenix cluster is around 7.3 million light years wide.'\",\"role\":\"pretraining\"}],\"metadata\":\"{\\\"sdg_document\\\": \\\"### Deep-sky objects\\\\n\\\\nThe constellation does not lie on the [galactic\\\\nplane](galactic_plane \\\\\\\"wikilink\\\\\\\") of the Milky Way, and there are no\\\\nprominent star clusters. [NGC 625](NGC_625 \\\\\\\"wikilink\\\\\\\") is a dwarf\\\\n[irregular galaxy](irregular_galaxy \\\\\\\"wikilink\\\\\\\") of apparent magnitude\\\\n11.0 and lying some 12.7 million light years distant. Only 24000 light\\\\nyears in diameter, it is an outlying member of the [Sculptor\\\\nGroup](Sculptor_Group \\\\\\\"wikilink\\\\\\\"). NGC 625 is thought to have been\\\\ninvolved in a collision and is experiencing a burst of [active star\\\\nformation](Active_galactic_nucleus \\\\\\\"wikilink\\\\\\\"). [NGC\\\\n37](NGC_37 \\\\\\\"wikilink\\\\\\\") is a [lenticular\\\\ngalaxy](lenticular_galaxy \\\\\\\"wikilink\\\\\\\") of apparent magnitude 14.66. It is\\\\napproximately 42 [kiloparsecs](kiloparsecs \\\\\\\"wikilink\\\\\\\") (137,000\\\\n[light-years](light-years \\\\\\\"wikilink\\\\\\\")) in diameter and about 12.9\\\\nbillion years old. [Robert's Quartet](Robert's_Quartet \\\\\\\"wikilink\\\\\\\")\\\\n(composed of the irregular galaxy [NGC 87](NGC_87 \\\\\\\"wikilink\\\\\\\"), and three\\\\nspiral galaxies [NGC 88](NGC_88 \\\\\\\"wikilink\\\\\\\"), [NGC 89](NGC_89 \\\\\\\"wikilink\\\\\\\")\\\\nand [NGC 92](NGC_92 \\\\\\\"wikilink\\\\\\\")) is a group of four galaxies located\\\\naround 160 million light-years away which are in the process of\\\\ncolliding and merging. They are within a circle of radius of 1.6 arcmin,\\\\ncorresponding to about 75,000 light-years. Located in the galaxy ESO\\\\n243-49 is [HLX-1](HLX-1 \\\\\\\"wikilink\\\\\\\"), an [intermediate-mass black\\\\nhole](intermediate-mass_black_hole \\\\\\\"wikilink\\\\\\\")\\the first one of its kind\\\\nidentified. It is thought to be a remnant of a dwarf galaxy that was\\\\nabsorbed in a [collision](Interacting_galaxy \\\\\\\"wikilink\\\\\\\") with ESO\\\\n243-49. Before its discovery, this class of black hole was only\\\\nhypothesized.\\\\n\\\\nLying within the bounds of the constellation is the gigantic [Phoenix\\\\ncluster](Phoenix_cluster \\\\\\\"wikilink\\\\\\\"), which is around 7.3 million light\\\\nyears wide and 5.7 billion light years away, making it one of the most\\\\nmassive [galaxy clusters](galaxy_cluster \\\\\\\"wikilink\\\\\\\"). It was first\\\\ndiscovered in 2010, and the central galaxy is producing an estimated 740\\\\nnew stars a year. Larger still is [El\\\\nGordo](El_Gordo_(galaxy_cluster) \\\\\\\"wikilink\\\\\\\"), or officially ACT-CL\\\\nJ0102-4915, whose discovery was announced in 2012. Located around\\\\n7.2 billion light years away, it is composed of two subclusters in the\\\\nprocess of colliding, resulting in the spewing out of hot gas, seen in\\\\nX-rays and infrared images.\\\\n\\\\n### Meteor showers\\\\n\\\\nPhoenix is the [radiant](radiant_(meteor_shower) \\\\\\\"wikilink\\\\\\\") of two\\\\nannual [meteor showers](meteor_shower \\\\\\\"wikilink\\\\\\\"). The\\\\n[Phoenicids](Phoenicids \\\\\\\"wikilink\\\\\\\"), also known as the December\\\\nPhoenicids, were first observed on 3 December 1887. 
The shower was\\\\nparticularly intense in December 1956, and is thought related to the\\\\nbreakup of the [short-period comet](short-period_comet \\\\\\\"wikilink\\\\\\\")\\\\n[289P\\/Blanpain](289P\\/Blanpain \\\\\\\"wikilink\\\\\\\"). It peaks around 4\\5 December,\\\\nthough is not seen every year. A very minor meteor shower peaks\\\\naround July 14 with around one meteor an hour, though meteors can be\\\\nseen anytime from July 3 to 18; this shower is referred to as the July\\\\nPhoenicids.\\\", \\\"domain\\\": \\\"astronomy\\\", \\\"dataset\\\": \\\"document_knowledge_qa\\\"}\",\"id\":\"1df7c219-a062-4511-8bae-f55c88927dc1\"}"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html/creating_a_custom_llm_using_rhel_ai/generate_sdg |
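Beyond cat, the generated JSONL files can be sanity checked from the shell. This is a minimal sketch, assuming jq is installed; replace the <timestamp> placeholder with the timestamp on your most recent file:

$ wc -l ~/.local/share/instructlab/datasets/knowledge_train_msgs_<timestamp>.jsonl
$ head -n 1 ~/.local/share/instructlab/datasets/knowledge_train_msgs_<timestamp>.jsonl | jq .

Because each line of a JSONL file is one JSON record, wc -l reports how many synthetic samples were generated, and jq pretty-prints the messages, metadata, and id fields shown in the example output above.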
Assessing RHEL Configuration Issues Using the Red Hat Insights Advisor Service with FedRAMP | Assessing RHEL Configuration Issues Using the Red Hat Insights Advisor Service with FedRAMP Red Hat Insights 1-latest Assess and monitor the configuration issues impacting your RHEL systems Red Hat Customer Content Services | [
"USE [master] GO CREATE LOGIN [assessmentLogin] with PASSWORD= N'<*PASSWORD*>' ALTER SERVER ROLE [sysadmin] ADD MEMBER [assessmentLogin] GO",
"echo \"assessmentLogin\" > /var/opt/mssql/secrets/assessment echo \"<*PASSWORD*>\" >> /var/opt/mssql/secrets/assessment",
"chmod 0600 /var/opt/mssql/secrets/assessment chown mssql:mssql /var/opt/mssql/secrets/assessment",
"yum -y install powershell",
"su mssql -c \"/usr/bin/pwsh -Command Install-Module SqlServer\"",
"/bin/curl -LJ0 -o /opt/mssql/bin/runassessment.ps1 https://raw.githubusercontent.com/microsoft/sql-server-samples/master/samples/manage/sql-assessment-api/RHEL/runassessment.ps1 chown mssql:mssql /opt/mssql/bin/runassessment.ps1 chmod 0700 /opt/mssql/bin/runassessment.ps1",
"mkdir /var/opt/mssql/log/assessments/ chown mssql:mssql /var/opt/mssql/log/assessments/ chmod 0700 /var/opt/mssql/log/assessments/",
"su mssql -c \"pwsh -File /opt/mssql/bin/runassessment.ps1\"",
"insights-client",
"cp mssql-runassessment.service /etc/systemd/system/ cp mssql-runassessment.timer /etc/systemd/system/ chmod 644 /etc/systemd/system/",
"systemctl enable --now mssql-runassessment.timer",
"insights-client --group=<name-you-choose>",
"tags --- group: eastern-sap name: Jane Example contact: [email protected] Zone: eastern time zone Location: - gray_rack - basement Application: SAP",
"tags --- group: eastern-sap location: Boston description: - RHEL8 - SAP key 4: value",
"insights-client",
"vi /etc/insights-client/tags.yaml",
"cat /etc/insights-client/tags.yaml group: redhat location: Brisbane/Australia description: - RHEL8 - SAP security: strict network_performance: latency",
"insights-client",
"insights-client --unregister",
"curl -k --user PORTALUSERNAME https://console.redhat.com/api/inventory/v1/hosts | json_pp > hosts.json",
"yum install perl-JSON-PP",
"curl -k --user PORTALUSERNAME https://console.redhat.com/api/inventory/v1/hosts/f59716a6-5d64-4901-b65f-788b1aee25cc",
"curl -k --user PORTALUSERNAME -X \"DELETE\" https://console.redhat.com/api/inventory/v1/hosts/f59716a6-5d64-4901-b65f-788b1aee25cc"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html-single/assessing_rhel_configuration_issues_using_the_red_hat_insights_advisor_service_with_fedramp/index |
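Building on the curl examples above, the inventory listing can also be filtered on the command line rather than saved to hosts.json. This is a minimal sketch that uses jq in place of json_pp and assumes the default JSON response layout with a results array; PORTALUSERNAME is the same placeholder used in the commands above:

$ curl -k --user PORTALUSERNAME https://console.redhat.com/api/inventory/v1/hosts | jq -r '.results[] | "\(.id)  \(.display_name)"'

The UUID printed in the first column is the host ID expected by the single-host GET and DELETE requests shown above (for example, f59716a6-5d64-4901-b65f-788b1aee25cc).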
Chapter 41. NotifierService | Chapter 41. NotifierService 41.1. GetNotifiers GET /v1/notifiers GetNotifiers returns all notifier configurations. 41.1.1. Description 41.1.2. Parameters 41.1.3. Return Type V1GetNotifiersResponse 41.1.4. Content Type application/json 41.1.5. Responses Table 41.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetNotifiersResponse 0 An unexpected error response. GooglerpcStatus 41.1.6. Samples 41.1.7. Common object reference 41.1.7.1. EmailAuthMethod Enum Values DISABLED PLAIN LOGIN 41.1.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 41.1.7.3. JiraPriorityMapping Field Name Required Nullable Type Description Format severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, priorityName String 41.1.7.4. MicrosoftSentinelClientCertAuthConfig Field Name Required Nullable Type Description Format clientCert String PEM encoded ASN.1 DER format. privateKey String PEM encoded PKCS #8, ASN.1 DER format. 41.1.7.5. MicrosoftSentinelDataCollectionRuleConfig DataCollectionRuleConfig contains information about the data collection rule which is a config per notifier type. Field Name Required Nullable Type Description Format streamName String dataCollectionRuleId String enabled Boolean 41.1.7.6. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 41.1.7.6.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. 
Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 41.1.7.7. StorageAWSSecurityHub Field Name Required Nullable Type Description Format region String credentials StorageAWSSecurityHubCredentials accountId String 41.1.7.8. StorageAWSSecurityHubCredentials Field Name Required Nullable Type Description Format accessKeyId String secretAccessKey String stsEnabled Boolean 41.1.7.9. StorageCSCC Field Name Required Nullable Type Description Format serviceAccount String The service account for the integration. The server will mask the value of this credential in responses and logs. sourceId String wifEnabled Boolean 41.1.7.10. StorageEmail Field Name Required Nullable Type Description Format server String sender String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. disableTLS Boolean DEPRECATEDUseStartTLS Boolean from String startTLSAuthMethod EmailAuthMethod DISABLED, PLAIN, LOGIN, allowUnauthenticatedSmtp Boolean 41.1.7.11. StorageGeneric Field Name Required Nullable Type Description Format endpoint String skipTLSVerify Boolean caCert String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. headers List of StorageKeyValuePair extraFields List of StorageKeyValuePair auditLoggingEnabled Boolean 41.1.7.12. StorageJira Field Name Required Nullable Type Description Format url String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. issueType String priorityMappings List of JiraPriorityMapping defaultFieldsJson String disablePriority Boolean 41.1.7.13. StorageKeyValuePair Field Name Required Nullable Type Description Format key String value String 41.1.7.14. StorageMicrosoftSentinel Field Name Required Nullable Type Description Format logIngestionEndpoint String log_ingestion_endpoint is the log ingestion endpoint. directoryTenantId String directory_tenant_id contains the ID of the Microsoft Directory ID of the selected tenant. applicationClientId String application_client_id contains the ID of the application ID of the service principal. secret String secret contains the client secret. alertDcrConfig MicrosoftSentinelDataCollectionRuleConfig auditLogDcrConfig MicrosoftSentinelDataCollectionRuleConfig clientCertAuthConfig MicrosoftSentinelClientCertAuthConfig 41.1.7.15. StorageNotifier Field Name Required Nullable Type Description Format id String name String type String uiEndpoint String labelKey String labelDefault String jira StorageJira email StorageEmail cscc StorageCSCC splunk StorageSplunk pagerduty StoragePagerDuty generic StorageGeneric sumologic StorageSumoLogic awsSecurityHub StorageAWSSecurityHub syslog StorageSyslog microsoftSentinel StorageMicrosoftSentinel notifierSecret String traits StorageTraits 41.1.7.16. StoragePagerDuty Field Name Required Nullable Type Description Format apiKey String The API key for the integration. 
The server will mask the value of this credential in responses and logs. 41.1.7.17. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 41.1.7.18. StorageSplunk Field Name Required Nullable Type Description Format httpToken String The HTTP token for the integration. The server will mask the value of this credential in responses and logs. httpEndpoint String insecure Boolean truncate String int64 auditLoggingEnabled Boolean derivedSourceType Boolean sourceTypes Map of string 41.1.7.19. StorageSumoLogic Field Name Required Nullable Type Description Format httpSourceAddress String skipTLSVerify Boolean 41.1.7.20. StorageSyslog Field Name Required Nullable Type Description Format localFacility SyslogLocalFacility LOCAL0, LOCAL1, LOCAL2, LOCAL3, LOCAL4, LOCAL5, LOCAL6, LOCAL7, tcpConfig SyslogTCPConfig extraFields List of StorageKeyValuePair messageFormat SyslogMessageFormat LEGACY, CEF, 41.1.7.21. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 41.1.7.22. SyslogLocalFacility Enum Values LOCAL0 LOCAL1 LOCAL2 LOCAL3 LOCAL4 LOCAL5 LOCAL6 LOCAL7 41.1.7.23. SyslogMessageFormat Enum Values LEGACY CEF 41.1.7.24. SyslogTCPConfig Field Name Required Nullable Type Description Format hostname String port Integer int32 skipTlsVerify Boolean useTls Boolean 41.1.7.25. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 41.1.7.26. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. 
Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 41.1.7.27. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 41.1.7.28. V1GetNotifiersResponse Field Name Required Nullable Type Description Format notifiers List of StorageNotifier 41.2. DeleteNotifier DELETE /v1/notifiers/{id} DeleteNotifier removes a notifier configuration given its ID. 41.2.1. Description 41.2.2. Parameters 41.2.2.1. Path Parameters Name Description Required Default Pattern id X null 41.2.2.2. Query Parameters Name Description Required Default Pattern force - null 41.2.3. Return Type Object 41.2.4. Content Type application/json 41.2.5. Responses Table 41.2. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 41.2.6. Samples 41.2.7. Common object reference 41.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 41.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 41.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 41.3. GetNotifier GET /v1/notifiers/{id} GetNotifier returns the notifier configuration given its ID. 41.3.1. Description 41.3.2. Parameters 41.3.2.1. Path Parameters Name Description Required Default Pattern id X null 41.3.3. Return Type StorageNotifier 41.3.4. Content Type application/json 41.3.5. Responses Table 41.3. HTTP Response Codes Code Message Datatype 200 A successful response. StorageNotifier 0 An unexpected error response. GooglerpcStatus 41.3.6. Samples 41.3.7. Common object reference 41.3.7.1. EmailAuthMethod Enum Values DISABLED PLAIN LOGIN 41.3.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 41.3.7.3. JiraPriorityMapping Field Name Required Nullable Type Description Format severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, priorityName String 41.3.7.4. MicrosoftSentinelClientCertAuthConfig Field Name Required Nullable Type Description Format clientCert String PEM encoded ASN.1 DER format. privateKey String PEM encoded PKCS #8, ASN.1 DER format. 41.3.7.5. MicrosoftSentinelDataCollectionRuleConfig DataCollectionRuleConfig contains information about the data collection rule which is a config per notifier type. Field Name Required Nullable Type Description Format streamName String dataCollectionRuleId String enabled Boolean 41.3.7.6. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 41.3.7.6.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. 
However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 41.3.7.7. StorageAWSSecurityHub Field Name Required Nullable Type Description Format region String credentials StorageAWSSecurityHubCredentials accountId String 41.3.7.8. StorageAWSSecurityHubCredentials Field Name Required Nullable Type Description Format accessKeyId String secretAccessKey String stsEnabled Boolean 41.3.7.9. StorageCSCC Field Name Required Nullable Type Description Format serviceAccount String The service account for the integration. The server will mask the value of this credential in responses and logs. sourceId String wifEnabled Boolean 41.3.7.10. StorageEmail Field Name Required Nullable Type Description Format server String sender String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. disableTLS Boolean DEPRECATEDUseStartTLS Boolean from String startTLSAuthMethod EmailAuthMethod DISABLED, PLAIN, LOGIN, allowUnauthenticatedSmtp Boolean 41.3.7.11. StorageGeneric Field Name Required Nullable Type Description Format endpoint String skipTLSVerify Boolean caCert String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. headers List of StorageKeyValuePair extraFields List of StorageKeyValuePair auditLoggingEnabled Boolean 41.3.7.12. StorageJira Field Name Required Nullable Type Description Format url String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. issueType String priorityMappings List of JiraPriorityMapping defaultFieldsJson String disablePriority Boolean 41.3.7.13. StorageKeyValuePair Field Name Required Nullable Type Description Format key String value String 41.3.7.14. StorageMicrosoftSentinel Field Name Required Nullable Type Description Format logIngestionEndpoint String log_ingestion_endpoint is the log ingestion endpoint. directoryTenantId String directory_tenant_id contains the ID of the Microsoft Directory ID of the selected tenant. applicationClientId String application_client_id contains the ID of the application ID of the service principal. secret String secret contains the client secret. alertDcrConfig MicrosoftSentinelDataCollectionRuleConfig auditLogDcrConfig MicrosoftSentinelDataCollectionRuleConfig clientCertAuthConfig MicrosoftSentinelClientCertAuthConfig 41.3.7.15. 
StorageNotifier Field Name Required Nullable Type Description Format id String name String type String uiEndpoint String labelKey String labelDefault String jira StorageJira email StorageEmail cscc StorageCSCC splunk StorageSplunk pagerduty StoragePagerDuty generic StorageGeneric sumologic StorageSumoLogic awsSecurityHub StorageAWSSecurityHub syslog StorageSyslog microsoftSentinel StorageMicrosoftSentinel notifierSecret String traits StorageTraits 41.3.7.16. StoragePagerDuty Field Name Required Nullable Type Description Format apiKey String The API key for the integration. The server will mask the value of this credential in responses and logs. 41.3.7.17. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 41.3.7.18. StorageSplunk Field Name Required Nullable Type Description Format httpToken String The HTTP token for the integration. The server will mask the value of this credential in responses and logs. httpEndpoint String insecure Boolean truncate String int64 auditLoggingEnabled Boolean derivedSourceType Boolean sourceTypes Map of string 41.3.7.19. StorageSumoLogic Field Name Required Nullable Type Description Format httpSourceAddress String skipTLSVerify Boolean 41.3.7.20. StorageSyslog Field Name Required Nullable Type Description Format localFacility SyslogLocalFacility LOCAL0, LOCAL1, LOCAL2, LOCAL3, LOCAL4, LOCAL5, LOCAL6, LOCAL7, tcpConfig SyslogTCPConfig extraFields List of StorageKeyValuePair messageFormat SyslogMessageFormat LEGACY, CEF, 41.3.7.21. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 41.3.7.22. SyslogLocalFacility Enum Values LOCAL0 LOCAL1 LOCAL2 LOCAL3 LOCAL4 LOCAL5 LOCAL6 LOCAL7 41.3.7.23. SyslogMessageFormat Enum Values LEGACY CEF 41.3.7.24. SyslogTCPConfig Field Name Required Nullable Type Description Format hostname String port Integer int32 skipTlsVerify Boolean useTls Boolean 41.3.7.25. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 41.3.7.26. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. 
Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 41.3.7.27. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 41.4. PutNotifier PUT /v1/notifiers/{id} PutNotifier modifies a given notifier, without using stored credential reconciliation. 41.4.1. Description 41.4.2. Parameters 41.4.2.1. Path Parameters Name Description Required Default Pattern id X null 41.4.2.2. Body Parameter Name Description Required Default Pattern body NotifierServicePutNotifierBody X 41.4.3. Return Type Object 41.4.4. Content Type application/json 41.4.5. Responses Table 41.4. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 41.4.6. Samples 41.4.7. Common object reference 41.4.7.1. EmailAuthMethod Enum Values DISABLED PLAIN LOGIN 41.4.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 41.4.7.3. JiraPriorityMapping Field Name Required Nullable Type Description Format severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, priorityName String 41.4.7.4. MicrosoftSentinelClientCertAuthConfig Field Name Required Nullable Type Description Format clientCert String PEM encoded ASN.1 DER format. privateKey String PEM encoded PKCS #8, ASN.1 DER format. 41.4.7.5. MicrosoftSentinelDataCollectionRuleConfig DataCollectionRuleConfig contains information about the data collection rule which is a config per notifier type. Field Name Required Nullable Type Description Format streamName String dataCollectionRuleId String enabled Boolean 41.4.7.6. NotifierServicePutNotifierBody Field Name Required Nullable Type Description Format name String type String uiEndpoint String labelKey String labelDefault String jira StorageJira email StorageEmail cscc StorageCSCC splunk StorageSplunk pagerduty StoragePagerDuty generic StorageGeneric sumologic StorageSumoLogic awsSecurityHub StorageAWSSecurityHub syslog StorageSyslog microsoftSentinel StorageMicrosoftSentinel notifierSecret String traits StorageTraits 41.4.7.7. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 41.4.7.7.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 41.4.7.8. StorageAWSSecurityHub Field Name Required Nullable Type Description Format region String credentials StorageAWSSecurityHubCredentials accountId String 41.4.7.9. StorageAWSSecurityHubCredentials Field Name Required Nullable Type Description Format accessKeyId String secretAccessKey String stsEnabled Boolean 41.4.7.10. StorageCSCC Field Name Required Nullable Type Description Format serviceAccount String The service account for the integration. The server will mask the value of this credential in responses and logs. sourceId String wifEnabled Boolean 41.4.7.11. StorageEmail Field Name Required Nullable Type Description Format server String sender String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. disableTLS Boolean DEPRECATEDUseStartTLS Boolean from String startTLSAuthMethod EmailAuthMethod DISABLED, PLAIN, LOGIN, allowUnauthenticatedSmtp Boolean 41.4.7.12. StorageGeneric Field Name Required Nullable Type Description Format endpoint String skipTLSVerify Boolean caCert String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. 
headers List of StorageKeyValuePair extraFields List of StorageKeyValuePair auditLoggingEnabled Boolean 41.4.7.13. StorageJira Field Name Required Nullable Type Description Format url String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. issueType String priorityMappings List of JiraPriorityMapping defaultFieldsJson String disablePriority Boolean 41.4.7.14. StorageKeyValuePair Field Name Required Nullable Type Description Format key String value String 41.4.7.15. StorageMicrosoftSentinel Field Name Required Nullable Type Description Format logIngestionEndpoint String log_ingestion_endpoint is the log ingestion endpoint. directoryTenantId String directory_tenant_id contains the ID of the Microsoft Directory ID of the selected tenant. applicationClientId String application_client_id contains the ID of the application ID of the service principal. secret String secret contains the client secret. alertDcrConfig MicrosoftSentinelDataCollectionRuleConfig auditLogDcrConfig MicrosoftSentinelDataCollectionRuleConfig clientCertAuthConfig MicrosoftSentinelClientCertAuthConfig 41.4.7.16. StoragePagerDuty Field Name Required Nullable Type Description Format apiKey String The API key for the integration. The server will mask the value of this credential in responses and logs. 41.4.7.17. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 41.4.7.18. StorageSplunk Field Name Required Nullable Type Description Format httpToken String The HTTP token for the integration. The server will mask the value of this credential in responses and logs. httpEndpoint String insecure Boolean truncate String int64 auditLoggingEnabled Boolean derivedSourceType Boolean sourceTypes Map of string 41.4.7.19. StorageSumoLogic Field Name Required Nullable Type Description Format httpSourceAddress String skipTLSVerify Boolean 41.4.7.20. StorageSyslog Field Name Required Nullable Type Description Format localFacility SyslogLocalFacility LOCAL0, LOCAL1, LOCAL2, LOCAL3, LOCAL4, LOCAL5, LOCAL6, LOCAL7, tcpConfig SyslogTCPConfig extraFields List of StorageKeyValuePair messageFormat SyslogMessageFormat LEGACY, CEF, 41.4.7.21. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 41.4.7.22. SyslogLocalFacility Enum Values LOCAL0 LOCAL1 LOCAL2 LOCAL3 LOCAL4 LOCAL5 LOCAL6 LOCAL7 41.4.7.23. SyslogMessageFormat Enum Values LEGACY CEF 41.4.7.24. SyslogTCPConfig Field Name Required Nullable Type Description Format hostname String port Integer int32 skipTlsVerify Boolean useTls Boolean 41.4.7.25. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. 
Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 41.4.7.26. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 41.4.7.27. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 41.5. UpdateNotifier PATCH /v1/notifiers/{notifier.id} UpdateNotifier modifies a given notifier, with optional stored credential reconciliation. 41.5.1. Description 41.5.2. Parameters 41.5.2.1. Path Parameters Name Description Required Default Pattern notifier.id X null 41.5.2.2. Body Parameter Name Description Required Default Pattern body NotifierServiceUpdateNotifierBody X 41.5.3. Return Type Object 41.5.4. Content Type application/json 41.5.5. Responses Table 41.5. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 41.5.6. Samples 41.5.7. Common object reference 41.5.7.1. EmailAuthMethod Enum Values DISABLED PLAIN LOGIN 41.5.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 41.5.7.3. JiraPriorityMapping Field Name Required Nullable Type Description Format severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, priorityName String 41.5.7.4. MicrosoftSentinelClientCertAuthConfig Field Name Required Nullable Type Description Format clientCert String PEM encoded ASN.1 DER format. privateKey String PEM encoded PKCS #8, ASN.1 DER format. 41.5.7.5. MicrosoftSentinelDataCollectionRuleConfig DataCollectionRuleConfig contains information about the data collection rule which is a config per notifier type. Field Name Required Nullable Type Description Format streamName String dataCollectionRuleId String enabled Boolean 41.5.7.6. 
NextTag21 Field Name Required Nullable Type Description Format name String type String uiEndpoint String labelKey String labelDefault String jira StorageJira email StorageEmail cscc StorageCSCC splunk StorageSplunk pagerduty StoragePagerDuty generic StorageGeneric sumologic StorageSumoLogic awsSecurityHub StorageAWSSecurityHub syslog StorageSyslog microsoftSentinel StorageMicrosoftSentinel notifierSecret String traits StorageTraits 41.5.7.7. NotifierServiceUpdateNotifierBody Field Name Required Nullable Type Description Format notifier NextTag21 updatePassword Boolean When false, use the stored credentials of an existing notifier configuration given its ID. 41.5.7.8. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 41.5.7.8.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 41.5.7.9. StorageAWSSecurityHub Field Name Required Nullable Type Description Format region String credentials StorageAWSSecurityHubCredentials accountId String 41.5.7.10. 
StorageAWSSecurityHubCredentials Field Name Required Nullable Type Description Format accessKeyId String secretAccessKey String stsEnabled Boolean 41.5.7.11. StorageCSCC Field Name Required Nullable Type Description Format serviceAccount String The service account for the integration. The server will mask the value of this credential in responses and logs. sourceId String wifEnabled Boolean 41.5.7.12. StorageEmail Field Name Required Nullable Type Description Format server String sender String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. disableTLS Boolean DEPRECATEDUseStartTLS Boolean from String startTLSAuthMethod EmailAuthMethod DISABLED, PLAIN, LOGIN, allowUnauthenticatedSmtp Boolean 41.5.7.13. StorageGeneric Field Name Required Nullable Type Description Format endpoint String skipTLSVerify Boolean caCert String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. headers List of StorageKeyValuePair extraFields List of StorageKeyValuePair auditLoggingEnabled Boolean 41.5.7.14. StorageJira Field Name Required Nullable Type Description Format url String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. issueType String priorityMappings List of JiraPriorityMapping defaultFieldsJson String disablePriority Boolean 41.5.7.15. StorageKeyValuePair Field Name Required Nullable Type Description Format key String value String 41.5.7.16. StorageMicrosoftSentinel Field Name Required Nullable Type Description Format logIngestionEndpoint String log_ingestion_endpoint is the log ingestion endpoint. directoryTenantId String directory_tenant_id contains the ID of the Microsoft Directory ID of the selected tenant. applicationClientId String application_client_id contains the ID of the application ID of the service principal. secret String secret contains the client secret. alertDcrConfig MicrosoftSentinelDataCollectionRuleConfig auditLogDcrConfig MicrosoftSentinelDataCollectionRuleConfig clientCertAuthConfig MicrosoftSentinelClientCertAuthConfig 41.5.7.17. StoragePagerDuty Field Name Required Nullable Type Description Format apiKey String The API key for the integration. The server will mask the value of this credential in responses and logs. 41.5.7.18. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 41.5.7.19. StorageSplunk Field Name Required Nullable Type Description Format httpToken String The HTTP token for the integration. The server will mask the value of this credential in responses and logs. httpEndpoint String insecure Boolean truncate String int64 auditLoggingEnabled Boolean derivedSourceType Boolean sourceTypes Map of string 41.5.7.20. StorageSumoLogic Field Name Required Nullable Type Description Format httpSourceAddress String skipTLSVerify Boolean 41.5.7.21. StorageSyslog Field Name Required Nullable Type Description Format localFacility SyslogLocalFacility LOCAL0, LOCAL1, LOCAL2, LOCAL3, LOCAL4, LOCAL5, LOCAL6, LOCAL7, tcpConfig SyslogTCPConfig extraFields List of StorageKeyValuePair messageFormat SyslogMessageFormat LEGACY, CEF, 41.5.7.22. 
StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 41.5.7.23. SyslogLocalFacility Enum Values LOCAL0 LOCAL1 LOCAL2 LOCAL3 LOCAL4 LOCAL5 LOCAL6 LOCAL7 41.5.7.24. SyslogMessageFormat Enum Values LEGACY CEF 41.5.7.25. SyslogTCPConfig Field Name Required Nullable Type Description Format hostname String port Integer int32 skipTlsVerify Boolean useTls Boolean 41.5.7.26. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 41.5.7.27. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 41.5.7.28. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 41.6. PostNotifier POST /v1/notifiers PostNotifier creates a notifier configuration. 41.6.1. Description 41.6.2. Parameters 41.6.2.1. Body Parameter Name Description Required Default Pattern body StorageNotifier X 41.6.3. Return Type StorageNotifier 41.6.4. Content Type application/json 41.6.5. Responses Table 41.6. HTTP Response Codes Code Message Datatype 200 A successful response. StorageNotifier 0 An unexpected error response. GooglerpcStatus 41.6.6. Samples 41.6.7. Common object reference 41.6.7.1. EmailAuthMethod Enum Values DISABLED PLAIN LOGIN 41.6.7.2. 
GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 41.6.7.3. JiraPriorityMapping Field Name Required Nullable Type Description Format severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, priorityName String 41.6.7.4. MicrosoftSentinelClientCertAuthConfig Field Name Required Nullable Type Description Format clientCert String PEM encoded ASN.1 DER format. privateKey String PEM encoded PKCS #8, ASN.1 DER format. 41.6.7.5. MicrosoftSentinelDataCollectionRuleConfig DataCollectionRuleConfig contains information about the data collection rule which is a config per notifier type. Field Name Required Nullable Type Description Format streamName String dataCollectionRuleId String enabled Boolean 41.6.7.6. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 41.6.7.6.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 41.6.7.7. 
StorageAWSSecurityHub Field Name Required Nullable Type Description Format region String credentials StorageAWSSecurityHubCredentials accountId String 41.6.7.8. StorageAWSSecurityHubCredentials Field Name Required Nullable Type Description Format accessKeyId String secretAccessKey String stsEnabled Boolean 41.6.7.9. StorageCSCC Field Name Required Nullable Type Description Format serviceAccount String The service account for the integration. The server will mask the value of this credential in responses and logs. sourceId String wifEnabled Boolean 41.6.7.10. StorageEmail Field Name Required Nullable Type Description Format server String sender String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. disableTLS Boolean DEPRECATEDUseStartTLS Boolean from String startTLSAuthMethod EmailAuthMethod DISABLED, PLAIN, LOGIN, allowUnauthenticatedSmtp Boolean 41.6.7.11. StorageGeneric Field Name Required Nullable Type Description Format endpoint String skipTLSVerify Boolean caCert String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. headers List of StorageKeyValuePair extraFields List of StorageKeyValuePair auditLoggingEnabled Boolean 41.6.7.12. StorageJira Field Name Required Nullable Type Description Format url String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. issueType String priorityMappings List of JiraPriorityMapping defaultFieldsJson String disablePriority Boolean 41.6.7.13. StorageKeyValuePair Field Name Required Nullable Type Description Format key String value String 41.6.7.14. StorageMicrosoftSentinel Field Name Required Nullable Type Description Format logIngestionEndpoint String log_ingestion_endpoint is the log ingestion endpoint. directoryTenantId String directory_tenant_id contains the ID of the Microsoft Directory ID of the selected tenant. applicationClientId String application_client_id contains the ID of the application ID of the service principal. secret String secret contains the client secret. alertDcrConfig MicrosoftSentinelDataCollectionRuleConfig auditLogDcrConfig MicrosoftSentinelDataCollectionRuleConfig clientCertAuthConfig MicrosoftSentinelClientCertAuthConfig 41.6.7.15. StorageNotifier Field Name Required Nullable Type Description Format id String name String type String uiEndpoint String labelKey String labelDefault String jira StorageJira email StorageEmail cscc StorageCSCC splunk StorageSplunk pagerduty StoragePagerDuty generic StorageGeneric sumologic StorageSumoLogic awsSecurityHub StorageAWSSecurityHub syslog StorageSyslog microsoftSentinel StorageMicrosoftSentinel notifierSecret String traits StorageTraits 41.6.7.16. StoragePagerDuty Field Name Required Nullable Type Description Format apiKey String The API key for the integration. The server will mask the value of this credential in responses and logs. 41.6.7.17. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 41.6.7.18. StorageSplunk Field Name Required Nullable Type Description Format httpToken String The HTTP token for the integration. The server will mask the value of this credential in responses and logs. httpEndpoint String insecure Boolean truncate String int64 auditLoggingEnabled Boolean derivedSourceType Boolean sourceTypes Map of string 41.6.7.19. 
StorageSumoLogic Field Name Required Nullable Type Description Format httpSourceAddress String skipTLSVerify Boolean 41.6.7.20. StorageSyslog Field Name Required Nullable Type Description Format localFacility SyslogLocalFacility LOCAL0, LOCAL1, LOCAL2, LOCAL3, LOCAL4, LOCAL5, LOCAL6, LOCAL7, tcpConfig SyslogTCPConfig extraFields List of StorageKeyValuePair messageFormat SyslogMessageFormat LEGACY, CEF, 41.6.7.21. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 41.6.7.22. SyslogLocalFacility Enum Values LOCAL0 LOCAL1 LOCAL2 LOCAL3 LOCAL4 LOCAL5 LOCAL6 LOCAL7 41.6.7.23. SyslogMessageFormat Enum Values LEGACY CEF 41.6.7.24. SyslogTCPConfig Field Name Required Nullable Type Description Format hostname String port Integer int32 skipTlsVerify Boolean useTls Boolean 41.6.7.25. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 41.6.7.26. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 41.6.7.27. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 41.7. TestNotifier POST /v1/notifiers/test TestNotifier checks if a notifier is correctly configured. 41.7.1. Description 41.7.2. Parameters 41.7.2.1. 
Body Parameter Name Description Required Default Pattern body StorageNotifier X 41.7.3. Return Type Object 41.7.4. Content Type application/json 41.7.5. Responses Table 41.7. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 41.7.6. Samples 41.7.7. Common object reference 41.7.7.1. EmailAuthMethod Enum Values DISABLED PLAIN LOGIN 41.7.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 41.7.7.3. JiraPriorityMapping Field Name Required Nullable Type Description Format severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, priorityName String 41.7.7.4. MicrosoftSentinelClientCertAuthConfig Field Name Required Nullable Type Description Format clientCert String PEM encoded ASN.1 DER format. privateKey String PEM encoded PKCS #8, ASN.1 DER format. 41.7.7.5. MicrosoftSentinelDataCollectionRuleConfig DataCollectionRuleConfig contains information about the data collection rule which is a config per notifier type. Field Name Required Nullable Type Description Format streamName String dataCollectionRuleId String enabled Boolean 41.7.7.6. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 41.7.7.6.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 41.7.7.7. StorageAWSSecurityHub Field Name Required Nullable Type Description Format region String credentials StorageAWSSecurityHubCredentials accountId String 41.7.7.8. StorageAWSSecurityHubCredentials Field Name Required Nullable Type Description Format accessKeyId String secretAccessKey String stsEnabled Boolean 41.7.7.9. StorageCSCC Field Name Required Nullable Type Description Format serviceAccount String The service account for the integration. The server will mask the value of this credential in responses and logs. sourceId String wifEnabled Boolean 41.7.7.10. StorageEmail Field Name Required Nullable Type Description Format server String sender String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. disableTLS Boolean DEPRECATEDUseStartTLS Boolean from String startTLSAuthMethod EmailAuthMethod DISABLED, PLAIN, LOGIN, allowUnauthenticatedSmtp Boolean 41.7.7.11. StorageGeneric Field Name Required Nullable Type Description Format endpoint String skipTLSVerify Boolean caCert String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. headers List of StorageKeyValuePair extraFields List of StorageKeyValuePair auditLoggingEnabled Boolean 41.7.7.12. StorageJira Field Name Required Nullable Type Description Format url String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. issueType String priorityMappings List of JiraPriorityMapping defaultFieldsJson String disablePriority Boolean 41.7.7.13. StorageKeyValuePair Field Name Required Nullable Type Description Format key String value String 41.7.7.14. StorageMicrosoftSentinel Field Name Required Nullable Type Description Format logIngestionEndpoint String log_ingestion_endpoint is the log ingestion endpoint. directoryTenantId String directory_tenant_id contains the ID of the Microsoft Directory ID of the selected tenant. applicationClientId String application_client_id contains the ID of the application ID of the service principal. secret String secret contains the client secret. alertDcrConfig MicrosoftSentinelDataCollectionRuleConfig auditLogDcrConfig MicrosoftSentinelDataCollectionRuleConfig clientCertAuthConfig MicrosoftSentinelClientCertAuthConfig 41.7.7.15. StorageNotifier Field Name Required Nullable Type Description Format id String name String type String uiEndpoint String labelKey String labelDefault String jira StorageJira email StorageEmail cscc StorageCSCC splunk StorageSplunk pagerduty StoragePagerDuty generic StorageGeneric sumologic StorageSumoLogic awsSecurityHub StorageAWSSecurityHub syslog StorageSyslog microsoftSentinel StorageMicrosoftSentinel notifierSecret String traits StorageTraits 41.7.7.16. StoragePagerDuty Field Name Required Nullable Type Description Format apiKey String The API key for the integration. The server will mask the value of this credential in responses and logs. 41.7.7.17. 
StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 41.7.7.18. StorageSplunk Field Name Required Nullable Type Description Format httpToken String The HTTP token for the integration. The server will mask the value of this credential in responses and logs. httpEndpoint String insecure Boolean truncate String int64 auditLoggingEnabled Boolean derivedSourceType Boolean sourceTypes Map of string 41.7.7.19. StorageSumoLogic Field Name Required Nullable Type Description Format httpSourceAddress String skipTLSVerify Boolean 41.7.7.20. StorageSyslog Field Name Required Nullable Type Description Format localFacility SyslogLocalFacility LOCAL0, LOCAL1, LOCAL2, LOCAL3, LOCAL4, LOCAL5, LOCAL6, LOCAL7, tcpConfig SyslogTCPConfig extraFields List of StorageKeyValuePair messageFormat SyslogMessageFormat LEGACY, CEF, 41.7.7.21. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 41.7.7.22. SyslogLocalFacility Enum Values LOCAL0 LOCAL1 LOCAL2 LOCAL3 LOCAL4 LOCAL5 LOCAL6 LOCAL7 41.7.7.23. SyslogMessageFormat Enum Values LEGACY CEF 41.7.7.24. SyslogTCPConfig Field Name Required Nullable Type Description Format hostname String port Integer int32 skipTlsVerify Boolean useTls Boolean 41.7.7.25. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 41.7.7.26. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. 
Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 41.7.7.27. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 41.8. TestUpdatedNotifier POST /v1/notifiers/test/updated TestUpdatedNotifier checks if the given notifier is correctly configured, with optional stored credential reconciliation. 41.8.1. Description 41.8.2. Parameters 41.8.2.1. Body Parameter Name Description Required Default Pattern body V1UpdateNotifierRequest X 41.8.3. Return Type Object 41.8.4. Content Type application/json 41.8.5. Responses Table 41.8. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 41.8.6. Samples 41.8.7. Common object reference 41.8.7.1. EmailAuthMethod Enum Values DISABLED PLAIN LOGIN 41.8.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 41.8.7.3. JiraPriorityMapping Field Name Required Nullable Type Description Format severity StorageSeverity UNSET_SEVERITY, LOW_SEVERITY, MEDIUM_SEVERITY, HIGH_SEVERITY, CRITICAL_SEVERITY, priorityName String 41.8.7.4. MicrosoftSentinelClientCertAuthConfig Field Name Required Nullable Type Description Format clientCert String PEM encoded ASN.1 DER format. privateKey String PEM encoded PKCS #8, ASN.1 DER format. 41.8.7.5. MicrosoftSentinelDataCollectionRuleConfig DataCollectionRuleConfig contains information about the data collection rule which is a config per notifier type. Field Name Required Nullable Type Description Format streamName String dataCollectionRuleId String enabled Boolean 41.8.7.6. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 41.8.7.6.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. 
However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 41.8.7.7. StorageAWSSecurityHub Field Name Required Nullable Type Description Format region String credentials StorageAWSSecurityHubCredentials accountId String 41.8.7.8. StorageAWSSecurityHubCredentials Field Name Required Nullable Type Description Format accessKeyId String secretAccessKey String stsEnabled Boolean 41.8.7.9. StorageCSCC Field Name Required Nullable Type Description Format serviceAccount String The service account for the integration. The server will mask the value of this credential in responses and logs. sourceId String wifEnabled Boolean 41.8.7.10. StorageEmail Field Name Required Nullable Type Description Format server String sender String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. disableTLS Boolean DEPRECATEDUseStartTLS Boolean from String startTLSAuthMethod EmailAuthMethod DISABLED, PLAIN, LOGIN, allowUnauthenticatedSmtp Boolean 41.8.7.11. StorageGeneric Field Name Required Nullable Type Description Format endpoint String skipTLSVerify Boolean caCert String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. headers List of StorageKeyValuePair extraFields List of StorageKeyValuePair auditLoggingEnabled Boolean 41.8.7.12. StorageJira Field Name Required Nullable Type Description Format url String username String password String The password for the integration. The server will mask the value of this credential in responses and logs. issueType String priorityMappings List of JiraPriorityMapping defaultFieldsJson String disablePriority Boolean 41.8.7.13. StorageKeyValuePair Field Name Required Nullable Type Description Format key String value String 41.8.7.14. StorageMicrosoftSentinel Field Name Required Nullable Type Description Format logIngestionEndpoint String log_ingestion_endpoint is the log ingestion endpoint. directoryTenantId String directory_tenant_id contains the ID of the Microsoft Directory ID of the selected tenant. applicationClientId String application_client_id contains the ID of the application ID of the service principal. secret String secret contains the client secret. alertDcrConfig MicrosoftSentinelDataCollectionRuleConfig auditLogDcrConfig MicrosoftSentinelDataCollectionRuleConfig clientCertAuthConfig MicrosoftSentinelClientCertAuthConfig 41.8.7.15. 
StorageNotifier Field Name Required Nullable Type Description Format id String name String type String uiEndpoint String labelKey String labelDefault String jira StorageJira email StorageEmail cscc StorageCSCC splunk StorageSplunk pagerduty StoragePagerDuty generic StorageGeneric sumologic StorageSumoLogic awsSecurityHub StorageAWSSecurityHub syslog StorageSyslog microsoftSentinel StorageMicrosoftSentinel notifierSecret String traits StorageTraits 41.8.7.16. StoragePagerDuty Field Name Required Nullable Type Description Format apiKey String The API key for the integration. The server will mask the value of this credential in responses and logs. 41.8.7.17. StorageSeverity Enum Values UNSET_SEVERITY LOW_SEVERITY MEDIUM_SEVERITY HIGH_SEVERITY CRITICAL_SEVERITY 41.8.7.18. StorageSplunk Field Name Required Nullable Type Description Format httpToken String The HTTP token for the integration. The server will mask the value of this credential in responses and logs. httpEndpoint String insecure Boolean truncate String int64 auditLoggingEnabled Boolean derivedSourceType Boolean sourceTypes Map of string 41.8.7.19. StorageSumoLogic Field Name Required Nullable Type Description Format httpSourceAddress String skipTLSVerify Boolean 41.8.7.20. StorageSyslog Field Name Required Nullable Type Description Format localFacility SyslogLocalFacility LOCAL0, LOCAL1, LOCAL2, LOCAL3, LOCAL4, LOCAL5, LOCAL6, LOCAL7, tcpConfig SyslogTCPConfig extraFields List of StorageKeyValuePair messageFormat SyslogMessageFormat LEGACY, CEF, 41.8.7.21. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 41.8.7.22. SyslogLocalFacility Enum Values LOCAL0 LOCAL1 LOCAL2 LOCAL3 LOCAL4 LOCAL5 LOCAL6 LOCAL7 41.8.7.23. SyslogMessageFormat Enum Values LEGACY CEF 41.8.7.24. SyslogTCPConfig Field Name Required Nullable Type Description Format hostname String port Integer int32 skipTlsVerify Boolean useTls Boolean 41.8.7.25. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 41.8.7.26. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. 
Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either the API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via the API, but not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either the API or declarative configuration. A DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin are cleaned up from the system as soon as they are no longer referenced by other resources. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 41.8.7.27. TraitsVisibility EXPERIMENTAL. visibility allows you to specify whether the object should be visible to certain APIs. Enum Values VISIBLE HIDDEN 41.8.7.28. V1UpdateNotifierRequest Field Name Required Nullable Type Description Format notifier StorageNotifier updatePassword Boolean When false, use the stored credentials of an existing notifier configuration given its ID. | [
"client certificate which is used for authentication",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next Tag: 21",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"client certificate which is used for authentication",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next Tag: 21",
"client certificate which is used for authentication",
"Next Tag: 21",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"client certificate which is used for authentication",
"Next Tag: 21",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"client certificate which is used for authentication",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next Tag: 21",
"client certificate which is used for authentication",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next Tag: 21",
"client certificate which is used for authentication",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next Tag: 21"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/notifierservice |
Chapter 4. Configuring addresses and queues | Chapter 4. Configuring addresses and queues 4.1. Addresses, queues, and routing types In AMQ Broker, the addressing model comprises three main concepts; addresses , queues , and routing types . An address represents a messaging endpoint. Within the configuration, a typical address is given a unique name, one or more queues, and a routing type. A queue is associated with an address. There can be multiple queues per address. Once an incoming message is matched to an address, the message is sent on to one or more of its queues, depending on the routing type configured. Queues can be configured to be automatically created and deleted. You can also configure an address (and hence its associated queues) as durable . Messages in a durable queue can survive a crash or restart of the broker, as long as the messages in the queue are also persistent. By contrast, messages in a non-durable queue do not survive a crash or restart of the broker, even if the messages themselves are persistent. A routing type determines how messages are sent to the queues associated with an address. In AMQ Broker, you can configure an address with two different routing types, as shown in the table. Table 4.1. Address routing types If you want your messages routed to... Use this routing type... A single queue within the matching address, in a point-to-point manner anycast Every queue within the matching address, in a publish-subscribe manner multicast Note An address must have at least one defined routing type. It is possible to define more than one routing type per address, but this is not recommended. If an address does have both routing types defined, and the client does not show a preference for either one, the broker defaults to the multicast routing type. Additional resources For more information about configuring: Point-to-point messaging using the anycast routing type, see Section 4.3, "Configuring addresses for point-to-point messaging" Publish-subscribe messaging using the multicast routing type, see Section 4.4, "Configuring addresses for publish-subscribe messaging" 4.1.1. Address and queue naming requirements Be aware of the following requirements when you configure addresses and queues: To ensure that a client can connect to a queue, regardless of which wire protocol the client uses, your address and queue names should not include any of the following characters: & :: , ? > The number sign ( # ) and asterisk ( * ) characters are reserved for wildcard expressions and should not be used in address and queue names. For more information, see Section 4.2.1, "AMQ Broker wildcard syntax" . Address and queue names should not include spaces. To separate words in an address or queue name, use the configured delimiter character. The default delimiter character is a period ( . ). For more information, see Section 4.2.1, "AMQ Broker wildcard syntax" . 4.2. Applying address settings to sets of addresses In AMQ Broker, you can apply the configuration specified in an address-setting element to a set of addresses by using a wildcard expression to represent the matching address name. The following sections describe how to use wildcard expressions. 4.2.1. AMQ Broker wildcard syntax AMQ Broker uses a specific syntax for representing wildcards in address settings. Wildcards can also be used in security settings, and when creating consumers. A wildcard expression contains words delimited by a period ( . ). 
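Before the wildcard syntax is described further, the following sketch shows how the two routing types summarized in Table 4.1 typically appear to a JMS client: a javax.jms.Queue producer or consumer maps to the anycast routing type, and a javax.jms.Topic maps to the multicast routing type. This is an assumption-based illustration, not an excerpt from this guide: it presumes the AMQ Core Protocol JMS client, a broker listening on tcp://localhost:61616, and the placeholder address names orders and prices.

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class RoutingTypeSketch {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Point-to-point: a JMS Queue corresponds to the anycast routing type,
            // so each message is routed to a single queue and one consumer receives it.
            Queue orders = session.createQueue("orders"); // placeholder address name
            session.createProducer(orders).send(session.createTextMessage("one consumer gets this"));

            // Publish-subscribe: a JMS Topic corresponds to the multicast routing type,
            // so every subscription queue bound to the address receives a copy.
            Topic prices = session.createTopic("prices"); // placeholder address name
            session.createProducer(prices).send(session.createTextMessage("every subscriber gets this"));
        }
    }
}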
The number sign ( # ) and asterisk ( * ) characters also have special meaning and can take the place of a word, as follows: The number sign character means "match any sequence of zero or more words". Use this at the end of your expression. The asterisk character means "match a single word". Use this anywhere within your expression. Matching is not done character by character, but at each delimiter boundary. For example, an address-setting element that is configured to match queues with my in their name would not match with a queue named myqueue . When more than one address-setting element matches an address, the broker overlays configurations, using the configuration of the least specific match as the baseline. Literal expressions are more specific than wildcards, and an asterisk ( * ) is more specific than a number sign ( # ). For example, both my.destination and my.* match the address my.destination . In this case, the broker first applies the configuration found under my.* , since a wildcard expression is less specific than a literal. Next, the broker overlays the configuration of the my.destination address setting element, which overwrites any configuration shared with my.* . For example, given the following configuration, a queue associated with my.destination has max-delivery-attempts set to 3 and last-value-queue set to false . <address-setting match="my.*"> <max-delivery-attempts>3</max-delivery-attempts> <last-value-queue>true</last-value-queue> </address-setting> <address-setting match="my.destination"> <last-value-queue>false</last-value-queue> </address-setting> The examples in the following table illustrate how wildcards are used to match a set of addresses. Table 4.2. Matching addresses using wildcards Example Description # The default address-setting used in broker.xml . Matches every address. You can continue to apply this catch-all, or you can add a new address-setting for each address or group of addresses as the need arises. news.europe.# Matches news.europe , news.europe.sport , news.europe.politics.fr , but not news.usa or europe . news.* Matches news.europe and news.usa , but not news.europe.sport . news.*.sport Matches news.europe.sport and news.usa.sport , but not news.europe.fr.sport . 4.2.2. Configuring a literal match In a literal match, wildcard characters are treated as literal characters to match addresses that contain wildcards. For example, the hash ( # ) character in a literal match can match an address of orders.# without matching addresses such as orders.retail or orders.wholesale . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Before you configure a literal match, use the literal-match-markers parameter to define the characters that delimit a literal match. In the following example, parentheses are used to delimit a literal match. <core> ... <literal-match-markers>()</literal-match-markers> ... </core> After you define the markers that delimit a literal match, specify the match, including the markers, in the address setting match parameter. The following example configures a literal match for an address called orders.# to enable metrics for that specific address. <address-settings> <address-setting match="(orders.#)"> <enable-metrics>true</enable-metrics> </address-setting> </address-settings> 4.2.3. Configuring the broker wildcard syntax The following procedure shows how to customize the syntax used for wildcard addresses. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file.
Add a <wildcard-addresses> section to the configuration, as in the example below. <configuration> <core> ... <wildcard-addresses> <enabled>true</enabled> <delimiter>,</delimiter> <any-words>@</any-words> <single-word>$</single-word> </wildcard-addresses> ... </core> </configuration> enabled When set to true , instructs the broker to use your custom settings. delimiter Provide a custom character to use as the delimiter instead of the default, which is a period ( . ). any-words The character provided as the value for any-words is used to mean 'match any sequence of zero or more words' and will replace the default # . Use this character at the end of your expression. single-word The character provided as the value for single-word is used to mean 'match a single word' and will replace the default * . Use this character anywhere within your expression. 4.3. Configuring addresses for point-to-point messaging Point-to-point messaging is a common scenario in which a message sent by a producer has only one consumer. AMQP and JMS message producers and consumers can make use of point-to-point messaging queues, for example. To ensure that the queues associated with an address receive messages in a point-to-point manner, you define an anycast routing type for the given address element in your broker configuration. When a message is received on an address using anycast , the broker locates the queue associated with the address and routes the message to it. A consumer might then request to consume messages from that queue. If multiple consumers connect to the same queue, messages are distributed between the consumers equally, provided that the consumers are equally able to handle them. The following figure shows an example of point-to-point messaging. Figure 4.1. Point-to-point messaging 4.3.1. Configuring basic point-to-point messaging The following procedure shows how to configure an address with a single queue for point-to-point messaging. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Wrap an anycast configuration element around the chosen queue element of an address. Ensure that the values of the name attribute for both the address and queue elements are the same. For example: <configuration ...> <core ...> ... <address name="my.anycast.destination"> <anycast> <queue name="my.anycast.destination"/> </anycast> </address> </core> </configuration>
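To complement the configuration above, the following client-side example sends to and consumes from my.anycast.destination. It is an illustrative sketch rather than part of the product configuration: it assumes the AMQ Core Protocol JMS client (javax.jms API) and a broker listening on the default acceptor at tcp://localhost:61616.

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class AnycastExample {
    public static void main(String[] args) throws JMSException {
        // Assumes the address and queue "my.anycast.destination" are configured
        // with the anycast routing type, as shown in the preceding procedure.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // A JMS Queue maps to the anycast routing type, so each message
            // is delivered to exactly one consumer.
            Queue queue = session.createQueue("my.anycast.destination");

            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("order-1"));

            MessageConsumer consumer = session.createConsumer(queue);
            TextMessage received = (TextMessage) consumer.receive(5000);
            System.out.println("Received: " + (received == null ? "nothing" : received.getText()));
        }
    }
}

Because my.anycast.destination uses the anycast routing type, attaching a second consumer to the same queue would cause the broker to distribute messages between the consumers rather than copy them.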
<address name="my.anycast.destination"> <anycast> <queue name="q1"/> <queue name="q2"/> </anycast> </address> </core> </configuration> If you have a configuration such as that shown above mirrored across multiple brokers in a cluster, the cluster can load-balance point-to-point messaging in a way that is opaque to producers and consumers. The exact behavior depends on how the message load balancing policy is configured for the cluster. Additional resources For more information about: Specifying Fully Qualified Queue Names, see Section 4.9, "Specifying a fully qualified queue name" . How to configure message load balancing for a broker cluster, see Section 14.1.1, "How broker clusters balance message load" . 4.4. Configuring addresses for publish-subscribe messaging In a publish-subscribe scenario, messages are sent to every consumer subscribed to an address. JMS topics and MQTT subscriptions are two examples of publish-subscribe messaging. To ensure that the queues associated with an address receive messages in a publish-subscribe manner, you define a multicast routing type for the given address element in your broker configuration. When a message is received on an address with a multicast routing type, the broker routes a copy of the message to each queue associated with the address. To reduce the overhead of copying, each queue is sent only a reference to the message, and not a full copy. The following figure shows an example of publish-subscribe messaging. Figure 4.3. Publish-subscribe messaging The following procedure shows how to configure an address for publish-subscribe messaging. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add an empty multicast configuration element to the address. <configuration ...> <core ...> ... <address name="my.multicast.destination"> <multicast/> </address> </core> </configuration> (Optional) Add one or more queue elements to the address and wrap the multicast element around them. This step is typically not needed since the broker automatically creates a queue for each subscription requested by a client. <configuration ...> <core ...> ... <address name="my.multicast.destination"> <multicast> <queue name="client123.my.multicast.destination"/> <queue name="client456.my.multicast.destination"/> </multicast> </address> </core> </configuration> 4.5. Configuring an address for both point-to-point and publish-subscribe messaging You can also configure an address with both point-to-point and publish-subscribe semantics. Configuring an address that uses both point-to-point and publish-subscribe semantics is not typically recommended. However, it can be useful when you want, for example, a JMS queue named orders and a JMS topic also named orders . The different routing types make the addresses appear to be distinct for client connections. In this situation, messages sent by a JMS queue producer use the anycast routing type. Messages sent by a JMS topic producer use the multicast routing type. When a JMS topic consumer connects to the broker, it is attached to its own subscription queue. A JMS queue consumer, however, is attached to the anycast queue. The following figure shows an example of point-to-point and publish-subscribe messaging used together. Figure 4.4. Point-to-point and publish-subscribe messaging The following procedure shows how to configure an address for both point-to-point and publish-subscribe messaging. Note The behavior in this scenario is dependent on the protocol being used. 
For JMS, there is a clear distinction between topic and queue producers and consumers, which makes the logic straightforward. Other protocols like AMQP do not make this distinction. A message being sent via AMQP is routed by both anycast and multicast and consumers default to anycast . For more information, see Chapter 3, Configuring messaging protocols in network connections . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Wrap an anycast configuration element around the queue elements in the address element. For example: <configuration ...> <core ...> ... <address name="orders"> <anycast> <queue name="orders"/> </anycast> </address> </core> </configuration> Add an empty multicast configuration element to the address. <configuration ...> <core ...> ... <address name="orders"> <anycast> <queue name="orders"/> </anycast> <multicast/> </address> </core> </configuration> Note Typically, the broker creates subscription queues on demand, so there is no need to list specific queue elements inside the multicast element. 4.6. Adding a routing type to an acceptor configuration Normally, if a message is received by an address that uses both anycast and multicast , one of the anycast queues receives the message and all of the multicast queues. However, clients can specify a special prefix when connecting to an address to specify whether to connect using anycast or multicast . The prefixes are custom values that are designated using the anycastPrefix and multicastPrefix parameters within the URL of an acceptor in the broker configuration. The following procedure shows how to configure prefixes for a given acceptor. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For a given acceptor, to configure an anycast prefix, add anycastPrefix to the configured URL. Set a custom value. For example: <configuration ...> <core ...> ... <acceptors> <!-- Acceptor for every supported protocol --> <acceptor name="artemis">tcp://0.0.0.0:61616?protocols=AMQP;anycastPrefix=anycast://</acceptor> </acceptors> ... </core> </configuration> Based on the preceding configuration, the acceptor is configured to use anycast:// for the anycast prefix. Client code can specify anycast://<my.destination>/ if the client needs to send a message to only one of the anycast queues. For a given acceptor, to configure a multicast prefix, add multicastPrefix to the configured URL. Set a custom value. For example: <configuration ...> <core ...> ... <acceptors> <!-- Acceptor for every supported protocol --> <acceptor name="artemis">tcp://0.0.0.0:61616?protocols=AMQP;multicastPrefix=multicast://</acceptor> </acceptors> ... </core> </configuration> Based on the preceding configuration, the acceptor is configured to use multicast:// for the multicast prefix. Client code can specify multicast://<my.destination>/ if the client needs the message sent to only the multicast queues. 4.7. Configuring subscription queues In most cases, it is not necessary to manually create subscription queues because protocol managers create subscription queues automatically when clients first request to subscribe to an address. See Section 4.8.3, "Protocol managers and addresses" for more information. For durable subscriptions, the generated queue name is usually a concatenation of the client ID and the address. The following sections show how to manually create subscription queues, when required. 4.7.1. 
Configuring a durable subscription queue When a queue is configured as a durable subscription, the broker saves messages for any inactive subscribers and delivers them to the subscribers when they reconnect. Therefore, a client is guaranteed to receive each message delivered to the queue after subscribing to it. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the durable configuration element to a chosen queue. Set a value of true . <configuration ...> <core ...> ... <address name="my.durable.address"> <multicast> <queue name="q1"> <durable>true</durable> </queue> </multicast> </address> </core> </configuration> Note Because queues are durable by default, including the durable element and setting the value to true is not strictly necessary to create a durable queue. However, explicitly including the element enables you to later change the behavior of the queue to non-durable, if necessary. 4.7.2. Configuring a non-shared durable subscription queue The broker can be configured to prevent more than one consumer from connecting to a queue at any one time. Therefore, subscriptions to queues configured this way are regarded as "non-shared". Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the durable configuration element to each chosen queue. Set a value of true . <configuration ...> <core ...> ... <address name="my.non.shared.durable.address"> <multicast> <queue name="orders1"> <durable>true</durable> </queue> <queue name="orders2"> <durable>true</durable> </queue> </multicast> </address> </core> </configuration> Note Because queues are durable by default, including the durable element and setting the value to true is not strictly necessary to create a durable queue. However, explicitly including the element enables you to later change the behavior of the queue to non-durable, if necessary. Add the max-consumers attribute to each chosen queue. Set a value of 1 . <configuration ...> <core ...> ... <address name="my.non.shared.durable.address"> <multicast> <queue name="orders1" max-consumers="1"> <durable>true</durable> </queue> <queue name="orders2" max-consumers="1"> <durable>true</durable> </queue> </multicast> </address> </core> </configuration> 4.7.3. Configuring a non-durable subscription queue Non-durable subscriptions are usually managed by the relevant protocol manager, which creates and deletes temporary queues. However, if you want to manually create a queue that behaves like a non-durable subscription queue, you can use the purge-on-no-consumers attribute on the queue. When purge-on-no-consumers is set to true , the queue does not start receiving messages until a consumer is connected. In addition, when the last consumer is disconnected from the queue, the queue is purged (that is, its messages are removed). The queue does not receive any further messages until a new consumer is connected to the queue. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the purge-on-no-consumers attribute to each chosen queue. Set a value of true . <configuration ...> <core ...> ... <address name="my.non.durable.address"> <multicast> <queue name="orders1" purge-on-no-consumers="true"/> </multicast> </address> </core> </configuration> 4.8. Creating and deleting addresses and queues automatically You can configure the broker to automatically create addresses and queues, and to delete them after they are no longer in use. 
This saves you from having to pre-configure each address before a client can connect to it. 4.8.1. Configuration options for automatic queue creation and deletion The following table lists the configuration elements available when configuring an address-setting element to automatically create and delete queues and addresses. Table 4.3. Configuration elements to automatically create and delete queues and addresses If you want the address-setting to... Add this configuration... Create addresses when a client sends a message to or attempts to consume a message from a queue mapped to an address that does not exist. auto-create-addresses Create a queue when a client sends a message to or attempts to consume a message from a queue. auto-create-queues Delete an automatically created address when it no longer has any queues. auto-delete-addresses Delete an automatically created queue when the queue has 0 consumers and 0 messages. auto-delete-queues Use a specific routing type if the client does not specify one. default-address-routing-type 4.8.2. Configuring automatic creation and deletion of addresses and queues The following procedure shows how to configure automatic creation and deletion of addresses and queues. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Configure an address-setting for automatic creation and deletion. The following example uses all of the configuration elements mentioned in the table. <configuration ...> <core ...> ... <address-settings> <address-setting match="activemq.#"> <auto-create-addresses>true</auto-create-addresses> <auto-delete-addresses>true</auto-delete-addresses> <auto-create-queues>true</auto-create-queues> <auto-delete-queues>true</auto-delete-queues> <default-address-routing-type>ANYCAST</default-address-routing-type> </address-setting> </address-settings> ... </core> </configuration> address-setting The configuration of the address-setting element is applied to any address or queue that matches the wildcard address activemq.# . auto-create-addresses When a client requests to connect to an address that does not yet exist, the broker creates the address. auto-delete-addresses An automatically created address is deleted when it no longer has any queues associated with it. auto-create-queues When a client requests to connect to a queue that does not yet exist, the broker creates the queue. auto-delete-queues An automatically created queue is deleted when it no longer has any consumers or messages. default-address-routing-type If the client does not specify a routing type when connecting, the broker uses ANYCAST when delivering messages to an address. The default value is MULTICAST . Additional resources For more information about: The wildcard syntax that you can use when configuring addresses, see Section 4.2, "Applying address settings to sets of addresses" . Routing types, see Section 4.1, "Addresses, queues, and routing types" . 4.8.3. Protocol managers and addresses A component called a protocol manager maps protocol-specific concepts to concepts used in the AMQ Broker address model; queues and routing types. In certain situations, a protocol manager might automatically create queues on the broker. For example, when a client sends an MQTT subscription packet with the addresses /house/room1/lights and /house/room2/lights , the MQTT protocol manager understands that the two addresses require multicast semantics. Therefore, the protocol manager first looks to ensure that multicast is enabled for both addresses. 
If not, it attempts to dynamically create them. If successful, the protocol manager then creates special subscription queues for each subscription requested by the client. Each protocol behaves slightly differently. The table below describes what typically happens when subscribe frames to various types of queue are requested. Table 4.4. Protocol manager actions for different queue types If the queue is of this type... The typical action for a protocol manager is to... Durable subscription queue Look for the appropriate address and ensures that multicast semantics is enabled. It then creates a special subscription queue with the client ID and the address as its name and multicast as its routing type. The special name allows the protocol manager to quickly identify the required client subscription queues should the client disconnect and reconnect at a later date. When the client unsubscribes the queue is deleted. Temporary subscription queue Look for the appropriate address and ensures that multicast semantics is enabled. It then creates a queue with a random (read UUID) name under this address with multicast routing type. When the client disconnects the queue is deleted. Point-to-point queue Look for the appropriate address and ensures that anycast routing type is enabled. If it is, it aims to locate a queue with the same name as the address. If it does not exist, it looks for the first queue available. It this does not exist then it automatically creates the queue (providing auto create is enabled). The queue consumer is bound to this queue. If the queue is auto created, it is automatically deleted once there are no consumers and no messages in it. 4.9. Specifying a fully qualified queue name Internally, the broker maps a client's request for an address to specific queues. The broker decides on behalf of the client to which queues to send messages, or from which queue to receive messages. However, more advanced use cases might require that the client specifies a queue name directly. In these situations the client can use a fully qualified queue name (FQQN). An FQQN includes both the address name and the queue name, separated by a :: . The following procedure shows how to specify an FQQN when connecting to an address with multiple queues. Prerequisites You have an address configured with two or more queues, as shown in the example below. <configuration ...> <core ...> ... <addresses> <address name="my.address"> <anycast> <queue name="q1" /> <queue name="q2" /> </anycast> </address> </addresses> </core> </configuration> Procedure In the client code, use both the address name and the queue name when requesting a connection from the broker. Use two colons, :: , to separate the names. For example: String FQQN = "my.address::q1"; Queue q1 session.createQueue(FQQN); MessageConsumer consumer = session.createConsumer(q1); 4.10. Configuring sharded queues A common pattern for processing of messages across a queue where only partial ordering is required is to use queue sharding . This means that you define an anycast address that acts as a single logical queue, but which is backed by many underlying physical queues. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add an address element and set the name attribute. For example: <configuration ...> <core ...> ... <addresses> <address name="my.sharded.address"></address> </addresses> </core> </configuration> Add the anycast routing type and include the desired number of sharded queues. 
In the example below, the queues q1 , q2 , and q3 are added as anycast destinations. <configuration ...> <core ...> ... <addresses> <address name="my.sharded.address"> <anycast> <queue name="q1" /> <queue name="q2" /> <queue name="q3" /> </anycast> </address> </addresses> </core> </configuration> Based on the preceding configuration, messages sent to my.sharded.address are distributed equally across q1 , q2 and q3 . Clients are able to connect directly to a specific physical queue when using a Fully Qualified Queue Name (FQQN). and receive messages sent to that specific queue only. To tie particular messages to a particular queue, clients can specify a message group for each message. The broker routes grouped messages to the same queue, and one consumer processes them all. Additional resources For more information about: Fully Qualified Queue Names, see Section 4.9, "Specifying a fully qualified queue name" Message grouping, see Using message groups in the AMQ Core Protocol JMS documentation. 4.11. Configuring last value queues A last value queue is a type of queue that discards messages in the queue when a newer message with the same last value key value is placed in the queue. Through this behavior, last value queues retain only the last values for messages of the same key. Note Last-value queues do not work as expected if messages sent to the queues are paged. Set the value of the address-full-policy parameter for addresses that have last-value queues to DROP , BLOCK or FAIL to ensure that messages sent to these queues are not paged. For more information, see Section 7.2, "Configuring message dropping" . A simple use case for a last value queue is for monitoring stock prices, where only the latest value for a particular stock is of interest. Note If a message without a configured last value key is sent to a last value queue, the broker handles this message as a "normal" message. Such messages are not purged from the queue when a new message with a configured last value key arrives. You can configure last value queues individually, or for all of the queues associated with a set of addresses. The following procedures show how to configure last value queues in these ways. 4.11.1. Configuring last value queues individually The following procedure shows to configure last value queues individually. Open the <broker_instance_dir> /etc/broker.xml configuration file. For a given queue, add the last-value-key key and specify a custom value. For example: <address name="my.address"> <multicast> <queue name="prices1" last-value-key="stock_ticker"/> </multicast> </address> Alternatively, you can configure a last value queue that uses the default last value key name of _AMQ_LVQ_NAME . To do this, add the last-value key to a given queue. Set the value to true . For example: <address name="my.address"> <multicast> <queue name="prices1" last-value="true"/> </multicast> </address> 4.11.2. Configuring last value queues for addresses The following procedure shows to configure last value queues for an address or set of addresses. Open the <broker_instance_dir> /etc/broker.xml configuration file. In the address-setting element, for a matching address, add default-last-value-key . Specify a custom value. For example: <address-setting match="lastValue"> <default-last-value-key>stock_ticker</default-last-value-key> </address-setting> Based on the preceding configuration, all queues associated with the lastValue address use a last value key of stock_ticker . 
By default, the value of default-last-value-key is not set. To configure last value queues for a set of addresses, you can specify an address wildcard. For example: <address-setting match="lastValue.*"> <default-last-value-key>stock_ticker</default-last-value-key> </address-setting> Alternatively, you can configure all queues associated with an address or set of addresses to use the default last value key name of _AMQ_LVQ_NAME . To do this, add default-last-value-queue instead of default-last-value-key . Set the value to true . For example: <address-setting match="lastValue"> <default-last-value-queue>true</default-last-value-queue> </address-setting> Additional resources For more information about the wildcard syntax that you can use when configuring addresses, see Section 4.2, "Applying address settings to sets of addresses" . 4.11.3. Example of last value queue behavior This example shows the behavior of a last value queue. In your broker.xml configuration file, suppose that you have added configuration that looks like the following: <address name="my.address"> <multicast> <queue name="prices1" last-value-key="stock_ticker"/> </multicast> </address> The preceding configuration creates a queue called prices1 , with a last value key of stock_ticker . Now, suppose that a client sends two messages. Each message has the same value of ATN for the property stock_ticker . Each message has a different value for a property called stock_price . Each message is sent to the same queue, prices1 . TextMessage message = session.createTextMessage("First message with last value property set"); message.setStringProperty("stock_ticker", "ATN"); message.setStringProperty("stock_price", "36.83"); producer.send(message); TextMessage message = session.createTextMessage("Second message with last value property set"); message.setStringProperty("stock_ticker", "ATN"); message.setStringProperty("stock_price", "37.02"); producer.send(message); When two messages with the same value for the stock_ticker last value key (in this case, ATN ) arrive to the prices1 queue , only the latest message remains in the queue, with the first message being purged. At the command line, you can enter the following lines to validate this behavior: TextMessage messageReceived = (TextMessage)messageConsumer.receive(5000); System.out.format("Received message: %s\n", messageReceived.getText()); In this example, the output you see is the second message, since both messages use the same value for the last value key and the second message was received in the queue after the first. 4.11.4. Enforcing non-destructive consumption for last value queues When a consumer connects to a queue, the normal behavior is that messages sent to that consumer are acquired exclusively by the consumer. When the consumer acknowledges receipt of the messages, the broker removes the messages from the queue. As an alternative to the normal consumption behaviour, you can configure a queue to enforce non-destructive consumption. In this case, when a queue sends a message to a consumer, the message can still be received by other consumers. In addition, the message remains in the queue even when a consumer has consumed it. When you enforce this non-destructive consumption behavior, the consumers are known as queue browsers . Enforcing non-destructive consumption is a useful configuration for last value queues, because it ensures that the queue always holds the latest value for a particular last value key. 
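The following sketch illustrates this behavior from the client side. It is an assumption-based example, not an excerpt from this guide: it presumes a broker at tcp://localhost:61616 and a queue such as prices1 (from the earlier last value examples) that has additionally been configured as non-destructive, and it uses the fully qualified queue name form described in Section 4.9.

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class NonDestructiveSketch {
    public static void main(String[] args) throws JMSException {
        // Assumes "prices1" under the address "my.address" is a last value queue
        // that has also been marked non-destructive on the broker.
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Attach directly to the named queue using its fully qualified queue name.
            Queue prices1 = session.createQueue("my.address::prices1");

            // The first consumer reads the latest value; because the queue is
            // non-destructive, the message is not removed from the queue.
            MessageConsumer first = session.createConsumer(prices1);
            TextMessage seenByFirst = (TextMessage) first.receive(5000);
            first.close();

            // A second consumer attached afterwards still finds the same message,
            // so it also sees the latest value held by the queue.
            MessageConsumer second = session.createConsumer(prices1);
            TextMessage seenBySecond = (TextMessage) second.receive(5000);

            System.out.println("first saw:  " + (seenByFirst == null ? "nothing" : seenByFirst.getText()));
            System.out.println("second saw: " + (seenBySecond == null ? "nothing" : seenBySecond.getText()));
        }
    }
}

In contrast, on a queue with the default destructive behavior, the second consumer would find the queue empty once the first consumer had acknowledged the message.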
The following procedure shows how to enforce non-destructive consumption for a last value queue. Prerequisites You have already configured last-value queues individually, or for all queues associated with an address or set of addresses. For more information, see: Section 4.11.1, "Configuring last value queues individually" Section 4.11.2, "Configuring last value queues for addresses" Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. If you previously configured a queue individually as a last value queue, add the non-destructive key. Set the value to true . For example: <address name="my.address"> <multicast> <queue name="orders1" last-value-key="stock_ticker" non-destructive="true" /> </multicast> </address> If you previously configured an address or set of addresses for last value queues, add the default-non-destructive key. Set the value to true . For example: <address-setting match="lastValue"> <default-last-value-key>stock_ticker </default-last-value-key> <default-non-destructive>true</default-non-destructive> </address-setting> Note By default, the value of default-non-destructive is false . 4.12. Moving expired messages to an expiry address For a queue other than a last value queue, if you have only non-destructive consumers, the broker never deletes messages from the queue, causing the queue size to increase over time. To prevent this unconstrained growth in queue size, you can configure when messages expire and specify an address to which the broker moves expired messages. 4.12.1. Configuring message expiry The following procedure shows how to configure message expiry. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. In the core element, set the message-expiry-scan-period to specify how frequently the broker scans for expired messages. <configuration ...> <core ...> ... <message-expiry-scan-period>1000</message-expiry-scan-period> ... Based on the preceding configuration, the broker scans queues for expired messages every 1000 milliseconds. In the address-setting element for a matching address or set of addresses, specify an expiry address. Also, set a message expiration time. For example: <configuration ...> <core ...> ... <address-settings> ... <address-setting match="stocks"> ... <expiry-address>ExpiryAddress</expiry-address> <expiry-delay>10</expiry-delay> ... </address-setting> ... <address-settings> <configuration ...> expiry-address Expiry address for the matching address or addresses. In the preceding example, the broker sends expired messages for the stocks address to an expiry address called ExpiryAddress . expiry-delay Expiration time, in milliseconds, that the broker applies to messages that are using the default expiration time. By default, messages have an expiration time of 0 , meaning that they don't expire. For messages with an expiration time greater than the default, expiry-delay has no effect. For example, suppose you set expiry-delay on an address to 10 , as shown in the preceding example. If a message with the default expiration time of 0 arrives to a queue at this address, then the broker changes the expiration time of the message from 0 to 10 . However, if another message that is using an expiration time of 20 arrives, then its expiration time is unchanged. If you set expiry-delay to -1 , this feature is disabled. By default, expiry-delay is set to -1 . Alternatively, instead of specifying a value for expiry-delay , you can specify minimum and maximum expiry delay values. 
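Client applications can also control when their messages expire. The sketch below is an illustrative assumption rather than part of this guide: it presumes the AMQ Core Protocol JMS client, a broker at tcp://localhost:61616, and the placeholder address stocks. The producer sets a time to live, and once an expiry address is configured as described in the following subsection, the broker moves messages that expire unconsumed to that address.

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ExpirySketch {
    public static void main(String[] args) throws JMSException {
        // Assumes an address "stocks" whose expired messages are routed to an
        // expiry address, as configured in the procedure that follows.
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            Queue stocks = session.createQueue("stocks");
            MessageProducer producer = session.createProducer(stocks);

            // Give each message a 30-second time to live. If no consumer takes the
            // message within that window, the broker expires it on a later scan and
            // moves it to the configured expiry address.
            producer.setTimeToLive(30_000);
            producer.send(session.createTextMessage("quote: ATN 36.83"));
        }
    }
}

The message-expiry-scan-period setting described in the procedure below controls how quickly the broker notices such expired messages.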
For example: <configuration ...> <core ...> ... <address-settings> ... <address-setting match="stocks"> ... <expiry-address>ExpiryAddress</expiry-address> <min-expiry-delay>10</min-expiry-delay> <max-expiry-delay>100</max-expiry-delay> ... </address-setting> ... <address-settings> <configuration ...> min-expiry-delay Minimum expiration time, in milliseconds, that the broker applies to messages. max-expiry-delay Maximum expiration time, in milliseconds, that the broker applies to messages. The broker applies the values of min-expiry-delay and max-expiry-delay as follows: For a message with the default expiration time of 0 , the broker sets the expiration time to the specified value of max-expiry-delay . If you have not specified a value for max-expiry-delay , the broker sets the expiration time to the specified value of min-expiry-delay . If you have not specified a value for min-expiry-delay , the broker does not change the expiration time of the message. For a message with an expiration time above the value of max-expiry-delay , the broker sets the expiration time to the specified value of max-expiry-delay . For a message with an expiration time below the value of min-expiry-delay , the broker sets the expiration time to the specified value of min-expiry-delay . For a message with an expiration between the values of min-expiry-delay and max-expiry-delay , the broker does not change the expiration time of the message. If you specify a value for expiry-delay (that is, other than the default value of -1 ), this overrides any values that you specify for min-expiry-delay and max-expiry-delay . The default value for both min-expiry-delay and max-expiry-delay is -1 (that is, disabled). In the addresses element of your configuration file, configure the address previously specified for expiry-address . Define a queue at this address. For example: <addresses> ... <address name="ExpiryAddress"> <anycast> <queue name="ExpiryQueue"/> </anycast> </address> ... </addresses> The preceding example configuration associates an expiry queue, ExpiryQueue , with the expiry address, ExpiryAddress . 4.12.2. Creating expiry resources automatically A common use case is to segregate expired messages according to their original addresses. For example, you might choose to route expired messages from an address called stocks to an expiry queue called EXP.stocks . Likewise, you might route expired messages from an address called orders to an expiry queue called EXP.orders . This type of routing pattern makes it easy to track, inspect, and administer expired messages. However, a pattern such as this is difficult to implement in an environment that uses mainly automatically-created addresses and queues. In this type of environment, an administrator does not want the extra effort required to manually create addresses and queues to hold expired messages. As a solution, you can configure the broker to automatically create resources (that is, addressees and queues) to handle expired messages for a given address or set of addresses. The following procedure shows an example. Prerequisites You have already configured an expiry address for a given address or set of addresses. For more information, see Section 4.12.1, "Configuring message expiry" . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Locate the <address-setting> element that you previously added to the configuration file to define an expiry address for a matching address or set of addresses. For example: <configuration ...> <core ...> ... 
<address-settings> ... <address-setting match="stocks"> ... <expiry-address>ExpiryAddress</expiry-address> ... </address-setting> ... <address-settings> <configuration ...> In the <address-setting> element, add configuration items that instruct the broker to automatically create expiry resources (that is, addresses and queues) and how to name these resources. For example: <configuration ...> <core ...> ... <address-settings> ... <address-setting match="stocks"> ... <expiry-address>ExpiryAddress</expiry-address> <auto-create-expiry-resources>true</auto-create-expiry-resources> <expiry-queue-prefix>EXP.</expiry-queue-prefix> <expiry-queue-suffix></expiry-queue-suffix> ... </address-setting> ... <address-settings> <configuration ...> auto-create-expiry-resources Specifies whether the broker automatically creates an expiry address and queue to receive expired messages. The default value is false . If the parameter value is set to true , the broker automatically creates an <address> element that defines an expiry address and an associated expiry queue. The name value of the automatically-created <address> element matches the name value specified for <expiry-address> . The automatically-created expiry queue has the multicast routing type. By default, the broker names the expiry queue to match the address to which expired messages were originally sent, for example, stocks . The broker also defines a filter for the expiry queue that uses the _AMQ_ORIG_ADDRESS property. This filter ensures that the expiry queue receives only messages sent to the corresponding original address. expiry-queue-prefix Prefix that the broker applies to the name of the automatically-created expiry queue. The default value is EXP. When you define a prefix value or keep the default value, the name of the expiry queue is a concatenation of the prefix and the original address, for example, EXP.stocks . expiry-queue-suffix Suffix that the broker applies to the name of an automatically-created expiry queue. The default value is not defined (that is, the broker applies no suffix). You can directly access the expiry queue using either the queue name by itself (for example, when using the AMQ Broker Core Protocol JMS client) or using the fully qualified queue name (for example, when using another JMS client). Note Because the expiry address and queue are automatically created, any address settings related to deletion of automatically-created addresses and queues also apply to these expiry resources. Additional resources For more information about address settings used to configure automatic deletion of automatically-created addresses and queues, see Section 4.8.2, "Configuring automatic creation and deletion of addresses and queues" . 4.13. Moving undelivered messages to a dead letter address If delivery of a message to a client is unsuccessful, you might not want the broker to make ongoing attempts to deliver the message. To prevent infinite delivery attempts, you can define a dead letter address and one or more asscociated dead letter queues . After a specified number of delivery attempts, the broker removes an undelivered message from its original queue and sends the message to the configured dead letter address. A system administrator can later consume undelivered messages from a dead letter queue to inspect the messages. If you do not configure a dead letter address for a given queue, the broker permanently removes undelivered messages from the queue after the specified number of delivery attempts. 
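Once a dead letter queue is in place, an administrator can drain and inspect the undelivered messages. The sketch below is illustrative only: it assumes the AMQ Core Protocol JMS client, a broker at tcp://localhost:61616, and the DLA and DLQ names used in the configuration procedure that follows; the message properties it prints are described immediately after this example.

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class DeadLetterInspector {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);

            // Consume from the dead letter queue, addressed here by its fully
            // qualified queue name (dead letter address DLA, queue DLQ).
            Queue dlq = session.createQueue("DLA::DLQ");
            MessageConsumer consumer = session.createConsumer(dlq);

            Message message;
            while ((message = consumer.receive(2000)) != null) {
                // The broker records where each undelivered message originally came from.
                System.out.println("original address: " + message.getStringProperty("_AMQ_ORIG_ADDRESS"));
                System.out.println("original queue:   " + message.getStringProperty("_AMQ_ORIG_QUEUE"));
                // Acknowledge so the broker removes the inspected messages from the queue.
                message.acknowledge();
            }
        }
    }
}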
Undelivered messages that are consumed from a dead letter queue have the following properties: _AMQ_ORIG_ADDRESS String property that specifies the original address of the message _AMQ_ORIG_QUEUE String property that specifies the original queue of the message 4.13.1. Configuring a dead letter address The following procedure shows how to configure a dead letter address and an associated dead letter queue. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. In an <address-setting> element that matches your queue name(s), set values for the dead letter address name and the maximum number of delivery attempts. For example: <configuration ...> <core ...> ... <address-settings> ... <address-setting match="exampleQueue"> <dead-letter-address>DLA</dead-letter-address> <max-delivery-attempts>3</max-delivery-attempts> </address-setting> ... <address-settings> <configuration ...> match Address to which the broker applies the configuration in this address-setting section. You can specify a wildcard expression for the match attribute of the <address-setting> element. Using a wildcard expression is useful if you want to associate the dead letter settings configured in the <address-setting> element with a matching set of addresses. dead-letter-address Name of the dead letter address. In this example, the broker moves undelivered messages from the queue exampleQueue to the dead letter address, DLA . max-delivery-attempts Maximum number of delivery attempts made by the broker before it moves an undelivered message to the configured dead letter address. In this example, the broker moves undelivered messages to the dead letter address after three unsuccessful delivery attempts. The default value is 10 . If you want the broker to make an infinite number of redelivery attempts, specify a value of -1 . In the addresses section, add an address element for the dead letter address, DLA . To associate a dead letter queue with the dead letter address, specify a name value for queue . For example: <configuration ...> <core ...> ... <addresses> <address name="DLA"> <anycast> <queue name="DLQ" /> </anycast> </address> ... </addresses> </core> </configuration> In the preceding configuration, you associate a dead letter queue named DLQ with the dead letter address, DLA . Additional resources For more information about using wildcards in address settings, see Section 4.2, "Applying address settings to sets of addresses" . 4.13.2. Creating dead letter queues automatically A common use case is to segregate undelivered messages according to their original addresses. For example, you might choose to route undelivered messages from an address called stocks to a dead letter queue called DLA.stocks that has an associated dead letter queue called DLQ.stocks . Likewise, you might route undelivered messages from an address called orders to a dead letter address called DLA.orders . This type of routing pattern makes it easy to track, inspect, and administrate undelivered messages. However, a pattern such as this is difficult to implement in an environment that uses mainly automatically-created addresses and queues. It is likely that a system administrator for this type of environment does not want the additional effort required to manually create addresses and queues to hold undelivered messages. As a solution, you can configure the broker to automatically create addressees and queues to handle undelivered messages, as shown in the procedure that follows. 
Prerequisites You have already configured a dead letter address for a queue or set of queues. For more information, see Section 4.13.1, "Configuring a dead letter address" . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Locate the <address-setting> element that you previously added to define a dead letter address for a matching queue or set of queues. For example: <configuration ...> <core ...> ... <address-settings> ... <address-setting match="exampleQueue"> <dead-letter-address>DLA</dead-letter-address> <max-delivery-attempts>3</max-delivery-attempts> </address-setting> ... <address-settings> <configuration ...> In the <address-setting> element, add configuration items that instruct the broker to automatically create dead letter resources (that is, addresses and queues) and how to name these resources. For example: <configuration ...> <core ...> ... <address-settings> ... <address-setting match="exampleQueue"> <dead-letter-address>DLA</dead-letter-address> <max-delivery-attempts>3</max-delivery-attempts> <auto-create-dead-letter-resources>true</auto-create-dead-letter-resources> <dead-letter-queue-prefix>DLQ.</dead-letter-queue-prefix> <dead-letter-queue-suffix></dead-letter-queue-suffix> </address-setting> ... <address-settings> <configuration ...> auto-create-dead-letter-resources Specifies whether the broker automatically creates a dead letter address and queue to receive undelivered messages. The default value is false . If auto-create-dead-letter-resources is set to true , the broker automatically creates an <address> element that defines a dead letter address and an associated dead letter queue. The name of the automatically-created <address> element matches the name value that you specify for <dead-letter-address> . The dead letter queue that the broker defines in the automatically-created <address> element has the multicast routing type . By default, the broker names the dead letter queue to match the original address of the undelivered message, for example, stocks . The broker also defines a filter for the dead letter queue that uses the _AMQ_ORIG_ADDRESS property. This filter ensures that the dead letter queue receives only messages sent to the corresponding original address. dead-letter-queue-prefix Prefix that the broker applies to the name of an automatically-created dead letter queue. The default value is DLQ. When you define a prefix value or keep the default value, the name of the dead letter queue is a concatenation of the prefix and the original address, for example, DLQ.stocks . dead-letter-queue-suffix Suffix that the broker applies to an automatically-created dead letter queue. The default value is not defined (that is, the broker applies no suffix). 4.14. Annotations and properties on expired or undelivered AMQP messages Before the broker moves an expired or undelivered AMQP message to an expiry or dead letter queue that you have configured, the broker applies annotations and properties to the message. A client can create a filter based on these properties or annotations, to select particular messages to consume from the expiry or dead letter queue. Note The properties that the broker applies are internal properties These properties are are not exposed to clients for regular use, but can be specified by a client in a filter. The following table shows the annotations and internal properties that the broker applies to expired or undelivered AMQP messages. Table 4.5. 
Annotations and properties applied to expired or undelivered AMQP messages Annotation name Internal property name Description x-opt-ORIG-MESSAGE-ID _AMQ_ORIG_MESSAGE_ID Original message ID, before the message was moved to an expiry or dead letter queue. x-opt-ACTUAL-EXPIRY _AMQ_ACTUAL_EXPIRY Message expiry time, specified as the number of milliseconds since the last epoch started. x-opt-ORIG-QUEUE _AMQ_ORIG_QUEUE Original queue name of the expired or undelivered message. x-opt-ORIG-ADDRESS _AMQ_ORIG_ADDRESS Original address name of the expired or undelivered message. Additional resources For an example of configuring an AMQP client to filter AMQP messages based on annotations, see Section 13.3, "Filtering AMQP Messages Based on Properties on Annotations" . 4.15. Disabling queues If you manually define a queue in your broker configuration, the queue is enabled by default. However, there might be a case where you want to define a queue so that clients can subscribe to it, but are not ready to use the queue for message routing. Alternatively, there might be a situation where you want to stop message flow to a queue, but still keep clients bound to the queue. In these cases, you can disable the queue. The following example shows how to disable a queue that you have defined in your broker configuration. Prerequisites You should be familiar with how to define an address and associated queue in your broker configuration. For more information, see Chapter 4, Configuring addresses and queues . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For a queue that you previously defined, add the enabled attribute. To disable the queue, set the value of this attribute to false . For example: <addresses> <address name="orders"> <multicast> <queue name="orders" enabled="false"/> </multicast> </address> </addresses> The default value of the enabled property is true . When you set the value to false , message routing to the queue is disabled. Note If you disable all queues on an address, any messages sent to that address are silently dropped. 4.16. Limiting the number of consumers connected to a queue Limit the number of consumers connected to a particular queue by using the max-consumers attribute. Create an exclusive consumer by setting max-consumers flag to 1 . The default value is -1 , which sets an unlimited number of consumers. The following procedure shows how to set a limit on the number of consumers that can connect to a queue. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For a given queue, add the max-consumers key and set a value. <configuration ...> <core ...> ... <addresses> <address name="my.address"> <anycast> <queue name="q3" max-consumers="20"/> </anycast> </address> </addresses> </core> </configuration> Based on the preceding configuration, only 20 consumers can connect to queue q3 at the same time. To create an exclusive consumer, set max-consumers to 1 . <configuration ...> <core ...> ... <address name="my.address"> <anycast> <queue name="q3" max-consumers="1"/> </anycast> </address> </core> </configuration> To allow an unlimited number of consumers, set max-consumers to -1 . <configuration ...> <core ...> ... <address name="my.address"> <anycast> <queue name="q3" max-consumers="-1"/> </anycast> </address> </core> </configuration> 4.17. Configuring exclusive queues Exclusive queues are special queues that route all messages to only one consumer at a time. 
This configuration is useful when you want all messages to be processed serially by the same consumer. If there are multiple consumers for a queue, only one consumer will receive messages. If that consumer disconnects from the queue, another consumer is chosen. 4.17.1. Configuring exclusive queues individually The following procedure shows to how to individually configure a given queue as exclusive. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For a given queue, add the exclusive key. Set the value to true . <configuration ...> <core ...> ... <address name="my.address"> <multicast> <queue name="orders1" exclusive="true"/> </multicast> </address> </core> </configuration> 4.17.2. Configuring exclusive queues for addresses The following procedure shows how to configure an address or set of addresses so that all associated queues are exclusive. Open the <broker_instance_dir> /etc/broker.xml configuration file. In the address-setting element, for a matching address, add the default-exclusive-queue key. Set the value to true . <address-setting match="myAddress"> <default-exclusive-queue>true</default-exclusive-queue> </address-setting> Based on the preceding configuration, all queues associated with the myAddress address are exclusive. By default, the value of default-exclusive-queue is false . To configure exclusive queues for a set of addresses, you can specify an address wildcard. For example: <address-setting match="myAddress.*"> <default-exclusive-queue>true</default-exclusive-queue> </address-setting> Additional resources For more information about the wildcard syntax that you can use when configuring addresses, see Section 4.2, "Applying address settings to sets of addresses" . 4.18. Applying specific address settings to temporary queues When using JMS, for example, the broker creates temporary queues by assigning a universally unique identifier (UUID) as both the address name and the queue name. The default <address-setting match="#"> applies the configured address settings to all queues, including temporary ones. If you want to apply specific address settings to temporary queues only, you can optionally specify a temporary-queue-namespace as described below. You can then specify address settings that match the namespace and the broker applies those settings to all temporary queues. When a temporary queue is created and a temporary queue namespace exists, the broker prepends the temporary-queue-namespace value and the configured delimiter (default . ) to the address name. It uses that to reference the matching address settings. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a temporary-queue-namespace value. For example: <temporary-queue-namespace>temp-example</temporary-queue-namespace> Add an address-setting element with a match value that corresponds to the temporary queues namespace. For example: <address-settings> <address-setting match="temp-example.#"> <enable-metrics>false</enable-metrics> </address-setting> </address-settings> This example disables metrics in all temporary queues created by the broker. Note Specifying a temporary queue namespace does not affect temporary queues. For example, the namespace does not change the names of temporary queues. The namespace is used to reference the temporary queues. Additional resources For more information about using wildcards in address settings, see Section 4.2, "Applying address settings to sets of addresses" . 4.19. 
Configuring ring queues Generally, queues in AMQ Broker use first-in, first-out (FIFO) semantics. This means that the broker adds messages to the tail of the queue and removes them from the head. A ring queue is a special type of queue that holds a specified, fixed number of messages. The broker maintains the fixed queue size by removing the message at the head of the queue when a new message arrives but the queue already holds the specified number of messages. For example, consider a ring queue configured with a size of 3 and a producer that sequentially sends messages A , B , C , and D . Once message C arrives to the queue, the number of messages in the queue has reached the configured ring size. At this point, message A is at the head of the queue, while message C is at the tail. When message D arrives to the queue, the broker adds the message to the tail of the queue. To maintain the fixed queue size, the broker removes the message at the head of the queue (that is, message A ). Message B is now at the head of the queue. 4.19.1. Configuring ring queues The following procedure shows how to configure a ring queue. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. To define a default ring size for all queues on matching addresses that don't have an explicit ring size set, specify a value for default-ring-size in the address-setting element. For example: <address-settings> <address-setting match="ring.#"> <default-ring-size>3</default-ring-size> </address-setting> </address-settings> The default-ring-size parameter is especially useful for defining the default size of auto-created queues. The default value of default-ring-size is -1 (that is, no size limit). To define a ring size on a specific queue, add the ring-size key to the queue element. Specify a value. For example: <addresses> <address name="myRing"> <anycast> <queue name="myRing" ring-size="5" /> </anycast> </address> </addresses> Note You can update the value of ring-size while the broker is running. The broker dynamically applies the update. If the new ring-size value is lower than the value, the broker does not immediately delete messages from the head of the queue to enforce the new size. New messages sent to the queue still force the deletion of older messages, but the queue does not reach its new, reduced size until it does so naturally, through the normal consumption of messages by clients. 4.19.2. Troubleshooting ring queues This section describes situations in which the behavior of a ring queue appears to differ from its configuration. In-delivery messages and rollbacks When a message is in delivery to a consumer, the message is in an "in-between" state, where the message is technically no longer on the queue, but is also not yet acknowledged. A message remains in an in-delivery state until acknowledged by the consumer. Messages that remain in an in-delivery state cannot be removed from the ring queue. Because the broker cannot remove in-delivery messages, a client can send more messages to a ring queue than the ring size configuration seems to allow. For example, consider this scenario: A producer sends three messages to a ring queue configured with ring-size="3" . All messages are immediately dispatched to a consumer. At this point, messageCount = 3 and deliveringCount = 3 . The producer sends another message to the queue. The message is then dispatched to the consumer. Now, messageCount = 4 and deliveringCount = 4 . The message count of 4 is greater than the configured ring size of 3 . 
However, the broker is obliged to allow this situation because it cannot remove the in-delivery messages from the queue. Now, suppose that the consumer is closed without acknowledging any of the messages. In this case, the four in-delivery, unacknowledged messages are canceled back to the broker and added to the head of the queue in the reverse order from which they were consumed. This action puts the queue over its configured ring size. Because a ring queue prefers messages at the tail of the queue over messages at the head, the queue discards the first message sent by the producer, because this was the last message added back to the head of the queue. Transaction or core session rollbacks are treated in the same way. If you are using the core client directly, or using an AMQ Core Protocol JMS client, you can minimize the number of messages in delivery by reducing the value of the consumerWindowSize parameter (1024 * 1024 bytes by default). Scheduled messages When a scheduled message is sent to a queue, the message is not immediately added to the tail of the queue like a normal message. Instead, the broker holds the scheduled message in an intermediate buffer and schedules the message for delivery onto the head of the queue, according to the details of the message. However, scheduled messages are still reflected in the message count of the queue. As with in-delivery messages, this behavior can make it appear that the broker is not enforcing the ring queue size. For example, consider this scenario: At 12:00, a producer sends a message, A , to a ring queue configured with ring-size="3" . The message is scheduled for 12:05. At this point, messageCount = 1 and scheduledCount = 1 . At 12:01, producer sends message B to the same ring queue. Now, messageCount = 2 and scheduledCount = 1 . At 12:02, producer sends message C to the same ring queue. Now, messageCount = 3 and scheduledCount = 1 . At 12:03, producer sends message D to the same ring queue. Now, messageCount = 4 and scheduledCount = 1 . The message count for the queue is now 4 , one greater than the configured ring size of 3 . However, the scheduled message is not technically on the queue yet (that is, it is on the broker and scheduled to be put on the queue). At the scheduled delivery time of 12:05, the broker puts the message on the head of the queue. However, since the ring queue has already reached its configured size, the scheduled message A is immediately removed. Paged messages Similar to scheduled messages and messages in delivery, paged messages do not count towards the ring queue size enforced by the broker, because messages are actually paged at the address level, not the queue level. A paged message is not technically on a queue, although it is reflected in a queue's messageCount value. It is recommended that you do not use paging for addresses with ring queues. Instead, ensure that the entire address can fit into memory. Or, configure the address-full-policy parameter to a value of DROP , BLOCK or FAIL . Additional resources The broker creates internal instances of ring queues when you configure retroactive addresses. To learn more, see Section 4.20, "Configuring retroactive addresses" . 4.20. Configuring retroactive addresses Configuring an address as retroactive enables you to preserve messages sent to that address, including when there are no queues yet bound to the address. When queues are later created and bound to the address, the broker retroactively distributes messages to those queues. 
If an address is not configured as retroactive and does not yet have a queue bound to it, the broker discards messages sent to that address. When you configure a retroactive address, the broker creates an internal instance of a type of queue known as a ring queue . A ring queue is a special type of queue that holds a specified, fixed number of messages. Once the queue has reached the specified size, the message that arrives to the queue forces the oldest message out of the queue. When you configure a retroactive address, you indirectly specify the size of the internal ring queue. By default, the internal queue uses the multicast routing type. The internal ring queue used by a retroactive address is exposed via the management API. You can inspect metrics and perform other common management operations, such as emptying the queue. The ring queue also contributes to the overall memory usage of the address, which affects behavior such as message paging. The following procedure shows how to configure an address as retroactive. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Specify a value for the retroactive-message-count parameter in the address-setting element. The value you specify defines the number of messages you want the broker to preserve. For example: <configuration> <core> ... <address-settings> <address-setting match="orders"> <retroactive-message-count>100</retroactive-message-count> </address-setting> </address-settings> ... </core> </configuration> Note You can update the value of retroactive-message-count while the broker is running, in either the broker.xml configuration file or the management API. However, if you reduce the value of this parameter, an additional step is required, because retroactive addresses are implemented via ring queues. A ring queue whose ring-size parameter is reduced does not automatically delete messages from the queue to achieve the new ring-size value. This behavior is a safeguard against unintended message loss. In this case, you need to use the management API to manually reduce the number of messages in the ring queue. Additional resources For more information about ring queues, see Section 4.19, "Configuring ring queues" . 4.21. Disabling advisory messages for internally-managed addresses and queues By default, AMQ Broker creates advisory messages about addresses and queues when an OpenWire client is connected to the broker. Advisory messages are sent to internally-managed addresses created by the broker. These addresses appear on the AMQ Management Console within the same display as user-deployed addresses and queues. Although they provide useful information, advisory messages can cause unwanted consequences when the broker manages a large number of destinations. For example, the messages might increase memory usage or strain connection resources. Also, the AMQ Management Console might become cluttered when attempting to display all of the addresses created to send advisory messages. To avoid these situations, you can use the following parameters to configure the behavior of advisory messages on the broker. supportAdvisory Set this option to true to enable creation of advisory messages or false to disable them. The default value is true . suppressInternalManagementObjects Set this option to true to expose the advisory messages to management services such as JMX registry and AMQ Management Console, or false to not expose them. The default value is true . 
The following procedure shows how to disable advisory messages on the broker. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For an OpenWire connector, add the supportAdvisory and suppressInternalManagementObjects parameters to the configured URL. Set the values as described earlier in this section. For example: <acceptor name="artemis">tcp://127.0.0.1:61616?protocols=CORE,AMQP,OPENWIRE;supportAdvisory=false;suppressInternalManagementObjects=false</acceptor> 4.22. Federating addresses and queues Federation enables transmission of messages between brokers, without requiring the brokers to be in a common cluster. Brokers can be standalone, or in separate clusters. In addition, the source and target brokers can be in different administrative domains, meaning that the brokers might have different configurations, users, and security setups. The brokers might even be using different versions of AMQ Broker. For example, federation is suitable for reliably sending messages from one cluster to another. This transmission might be across a Wide Area Network (WAN), Regions of a cloud infrastructure, or over the Internet. If connection from a source broker to a target broker is lost (for example, due to network failure), the source broker tries to reestablish the connection until the target broker comes back online. When the target broker comes back online, message transmission resumes. Administrators can use address and queue policies to manage federation. Policy configurations can be matched to specific addresses or queues, or the policies can include wildcard expressions that match configurations to sets of addresses or queues. Therefore, federation can be dynamically applied as queues or addresses are added to- or removed from matching sets. Policies can include multiple expressions that include and/or exclude particular addresses and queues. In addition, multiple policies can be applied to brokers or broker clusters. In AMQ Broker, the two primary federation options are address federation and queue federation . These options are described in the sections that follow. Note A broker can include configuration for federated and local-only components. That is, if you configure federation on a broker, you don't need to federate everything on that broker. 4.22.1. About address federation Address federation is like a full multicast distribution pattern between connected brokers. For example, every message sent to an address on BrokerA is delivered to every queue on that broker. In addition, each of the messages is delivered to BrokerB and all attached queues there. Address federation dynamically links a broker to addresses in remote brokers. For example, if a local broker wants to fetch messages from an address on a remote broker, a queue is automatically created on the remote address. Messages on the remote broker are then consumed to this queue. Finally, messages are copied to the corresponding address on the local broker, as though they were originally published directly to the local address. The remote broker does not need to be reconfigured to allow federation to create an address on it. However, the local broker does need to be granted permissions to the remote address. 4.22.2. Common topologies for address federation Some common topologies for the use of address federation are described below. Symmetric topology In a symmetric topology, a producer and consumer are connected to each broker. Queues and their consumers can receive messages published by either producer. 
An example of a symmetric topology is shown below. Figure 4.5. Address federation in a symmetric topology When configuring address federation for a symmetric topology, it is important to set the value of the max-hops property of the address policy to 1 . This ensures that messages are copied only once , avoiding cyclic replication. If this property is set to a larger value, consumers will receive multiple copies of the same message. Full mesh topology A full mesh topology is similar to a symmetric setup. Three or more brokers symmetrically federate to each other, creating a full mesh. In this setup, a producer and consumer are connected to each broker. Queues and their consumers can receive messages published by any producer. An example of this topology is shown below. Figure 4.6. Address federation in a full mesh topology As with a symmetric setup, when configuring address federation for a full mesh topology, it is important to set the value of the max-hops property of the address policy to 1 . This ensures that messages are copied only once , avoiding cyclic replication. Ring topology In a ring of brokers, each federated address is upstream to just one other in the ring. An example of this topology is shown below. Figure 4.7. Address federation in a ring topology When you configure federation for a ring topology, to avoid cyclic replication, it is important to set the max-hops property of the address policy to a value of n-1 , where n is the number of nodes in the ring. For example, in the ring topology shown above, the value of max-hops is set to 5 . This ensures that every address in the ring sees the message exactly once . An advantage of a ring topology is that it is cheap to set up, in terms of the number of physical connections that you need to make. However, a drawback of this type of topology is that if a single broker fails, the whole ring fails. Fan-out topology In a fan-out topology, a single master address is linked-to by a tree of federated addresses. Any message published to the master address can be received by any consumer connected to any broker in the tree. The tree can be configured to any depth. The tree can also be extended without the need to re-configure existing brokers in the tree. An example of this topology is shown below. Figure 4.8. Address federation in a fan-out topology When you configure federation for a fan-out topology, ensure that you set the max-hops property of the address policy to a value of n-1 , where n is the number of levels in the tree. For example, in the fan-out topology shown above, the value of max-hops is set to 2 . This ensures that every address in the tree sees the message exactly once . 4.22.3. Support for divert bindings in address federation configuration When configuring address federation, you can add support for divert bindings in the address policy configuration. Adding this support enables the federation to respond to divert bindings to create a federated consumer for a given address on a remote broker. For example, suppose that an address called test.federation.source is included in the address policy, and another address called test.federation.target is not included. Normally, when a queue is created on test.federation.target , this would not cause a federated consumer to be created, because the address is not part of the address policy. 
However, if you create a divert binding such that test.federation.source is the source address and test.federation.target is the forwarding address, then a durable consumer is created at the forwarding address. The source address still must use the multicast routing type , but the target address can use multicast or anycast . An example use case is a divert that redirects a JMS topic ( multicast address) to a JMS queue ( anycast address). This enables load balancing of messages on the topic for legacy consumers that do not support JMS 2.0 and shared subscriptions. 4.22.4. Configuring federation You can configure address and queue federation using either the Core protocol or, beginning in 7.12, AMQP. Using AMQP for federation offers the following advantages: If clients use AMQP for messaging, using AMQP for federation eliminates the need to convert messages between AMQP and the Core protocol and vice versa, which is required if federation uses the Core protocol. AMQP federation supports two-way federation over a single outgoing connection. This eliminates the need for a remote broker to connect back to a local broker, which is a requirement when you use the Core protocol for federation and which might be prevented by network policies. 4.22.4.1. Configuring federation using AMQP You can use the following policies to configure address and queue federation using AMQP: A local address policy configures the local broker to watch for demand on addresses and, when that demand exists, create a federation consumer on the matching address on the remote broker to federate messages to the local broker. A remote address policy configures the remote broker to watch for demand on addresses and, when that demand exists, create a federation consumer on the matching address on the local broker to federate messages to the remote broker. A local queue policy configures the local broker to watch for demand on queues and, when that demand exists, create a federation consumer on the matching queue on the remote broker to federate messages to the local broker. A remote queue policy configures the remote broker to watch for demand on queues and, when that demand exists, create a federation consumer on the matching queue on the local broker to federate messages to the remote broker. Remote address and queue policies are sent to the remote broker and become local policies on the remote broker to provide a reverse federation connection. In applying the policies for the reverse federation connection, the broker that received the policies is the local broker and the broker that sent the policies is the remote broker. By configuring remote address and queue policies on the local broker, you can keep all federation configuration on a single broker, which might be a useful approach for a hub-and-spoke topology, for example. 4.22.4.1.1. Configuring address federation using AMQP Use the <broker-connections> element to configure address federation using AMQP. Prerequisite The user specified in the <amqp-connection> element has read and write permissions to matching addresses and queues on the remote broker. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a <broker-connections> element that includes an <amqp-connection> element. In the <amqp-connection> element, specify the connection details for a remote broker and assign a name to the federation configuration.
For example: <broker-connections> <amqp-connection uri="tcp://<__HOST__>:<__PORT__>" user="federation_user" password="federation_pwd" name="queue-federation-example"> </amqp-connection> </broker-connections> Add a <federation> element and include one or both of the following: A <local-address-policy> element to federate messages from the remote broker to the local broker. A <remote-address-policy> element to federate messages from the local broker to the remote broker. The following example show a federation element with both local and remote address policies. <broker-connections> <amqp-connection uri="tcp://<__HOST__>:<__PORT__>" user="federation_user" password="federation_pwd" name="queue-federation-example"> <federation> <local-address-policy name="example-local-address-policy" auto-delete="true" auto-delete-delay="1" auto-delete-message-count="2" max-hops="1" enable-divert-bindings="true"> <include address-match="queue.news.#" /> <include address-match="queue.bbc.news" /> <exclude address-match="queue.news.sport.#" /> </local-address-policy> <remote-address-policy name="example-remote-address-policy"> <include address-match="queue.usatoday" /> </remote-address-policy> </federation> </amqp-connection> </broker-connections> The same parameters are configurable in both a local and remote address policy. The valid parameters are: name Name of the address policy. All address policy names must be unique within the <federation> elements in a <broker-connections> element. max-hops Maximum number of hops that a message can make during federation. The default value of 0 is suitable for most simple federation deployments. However, in certain topologies a greater value might be required to prevent messages from looping. auto-delete For address federation, a durable queue is created on the broker from which messages are being federated. Set this parameter to true to mark the queue for automatic deletion once the initiating broker disconnects and the delay and message count parameters are met. This is a useful option if you want to automate the cleanup of dynamically-created queues. The default value is false , which means that the queue is not automatically deleted. auto-delete-delay The amount of time in milliseconds before the created queue is eligible for automatic deletion after the initiating broker has disconnected. The default value is 0 . auto-delete-message-count The value that the message count for the queue must be less than or equal to before the queue can be automatically deleted. The default value is 0 . enable-divert-bindings Setting to true enables divert bindings to be listened-to for demand. If a divert binding with an address matches the included addresses for the address policy, any queue bindings that match the forwarding address of the divert creates demand. The default value is false . include The address-match patterns to match addresses to include in the policy. You can specify multiple patterns. If you do not specify a pattern, all addresses are included in the policy. You can specify an exact address, for example, queue.bbc.news . Or, you can use the number sign (#) wildcard character to specify a matching set of addresses. In the preceding example, the local address policy also includes all addresses that start with the queue.news string. exclude The address-match patterns to match addresses to exclude from the policy. You can specify multiple patterns. If you do not specify a pattern, no addresses are excluded from the policy. 
You can specify an exact address, for example, queue.bbc.news . Or, you can use the number sign (#) wildcard character to specify a matching set of addresses. In the preceding example, the local address policy excludes all addresses that start with the queue.news.sport string. 4.22.4.1.2. Configuring queue federation using AMQP Use the <broker-connections> element to configure queue federation using AMQP. Prerequisite The user specified in the <amqp-connection> element has read and write permissions to matching addresses and queues on the remote broker. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a <broker-connections> element that includes an <amqp-connection> element. In the <amqp-connection> element, specify the connection details for a remote broker and assign a name to the federation configuration. For example: <broker-connections> <amqp-connection uri="tcp://<__HOST__>:<__PORT__>" user="federation_user" password="federation_pwd" name="queue-federation-example"> </amqp-connection> </broker-connections> Add a <federation> element and include one or both of the following: A <local-queue-policy> element to federate messages from the remote broker to the local broker. A <remote-queue-policy> element to federate messages from the local broker to the remote broker. The following example shows a federation element that contains a local queue policy. <broker-connections> <amqp-connection uri="tcp://HOST:PORT" name="federation-example"> <federation> <local-queue-policy name="example-local-queue-policy"> <include address-match="#" queue-match="#.remote" /> <exclude address-match="#" queue-match="#.local" /> </local-queue-policy> </federation> </amqp-connection> </broker-connections> name Name of the queue policy. All queue policy names must be unique within the <federation> elements of a <broker-connections> element. include The address-match patterns to match addresses and the queue-match patterns to match specific queues on those addresses for inclusion in the policy. As with the address-match parameter, you can specify an exact name for the queue-match parameter, or you can use a wildcard expression to specify a set of queues. In the preceding example, queues that match the .remote string across all addresses, represented by an address-match value of # , are included. exclude The address-match patterns to match addresses and the queue-match patterns to match specific queues on those addresses for exclusion from the policy. As with the address-match parameter, you can specify an exact name for the queue-match parameter, or you can use a wildcard expression to specify a set of queues. In the preceding example, queues that match the .local string across all addresses, represented by an address-match value of # , are excluded. priority-adjustment Adjusts the priority of federated consumers to ensure that they have a lower priority value than other local consumers on the same queue. The default value is -1 , which ensures that local consumers are prioritized over federated consumers. include-federated When the value of this parameter is set to false , the configuration does not re-federate an already-federated consumer (that is, a consumer on a federated queue). This avoids a situation where, in a symmetric or closed-loop topology, there are no non-federated consumers and messages flow endlessly around the system. You might set the value of this parameter to true if you do not have a closed-loop topology.
For example, suppose that you have a chain of three brokers, BrokerA, BrokerB, and BrokerC, with a producer at BrokerA and a consumer at BrokerC. In this case, you would want BrokerB to re-federate the consumer to BrokerA. 4.22.4.2. Configuring federation using the Core protocol You can configure message and queue federation to use the Core protocol. 4.22.4.2.1. Configuring federation for a broker cluster The examples in the sections that follow show how to configure address and queue federation between standalone local and remote brokers. For federation between standalone brokers, the name of the federation configuration, as well as the names of any address and queue policies, must be unique between the local and remote brokers. However, if you are configuring federation for brokers in a cluster , there is an additional requirement. For clustered brokers, the names of the federation configuration, as well as the names of any address and queues policies within that configuration, must be the same for every broker in that cluster. Ensuring that brokers in the same cluster use the same federation configuration and address and queue policy names avoids message duplication. For example, if brokers within the same cluster have different federation configuration names, this might lead to a situation where multiple, differently-named forwarding queues are created for the same address, resulting in message duplication for downstream consumers. By contrast, if brokers in the same cluster use the same federation configuration name, this essentially creates replicated, clustered forwarding queues that are load-balanced to the downstream consumers. This avoids message duplication. 4.22.4.2.2. Configuring upstream address federation The following example shows how to configure upstream address federation between standalone brokers. In this example, you configure federation from a local (that is, downstream ) broker, to some remote (that is, upstream ) brokers. Prerequisites The following example shows how to configure address federation between standalone brokers. However, you should also be familiar with the requirements for configuring federation for a broker cluster . For more information, see Section 4.22.4.2.1, "Configuring federation for a broker cluster" . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a new <federations> element that includes a <federation> element. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> </federation> </federations> name Name of the federation configuration. In this example, the name corresponds to the name of the local broker. user Shared user name for connection to the upstream brokers. password Shared password for connection to the upstream brokers. Note If user and password credentials differ for remote brokers, you can separately specify credentials for those brokers when you add them to the configuration. This is described later in this procedure. Within the federation element, add an <address-policy> element. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <address-policy name="news-address-federation" auto-delete="true" auto-delete-delay="300000" auto-delete-message-count="-1" enable-divert-bindings="false" max-hops="1" transformer-ref="news-transformer"> </address-policy> </federation> </federations> name Name of the address policy. 
All address policies that are configured on the broker must have unique names. auto-delete During address federation, the local broker dynamically creates a durable queue at the remote address. The value of the auto-delete property specifies whether the remote queue should be deleted once the local broker disconnects and the values of the auto-delete-delay and auto-delete-message-count properties have also been reached. This is a useful option if you want to automate the cleanup of dynamically-created queues. It is also a useful option if you want to prevent a buildup of messages on a remote broker if the local broker is disconnected for a long time. However, you might set this option to false if you want messages to always remain queued for the local broker while it is disconnected, avoiding message loss on the local broker. auto-delete-delay After the local broker has disconnected, the value of this property specifies the amount of time, in milliseconds, before dynamically-created remote queues are eligible to be automatically deleted. auto-delete-message-count After the local broker has been disconnected, the value of this property specifies the maximum number of messages that can still be in a dynamically-created remote queue before that queue is eligible to be automatically deleted. enable-divert-bindings Setting this property to true enables divert bindings to be listened-to for demand. If there is a divert binding with an address that matches the included addresses for the address policy, then any queue bindings that match the forwarding address of the divert will create demand. The default value is false . max-hops Maximum number of hops that a message can make during federation. Particular topologies require specific values for this property. To learn more about these requirements, see Section 4.22.2, "Common topologies for address federation" . transformer-ref Name of a transformer configuration. You might add a transformer configuration if you want to transform messages during federated message transmission. Transformer configuration is described later in this procedure. Within the <address-policy> element, add address-matching patterns to include and exclude addresses from the address policy. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <address-policy name="news-address-federation" auto-delete="true" auto-delete-delay="300000" auto-delete-message-count="-1" enable-divert-bindings="false" max-hops="1" transformer-ref="news-transformer"> <include address-match="queue.bbc.new" /> <include address-match="queue.usatoday" /> <include address-match="queue.news.#" /> <exclude address-match="queue.news.sport.#" /> </address-policy> </federation> </federations> include The value of the address-match property of this element specifies addresses to include in the address policy. You can specify an exact address, for example, queue.bbc.new or queue.usatoday . Or, you can use a wildcard expression to specify a matching set of addresses. In the preceding example, the address policy also includes all address names that start with the string queue.news . exclude The value of the address-match property of this element specifies addresses to exclude from the address policy. You can specify an exact address name or use a wildcard expression to specify a matching set of addresses. In the preceding example, the address policy excludes all address names that start with the string queue.news.sport . 
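The address policy in the preceding step references a transformer configuration by name ( transformer-ref ). The transformer itself is a user-supplied Java class that you declare in the next, optional step and deploy on the broker classpath. For orientation, a minimal sketch of such a class is shown below. The sketch reuses the org.myorg.NewsTransformer class name and the key1 property from the examples in this section; the origin-region property it adds, and the assumption that the configured <property> pairs are delivered through the interface's init method, are illustrative and not part of the product configuration.

    package org.myorg;

    import java.util.Map;

    import org.apache.activemq.artemis.api.core.Message;
    import org.apache.activemq.artemis.core.server.transformer.Transformer;

    // Illustrative transformer that stamps each federated message with a
    // property derived from the <property> key-value pairs in broker.xml.
    public class NewsTransformer implements Transformer {

        private String regionTag = "unknown";

        @Override
        public void init(Map<String, String> properties) {
            // Receives the <property key="..." value="..."/> pairs from the
            // transformer configuration (key1 and key2 in the examples).
            if (properties.containsKey("key1")) {
                regionTag = properties.get("key1");
            }
        }

        @Override
        public Message transform(Message message) {
            // Invoked with each message before it is federated. Returning the
            // (possibly modified) message lets it continue to the remote broker.
            message.putStringProperty("origin-region", regionTag);
            return message;
        }
    }

Compile the class, add it to the broker's classpath (for example, as a JAR in <broker_instance_dir> /lib), and reference it from the transformer element as shown in the next step.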
(Optional) Within the federation element, add a transformer element to reference a custom transformer implementation. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <address-policy name="news-address-federation" auto-delete="true" auto-delete-delay="300000" auto-delete-message-count="-1" enable-divert-bindings="false" max-hops="1" transformer-ref="news-transformer"> <include address-match="queue.bbc.new" /> <include address-match="queue.usatoday" /> <include address-match="queue.news.#" /> <exclude address-match="queue.news.sport.#" /> </address-policy> <transformer name="news-transformer"> <class-name>org.myorg.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> </federations> name Name of the transformer configuration. This name must be unique on the local broker. This is the name that you specify as a value for the transformer-ref property of the address policy. class-name Name of a user-defined class that implements the org.apache.activemq.artemis.core.server.transformer.Transformer interface. The transformer's transform() method is invoked with the message before the message is transmitted. This enables you to transform the message header or body before it is federated. property Used to hold key-value pairs for specific transformer configuration. Within the federation element, add one or more upstream elements. Each upstream element defines a connection to a remote broker and the policies to apply to that connection. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <upstream name="eu-east-1"> <static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <policy ref="news-address-federation"/> </upstream> <upstream name="eu-west-1" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <policy ref="news-address-federation"/> </upstream> <address-policy name="news-address-federation" auto-delete="true" auto-delete-delay="300000" auto-delete-message-count="-1" enable-divert-bindings="false" max-hops="1" transformer-ref="news-transformer"> <include address-match="queue.bbc.new" /> <include address-match="queue.usatoday" /> <include address-match="queue.news.#" /> <exclude address-match="queue.news.sport.#" /> </address-policy> <transformer name="news-transformer"> <class-name>org.myorg.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> </federations> static-connectors Contains a list of connector-ref elements that reference connector elements that are defined elsewhere in the broker.xml configuration file of the local broker. A connector defines what transport (TCP, SSL, HTTP, and so on) and server connection parameters (host, port, and so on) to use for outgoing connections. The step of this procedure shows how to add the connectors that are referenced in the static-connectors element. policy-ref Name of the address policy configured on the downstream broker that is applied to the upstream broker. The additional options that you can specify for an upstream element are described below: name Name of the upstream broker configuration. In this example, the names correspond to upstream brokers called eu-east-1 and eu-west-1 . user User name to use when creating the connection to the upstream broker. 
If not specified, the shared user name that is specified in the configuration of the federation element is used. password Password to use when creating the connection to the upstream broker. If not specified, the shared password that is specified in the configuration of the federation element is used. call-failover-timeout Similar to call-timeout , but used when a call is made during a failover attempt. The default value is -1 , which means that the timeout is disabled. call-timeout Time, in milliseconds, that a federation connection waits for a reply from a remote broker when it transmits a packet that is a blocking call. If this time elapses, the connection throws an exception. The default value is 30000 . check-period Period, in milliseconds, between consecutive "keep-alive" messages that the local broker sends to a remote broker to check the health of the federation connection. If the federation connection is healthy, the remote broker responds to each keep-alive message. If the connection is unhealthy, when the downstream broker fails to receive a response from the upstream broker, a mechanism called a circuit breaker is used to block federated consumers. See the description of the circuit-breaker-timeout parameter for more information. The default value of the check-period parameter is 30000 . circuit-breaker-timeout A single connection between a downstream and upstream broker might be shared by many federated queue and address consumers. In the event that the connection between the brokers is lost, each federated consumer might try to reconnect at the same time. To avoid this, a mechanism called a circuit breaker blocks the consumers. When the specified timeout value elapses, the circuit breaker re-tries the connection. If successful, consumers are unblocked. Otherwise, the circuit breaker is applied again. connection-ttl Time, in milliseconds, that a federation connection stays alive if it stops receiving messages from the remote broker. The default value is 60000 . discovery-group-ref As an alternative to defining static connectors for connections to upstream brokers, this element can be used to specify a discovery group that is already configured elsewhere in the broker.xml configuration file. Specifically, you specify an existing discovery group as a value for the discovery-group-name property of this element. For more information about discovery groups, see Section 14.1.6, "Broker discovery methods" . ha Specifies whether high availability is enabled for the connection to the upstream broker. If the value of this parameter is set to true , the local broker can connect to any available broker in an upstream cluster and automatically fails over to a backup broker if the live upstream broker shuts down. The default value is false . initial-connect-attempts Number of initial attempts that the downstream broker will make to connect to the upstream broker. If this value is reached without a connection being established, the upstream broker is considered permanently offline. The downstream broker no longer routes messages to the upstream broker. The default value is -1 , which means that there is no limit. max-retry-interval Maximum time, in milliseconds, between subsequent reconnection attempts when connection to the remote broker fails. The default value is 2000 . reconnect-attempts Number of times that the downstream broker will try to reconnect to the upstream broker if the connection fails. 
If this value is reached without a connection being re-established, the upstream broker is considered permanently offline. The downstream broker no longer routes messages to the upstream broker. The default value is -1 , which means that there is no limit. retry-interval Period, in milliseconds, between subsequent reconnection attempts, if connection to the remote broker has failed. The default value is 500 . retry-interval-multiplier Multiplying factor that is applied to the value of the retry-interval parameter. The default value is 1 . share-connection If there is both a downstream and upstream connection configured for the same broker, then the same connection will be shared, as long as both of the downstream and upstream configurations set the value of this parameter to true . The default value is false . On the local broker, add connectors to the remote brokers. These are the connectors referenced in the static-connectors elements of your federated address configuration. For example: <connectors> <connector name="eu-west-1-connector">tcp://localhost:61616</connector> <connector name="eu-east-1-connector">tcp://localhost:61617</connector> </connectors> 4.22.4.2.3. Configuring downstream address federation The following example shows how to configure downstream address federation for standalone brokers. Downstream address federation enables you to add configuration on the local broker that one or more remote brokers use to connect back to the local broker. The advantage of this approach is that you can keep all federation configuration on a single broker. This might be a useful approach for a hub-and-spoke topology, for example. Note Downstream address federation reverses the direction of the federation connection versus upstream address configuration. Therefore, when you add remote brokers to your configuration, these become considered as the downstream brokers. The downstream brokers use the connection information in the configuration to connect back to the local broker, which is now considered to be upstream. This is illustrated later in this example, when you add configuration for the remote brokers. Prerequisites You should be familiar with the configuration for upstream address federation. See Section 4.22.4.2.2, "Configuring upstream address federation" . The following example shows how to configure address federation between standalone brokers. However, you should also be familiar with the requirements for configuring federation for a broker cluster . For more information, see Section 4.22.4.2.1, "Configuring federation for a broker cluster" . Procedure On the local broker, open the <broker_instance_dir> /etc/broker.xml configuration file. Add a <federations> element that includes a <federation> element. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> </federation> </federations> Add an address policy configuration. For example: <federations> ... <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <address-policy name="news-address-federation" max-hops="1" auto-delete="true" auto-delete-delay="300000" auto-delete-message-count="-1" transformer-ref="news-transformer"> <include address-match="queue.bbc.new" /> <include address-match="queue.usatoday" /> <include address-match="queue.news.#" /> <exclude address-match="queue.news.sport.#" /> </address-policy> </federation> ... </federations> If you want to transform messages before transmission, add a transformer configuration. 
For example: <federations> ... <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <address-policy name="news-address-federation" max-hops="1" auto-delete="true" auto-delete-delay="300000" auto-delete-message-count="-1" transformer-ref="news-transformer"> <include address-match="queue.bbc.new" /> <include address-match="queue.usatoday" /> <include address-match="queue.news.#" /> <exclude address-match="queue.news.sport.#" /> </address-policy> <transformer name="news-transformer"> <class-name>org.myorg.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> ... </federations> Add a downstream element for each remote broker. For example: <federations> ... <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <downstream name="eu-east-1"> <static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <upstream-connector-ref>netty-connector</upstream-connector-ref> <policy ref="news-address-federation"/> </downstream> <downstream name="eu-west-1" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <upstream-connector-ref>netty-connector</upstream-connector-ref> <policy ref="news-address-federation"/> </downstream> <address-policy name="news-address-federation" max-hops="1" auto-delete="true" auto-delete-delay="300000" auto-delete-message-count="-1" transformer-ref="news-transformer"> <include address-match="queue.bbc.new" /> <include address-match="queue.usatoday" /> <include address-match="queue.news.#" /> <exclude address-match="queue.news.sport.#" /> </address-policy> <transformer name="news-transformer"> <class-name>org.myorg.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> ... </federations> As shown in the preceding configuration, the remote brokers are now considered to be downstream of the local broker. The downstream brokers use the connection information in the configuration to connect back to the local (that is, upstream ) broker. On the local broker, add connectors and acceptors used by the local and remote brokers to establish the federation connection. For example: <connectors> <connector name="netty-connector">tcp://localhost:61616</connector> <connector name="eu-west-1-connector">tcp://localhost:61616</connector> <connector name="eu-east-1-connector">tcp://localhost:61617</connector> </connectors> <acceptors> <acceptor name="netty-acceptor">tcp://localhost:61616</acceptor> </acceptors> connector name="netty-connector" Connector configuration that the local broker sends to the remote broker. The remote broker use this configuration to connect back to the local broker. connector name="eu-west-1-connector" , connector name="eu-east-1-connector" Connectors to remote brokers. The local broker uses these connectors to connect to the remote brokers and share the configuration that the remote brokers need to connect back to the local broker. acceptor name="netty-acceptor" Acceptor on the local broker that corresponds to the connector used by the remote broker to connect back to the local broker. 4.22.4.2.4. About queue federation Queue federation provides a way to balance the load of a single queue on a local broker across other, remote brokers. To achieve load balancing, a local broker retrieves messages from remote queues in order to satisfy demand for messages from local consumers. 
An example is shown below. Figure 4.9. Symmetric queue federation The remote queues do not need to be reconfigured and they do not have to be on the same broker or in the same cluster. All of the configuration needed to establish the remote links and the federated queue is on the local broker. 4.22.4.2.4.1. Advantages of queue federation Described below are some reasons you might choose to configure queue federation. Increasing capacity Queue federation can create a "logical" queue that is distributed over many brokers. This logical distributed queue has a much higher capacity than a single queue on a single broker. In this setup, as many messages as possible are consumed from the broker they were originally published to. The system moves messages around in the federation only when load balancing is needed. Deploying multi-region setups In a multi-region setup, you might have a message producer in one region or venue and a consumer in another. However, you should ideally keep producer and consumer connections local to a given region. In this case, you can deploy brokers in each region where producers and consumers are, and use queue federation to move messages over a Wide Area Network (WAN), between regions. An example is shown below. Figure 4.10. Multi-region queue federation Communicating between a secure enterprise LAN and a DMZ In networking security, a demilitarized zone (DMZ) is a physical or logical subnetwork that contains and exposes an enterprise's external-facing services to an untrusted, usually larger, network such as the Internet. The remainder of the enterprise's Local Area Network (LAN) remains isolated from this external network, behind a firewall. In a situation where a number of message producers are in the DMZ and a number of consumers in the secure enterprise LAN, it might not be appropriate to allow the producers to connect to a broker in the secure enterprise LAN. In this case, you could deploy a broker in the DMZ that the producers can publish messages to. Then, the broker in the enterprise LAN can connect to the broker in the DMZ and use federated queues to receive messages from the broker in the DMZ. 4.22.4.2.5. Configuring upstream queue federation The following example shows how to configure upstream queue federation for standalone brokers. In this example, you configure federation from a local (that is, downstream ) broker, to some remote (that is, upstream ) brokers. Prerequisites The following example shows how to configure queue federation between standalone brokers. However, you should also be familiar with the requirements for configuring federation for a broker cluster . For more information, see Section 4.22.4.2.1, "Configuring federation for a broker cluster" . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within a new <federations> element, add a <federation> element. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> </federation> </federations> name Name of the federation configuration. In this example, the name corresponds to the name of the downstream broker. user Shared user name for connection to the upstream brokers. password Shared password for connection to the upstream brokers. Note If user and password credentials differ for upstream brokers, you can separately specify credentials for those brokers when you add them to the configuration. This is described later in this procedure. Within the federation element, add a <queue-policy> element. 
Specify values for properties of the <queue-policy> element. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <queue-policy name="news-queue-federation" include-federated="true" priority-adjustment="-5" transformer-ref="news-transformer"> </queue-policy> </federation> </federations> name Name of the queue policy. All queue policies that are configured on the broker must have unique names. include-federated When the value of this property is set to false , the configuration does not re-federate an already-federated consumer (that is, a consumer on a federated queue). This avoids a situation where in a symmetric or closed-loop topology, there are no non-federated consumers, and messages flow endlessly around the system. You might set the value of this property to true if you do not have a closed-loop topology. For example, suppose that you have a chain of three brokers, BrokerA , BrokerB , and BrokerC , with a producer at BrokerA and a consumer at BrokerC . In this case, you would want BrokerB to re-federate the consumer to BrokerA . priority-adjustment When a consumer connects to a queue, its priority is used when the upstream (that is federated ) consumer is created. The priority of the federated consumer is adjusted by the value of the priority-adjustment property. The default value of this property is -1 , which ensures that the local consumer get prioritized over the federated consumer during load balancing. However, you can change the value of the priority adjustment as needed. If the priority adjustment is insufficient to prevent too many messages from moving to federated consumers, which can cause messages to move back and forth between brokers, you can limit the size of the batches of messages that are moved to the federated consumers. To limit the batch size, set the consumerWindowSize value to 0 on the connection URI of federated consumers. With the consumerWindowSize value set to 0 , AMQ Broker uses the value of the defaultConsumerWindowSize parameter in the address settings for a matching address to determine the batch size of messages that can be moved between brokers. The default value for the defaultConsumerWindowSize attribute is 1048576 bytes. transformer-ref Name of a transformer configuration. You might add a transformer configuration if you want to transform messages during federated message transmission. Transformer configuration is described later in this procedure. Within the <queue-policy> element, add address-matching patterns to include and exclude addresses from the queue policy. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <queue-policy name="news-queue-federation" include-federated="true" priority-adjustment="-5" transformer-ref="news-transformer"> <include queue-match="#" address-match="queue.bbc.new" /> <include queue-match="#" address-match="queue.usatoday" /> <include queue-match="#" address-match="queue.news.#" /> <exclude queue-match="#.local" address-match="#" /> </queue-policy> </federation> </federations> include The value of the address-match property of this element specifies addresses to include in the queue policy. You can specify an exact address, for example, queue.bbc.new or queue.usatoday . Or, you can use a wildcard expression to specify a matching set of addresses. In the preceding example, the queue policy also includes all address names that start with the string queue.news . 
In combination with the address-match property, you can use the queue-match property to include specific queues on those addresses in the queue policy. Like the address-match property, you can specify an exact queue name, or you can use a wildcard expression to specify a set of queues. In the preceding example, the number sign ( # ) wildcard character means that all queues on each address or set of addresses are included in the queue policy. exclude The value of the address-match property of this element specifies addresses to exclude from the queue policy. You can specify an exact address or use a wildcard expression to specify a matching set of addresses. In the preceding example, the number sign ( # ) wildcard character means that any queues that match the queue-match property across all addresses are excluded. In this case, any queue that ends with the string .local is excluded. This indicates that certain queues are kept as local queues, and not federated. Within the federation element, add a transformer element to reference a custom transformer implementation. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <queue-policy name="news-queue-federation" include-federated="true" priority-adjustment="-5" transformer-ref="news-transformer"> <include queue-match="#" address-match="queue.bbc.new" /> <include queue-match="#" address-match="queue.usatoday" /> <include queue-match="#" address-match="queue.news.#" /> <exclude queue-match="#.local" address-match="#" /> </queue-policy> <transformer name="news-transformer"> <class-name>org.myorg.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> </federations> name Name of the transformer configuration. This name must be unique on the broker in question. You specify this name as a value for the transformer-ref property of the address policy. class-name Name of a user-defined class that implements the org.apache.activemq.artemis.core.server.transformer.Transformer interface. The transformer's transform() method is invoked with the message before the message is transmitted. This enables you to transform the message header or body before it is federated. property Used to hold key-value pairs for specific transformer configuration. Within the federation element, add one or more upstream elements. Each upstream element defines an upstream broker connection and the policies to apply to that connection. 
For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <upstream name="eu-east-1"> <static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <policy ref="news-queue-federation"/> </upstream> <upstream name="eu-west-1" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <policy ref="news-queue-federation"/> </upstream> <queue-policy name="news-queue-federation" include-federated="true" priority-adjustment="-5" transformer-ref="news-transformer"> <include queue-match="#" address-match="queue.bbc.new" /> <include queue-match="#" address-match="queue.usatoday" /> <include queue-match="#" address-match="queue.news.#" /> <exclude queue-match="#.local" address-match="#" /> </queue-policy> <transformer name="news-transformer"> <class-name>org.myorg.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> </federations> static-connectors Contains a list of connector-ref elements that reference connector elements that are defined elsewhere in the broker.xml configuration file of the local broker. A connector defines what transport (TCP, SSL, HTTP, and so on) and server connection parameters (host, port, and so on) to use for outgoing connections. The following step of this procedure shows how to add the connectors referenced by the static-connectors elements of your federated queue configuration. policy-ref Name of the queue policy configured on the downstream broker that is applied to the upstream broker. The additional options that you can specify for an upstream element are described below: name Name of the upstream broker configuration. In this example, the names correspond to upstream brokers called eu-east-1 and eu-west-1 . user User name to use when creating the connection to the upstream broker. If not specified, the shared user name that is specified in the configuration of the federation element is used. password Password to use when creating the connection to the upstream broker. If not specified, the shared password that is specified in the configuration of the federation element is used. call-failover-timeout Similar to call-timeout , but used when a call is made during a failover attempt. The default value is -1 , which means that the timeout is disabled. call-timeout Time, in milliseconds, that a federation connection waits for a reply from a remote broker when it transmits a packet that is a blocking call. If this time elapses, the connection throws an exception. The default value is 30000 . check-period Period, in milliseconds, between consecutive "keep-alive" messages that the local broker sends to a remote broker to check the health of the federation connection. If the federation connection is healthy, the remote broker responds to each keep-alive message. If the connection is unhealthy, when the downstream broker fails to receive a response from the upstream broker, a mechanism called a circuit breaker is used to block federated consumers. See the description of the circuit-breaker-timeout parameter for more information. The default value of the check-period parameter is 30000 . circuit-breaker-timeout A single connection between a downstream and upstream broker might be shared by many federated queue and address consumers. In the event that the connection between the brokers is lost, each federated consumer might try to reconnect at the same time. 
To avoid this, a mechanism called a circuit breaker blocks the consumers. When the specified timeout value elapses, the circuit breaker re-tries the connection. If successful, consumers are unblocked. Otherwise, the circuit breaker is applied again. connection-ttl Time, in milliseconds, that a federation connection stays alive if it stops receiving messages from the remote broker. The default value is 60000 . discovery-group-ref As an alternative to defining static connectors for connections to upstream brokers, this element can be used to specify a discovery group that is already configured elsewhere in the broker.xml configuration file. Specifically, you specify an existing discovery group as a value for the discovery-group-name property of this element. For more information about discovery groups, see Section 14.1.6, "Broker discovery methods" . A sketch that uses a discovery group instead of static connectors is shown at the end of this section. ha Specifies whether high availability is enabled for the connection to the upstream broker. If the value of this parameter is set to true , the local broker can connect to any available broker in an upstream cluster and automatically fails over to a backup broker if the live upstream broker shuts down. The default value is false . initial-connect-attempts Number of initial attempts that the downstream broker will make to connect to the upstream broker. If this value is reached without a connection being established, the upstream broker is considered permanently offline. The downstream broker no longer routes messages to the upstream broker. The default value is -1 , which means that there is no limit. max-retry-interval Maximum time, in milliseconds, between subsequent reconnection attempts when the connection to the remote broker fails. The default value is 2000 . reconnect-attempts Number of times that the downstream broker will try to reconnect to the upstream broker if the connection fails. If this value is reached without a connection being re-established, the upstream broker is considered permanently offline. The downstream broker no longer routes messages to the upstream broker. The default value is -1 , which means that there is no limit. retry-interval Period, in milliseconds, between subsequent reconnection attempts if the connection to the remote broker has failed. The default value is 500 . retry-interval-multiplier Multiplying factor that is applied to the value of the retry-interval parameter. The default value is 1 . share-connection If there is both a downstream and upstream connection configured for the same broker, then the same connection will be shared, as long as both of the downstream and upstream configurations set the value of this parameter to true . The default value is false . On the local broker, add connectors to the remote brokers. These are the connectors referenced in the static-connectors elements of your federated queue configuration. For example: <connectors> <connector name="eu-west-1-connector">tcp://localhost:61616</connector> <connector name="eu-east-1-connector">tcp://localhost:61617</connector> </connectors> Note If you want large messages to flow across federation connections, set the consumerWindowSize parameter to -1 on the connectors used for the federation connections. Setting the consumerWindowSize parameter to -1 means that there is no limit set for this parameter, which allows large messages to flow across connections.
For example: <connectors> <connector name="eu-west-1-connector">tcp://localhost:61616?consumerWindowSize=-1</connector> <connector name="eu-east-1-connector">tcp://localhost:61617?consumerWindowSize=-1</connector> </connectors> For more information on large messages, see Chapter 8, Handling large messages . 4.22.4.2.6. Configuring downstream queue federation The following example shows how to configure downstream queue federation. Downstream queue federation enables you to add configuration on the local broker that one or more remote brokers use to connect back to the local broker. The advantage of this approach is that you can keep all federation configuration on a single broker. This might be a useful approach for a hub-and-spoke topology, for example. Note Downstream queue federation reverses the direction of the federation connection versus upstream queue configuration. Therefore, when you add remote brokers to your configuration, these are considered to be the downstream brokers. The downstream brokers use the connection information in the configuration to connect back to the local broker, which is now considered to be upstream. This is illustrated later in this example, when you add configuration for the remote brokers. Prerequisites You should be familiar with the configuration for upstream queue federation. See Section 4.22.4.2.5, "Configuring upstream queue federation" . The following example shows how to configure queue federation between standalone brokers. However, you should also be familiar with the requirements for configuring federation for a broker cluster . For more information, see Section 4.22.4.2.1, "Configuring federation for a broker cluster" . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a <federations> element that includes a <federation> element. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> </federation> </federations> Add a queue policy configuration. For example: <federations> ... <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <queue-policy name="news-queue-federation" priority-adjustment="-5" include-federated="true" transformer-ref="news-transformer"> <include queue-match="#" address-match="queue.bbc.new" /> <include queue-match="#" address-match="queue.usatoday" /> <include queue-match="#" address-match="queue.news.#" /> <exclude queue-match="#.local" address-match="#" /> </queue-policy> </federation> ... </federations> If you want to transform messages before transmission, add a transformer configuration. For example: <federations> ... <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <queue-policy name="news-queue-federation" priority-adjustment="-5" include-federated="true" transformer-ref="news-transformer"> <include queue-match="#" address-match="queue.bbc.new" /> <include queue-match="#" address-match="queue.usatoday" /> <include queue-match="#" address-match="queue.news.#" /> <exclude queue-match="#.local" address-match="#" /> </queue-policy> <transformer name="news-transformer"> <class-name>org.myorg.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> ... </federations> Add a downstream element for each remote broker. For example: <federations> ...
<federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <downstream name="eu-east-1"> <static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <upstream-connector-ref>netty-connector</upstream-connector-ref> <policy ref="news-address-federation"/> </downstream> <downstream name="eu-west-1" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <upstream-connector-ref>netty-connector</upstream-connector-ref> <policy ref="news-address-federation"/> </downstream> <queue-policy name="news-queue-federation" priority-adjustment="-5" include-federated="true" transformer-ref="new-transformer"> <include queue-match="#" address-match="queue.bbc.new" /> <include queue-match="#" address-match="queue.usatoday" /> <include queue-match="#" address-match="queue.news.#" /> <exclude queue-match="#.local" address-match="#" /> </queue-policy> <transformer name="news-transformer"> <class-name>org.myorg.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> ... </federations> As shown in the preceding configuration, the remote brokers are now considered to be downstream of the local broker. The downstream brokers use the connection information in the configuration to connect back to the local (that is, upstream ) broker. On the local broker, add connectors and acceptors used by the local and remote brokers to establish the federation connection. For example: <connectors> <connector name="netty-connector">tcp://localhost:61616</connector> <connector name="eu-west-1-connector">tcp://localhost:61616</connector> <connector name="eu-east-1-connector">tcp://localhost:61617</connector> </connectors> <acceptors> <acceptor name="netty-acceptor">tcp://localhost:61616</acceptor> </acceptors> connector name="netty-connector" Connector configuration that the local broker sends to the remote broker. The remote broker use this configuration to connect back to the local broker. connector name="eu-west-1-connector" , connector name="eu-east-1-connector" Connectors to remote brokers. The local broker uses these connectors to connect to the remote brokers and share the configuration that the remote brokers need to connect back to the local broker. acceptor name="netty-acceptor" Acceptor on the local broker that corresponds to the connector used by the remote broker to connect back to the local broker. Note If you want large messages to flow across the federation connections, set the consumerWindowSize parameter to -1 on the connectors used for the federation connections. Setting the consumerWindowSize parameter to -1 means that there is no limit set for this parameter, which allows large messages to flow across connections. For example: <connectors> <connector name="eu-west-1-connector">tcp://localhost:61616?consumerWindowSize=-1</connector> <connector name="eu-east-1-connector">tcp://localhost:61617?consumerWindowSize=-1</connector> </connectors> For more information on large messages, see Chapter 8, Handling large messages . | [
"<address-setting match=\"my.*\"> <max-delivery-attempts>3</max-delivery-attempts> <last-value-queue>true</last-value-queue> </address-setting> <address-setting match=\"my.destination\"> <last-value-queue>false</last-value-queue> </address-setting>",
"<core> <literal-match-markers>()</literal-match-markers> </core>",
"<address-settings> <address-setting match=\"(orders.#)\"> <enable-metrics>true</enable-metrics> </address-setting> </address-settings>",
"<configuration> <core> <wildcard-addresses> // <enabled>true</enabled> // <delimiter>,</delimiter> // <any-words>@</any-words> // <single-word>USD</single-word> </wildcard-addresses> </core> </configuration>",
"<configuration ...> <core ...> <address name=\"my.anycast.destination\"> <anycast> <queue name=\"my.anycast.destination\"/> </anycast> </address> </core> </configuration>",
"<configuration ...> <core ...> <address name=\"my.anycast.destination\"> <anycast> <queue name=\"q1\"/> <queue name=\"q2\"/> </anycast> </address> </core> </configuration>",
"<configuration ...> <core ...> <address name=\"my.multicast.destination\"> <multicast/> </address> </core> </configuration>",
"<configuration ...> <core ...> <address name=\"my.multicast.destination\"> <multicast> <queue name=\"client123.my.multicast.destination\"/> <queue name=\"client456.my.multicast.destination\"/> </multicast> </address> </core> </configuration>",
"<configuration ...> <core ...> <address name=\"orders\"> <anycast> <queue name=\"orders\"/> </anycast> </address> </core> </configuration>",
"<configuration ...> <core ...> <address name=\"orders\"> <anycast> <queue name=\"orders\"/> </anycast> <multicast/> </address> </core> </configuration>",
"<configuration ...> <core ...> <acceptors> <!-- Acceptor for every supported protocol --> <acceptor name=\"artemis\">tcp://0.0.0.0:61616?protocols=AMQP;anycastPrefix=anycast://</acceptor> </acceptors> </core> </configuration>",
"<configuration ...> <core ...> <acceptors> <!-- Acceptor for every supported protocol --> <acceptor name=\"artemis\">tcp://0.0.0.0:61616?protocols=AMQP;multicastPrefix=multicast://</acceptor> </acceptors> </core> </configuration>",
"<configuration ...> <core ...> <address name=\"my.durable.address\"> <multicast> <queue name=\"q1\"> <durable>true</durable> </queue> </multicast> </address> </core> </configuration>",
"<configuration ...> <core ...> <address name=\"my.non.shared.durable.address\"> <multicast> <queue name=\"orders1\"> <durable>true</durable> </queue> <queue name=\"orders2\"> <durable>true</durable> </queue> </multicast> </address> </core> </configuration>",
"<configuration ...> <core ...> <address name=\"my.non.shared.durable.address\"> <multicast> <queue name=\"orders1\" max-consumers=\"1\"> <durable>true</durable> </queue> <queue name=\"orders2\" max-consumers=\"1\"> <durable>true</durable> </queue> </multicast> </address> </core> </configuration>",
"<configuration ...> <core ...> <address name=\"my.non.durable.address\"> <multicast> <queue name=\"orders1\" purge-on-no-consumers=\"true\"/> </multicast> </address> </core> </configuration>",
"<configuration ...> <core ...> <address-settings> <address-setting match=\"activemq.#\"> <auto-create-addresses>true</auto-create-addresses> <auto-delete-addresses>true</auto-delete-addresses> <auto-create-queues>true</auto-create-queues> <auto-delete-queues>true</auto-delete-queues> <default-address-routing-type>ANYCAST</default-address-routing-type> </address-setting> </address-settings> </core> </configuration>",
"<configuration ...> <core ...> <addresses> <address name=\"my.address\"> <anycast> <queue name=\"q1\" /> <queue name=\"q2\" /> </anycast> </address> </addresses> </core> </configuration>",
"String FQQN = \"my.address::q1\"; Queue q1 session.createQueue(FQQN); MessageConsumer consumer = session.createConsumer(q1);",
"<configuration ...> <core ...> <addresses> <address name=\"my.sharded.address\"></address> </addresses> </core> </configuration>",
"<configuration ...> <core ...> <addresses> <address name=\"my.sharded.address\"> <anycast> <queue name=\"q1\" /> <queue name=\"q2\" /> <queue name=\"q3\" /> </anycast> </address> </addresses> </core> </configuration>",
"<address name=\"my.address\"> <multicast> <queue name=\"prices1\" last-value-key=\"stock_ticker\"/> </multicast> </address>",
"<address name=\"my.address\"> <multicast> <queue name=\"prices1\" last-value=\"true\"/> </multicast> </address>",
"<address-setting match=\"lastValue\"> <default-last-value-key>stock_ticker</default-last-value-key> </address-setting>",
"<address-setting match=\"lastValue.*\"> <default-last-value-key>stock_ticker</default-last-value-key> </address-setting>",
"<address-setting match=\"lastValue\"> <default-last-value-queue>true</default-last-value-queue> </address-setting>",
"<address name=\"my.address\"> <multicast> <queue name=\"prices1\" last-value-key=\"stock_ticker\"/> </multicast> </address>",
"TextMessage message = session.createTextMessage(\"First message with last value property set\"); message.setStringProperty(\"stock_ticker\", \"ATN\"); message.setStringProperty(\"stock_price\", \"36.83\"); producer.send(message);",
"TextMessage message = session.createTextMessage(\"Second message with last value property set\"); message.setStringProperty(\"stock_ticker\", \"ATN\"); message.setStringProperty(\"stock_price\", \"37.02\"); producer.send(message);",
"TextMessage messageReceived = (TextMessage)messageConsumer.receive(5000); System.out.format(\"Received message: %s\\n\", messageReceived.getText());",
"<address name=\"my.address\"> <multicast> <queue name=\"orders1\" last-value-key=\"stock_ticker\" non-destructive=\"true\" /> </multicast> </address>",
"<address-setting match=\"lastValue\"> <default-last-value-key>stock_ticker </default-last-value-key> <default-non-destructive>true</default-non-destructive> </address-setting>",
"<configuration ...> <core ...> <message-expiry-scan-period>1000</message-expiry-scan-period>",
"<configuration ...> <core ...> <address-settings> <address-setting match=\"stocks\"> <expiry-address>ExpiryAddress</expiry-address> <expiry-delay>10</expiry-delay> </address-setting> <address-settings> <configuration ...>",
"<configuration ...> <core ...> <address-settings> <address-setting match=\"stocks\"> <expiry-address>ExpiryAddress</expiry-address> <min-expiry-delay>10</min-expiry-delay> <max-expiry-delay>100</max-expiry-delay> </address-setting> <address-settings> <configuration ...>",
"<addresses> <address name=\"ExpiryAddress\"> <anycast> <queue name=\"ExpiryQueue\"/> </anycast> </address> </addresses>",
"<configuration ...> <core ...> <address-settings> <address-setting match=\"stocks\"> <expiry-address>ExpiryAddress</expiry-address> </address-setting> <address-settings> <configuration ...>",
"<configuration ...> <core ...> <address-settings> <address-setting match=\"stocks\"> <expiry-address>ExpiryAddress</expiry-address> <auto-create-expiry-resources>true</auto-create-expiry-resources> <expiry-queue-prefix>EXP.</expiry-queue-prefix> <expiry-queue-suffix></expiry-queue-suffix> </address-setting> <address-settings> <configuration ...>",
"<configuration ...> <core ...> <address-settings> <address-setting match=\"exampleQueue\"> <dead-letter-address>DLA</dead-letter-address> <max-delivery-attempts>3</max-delivery-attempts> </address-setting> <address-settings> <configuration ...>",
"<configuration ...> <core ...> <addresses> <address name=\"DLA\"> <anycast> <queue name=\"DLQ\" /> </anycast> </address> </addresses> </core> </configuration>",
"<configuration ...> <core ...> <address-settings> <address-setting match=\"exampleQueue\"> <dead-letter-address>DLA</dead-letter-address> <max-delivery-attempts>3</max-delivery-attempts> </address-setting> <address-settings> <configuration ...>",
"<configuration ...> <core ...> <address-settings> <address-setting match=\"exampleQueue\"> <dead-letter-address>DLA</dead-letter-address> <max-delivery-attempts>3</max-delivery-attempts> <auto-create-dead-letter-resources>true</auto-create-dead-letter-resources> <dead-letter-queue-prefix>DLQ.</dead-letter-queue-prefix> <dead-letter-queue-suffix></dead-letter-queue-suffix> </address-setting> <address-settings> <configuration ...>",
"<addresses> <address name=\"orders\"> <multicast> <queue name=\"orders\" enabled=\"false\"/> </multicast> </address> </addresses>",
"<configuration ...> <core ...> <addresses> <address name=\"my.address\"> <anycast> <queue name=\"q3\" max-consumers=\"20\"/> </anycast> </address> </addresses> </core> </configuration>",
"<configuration ...> <core ...> <address name=\"my.address\"> <anycast> <queue name=\"q3\" max-consumers=\"1\"/> </anycast> </address> </core> </configuration>",
"<configuration ...> <core ...> <address name=\"my.address\"> <anycast> <queue name=\"q3\" max-consumers=\"-1\"/> </anycast> </address> </core> </configuration>",
"<configuration ...> <core ...> <address name=\"my.address\"> <multicast> <queue name=\"orders1\" exclusive=\"true\"/> </multicast> </address> </core> </configuration>",
"<address-setting match=\"myAddress\"> <default-exclusive-queue>true</default-exclusive-queue> </address-setting>",
"<address-setting match=\"myAddress.*\"> <default-exclusive-queue>true</default-exclusive-queue> </address-setting>",
"<temporary-queue-namespace>temp-example</temporary-queue-namespace>",
"<address-settings> <address-setting match=\"temp-example.#\"> <enable-metrics>false</enable-metrics> </address-setting> </address-settings>",
"<address-settings> <address-setting match=\"ring.#\"> <default-ring-size>3</default-ring-size> </address-setting> </address-settings>",
"<addresses> <address name=\"myRing\"> <anycast> <queue name=\"myRing\" ring-size=\"5\" /> </anycast> </address> </addresses>",
"<configuration> <core> <address-settings> <address-setting match=\"orders\"> <retroactive-message-count>100</retroactive-message-count> </address-setting> </address-settings> </core> </configuration>",
"<acceptor name=\"artemis\">tcp://127.0.0.1:61616?protocols=CORE,AMQP,OPENWIRE;supportAdvisory=false;suppressInternalManagementObjects=false</acceptor>",
"<broker-connections> <amqp-connection uri=\"tcp://<__HOST__>:<__PORT__>\" user=\"federation_user\" password=\"federation_pwd\" name=\"queue-federation-example\"> </amqp-connection> </broker-connections>",
"<broker-connections> <amqp-connection uri=\"tcp://<__HOST__>:<__PORT__>\" user=\"federation_user\" password=\"federation_pwd\" name=\"queue-federation-example\"> <federation> <local-address-policy name=\"example-local-address-policy\" auto-delete=\"true\" auto-delete-delay=\"1\" auto-delete-message-count=\"2\" max-hops=\"1\" enable-divert-bindings=\"true\"> <include address-match=\"queue.news.#\" /> <include address-match=\"queue.bbc.news\" /> <exclude address-match=\"queue.news.sport.#\" /> </local-address-policy> <remote-address-policy name=\"example-remote-address-policy\"> <include address-match=\"queue.usatoday\" /> </remote-address-policy> </federation> </amqp-connection> </broker-connections>",
"<broker-connections> <amqp-connection uri=\"tcp://<__HOST__>:<__PORT__>\" user=\"federation_user\" password=\"federation_pwd\" name=\"queue-federation-example\"> </amqp-connection> </broker-connections>",
"<broker-connections> <amqp-connection uri=\"tcp://HOST:PORT\" name=\"federation-example\"> <federation> <local-queue-policy name=\"example-local-queue-policy\"> <include address-match=\"#\" queue-match=\"#.remote\" /> <exclude address-match=\"#\" queue-match=\"#.local\" /> </local-queue-policy> </federation> </amqp-connection> </broker-connections>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> </federation> </federations>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <address-policy name=\"news-address-federation\" auto-delete=\"true\" auto-delete-delay=\"300000\" auto-delete-message-count=\"-1\" enable-divert-bindings=\"false\" max-hops=\"1\" transformer-ref=\"news-transformer\"> </address-policy> </federation> </federations>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <address-policy name=\"news-address-federation\" auto-delete=\"true\" auto-delete-delay=\"300000\" auto-delete-message-count=\"-1\" enable-divert-bindings=\"false\" max-hops=\"1\" transformer-ref=\"news-transformer\"> <include address-match=\"queue.bbc.new\" /> <include address-match=\"queue.usatoday\" /> <include address-match=\"queue.news.#\" /> <exclude address-match=\"queue.news.sport.#\" /> </address-policy> </federation> </federations>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <address-policy name=\"news-address-federation\" auto-delete=\"true\" auto-delete-delay=\"300000\" auto-delete-message-count=\"-1\" enable-divert-bindings=\"false\" max-hops=\"1\" transformer-ref=\"news-transformer\"> <include address-match=\"queue.bbc.new\" /> <include address-match=\"queue.usatoday\" /> <include address-match=\"queue.news.#\" /> <exclude address-match=\"queue.news.sport.#\" /> </address-policy> <transformer name=\"news-transformer\"> <class-name>org.myorg.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <upstream name=\"eu-east-1\"> <static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <policy ref=\"news-address-federation\"/> </upstream> <upstream name=\"eu-west-1\" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <policy ref=\"news-address-federation\"/> </upstream> <address-policy name=\"news-address-federation\" auto-delete=\"true\" auto-delete-delay=\"300000\" auto-delete-message-count=\"-1\" enable-divert-bindings=\"false\" max-hops=\"1\" transformer-ref=\"news-transformer\"> <include address-match=\"queue.bbc.new\" /> <include address-match=\"queue.usatoday\" /> <include address-match=\"queue.news.#\" /> <exclude address-match=\"queue.news.sport.#\" /> </address-policy> <transformer name=\"news-transformer\"> <class-name>org.myorg.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>",
"<connectors> <connector name=\"eu-west-1-connector\">tcp://localhost:61616</connector> <connector name=\"eu-east-1-connector\">tcp://localhost:61617</connector> </connectors>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> </federation> </federations>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <address-policy name=\"news-address-federation\" max-hops=\"1\" auto-delete=\"true\" auto-delete-delay=\"300000\" auto-delete-message-count=\"-1\" transformer-ref=\"news-transformer\"> <include address-match=\"queue.bbc.new\" /> <include address-match=\"queue.usatoday\" /> <include address-match=\"queue.news.#\" /> <exclude address-match=\"queue.news.sport.#\" /> </address-policy> </federation> </federations>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <address-policy name=\"news-address-federation\" max-hops=\"1\" auto-delete=\"true\" auto-delete-delay=\"300000\" auto-delete-message-count=\"-1\" transformer-ref=\"news-transformer\"> <include address-match=\"queue.bbc.new\" /> <include address-match=\"queue.usatoday\" /> <include address-match=\"queue.news.#\" /> <exclude address-match=\"queue.news.sport.#\" /> </address-policy> <transformer name=\"news-transformer\"> <class-name>org.myorg.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <downstream name=\"eu-east-1\"> <static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <upstream-connector-ref>netty-connector</upstream-connector-ref> <policy ref=\"news-address-federation\"/> </downstream> <downstream name=\"eu-west-1\" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <upstream-connector-ref>netty-connector</upstream-connector-ref> <policy ref=\"news-address-federation\"/> </downstream> <address-policy name=\"news-address-federation\" max-hops=\"1\" auto-delete=\"true\" auto-delete-delay=\"300000\" auto-delete-message-count=\"-1\" transformer-ref=\"news-transformer\"> <include address-match=\"queue.bbc.new\" /> <include address-match=\"queue.usatoday\" /> <include address-match=\"queue.news.#\" /> <exclude address-match=\"queue.news.sport.#\" /> </address-policy> <transformer name=\"news-transformer\"> <class-name>org.myorg.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>",
"<connectors> <connector name=\"netty-connector\">tcp://localhost:61616</connector> <connector name=\"eu-west-1-connector\">tcp://localhost:61616</connector> <connector name=\"eu-east-1-connector\">tcp://localhost:61617</connector> </connectors> <acceptors> <acceptor name=\"netty-acceptor\">tcp://localhost:61616</acceptor> </acceptors>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> </federation> </federations>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <queue-policy name=\"news-queue-federation\" include-federated=\"true\" priority-adjustment=\"-5\" transformer-ref=\"news-transformer\"> </queue-policy> </federation> </federations>",
"tcp://<host>:<port>?consumerWindowSize=0",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <queue-policy name=\"news-queue-federation\" include-federated=\"true\" priority-adjustment=\"-5\" transformer-ref=\"news-transformer\"> <include queue-match=\"#\" address-match=\"queue.bbc.new\" /> <include queue-match=\"#\" address-match=\"queue.usatoday\" /> <include queue-match=\"#\" address-match=\"queue.news.#\" /> <exclude queue-match=\"#.local\" address-match=\"#\" /> </queue-policy> </federation> </federations>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <queue-policy name=\"news-queue-federation\" include-federated=\"true\" priority-adjustment=\"-5\" transformer-ref=\"news-transformer\"> <include queue-match=\"#\" address-match=\"queue.bbc.new\" /> <include queue-match=\"#\" address-match=\"queue.usatoday\" /> <include queue-match=\"#\" address-match=\"queue.news.#\" /> <exclude queue-match=\"#.local\" address-match=\"#\" /> </queue-policy> <transformer name=\"news-transformer\"> <class-name>org.myorg.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <upstream name=\"eu-east-1\"> <static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <policy ref=\"news-queue-federation\"/> </upstream> <upstream name=\"eu-west-1\" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <policy ref=\"news-queue-federation\"/> </upstream> <queue-policy name=\"news-queue-federation\" include-federated=\"true\" priority-adjustment=\"-5\" transformer-ref=\"news-transformer\"> <include queue-match=\"#\" address-match=\"queue.bbc.new\" /> <include queue-match=\"#\" address-match=\"queue.usatoday\" /> <include queue-match=\"#\" address-match=\"queue.news.#\" /> <exclude queue-match=\"#.local\" address-match=\"#\" /> </queue-policy> <transformer name=\"news-transformer\"> <class-name>org.myorg.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>",
"<connectors> <connector name=\"eu-west-1-connector\">tcp://localhost:61616</connector> <connector name=\"eu-east-1-connector\">tcp://localhost:61617</connector> </connectors>",
"<connectors> <connector name=\"eu-west-1-connector\">tcp://localhost:61616?consumerWindowSize=-1</connector> <connector name=\"eu-east-1-connector\">tcp://localhost:61617?consumerWindowSize=-1</connector> </connectors>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> </federation> </federations>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <queue-policy name=\"news-queue-federation\" priority-adjustment=\"-5\" include-federated=\"true\" transformer-ref=\"new-transformer\"> <include queue-match=\"#\" address-match=\"queue.bbc.new\" /> <include queue-match=\"#\" address-match=\"queue.usatoday\" /> <include queue-match=\"#\" address-match=\"queue.news.#\" /> <exclude queue-match=\"#.local\" address-match=\"#\" /> </queue-policy> </federation> </federations>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <queue-policy name=\"news-queue-federation\" priority-adjustment=\"-5\" include-federated=\"true\" transformer-ref=\"news-transformer\"> <include queue-match=\"#\" address-match=\"queue.bbc.new\" /> <include queue-match=\"#\" address-match=\"queue.usatoday\" /> <include queue-match=\"#\" address-match=\"queue.news.#\" /> <exclude queue-match=\"#.local\" address-match=\"#\" /> </queue-policy> <transformer name=\"news-transformer\"> <class-name>org.myorg.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>",
"<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <downstream name=\"eu-east-1\"> <static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <upstream-connector-ref>netty-connector</upstream-connector-ref> <policy ref=\"news-address-federation\"/> </downstream> <downstream name=\"eu-west-1\" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <upstream-connector-ref>netty-connector</upstream-connector-ref> <policy ref=\"news-address-federation\"/> </downstream> <queue-policy name=\"news-queue-federation\" priority-adjustment=\"-5\" include-federated=\"true\" transformer-ref=\"new-transformer\"> <include queue-match=\"#\" address-match=\"queue.bbc.new\" /> <include queue-match=\"#\" address-match=\"queue.usatoday\" /> <include queue-match=\"#\" address-match=\"queue.news.#\" /> <exclude queue-match=\"#.local\" address-match=\"#\" /> </queue-policy> <transformer name=\"news-transformer\"> <class-name>org.myorg.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>",
"<connectors> <connector name=\"netty-connector\">tcp://localhost:61616</connector> <connector name=\"eu-west-1-connector\">tcp://localhost:61616</connector> <connector name=\"eu-east-1-connector\">tcp://localhost:61617</connector> </connectors> <acceptors> <acceptor name=\"netty-acceptor\">tcp://localhost:61616</acceptor> </acceptors>",
"<connectors> <connector name=\"eu-west-1-connector\">tcp://localhost:61616?consumerWindowSize=-1</connector> <connector name=\"eu-east-1-connector\">tcp://localhost:61617?consumerWindowSize=-1</connector> </connectors>"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/configuring_amq_broker/assembly-br-configuring-addresses-and-queues_configuring |
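As noted in the priority-adjustment description earlier in this section, you can cap the batch size of messages moved to federated consumers by setting consumerWindowSize=0 on the federation connector URI and letting the address settings control the window. The following is a minimal sketch of that combination; the connector host and port, the queue.news.# match, and the 100000 byte window are illustrative values rather than values taken from the procedure, and the default-consumer-window-size element is assumed here to be the address-setting form of the defaultConsumerWindowSize parameter mentioned above.
<!-- Federation connector with batching deferred to the address settings
     (illustrative host and port). -->
<connectors>
    <connector name="eu-east-connector1">tcp://localhost:61617?consumerWindowSize=0</connector>
</connectors>
<!-- Matching address setting that then determines the batch size;
     100000 bytes is an example value (the documented default is 1048576 bytes). -->
<address-settings>
    <address-setting match="queue.news.#">
        <default-consumer-window-size>100000</default-consumer-window-size>
    </address-setting>
</address-settings>
With this in place, messages move to the federated consumer in batches of at most the configured window, which limits messages moving back and forth between brokers.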
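The upstream options described in the procedure, such as ha, check-period, and discovery-group-ref, are not shown in the procedure's examples. The sketch below illustrates one possible upstream definition that uses a discovery group instead of static connectors. The discovery group name news-dg is hypothetical, and placing the tuning options as attributes of the upstream element is an assumption based on the attribute style used by the queue-policy examples; check the broker.xml schema for your version before relying on the exact placement.
<federations>
    <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9">
        <!-- Upstream that locates brokers through an existing discovery group
             (news-dg is a hypothetical discovery group defined elsewhere in broker.xml). -->
        <upstream name="eu-east-1" ha="true" check-period="30000">
            <discovery-group-ref discovery-group-name="news-dg"/>
            <policy ref="news-queue-federation"/>
        </upstream>
        <queue-policy name="news-queue-federation" priority-adjustment="-5">
            <include queue-match="#" address-match="queue.news.#"/>
        </queue-policy>
    </federation>
</federations>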
Chapter 9. ImageStreamTag [image.openshift.io/v1] | Chapter 9. ImageStreamTag [image.openshift.io/v1] Description ImageStreamTag represents an Image that is retrieved by tag name from an ImageStream. Use this resource to interact with the tags and images in an image stream by tag, or to see the image details for a particular tag. The image associated with this resource is the most recently successfully tagged, imported, or pushed image (as described in the image stream status.tags.items list for this tag). If an import is in progress or has failed the image will be shown. Deleting an image stream tag clears both the status and spec fields of an image stream. If no image can be retrieved for a given tag, a not found error will be returned. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required tag generation lookupPolicy image 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array conditions is an array of conditions that apply to the image stream tag. conditions[] object TagEventCondition contains condition information for a tag event. generation integer generation is the current generation of the tagged image - if tag is provided and this value is not equal to the tag generation, a user has requested an import that has not completed, or conditions will be filled out indicating any error. image object Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds lookupPolicy object ImageLookupPolicy describes how an image stream can be used to override the image references used by pods, builds, and other resources in a namespace. metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata tag object TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. 9.1.1. .conditions Description conditions is an array of conditions that apply to the image stream tag. Type array 9.1.2. .conditions[] Description TagEventCondition contains condition information for a tag event. 
Type object Required type status generation Property Type Description generation integer Generation is the spec tag generation that this status corresponds to lastTransitionTime Time LastTransitionTIme is the time the condition transitioned from one status to another. message string Message is a human readable description of the details about last transition, complementing reason. reason string Reason is a brief machine readable explanation for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of tag event condition, currently only ImportSuccess 9.1.3. .image Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 9.1.4. .image.dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 9.1.5. .image.dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 9.1.6. .image.dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 9.1.7. .image.dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field repreenting a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 9.1.8. .image.signatures Description Signatures holds all signatures of the image. Type array 9.1.9. .image.signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 9.1.10. .image.signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 9.1.11. .image.signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 9.1.12. .image.signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 9.1.13. .image.signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 9.1.14. .lookupPolicy Description ImageLookupPolicy describes how an image stream can be used to override the image references used by pods, builds, and other resources in a namespace. 
Type object Required local Property Type Description local boolean local will change the docker short image references (like "mysql" or "php:latest") on objects in this namespace to the image ID whenever they match this image stream, instead of reaching out to a remote registry. The name will be fully qualified to an image ID if found. The tag's referencePolicy is taken into account on the replaced value. Only works within the current namespace. 9.1.15. .tag Description TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. Type object Required name Property Type Description annotations object (string) Optional; if specified, annotations that are applied to images retrieved via ImageStreamTags. from ObjectReference Optional; if specified, a reference to another image that this tag should point to. Valid values are ImageStreamTag, ImageStreamImage, and DockerImage. ImageStreamTag references can only reference a tag within this same ImageStream. generation integer Generation is a counter that tracks mutations to the spec tag (user intent). When a tag reference is changed the generation is set to match the current stream generation (which is incremented every time spec is changed). Other processes in the system like the image importer observe that the generation of spec tag is newer than the generation recorded in the status and use that as a trigger to import the newest remote tag. To trigger a new import, clients may set this value to zero which will reset the generation to the latest stream generation. Legacy clients will send this value as nil which will be merged with the current tag generation. importPolicy object TagImportPolicy controls how images related to this tag will be imported. name string Name of the tag. reference boolean Reference states if the tag will be imported. Default value is false, which means the tag will be imported. referencePolicy object TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. 9.1.16. .tag.importPolicy Description TagImportPolicy controls how images related to this tag will be imported. Type object Property Type Description importMode string ImportMode describes how to import an image manifest. insecure boolean Insecure is true if the server may bypass certificate verification or connect directly over HTTP during image import. scheduled boolean Scheduled indicates to the server that this tag should be periodically checked to ensure it is up to date, and imported. 9.1.17. .tag.referencePolicy Description TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. Type object Required type Property Type Description type string Type determines how the image pull spec should be transformed when the image stream tag is used in deployment config triggers or new builds. The default value is Source , indicating the original location of the image should be used (if imported). The user may also specify Local , indicating that the pull spec should point to the integrated container image registry and leverage the registry's ability to proxy the pull to an upstream registry.
Local allows the credentials used to pull this image to be managed from the image stream's namespace, so others on the platform can access a remote image but have no access to the remote secret. It also allows the image layers to be mirrored into the local registry which the images can still be pulled even if the upstream registry is unavailable. 9.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/imagestreamtags GET : list objects of kind ImageStreamTag /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags GET : list objects of kind ImageStreamTag POST : create an ImageStreamTag /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags/{name} DELETE : delete an ImageStreamTag GET : read the specified ImageStreamTag PATCH : partially update the specified ImageStreamTag PUT : replace the specified ImageStreamTag 9.2.1. /apis/image.openshift.io/v1/imagestreamtags Table 9.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. 
This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind ImageStreamTag Table 9.2. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTagList schema 401 - Unauthorized Empty 9.2.2. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags Table 9.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description list objects of kind ImageStreamTag Table 9.5. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.6. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTagList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageStreamTag Table 9.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.8. Body parameters Parameter Type Description body ImageStreamTag schema Table 9.9. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTag schema 201 - Created ImageStreamTag schema 202 - Accepted ImageStreamTag schema 401 - Unauthorized Empty 9.2.3. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags/{name} Table 9.10. Global path parameters Parameter Type Description name string name of the ImageStreamTag namespace string object name and auth scope, such as for teams and projects Table 9.11. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an ImageStreamTag Table 9.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 9.13. Body parameters Parameter Type Description body DeleteOptions schema Table 9.14. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageStreamTag Table 9.15. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTag schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageStreamTag Table 9.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 9.17. Body parameters Parameter Type Description body Patch schema Table 9.18. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTag schema 201 - Created ImageStreamTag schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageStreamTag Table 9.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.20. Body parameters Parameter Type Description body ImageStreamTag schema Table 9.21. HTTP responses HTTP code Reponse body 200 - OK ImageStreamTag schema 201 - Created ImageStreamTag schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/image_apis/imagestreamtag-image-openshift-io-v1 |
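As an illustration of the endpoints documented above, the following Python sketch issues list, read, and delete requests against the documented paths using the requests library. The API server URL, bearer token, CA bundle, namespace, and tag name are placeholders rather than values taken from this reference, and the delete call sets the documented dryRun query parameter so that nothing is persisted:

import requests

API = "https://api.example.com:6443"            # hypothetical API server URL
TOKEN = "sha256~REPLACE_ME"                     # hypothetical bearer token
HEADERS = {"Authorization": "Bearer " + TOKEN}
CA = "/path/to/ca.crt"                          # hypothetical CA bundle used for TLS verification

# GET /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags
# list objects of kind ImageStreamTag in one namespace, at most 10 items
resp = requests.get(
    API + "/apis/image.openshift.io/v1/namespaces/myproject/imagestreamtags",
    headers=HEADERS, params={"limit": 10}, verify=CA)
resp.raise_for_status()
for item in resp.json().get("items", []):
    print(item["metadata"]["name"])

# GET /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamtags/{name}
# read the specified ImageStreamTag; "myapp:latest" is an illustrative name
tag = requests.get(
    API + "/apis/image.openshift.io/v1/namespaces/myproject/imagestreamtags/myapp:latest",
    headers=HEADERS, verify=CA).json()

# DELETE the same object; dryRun=All means no modification is persisted
deleted = requests.delete(
    API + "/apis/image.openshift.io/v1/namespaces/myproject/imagestreamtags/myapp:latest",
    headers=HEADERS, params={"dryRun": "All"}, verify=CA)
print(deleted.status_code)

Any HTTP client can be used in the same way; the essential information is the path layout, the query parameters, and the response codes listed in the tables above.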
Chapter 22. OpenLMI | Chapter 22. OpenLMI The Open Linux Management Infrastructure , commonly abbreviated as OpenLMI , is a common infrastructure for the management of Linux systems. It builds on top of existing tools and serves as an abstraction layer in order to hide much of the complexity of the underlying system from system administrators. OpenLMI is distributed with a set of services that can be accessed locally or remotely and provides multiple language bindings, standard APIs, and standard scripting interfaces that can be used to manage and monitor hardware, operating systems, and system services. 22.1. About OpenLMI OpenLMI is designed to provide a common management interface to production servers running the Red Hat Enterprise Linux system on both physical and virtual machines. It consists of the following three components: System management agents - these agents are installed on a managed system and implement an object model that is presented to a standard object broker. The initial agents implemented in OpenLMI include storage configuration and network configuration, but later work will address additional elements of system management. The system management agents are commonly referred to as Common Information Model providers or CIM providers . A standard object broker - the object broker manages system management agents and provides an interface to them. The standard object broker is also known as a CIM Object Monitor or CIMOM . Client applications and scripts - the client applications and scripts call the system management agents through the standard object broker. The OpenLMI project complements existing management initiatives by providing a low-level interface that can be used by scripts or system management consoles. Interfaces distributed with OpenLMI include C, C++, Python, Java, and an interactive command line client, and all of them offer the same full access to the capabilities implemented in each agent. This ensures that you always have access to exactly the same capabilities no matter which programming interface you decide to use. 22.1.1. Main Features The following are key benefits of installing and using OpenLMI on your system: OpenLMI provides a standard interface for configuration, management, and monitoring of your local and remote systems. It allows you to configure, manage, and monitor production servers running on both physical and virtual machines. It is distributed with a collection of CIM providers that allow you to configure, manage, and monitor storage devices and complex networks. It allows you to call system management functions from C, C++, Python, and Java programs, and includes LMIShell, which provides a command line interface. It is free software based on open industry standards. 22.1.2. Management Capabilities Key capabilities of OpenLMI include the management of storage devices, networks, system services, user accounts, hardware and software configuration, power management, and interaction with Active Directory. For a complete list of CIM providers that are distributed with Red Hat Enterprise Linux 7, see Table 22.1, "Available CIM Providers" . Table 22.1. Available CIM Providers Package Name Description openlmi-account A CIM provider for managing user accounts. openlmi-logicalfile A CIM provider for reading files and directories. openlmi-networking A CIM provider for network management. openlmi-powermanagement A CIM provider for power management. openlmi-service A CIM provider for managing system services. 
openlmi-storage A CIM provider for storage management. openlmi-fan A CIM provider for controlling computer fans. openlmi-hardware A CIM provider for retrieving hardware information. openlmi-realmd A CIM provider for configuring realmd. openlmi-software [a] A CIM provider for software management. [a] In Red Hat Enterprise Linux 7, the OpenLMI Software provider is included as a Technology Preview . This provider is fully functional, but has a known performance scaling issue where listing large numbers of software packages may consume an excessive amount of memory and time. To work around this issue, adjust package searches to return as few packages as possible. 22.2. Installing OpenLMI OpenLMI is distributed as a collection of RPM packages that include the CIMOM, individual CIM providers, and client applications. This allows you to distinguish between a managed system and a client system and install only those components you need. 22.2.1. Installing OpenLMI on a Managed System A managed system is the system you intend to monitor and manage by using the OpenLMI client tools. To install OpenLMI on a managed system, complete the following steps: Install the tog-pegasus package by typing the following at a shell prompt as root : This command installs the OpenPegasus CIMOM and all its dependencies to the system and creates a user account for the pegasus user. Install required CIM providers by running the following command as root : This command installs the CIM providers for storage, network, service, account, and power management. For a complete list of CIM providers distributed with Red Hat Enterprise Linux 7, see Table 22.1, "Available CIM Providers" . Edit the /etc/Pegasus/access.conf configuration file to customize the list of users that are allowed to connect to the OpenPegasus CIMOM. By default, only the pegasus user is allowed to access the CIMOM both remotely and locally. To activate this user account, run the following command as root to set the user's password: Start the OpenPegasus CIMOM by activating the tog-pegasus.service unit. To activate the tog-pegasus.service unit in the current session, type the following at a shell prompt as root : To configure the tog-pegasus.service unit to start automatically at boot time, type as root : If you intend to interact with the managed system from a remote machine, enable TCP communication on port 5989 ( wbem-https ). To open this port in the current session, run the following command as root : To open port 5989 for TCP communication permanently, type as root : You can now connect to the managed system and interact with it by using the OpenLMI client tools as described in Section 22.4, "Using LMIShell" . If you intend to perform OpenLMI operations directly on the managed system, also complete the steps described in Section 22.2.2, "Installing OpenLMI on a Client System" . 22.2.2. Installing OpenLMI on a Client System A client system is the system from which you intend to interact with the managed system. In a typical scenario, the client system and the managed system are installed on two separate machines, but you can also install the client tools on the managed system and interact with it directly. To install OpenLMI on a client system, complete the following steps: Install the openlmi-tools package by typing the following at a shell prompt as root : This command installs LMIShell, an interactive client and interpreter for accessing CIM objects provided by OpenPegasus, and all its dependencies to the system.
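The install command itself is not reproduced in this extract. On a typical Red Hat Enterprise Linux 7 system it is the standard yum invocation for the package named above; the following line is an illustrative reconstruction, not a quotation from the original guide:

# run as root on the client system; installs LMIShell and its dependencies
yum install openlmi-tools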
Configure SSL certificates for OpenPegasus as described in Section 22.3, "Configuring SSL Certificates for OpenPegasus" . You can now use the LMIShell client to interact with the managed system as described in Section 22.4, "Using LMIShell" . 22.3. Configuring SSL Certificates for OpenPegasus OpenLMI uses the Web-Based Enterprise Management (WBEM) protocol that functions over an HTTP transport layer. Standard HTTP Basic authentication is performed in this protocol, which means that the user name and password are transmitted alongside the requests. Configuring the OpenPegasus CIMOM to use HTTPS for communication is necessary to ensure secure authentication. A Secure Sockets Layer (SSL) or Transport Layer Security (TLS) certificate is required on the managed system to establish an encrypted channel. There are two ways of managing SSL/TLS certificates on a system: Self-signed certificates require less infrastructure to use, but are more difficult to deploy to clients and manage securely. Authority-signed certificates are easier to deploy to clients once they are set up, but may require a greater initial investment. When using an authority-signed certificate, it is necessary to configure a trusted certificate authority on the client systems. The authority can then be used for signing all of the managed systems' CIMOM certificates. Certificates can also be part of a certificate chain, so the certificate used for signing the managed systems' certificates may in turn be signed by another, higher authority (such as Verisign, CAcert, RSA and many others). The default certificate and trust store locations on the file system are listed in Table 22.2, "Certificate and Trust Store Locations" . Table 22.2. Certificate and Trust Store Locations Configuration Option Location Description sslCertificateFilePath /etc/Pegasus/server.pem Public certificate of the CIMOM. sslKeyFilePath /etc/Pegasus/file.pem Private key known only to the CIMOM. sslTrustStore /etc/Pegasus/client.pem The file or directory providing the list of trusted certificate authorities. Important If you modify any of the files mentioned in Table 22.2, "Certificate and Trust Store Locations" , restart the tog-pegasus service to make sure it recognizes the new certificates. To restart the service, type the following at a shell prompt as root : For more information on how to manage system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd . 22.3.1. Managing Self-signed Certificates A self-signed certificate uses its own private key to sign itself and it is not connected to any chain of trust. On a managed system, if certificates have not been provided by the administrator prior to the first time that the tog-pegasus service is started, a set of self-signed certificates will be automatically generated using the system's primary host name as the certificate subject. Important The automatically generated self-signed certificates are valid by default for 10 years, but they have no automatic-renewal capability. Any modification to these certificates will require manually creating new certificates following guidelines provided by the OpenSSL or Mozilla NSS documentation on the subject. To configure client systems to trust the self-signed certificate, complete the following steps: Copy the /etc/Pegasus/server.pem certificate from the managed system to the /etc/pki/ca-trust/source/anchors/ directory on the client system. 
To do so, type the following at a shell prompt as root : Replace hostname with the host name of the managed system. Note that this command only works if the sshd service is running on the managed system and is configured to allow the root user to log in to the system over the SSH protocol. For more information on how to install and configure the sshd service and use the scp command to transfer files over the SSH protocol, see Chapter 12, OpenSSH . Verify the integrity of the certificate on the client system by comparing its check sum with the check sum of the original file. To calculate the check sum of the /etc/Pegasus/server.pem file on the managed system, run the following command as root on that system: To calculate the check sum of the /etc/pki/ca-trust/source/anchors/pegasus- hostname .pem file on the client system, run the following command on this system: Replace hostname with the host name of the managed system. Update the trust store on the client system by running the following command as root : 22.3.2. Managing Authority-signed Certificates with Identity Management (Recommended) The Identity Management feature of Red Hat Enterprise Linux provides a domain controller which simplifies the management of SSL certificates within systems joined to the domain. Among others, the Identity Management server provides an embedded Certificate Authority. See the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide or the FreeIPA documentation for information on how to join the client and managed systems to the domain. It is necessary to register the managed system to Identity Management; for client systems the registration is optional. The following steps are required on the managed system: Install the ipa-client package and register the system to Identity Management as described in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . Copy the Identity Management signing certificate to the trusted store by typing the following command as root : Update the trust store by running the following command as root : Register Pegasus as a service in the Identity Management domain by running the following command as a privileged domain user: Replace hostname with the host name of the managed system. This command can be run from any system in the Identity Management domain that has the ipa-admintools package installed. It creates a service entry in Identity Management that can be used to generate signed SSL certificates. Back up the PEM files located in the /etc/Pegasus/ directory (recommended). Retrieve the signed certificate by running the following command as root : Replace hostname with the host name of the managed system. The certificate and key files are now kept in proper locations. The certmonger daemon installed on the managed system by the ipa-client-install script ensures that the certificate is kept up-to-date and renewed as necessary. For more information, see the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . To register the client system and update the trust store, follow the steps below. Install the ipa-client package and register the system to Identity Management as described in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . 
Copy the Identity Management signing certificate to the trusted store by typing the following command as root : Update the trust store by running the following command as root : If the client system is not meant to be registered in Identity Management, complete the following steps to update the trust store. Copy the /etc/ipa/ca.crt file securely from any other system joined to the same Identity Management domain to the trusted store /etc/pki/ca-trust/source/anchors/ directory as root . Update the trust store by running the following command as root : 22.3.3. Managing Authority-signed Certificates Manually Managing authority-signed certificates with other mechanisms than Identity Management requires more manual configuration. It is necessary to ensure that all of the clients trust the certificate of the authority that will be signing the managed system certificates: If a certificate authority is trusted by default, it is not necessary to perform any particular steps to accomplish this. If the certificate authority is not trusted by default, the certificate has to be imported on the client and managed systems. Copy the certificate to the trusted store by typing the following command as root : Update the trust store by running the following command as root : On the managed system, complete the following steps: Create a new SSL configuration file /etc/Pegasus/ssl.cnf to store information about the certificate. The contents of this file must be similar to the following example: Replace hostname with the fully qualified domain name of the managed system. Generate a private key on the managed system by using the following command as root : Generate a certificate signing request (CSR) by running this command as root : Send the /etc/Pegasus/server.csr file to the certificate authority for signing. The detailed procedure of submitting the file depends on the particular certificate authority. When the signed certificate is received from the certificate authority, save it as /etc/Pegasus/server.pem . Copy the certificate of the trusted authority to the Pegasus trust store to make sure that Pegasus is capable of trusting its own certificate by running as root : After accomplishing all the described steps, the clients that trust the signing authority are able to successfully communicate with the managed server's CIMOM. Important Unlike the Identity Management solution, if the certificate expires and needs to be renewed, all of the described manual steps have to be carried out again. It is recommended to renew the certificates before they expire. 22.4. Using LMIShell LMIShell is an interactive client and non-interactive interpreter that can be used to access CIM objects provided by the OpenPegasus CIMOM. It is based on the Python interpreter, but also implements additional functions and classes for interacting with CIM objects. 22.4.1. Starting, Using, and Exiting LMIShell Similarly to the Python interpreter, you can use LMIShell either as an interactive client, or as a non-interactive interpreter for LMIShell scripts. Starting LMIShell in Interactive Mode To start the LMIShell interpreter in interactive mode, run the lmishell command with no additional arguments: By default, when LMIShell attempts to establish a connection with a CIMOM, it validates the server-side certificate against the Certification Authorities trust store. 
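The invocation mentioned above takes no arguments, so an interactive session starts as shown below; the prompt is illustrative and the exact text printed by the interpreter may differ:

$ lmishell
>

Any connection attempted from this prompt is subject to the certificate validation described above.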
To disable this validation, run the lmishell command with the --noverify or -n command line option: Using Tab Completion When running in interactive mode, the LMIShell interpreter allows you press the Tab key to complete basic programming structures and CIM objects, including namespaces, classes, methods, and object properties. Browsing History By default, LMIShell stores all commands you type at the interactive prompt in the ~/.lmishell_history file. This allows you to browse the command history and re-use already entered lines in interactive mode without the need to type them at the prompt again. To move backward in the command history, press the Up Arrow key or the Ctrl + p key combination. To move forward in the command history, press the Down Arrow key or the Ctrl + n key combination. LMIShell also supports an incremental reverse search. To look for a particular line in the command history, press Ctrl + r and start typing any part of the command. For example: To clear the command history, use the clear_history() function as follows: You can configure the number of lines that are stored in the command history by changing the value of the history_length option in the ~/.lmishellrc configuration file. In addition, you can change the location of the history file by changing the value of the history_file option in this configuration file. For example, to set the location of the history file to ~/.lmishell_history and configure LMIShell to store the maximum of 1000 lines in it, add the following lines to the ~/.lmishellrc file: Handling Exceptions By default, the LMIShell interpreter handles all exceptions and uses return values. To disable this behavior in order to handle all exceptions in the code, use the use_exceptions() function as follows: To re-enable the automatic exception handling, use: You can permanently disable the exception handling by changing the value of the use_exceptions option in the ~/.lmishellrc configuration file to True : Configuring a Temporary Cache With the default configuration, LMIShell connection objects use a temporary cache for storing CIM class names and CIM classes in order to reduce network communication. To clear this temporary cache, use the clear_cache() method as follows: Replace object_name with the name of a connection object. To disable the temporary cache for a particular connection object, use the use_cache() method as follows: To enable it again, use: You can permanently disable the temporary cache for connection objects by changing the value of the use_cache option in the ~/.lmishellrc configuration file to False : Exiting LMIShell To terminate the LMIShell interpreter and return to the shell prompt, press the Ctrl + d key combination or issue the quit() function as follows: Running an LMIShell Script To run an LMIShell script, run the lmishell command as follows: Replace file_name with the name of the script. To inspect an LMIShell script after its execution, also specify the --interact or -i command line option: The preferred file extension of LMIShell scripts is .lmi . 22.4.2. Connecting to a CIMOM LMIShell allows you to connect to a CIMOM that is running either locally on the same system, or on a remote machine accessible over the network. 
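As a bridge between the scripted mode described above and the connection functions described below, a short script might look like the following sketch. The file name example.lmi is hypothetical; the connect() function is explained immediately below, print_namespaces() is covered in Section 22.4.3, and the host name and user name match the ones used in the examples that follow:

# example.lmi -- hypothetical file name; run with: lmishell example.lmi
# connect() prompts for a password when it is omitted
c = connect("server.example.com", "pegasus")
if c is not None:
    # list the namespaces available under the root namespace of the connection
    c.root.print_namespaces()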
Connecting to a Remote CIMOM To access CIM objects provided by a remote CIMOM, create a connection object by using the connect() function as follows: Replace host_name with the host name of the managed system, user_name with the name of a user that is allowed to connect to the OpenPegasus CIMOM running on that system, and password with the user's password. If the password is omitted, LMIShell prompts the user to enter it. The function returns an LMIConnection object. Example 22.1. Connecting to a Remote CIMOM To connect to the OpenPegasus CIMOM running on server.example.com as user pegasus , type the following at the interactive prompt: Connecting to a Local CIMOM LMIShell allows you to connect to a local CIMOM by using a Unix socket. For this type of connection, you must run the LMIShell interpreter as the root user and the /var/run/tog-pegasus/cimxml.socket socket must exist. To access CIM objects provided by a local CIMOM, create a connection object by using the connect() function as follows: Replace host_name with localhost , 127.0.0.1 , or ::1 . The function returns an LMIConnection object or None . Example 22.2. Connecting to a Local CIMOM To connect to the OpenPegasus CIMOM running on localhost as the root user, type the following at the interactive prompt: Verifying a Connection to a CIMOM The connect() function returns either an LMIConnection object, or None if the connection could not be established. In addition, when the connect() function fails to establish a connection, it prints an error message to standard error output. To verify that a connection to a CIMOM has been established successfully, use the isinstance() function as follows: Replace object_name with the name of the connection object. This function returns True if object_name is an LMIConnection object, or False otherwise. Example 22.3. Verifying a Connection to a CIMOM To verify that the c variable created in Example 22.1, "Connecting to a Remote CIMOM" contains an LMIConnection object, type the following at the interactive prompt: Alternatively, you can verify that c is not None : 22.4.3. Working with Namespaces LMIShell namespaces provide a natural means of organizing available classes and serve as a hierarchic access point to other namespaces and classes. The root namespace is the first entry point of a connection object. Listing Available Namespaces To list all available namespaces, use the print_namespaces() method as follows: Replace object_name with the name of the object to inspect. This method prints available namespaces to standard output. To get a list of available namespaces, access the object attribute namespaces : This returns a list of strings. Example 22.4. Listing Available Namespaces To inspect the root namespace object of the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and list all available namespaces, type the following at the interactive prompt: To assign a list of these namespaces to a variable named root_namespaces , type: Accessing Namespace Objects To access a particular namespace object, use the following syntax: Replace object_name with the name of the object to inspect and namespace_name with the name of the namespace to access. This returns an LMINamespace object. Example 22.5. Accessing Namespace Objects To access the cimv2 namespace of the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and assign it to a variable named ns , type the following at the interactive prompt: 22.4.4. 
Working with Classes LMIShell classes represent classes provided by a CIMOM. You can access and list their properties, methods, instances, instance names, and ValueMap properties, print their documentation strings, and create new instances and instance names. Listing Available Classes To list all available classes in a particular namespace, use the print_classes() method as follows: Replace namespace_object with the namespace object to inspect. This method prints available classes to standard output. To get a list of available classes, use the classes() method: This method returns a list of strings. Example 22.6. Listing Available Classes To inspect the ns namespace object created in Example 22.5, "Accessing Namespace Objects" and list all available classes, type the following at the interactive prompt: To assign a list of these classes to a variable named cimv2_classes , type: Accessing Class Objects To access a particular class object that is provided by the CIMOM, use the following syntax: Replace namespace_object with the name of the namespace object to inspect and class_name with the name of the class to access. Example 22.7. Accessing Class Objects To access the LMI_IPNetworkConnection class of the ns namespace object created in Example 22.5, "Accessing Namespace Objects" and assign it to a variable named cls , type the following at the interactive prompt: Examining Class Objects All class objects store information about their name and the namespace they belong to, as well as detailed class documentation. To get the name of a particular class object, use the following syntax: Replace class_object with the name of the class object to inspect. This returns a string representation of the object name. To get information about the namespace a class object belongs to, use: This returns a string representation of the namespace. To display detailed class documentation, use the doc() method as follows: Example 22.8. Examining Class Objects To inspect the cls class object created in Example 22.7, "Accessing Class Objects" and display its name and corresponding namespace, type the following at the interactive prompt: To access class documentation, type: Listing Available Methods To list all available methods of a particular class object, use the print_methods() method as follows: Replace class_object with the name of the class object to inspect. This method prints available methods to standard output. To get a list of available methods, use the methods() method: This method returns a list of strings. Example 22.9. Listing Available Methods To inspect the cls class object created in Example 22.7, "Accessing Class Objects" and list all available methods, type the following at the interactive prompt: To assign a list of these methods to a variable named service_methods , type: Listing Available Properties To list all available properties of a particular class object, use the print_properties() method as follows: Replace class_object with the name of the class object to inspect. This method prints available properties to standard output. To get a list of available properties, use the properties() method: This method returns a list of strings. Example 22.10. 
Listing Available Properties To inspect the cls class object created in Example 22.7, "Accessing Class Objects" and list all available properties, type the following at the interactive prompt: To assign a list of these classes to a variable named service_properties , type: Listing and Viewing ValueMap Properties CIM classes may contain ValueMap properties in their Managed Object Format ( MOF ) definition. ValueMap properties contain constant values, which may be useful when calling methods or checking returned values. To list all available ValueMap properties of a particular class object, use the print_valuemap_properties() method as follows: Replace class_object with the name of the class object to inspect. This method prints available ValueMap properties to standard output: To get a list of available ValueMap properties, use the valuemap_properties() method: This method returns a list of strings. Example 22.11. Listing ValueMap Properties To inspect the cls class object created in Example 22.7, "Accessing Class Objects" and list all available ValueMap properties, type the following at the interactive prompt: To assign a list of these ValueMap properties to a variable named service_valuemap_properties , type: To access a particular ValueMap property, use the following syntax: Replace valuemap_property with the name of the ValueMap property to access. To list all available constant values, use the print_values() method as follows: This method prints available named constant values to standard output. You can also get a list of available constant values by using the values() method: This method returns a list of strings. Example 22.12. Accessing ValueMap Properties Example 22.11, "Listing ValueMap Properties" mentions a ValueMap property named RequestedState . To inspect this property and list available constant values, type the following at the interactive prompt: To assign a list of these constant values to a variable named requested_state_values , type: To access a particular constant value, use the following syntax: Replace constant_value_name with the name of the constant value. Alternatively, you can use the value() method as follows: To determine the name of a particular constant value, use the value_name() method: This method returns a string. Example 22.13. Accessing Constant Values Example 22.12, "Accessing ValueMap Properties" shows that the RequestedState property provides a constant value named Reset . To access this named constant value, type the following at the interactive prompt: To determine the name of this constant value, type: Fetching a CIMClass Object Many class methods do not require access to a CIMClass object, which is why LMIShell only fetches this object from the CIMOM when a called method actually needs it. To fetch the CIMClass object manually, use the fetch() method as follows: Replace class_object with the name of the class object. Note that methods that require access to a CIMClass object fetch it automatically. 22.4.5. Working with Instances LMIShell instances represent instances provided by a CIMOM. You can get and set their properties, list and call their methods, print their documentation strings, get a list of associated or association objects, push modified objects to the CIMOM, and delete individual instances from the CIMOM. Accessing Instances To get a list of all available instances of a particular class object, use the instances() method as follows: Replace class_object with the name of the class object to inspect. 
This method returns a list of LMIInstance objects. To access the first instance of a class object, use the first_instance() method: This method returns an LMIInstance object. In addition to listing all instances or returning the first one, both instances() and first_instance() support an optional argument to allow you to filter the results: Replace criteria with a dictionary consisting of key-value pairs, where keys represent instance properties and values represent required values of these properties. Example 22.14. Accessing Instances To find the first instance of the cls class object created in Example 22.7, "Accessing Class Objects" that has the ElementName property equal to eth0 and assign it to a variable named device , type the following at the interactive prompt: Examining Instances All instance objects store information about their class name and the namespace they belong to, as well as detailed documentation about their properties and values. In addition, instance objects allow you to retrieve a unique identification object. To get the class name of a particular instance object, use the following syntax: Replace instance_object with the name of the instance object to inspect. This returns a string representation of the class name. To get information about the namespace an instance object belongs to, use: This returns a string representation of the namespace. To retrieve a unique identification object for an instance object, use: This returns an LMIInstanceName object. Finally, to display detailed documentation, use the doc() method as follows: Example 22.15. Examining Instances To inspect the device instance object created in Example 22.14, "Accessing Instances" and display its class name and the corresponding namespace, type the following at the interactive prompt: To access instance object documentation, type: Creating New Instances Certain CIM providers allow you to create new instances of specific classes objects. To create a new instance of a class object, use the create_instance() method as follows: Replace class_object with the name of the class object and properties with a dictionary that consists of key-value pairs, where keys represent instance properties and values represent property values. This method returns an LMIInstance object. Example 22.16. Creating New Instances The LMI_Group class represents system groups and the LMI_Account class represents user accounts on the managed system. To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" , create instances of these two classes for the system group named pegasus and the user named lmishell-user , and assign them to variables named group and user , type the following at the interactive prompt: To get an instance of the LMI_Identity class for the lmishell-user user, type: The LMI_MemberOfGroup class represents system group membership. To use the LMI_MemberOfGroup class to add the lmishell-user to the pegasus group, create a new instance of this class as follows: Deleting Individual Instances To delete a particular instance from the CIMOM, use the delete() method as follows: Replace instance_object with the name of the instance object to delete. This method returns a boolean. Note that after deleting an instance, its properties and methods become inaccessible. Example 22.17. Deleting Individual Instances The LMI_Account class represents user accounts on the managed system. 
To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" , create an instance of the LMI_Account class for the user named lmishell-user , and assign it to a variable named user , type the following at the interactive prompt: To delete this instance and remove the lmishell-user from the system, type: Listing and Accessing Available Properties To list all available properties of a particular instance object, use the print_properties() method as follows: Replace instance_object with the name of the instance object to inspect. This method prints available properties to standard output. To get a list of available properties, use the properties() method: This method returns a list of strings. Example 22.18. Listing Available Properties To inspect the device instance object created in Example 22.14, "Accessing Instances" and list all available properties, type the following at the interactive prompt: To assign a list of these properties to a variable named device_properties , type: To get the current value of a particular property, use the following syntax: Replace property_name with the name of the property to access. To modify the value of a particular property, assign a value to it as follows: Replace value with the new value of the property. Note that in order to propagate the change to the CIMOM, you must also execute the push() method: This method returns a three-item tuple consisting of a return value, return value parameters, and an error string. Example 22.19. Accessing Individual Properties To inspect the device instance object created in Example 22.14, "Accessing Instances" and display the value of the property named SystemName , type the following at the interactive prompt: Listing and Using Available Methods To list all available methods of a particular instance object, use the print_methods() method as follows: Replace instance_object with the name of the instance object to inspect. This method prints available methods to standard output. To get a list of available methods, use the method() method: This method returns a list of strings. Example 22.20. Listing Available Methods To inspect the device instance object created in Example 22.14, "Accessing Instances" and list all available methods, type the following at the interactive prompt: To assign a list of these methods to a variable named network_device_methods , type: To call a particular method, use the following syntax: Replace instance_object with the name of the instance object to use, method_name with the name of the method to call, parameter with the name of the parameter to set, and value with the value of this parameter. Methods return a three-item tuple consisting of a return value, return value parameters, and an error string. Important LMIInstance objects do not automatically refresh their contents (properties, methods, qualifiers, and so on). To do so, use the refresh() method as described below. Example 22.21. Using Methods The PG_ComputerSystem class represents the system. To create an instance of this class by using the ns namespace object created in Example 22.5, "Accessing Namespace Objects" and assign it to a variable named sys , type the following at the interactive prompt: The LMI_AccountManagementService class implements methods that allow you to manage users and groups in the system. 
To create an instance of this class and assign it to a variable named acc , type: To create a new user named lmishell-user in the system, use the CreateAccount() method as follows: LMIShell support synchronous method calls: when you use a synchronous method, LMIShell waits for the corresponding Job object to change its state to "finished" and then returns the return parameters of this job. LMIShell is able to perform a synchronous method call if the given method returns an object of one of the following classes: LMI_StorageJob LMI_SoftwareInstallationJob LMI_NetworkJob LMIShell first tries to use indications as the waiting method. If it fails, it uses a polling method instead. To perform a synchronous method call, use the following syntax: Replace instance_object with the name of the instance object to use, method_name with the name of the method to call, parameter with the name of the parameter to set, and value with the value of this parameter. All synchronous methods have the Sync prefix in their name and return a three-item tuple consisting of the job's return value, job's return value parameters, and job's error string. You can also force LMIShell to use only polling method. To do so, specify the PreferPolling parameter as follows: Listing and Viewing ValueMap Parameters CIM methods may contain ValueMap parameters in their Managed Object Format ( MOF ) definition. ValueMap parameters contain constant values. To list all available ValueMap parameters of a particular method, use the print_valuemap_parameters() method as follows: Replace instance_object with the name of the instance object and method_name with the name of the method to inspect. This method prints available ValueMap parameters to standard output. To get a list of available ValueMap parameters, use the valuemap_parameters() method: This method returns a list of strings. Example 22.22. Listing ValueMap Parameters To inspect the acc instance object created in Example 22.21, "Using Methods" and list all available ValueMap parameters of the CreateAccount() method, type the following at the interactive prompt: To assign a list of these ValueMap parameters to a variable named create_account_parameters , type: To access a particular ValueMap parameter, use the following syntax: Replace valuemap_parameter with the name of the ValueMap parameter to access. To list all available constant values, use the print_values() method as follows: This method prints available named constant values to standard output. You can also get a list of available constant values by using the values() method: This method returns a list of strings. Example 22.23. Accessing ValueMap Parameters Example 22.22, "Listing ValueMap Parameters" mentions a ValueMap parameter named CreateAccount . To inspect this parameter and list available constant values, type the following at the interactive prompt: To assign a list of these constant values to a variable named create_account_values , type: To access a particular constant value, use the following syntax: Replace constant_value_name with the name of the constant value. Alternatively, you can use the value() method as follows: To determine the name of a particular constant value, use the value_name() method: This method returns a string. Example 22.24. Accessing Constant Values Example 22.23, "Accessing ValueMap Parameters" shows that the CreateAccount ValueMap parameter provides a constant value named Failed . 
To access this named constant value, type the following at the interactive prompt: To determine the name of this constant value, type: Refreshing Instance Objects Local objects used by LMIShell, which represent CIM objects at CIMOM side, can get outdated, if such objects change while working with LMIShell's ones. To update the properties and methods of a particular instance object, use the refresh() method as follows: Replace instance_object with the name of the object to refresh. This method returns a three-item tuple consisting of a return value, return value parameter, and an error string. Example 22.25. Refreshing Instance Objects To update the properties and methods of the device instance object created in Example 22.14, "Accessing Instances" , type the following at the interactive prompt: Displaying MOF Representation To display the Managed Object Format ( MOF ) representation of an instance object, use the tomof() method as follows: Replace instance_object with the name of the instance object to inspect. This method prints the MOF representation of the object to standard output. Example 22.26. Displaying MOF Representation To display the MOF representation of the device instance object created in Example 22.14, "Accessing Instances" , type the following at the interactive prompt: 22.4.6. Working with Instance Names LMIShell instance names are objects that hold a set of primary keys and their values. This type of an object exactly identifies an instance. Accessing Instance Names CIMInstance objects are identified by CIMInstanceName objects. To get a list of all available instance name objects, use the instance_names() method as follows: Replace class_object with the name of the class object to inspect. This method returns a list of LMIInstanceName objects. To access the first instance name object of a class object, use the first_instance_name() method: This method returns an LMIInstanceName object. In addition to listing all instance name objects or returning the first one, both instance_names() and first_instance_name() support an optional argument to allow you to filter the results: Replace criteria with a dictionary consisting of key-value pairs, where keys represent key properties and values represent required values of these key properties. Example 22.27. Accessing Instance Names To find the first instance name of the cls class object created in Example 22.7, "Accessing Class Objects" that has the Name key property equal to eth0 and assign it to a variable named device_name , type the following at the interactive prompt: Examining Instance Names All instance name objects store information about their class name and the namespace they belong to. To get the class name of a particular instance name object, use the following syntax: Replace instance_name_object with the name of the instance name object to inspect. This returns a string representation of the class name. To get information about the namespace an instance name object belongs to, use: This returns a string representation of the namespace. Example 22.28. Examining Instance Names To inspect the device_name instance name object created in Example 22.27, "Accessing Instance Names" and display its class name and the corresponding namespace, type the following at the interactive prompt: Creating New Instance Names LMIShell allows you to create a new wrapped CIMInstanceName object if you know all primary keys of a remote object. This instance name object can then be used to retrieve the whole instance object. 
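The instance-name calls above can be combined into a short illustrative sketch; it assumes the cls class object for LMI_IPNetworkConnection from Example 22.7 and a network interface named eth0:

device_name = cls.first_instance_name({"Name": "eth0"})  # an LMIInstanceName object
print device_name.classname        # u'LMI_IPNetworkConnection'
print device_name.namespace        # 'root/cimv2'
all_names = cls.instance_names()   # every instance name of the class
print len(all_names)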
To create a new instance name of a class object, use the new_instance_name() method as follows: Replace class_object with the name of the class object and key_properties with a dictionary that consists of key-value pairs, where keys represent key properties and values represent key property values. This method returns an LMIInstanceName object. Example 22.29. Creating New Instance Names The LMI_Account class represents user accounts on the managed system. To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" and create a new instance name of the LMI_Account class representing the lmishell-user user on the managed system, type the following at the interactive prompt: Listing and Accessing Key Properties To list all available key properties of a particular instance name object, use the print_key_properties() method as follows: Replace instance_name_object with the name of the instance name object to inspect. This method prints available key properties to standard output. To get a list of available key properties, use the key_properties() method: This method returns a list of strings. Example 22.30. Listing Available Key Properties To inspect the device_name instance name object created in Example 22.27, "Accessing Instance Names" and list all available key properties, type the following at the interactive prompt: To assign a list of these key properties to a variable named device_name_properties , type: To get the current value of a particular key property, use the following syntax: Replace key_property_name with the name of the key property to access. Example 22.31. Accessing Individual Key Properties To inspect the device_name instance name object created in Example 22.27, "Accessing Instance Names" and display the value of the key property named SystemName , type the following at the interactive prompt: Converting Instance Names to Instances Each instance name can be converted to an instance. To do so, use the to_instance() method as follows: Replace instance_name_object with the name of the instance name object to convert. This method returns an LMIInstance object. Example 22.32. Converting Instance Names to Instances To convert the device_name instance name object created in Example 22.27, "Accessing Instance Names" to an instance object and assign it to a variable named device , type the following at the interactive prompt: 22.4.7. Working with Associated Objects The Common Information Model defines an association relationship between managed objects. Accessing Associated Instances To get a list of all objects associated with a particular instance object, use the associators() method as follows: To access the first object associated with a particular instance object, use the first_associator() method: Replace instance_object with the name of the instance object to inspect. You can filter the results by specifying the following parameters: AssocClass - Each returned object must be associated with the source object through an instance of this class or one of its subclasses. The default value is None . ResultClass - Each returned object must be either an instance of this class or one of its subclasses, or it must be this class or one of its subclasses. The default value is None . Role - Each returned object must be associated with the source object through an association in which the source object plays the specified role. The name of the property in the association class that refers to the source object must match the value of this parameter. 
The default value is None . ResultRole - Each returned object must be associated with the source object through an association in which the returned object plays the specified role. The name of the property in the association class that refers to the returned object must match the value of this parameter. The default value is None . The remaining parameters refer to: IncludeQualifiers - A boolean indicating whether all qualifiers of each object (including qualifiers on the object and on any returned properties) should be included as QUALIFIER elements in the response. The default value is False . IncludeClassOrigin - A boolean indicating whether the CLASSORIGIN attribute should be present on all appropriate elements in each returned object. The default value is False . PropertyList - The members of this list define one or more property names. Returned objects will not include elements for any properties missing from this list. If PropertyList is an empty list, no properties are included in returned objects. If it is None , no additional filtering is defined. The default value is None . Example 22.33. Accessing Associated Instances The LMI_StorageExtent class represents block devices available in the system. To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" , create an instance of the LMI_StorageExtent class for the block device named /dev/vda , and assign it to a variable named vda , type the following at the interactive prompt: To get a list of all disk partitions on this block device and assign it to a variable named vda_partitions , use the associators() method as follows: Accessing Associated Instance Names To get a list of all associated instance names of a particular instance object, use the associator_names() method as follows: To access the first associated instance name of a particular instance object, use the first_associator_name() method: Replace instance_object with the name of the instance object to inspect. You can filter the results by specifying the following parameters: AssocClass - Each returned name identifies an object that must be associated with the source object through an instance of this class or one of its subclasses. The default value is None . ResultClass - Each returned name identifies an object that must be either an instance of this class or one of its subclasses, or it must be this class or one of its subclasses. The default value is None . Role - Each returned name identifies an object that must be associated with the source object through an association in which the source object plays the specified role. The name of the property in the association class that refers to the source object must match the value of this parameter. The default value is None . ResultRole - Each returned name identifies an object that must be associated with the source object through an association in which the returned named object plays the specified role. The name of the property in the association class that refers to the returned object must match the value of this parameter. The default value is None . Example 22.34. Accessing Associated Instance Names To use the vda instance object created in Example 22.33, "Accessing Associated Instances" , get a list of its associated instance names, and assign it to a variable named vda_partitions , type: 22.4.8. Working with Association Objects The Common Information Model defines an association relationship between managed objects. 
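For reference, Examples 22.33 and 22.34 can be condensed into one illustrative sketch; it assumes the ns namespace object and that a block device named /dev/vda exists on the managed system:

vda = ns.LMI_StorageExtent.first_instance({"DeviceID": "/dev/vda"})
vda_partitions = vda.associators(ResultClass="LMI_DiskPartition")            # full LMIInstance objects
vda_partition_names = vda.associator_names(ResultClass="LMI_DiskPartition")  # LMIInstanceName objects only
for partition in vda_partitions:
    print partition.DeviceID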
Association objects define the relationship between two other objects. Accessing Association Instances To get a list of association objects that refer to a particular target object, use the references() method as follows: To access the first association object that refers to a particular target object, use the first_reference() method: Replace instance_object with the name of the instance object to inspect. You can filter the results by specifying the following parameters: ResultClass - Each returned object must be either an instance of this class or one of its subclasses, or it must be this class or one of its subclasses. The default value is None . Role - Each returned object must refer to the target object through a property with a name that matches the value of this parameter. The default value is None . The remaining parameters refer to: IncludeQualifiers - A boolean indicating whether each object (including qualifiers on the object and on any returned properties) should be included as a QUALIFIER element in the response. The default value is False . IncludeClassOrigin - A boolean indicating whether the CLASSORIGIN attribute should be present on all appropriate elements in each returned object. The default value is False . PropertyList - The members of this list define one or more property names. Returned objects will not include elements for any properties missing from this list. If PropertyList is an empty list, no properties are included in returned objects. If it is None , no additional filtering is defined. The default value is None . Example 22.35. Accessing Association Instances The LMI_LANEndpoint class represents a communication endpoint associated with a certain network interface device. To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" , create an instance of the LMI_LANEndpoint class for the network interface device named eth0, and assign it to a variable named lan_endpoint , type the following at the interactive prompt: To access the first association object that refers to an LMI_BindsToLANEndpoint object and assign it to a variable named bind , type: You can now use the Dependent property to access the dependent LMI_IPProtocolEndpoint class that represents the IP address of the corresponding network interface device: Accessing Association Instance Names To get a list of association instance names of a particular instance object, use the reference_names() method as follows: To access the first association instance name of a particular instance object, use the first_reference_name() method: Replace instance_object with the name of the instance object to inspect. You can filter the results by specifying the following parameters: ResultClass - Each returned object name identifies either an instance of this class or one of its subclasses, or this class or one of its subclasses. The default value is None . Role - Each returned object identifies an object that refers to the target instance through a property with a name that matches the value of this parameter. The default value is None . Example 22.36. 
Accessing Association Instance Names To use the lan_endpoint instance object created in Example 22.35, "Accessing Association Instances" , access the first association instance name that refers to an LMI_BindsToLANEndpoint object, and assign it to a variable named bind , type: You can now use the Dependent property to access the dependent LMI_IPProtocolEndpoint class that represents the IP address of the corresponding network interface device: 22.4.9. Working with Indications Indication is a reaction to a specific event that occurs in response to a particular change in data. LMIShell can subscribe to an indication in order to receive such event responses. Subscribing to Indications To subscribe to an indication, use the subscribe_indication() method as follows: Alternatively, you can use a shorter version of the method call as follows: Replace connection_object with a connection object and host_name with the host name of the system you want to deliver the indications to. By default, all subscriptions created by the LMIShell interpreter are automatically deleted when the interpreter terminates. To change this behavior, pass the Permanent=True keyword parameter to the subscribe_indication() method call. This will prevent LMIShell from deleting the subscription. Example 22.37. Subscribing to Indications To use the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and subscribe to an indication named cpu , type the following at the interactive prompt: Listing Subscribed Indications To list all the subscribed indications, use the print_subscribed_indications() method as follows: Replace connection_object with the name of the connection object to inspect. This method prints subscribed indications to standard output. To get a list of subscribed indications, use the subscribed_indications() method: This method returns a list of strings. Example 22.38. Listing Subscribed Indications To inspect the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and list all subscribed indications, type the following at the interactive prompt: To assign a list of these indications to a variable named indications , type: Unsubscribing from Indications By default, all subscriptions created by the LMIShell interpreter are automatically deleted when the interpreter terminates. To delete an individual subscription sooner, use the unsubscribe_indication() method as follows: Replace connection_object with the name of the connection object and indication_name with the name of the indication to delete. To delete all subscriptions, use the unsubscribe_all_indications() method: Example 22.39. Unsubscribing from Indications To use the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and unsubscribe from the indication created in Example 22.37, "Subscribing to Indications" , type the following at the interactive prompt: Implementing an Indication Handler The subscribe_indication() method allows you to specify the host name of the system you want to deliver the indications to. The following example shows how to implement an indication handler: The first argument of the handler is an LmiIndication object, which contains a list of methods and objects exported by the indication. Other parameters are user specific: those arguments need to be specified when adding a handler to the listener. In the example above, the add_handler() method call uses a special string with eight "X" characters. 
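A condensed sketch of such a registration is shown below; the port number and the handler body are illustrative, and the extra user-specific arguments are omitted here:

from lmi.shell.LMIIndicationListener import LMIIndicationListener

def handler(ind, *args, **kwargs):
    # react to the delivered indication
    print ind.exported_objects()

listener = LMIIndicationListener("0.0.0.0", 65500)
unique_name = listener.add_handler("indication-name-XXXXXXXX", handler)
listener.start()

The string returned by add_handler() can later be passed as the Name parameter of subscribe_indication(), as the full script in Example 22.40 does.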
These characters are replaced with a random string that is generated by listeners in order to avoid a possible handler name collision. To use the random string, start the indication listener first and then subscribe to an indication so that the Destination property of the handler object contains the following value: schema :// host_name / random_string . Example 22.40. Implementing an Indication Handler The following script illustrates how to write a handler that monitors a managed system located at 192.168.122.1 and calls the indication_callback() function whenever a new user account is created: 22.4.10. Example Usage This section provides a number of examples for various CIM providers distributed with the OpenLMI packages. All examples in this section use the following two variable definitions: Replace host_name with the host name of the managed system, user_name with the name of user that is allowed to connect to OpenPegasus CIMOM running on that system, and password with the user's password. Using the OpenLMI Service Provider The openlmi-service package installs a CIM provider for managing system services. The examples below illustrate how to use this CIM provider to list available system services and how to start, stop, enable, and disable them. Example 22.41. Listing Available Services To list all available services on the managed machine along with information regarding whether the service has been started ( TRUE ) or stopped ( FALSE ) and the status string, use the following code snippet: To list only the services that are enabled by default, use this code snippet: Note that the value of the EnabledDefault property is equal to 2 for enabled services and 3 for disabled services. To display information about the cups service, use the following: Example 22.42. Starting and Stopping Services To start and stop the cups service and to see its current status, use the following code snippet: Example 22.43. Enabling and Disabling Services To enable and disable the cups service and to display its EnabledDefault property, use the following code snippet: Using the OpenLMI Networking Provider The openlmi-networking package installs a CIM provider for networking. The examples below illustrate how to use this CIM provider to list IP addresses associated with a certain port number, create a new connection, configure a static IP address, and activate a connection. Example 22.44. Listing IP Addresses Associated with a Given Port Number To list all IP addresses associated with the eth0 network interface, use the following code snippet: This code snippet uses the LMI_IPProtocolEndpoint class associated with a given LMI_IPNetworkConnection class. To display the default gateway, use this code snippet: The default gateway is represented by an LMI_NetworkRemoteServiceAccessPoint instance with the AccessContext property equal to DefaultGateway . To get a list of DNS servers, the object model needs to be traversed as follows: Get the LMI_IPProtocolEndpoint instances associated with a given LMI_IPNetworkConnection using LMI_NetworkSAPSAPDependency . Use the same association for the LMI_DNSProtocolEndpoint instances. The LMI_NetworkRemoteServiceAccessPoint instances with the AccessContext property equal to the DNS Server associated through LMI_NetworkRemoteAccessAvailableToElement have the DNS server address in the AccessInfo property. There can be more possible paths to get to the RemoteServiceAccessPath and entries can be duplicated. 
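An illustrative sketch of that traversal follows; it assumes the device instance of LMI_IPNetworkConnection from Example 22.44 and, as noted below, collects the addresses in a Python set so that duplicate entries are dropped:

# walk IP endpoints -> DNS endpoints -> remote service access points
dnsservers = set()
for ipendpoint in device.associators(AssocClass="LMI_NetworkSAPSAPDependency",
                                     ResultClass="LMI_IPProtocolEndpoint"):
    for dnsendpoint in ipendpoint.associators(AssocClass="LMI_NetworkSAPSAPDependency",
                                              ResultClass="LMI_DNSProtocolEndpoint"):
        for rsap in dnsendpoint.associators(AssocClass="LMI_NetworkRemoteAccessAvailableToElement",
                                            ResultClass="LMI_NetworkRemoteServiceAccessPoint"):
            if rsap.AccessContext == ns.LMI_NetworkRemoteServiceAccessPoint.AccessContextValues.DNSServer:
                dnsservers.add(rsap.AccessInfo)
print "DNS:", ", ".join(dnsservers)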
The following code snippet uses the set() function to remove duplicate entries from the list of DNS servers: Example 22.45. Creating a New Connection and Configuring a Static IP Address To create a new setting with a static IPv4 and stateless IPv6 configuration for network interface eth0, use the following code snippet: This code snippet creates a new setting by calling the LMI_CreateIPSetting() method on the instance of LMI_IPNetworkConnectionCapabilities , which is associated with LMI_IPNetworkConnection through LMI_IPNetworkConnectionElementCapabilities . It also uses the push() method to modify the setting. Example 22.46. Activating a Connection To apply a setting to the network interface, call the ApplySettingToIPNetworkConnection() method of the LMI_IPConfigurationService class. This method is asynchronous and returns a job. The following code snippets illustrates how to call this method synchronously: The Mode parameter affects how the setting is applied. The most commonly used values of this parameter are as follows: 1 - apply the setting now and make it auto-activated. 2 - make the setting auto-activated and do not apply it now. 4 - disconnect and disable auto-activation. 5 - do not change the setting state, only disable auto-activation. 32768 - apply the setting. 32769 - disconnect. Using the OpenLMI Storage Provider The openlmi-storage package installs a CIM provider for storage management. The examples below illustrate how to use this CIM provider to create a volume group, create a logical volume, build a file system, mount a file system, and list block devices known to the system. In addition to the c and ns variables, these examples use the following variable definitions: Example 22.47. Creating a Volume Group To create a new volume group located in /dev/myGroup/ that has three members and the default extent size of 4 MB, use the following code snippet: Example 22.48. Creating a Logical Volume To create two logical volumes with the size of 100 MB, use this code snippet: Example 22.49. Creating a File System To create an ext3 file system on logical volume lv from Example 22.48, "Creating a Logical Volume" , use the following code snippet: Example 22.50. Mounting a File System To mount the file system created in Example 22.49, "Creating a File System" , use the following code snippet: Example 22.51. Listing Block Devices To list all block devices known to the system, use the following code snippet: Using the OpenLMI Hardware Provider The openlmi-hardware package installs a CIM provider for monitoring hardware. The examples below illustrate how to use this CIM provider to retrieve information about CPU, memory modules, PCI devices, and the manufacturer and model of the machine. Example 22.52. Viewing CPU Information To display basic CPU information such as the CPU name, the number of processor cores, and the number of hardware threads, use the following code snippet: Example 22.53. Viewing Memory Information To display basic information about memory modules such as their individual sizes, use the following code snippet: Example 22.54. Viewing Chassis Information To display basic information about the machine such as its manufacturer or its model, use the following code snippet: Example 22.55. Listing PCI Devices To list all PCI devices known to the system, use the following code snippet: 22.5. Using OpenLMI Scripts The LMIShell interpreter is built on top of Python modules that can be used to develop custom management tools. 
The OpenLMI Scripts project provides a number of Python libraries for interfacing with OpenLMI providers. In addition, it is distributed with lmi , an extensible utility that can be used to interact with these libraries from the command line. To install OpenLMI Scripts on your system, type the following at a shell prompt: This command installs the Python modules and the lmi utility in the ~/.local/ directory. To extend the functionality of the lmi utility, install additional OpenLMI modules by using the following command: For a complete list of available modules, see the Python website . For more information about OpenLMI Scripts, see the official OpenLMI Scripts documentation . 22.6. Additional Resources For more information about OpenLMI and system management in general, see the resources listed below. Installed Documentation lmishell (1) - The manual page for the lmishell client and interpreter provides detailed information about its execution and usage. Online Documentation Red Hat Enterprise Linux 7 Networking Guide - The Networking Guide for Red Hat Enterprise Linux 7 documents relevant information regarding the configuration and administration of network interfaces and network services on the system. Red Hat Enterprise Linux 7 Storage Administration Guide - The Storage Administration Guide for Red Hat Enterprise Linux 7 provides instructions on how to manage storage devices and file systems on the system. Red Hat Enterprise Linux 7 Power Management Guide - The Power Management Guide for Red Hat Enterprise Linux 7 explains how to manage power consumption of the system effectively. It discusses different techniques that lower power consumption for both servers and laptops, and explains how each technique affects the overall performance of the system. Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide - The Linux Domain Identity, Authentication, and Policy Guide for Red Hat Enterprise Linux 7 covers all aspects of installing, configuring, and managing IPA domains, including both servers and clients. The guide is intended for IT and systems administrators. FreeIPA Documentation - The FreeIPA Documentation serves as the primary user documentation for using the FreeIPA Identity Management project. OpenSSL Home Page - The OpenSSL home page provides an overview of the OpenSSL project. Mozilla NSS Documentation - The Mozilla NSS Documentation serves as the primary user documentation for using the Mozilla NSS project. See Also Chapter 4, Managing Users and Groups documents how to manage system users and groups in the graphical user interface and on the command line. Chapter 9, Yum describes how to use the Yum package manager to search, install, update, and uninstall packages on the command line. Chapter 10, Managing Services with systemd provides an introduction to systemd and documents how to use the systemctl command to manage system services, configure systemd targets, and execute power management commands. Chapter 12, OpenSSH describes how to configure an SSH server and how to use the ssh , scp , and sftp client utilities to access it. | [
"install tog-pegasus",
"install openlmi-{storage,networking,service,account,powermanagement}",
"passwd pegasus",
"systemctl start tog-pegasus.service",
"systemctl enable tog-pegasus.service",
"firewall-cmd --add-port 5989/tcp",
"firewall-cmd --permanent --add-port 5989/tcp",
"install openlmi-tools",
"systemctl restart tog-pegasus.service",
"scp root@ hostname :/etc/Pegasus/server.pem /etc/pki/ca-trust/source/anchors/pegasus- hostname .pem",
"sha1sum /etc/Pegasus/server.pem",
"sha1sum /etc/pki/ca-trust/source/anchors/pegasus- hostname .pem",
"update-ca-trust extract",
"cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt",
"update-ca-trust extract",
"ipa service-add CIMOM/ hostname",
"ipa-getcert request -f /etc/Pegasus/server.pem -k /etc/Pegasus/file.pem -N CN= hostname -K CIMOM/ hostname",
"cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt",
"update-ca-trust extract",
"update-ca-trust extract",
"cp /path/to/ca.crt /etc/pki/ca-trust/source/anchors/ca.crt",
"update-ca-trust extract",
"[ req ] distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] C = US ST = Massachusetts L = Westford O = Fedora OU = Fedora OpenLMI CN = hostname",
"openssl genrsa -out /etc/Pegasus/file.pem 1024",
"openssl req -config /etc/Pegasus/ssl.cnf -new -key /etc/Pegasus/file.pem -out /etc/Pegasus/server.csr",
"cp /path/to/ca.crt /etc/Pegasus/client.pem",
"lmishell",
"lmishell --noverify",
"> (reverse-i-search)` connect ': c = connect(\"server.example.com\", \"pegasus\")",
"clear_history ()",
"history_file = \"~/.lmishell_history\" history_length = 1000",
"use_exceptions ()",
"use_exception ( False )",
"use_exceptions = True",
"object_name . clear_cache ()",
"object_name . use_cache ( False )",
"object_name . use_cache ( True )",
"use_cache = False",
"> quit() ~]USD",
"lmishell file_name",
"lmishell --interact file_name",
"connect ( host_name , user_name , password )",
"> c = connect(\"server.example.com\", \"pegasus\") password: >",
"connect ( host_name )",
"> c = connect(\"localhost\") >",
"isinstance ( object_name , LMIConnection )",
"> isinstance(c, LMIConnection) True >",
"> c is None False >",
"object_name . print_namespaces ()",
"object_name . namespaces",
"> c.root.print_namespaces() cimv2 interop PG_InterOp PG_Internal >",
"> root_namespaces = c.root.namespaces >",
"object_name . namespace_name",
"> ns = c.root.cimv2 >",
"namespace_object . print_classes()",
"namespace_object . classes ()",
"> ns.print_classes() CIM_CollectionInSystem CIM_ConcreteIdentity CIM_ControlledBy CIM_DeviceSAPImplementation CIM_MemberOfStatusCollection >",
"> cimv2_classes = ns.classes() >",
"namespace_object . class_name",
"> cls = ns.LMI_IPNetworkConnection >",
"class_object . classname",
"class_object . namespace",
"class_object . doc ()",
"> cls.classname 'LMI_IPNetworkConnection' > cls.namespace 'root/cimv2' >",
"> cls.doc() Class: LMI_IPNetworkConnection SuperClass: CIM_IPNetworkConnection [qualifier] string UMLPackagePath: 'CIM::Network::IP' [qualifier] string Version: '0.1.0'",
"class_object . print_methods ()",
"class_object . methods()",
"> cls.print_methods() RequestStateChange >",
"> service_methods = cls.methods() >",
"class_object . print_properties ()",
"class_object . properties ()",
"> cls.print_properties() RequestedState HealthState StatusDescriptions TransitioningToState Generation >",
"> service_properties = cls.properties() >",
"class_object . print_valuemap_properties ()",
"class_object . valuemap_properties ()",
"> cls.print_valuemap_properties() RequestedState HealthState TransitioningToState DetailedStatus OperationalStatus >",
"> service_valuemap_properties = cls.valuemap_properties() >",
"class_object . valuemap_property Values",
"class_object . valuemap_property Values . print_values ()",
"class_object . valuemap_property Values . values ()",
"> cls.RequestedStateValues.print_values() Reset NoChange NotApplicable Quiesce Unknown >",
"> requested_state_values = cls.RequestedStateValues.values() >",
"class_object . valuemap_property Values . constant_value_name",
"class_object . valuemap_property Values . value (\" constant_value_name \")",
"class_object . valuemap_property Values . value_name (\" constant_value \")",
"> cls.RequestedStateValues.Reset 11 > cls.RequestedStateValues.value(\"Reset\") 11 >",
"> cls.RequestedStateValues.value_name(11) u'Reset' >",
"class_object . fetch ()",
"class_object . instances ()",
"class_object . first_instance ()",
"class_object . instances ( criteria )",
"class_object . first_instance ( criteria )",
"> device = cls.first_instance({\"ElementName\": \"eth0\"}) >",
"instance_object . classname",
"instance_object . namespace",
"instance_object . path",
"instance_object . doc ()",
"> device.classname u'LMI_IPNetworkConnection' > device.namespace 'root/cimv2' >",
"> device.doc() Instance of LMI_IPNetworkConnection [property] uint16 RequestedState = '12' [property] uint16 HealthState [property array] string [] StatusDescriptions",
"class_object . create_instance ( properties )",
"> group = ns.LMI_Group.first_instance({\"Name\" : \"pegasus\"}) > user = ns.LMI_Account.first_instance({\"Name\" : \"lmishell-user\"}) >",
"> identity = user.first_associator(ResultClass=\"LMI_Identity\") >",
"> ns.LMI_MemberOfGroup.create_instance({ ... \"Member\" : identity.path, ... \"Collection\" : group.path}) LMIInstance(classname=\"LMI_MemberOfGroup\", ...) >",
"instance_object . delete ()",
"> user = ns.LMI_Account.first_instance({\"Name\" : \"lmishell-user\"}) >",
"> user.delete() True >",
"instance_object . print_properties ()",
"instance_object . properties ()",
"> device.print_properties() RequestedState HealthState StatusDescriptions TransitioningToState Generation >",
"> device_properties = device.properties() >",
"instance_object . property_name",
"instance_object . property_name = value",
"instance_object . push ()",
"> device.SystemName u'server.example.com' >",
"instance_object . print_methods ()",
"instance_object . methods ()",
"> device.print_methods() RequestStateChange >",
"> network_device_methods = device.methods() >",
"instance_object . method_name ( parameter = value , ...)",
"> sys = ns.PG_ComputerSystem.first_instance() >",
"> acc = ns.LMI_AccountManagementService.first_instance() >",
"> acc.CreateAccount(Name=\"lmishell-user\", System=sys) LMIReturnValue(rval=0, rparams=NocaseDict({u'Account': LMIInstanceName(classname=\"LMI_Account\"...), u'Identities': [LMIInstanceName(classname=\"LMI_Identity\"...), LMIInstanceName(classname=\"LMI_Identity\"...)]}), errorstr='')",
"instance_object . Sync method_name ( parameter = value , ...)",
"instance_object . Sync method_name ( PreferPolling = True parameter = value , ...)",
"instance_object . method_name . print_valuemap_parameters ()",
"instance_object . method_name . valuemap_parameters ()",
"> acc.CreateAccount.print_valuemap_parameters() CreateAccount >",
"> create_account_parameters = acc.CreateAccount.valuemap_parameters() >",
"instance_object . method_name . valuemap_parameter Values",
"instance_object . method_name . valuemap_parameter Values . print_values ()",
"instance_object . method_name . valuemap_parameter Values . values ()",
"> acc.CreateAccount.CreateAccountValues.print_values() Operationunsupported Failed Unabletosetpasswordusercreated Unabletocreatehomedirectoryusercreatedandpasswordset Operationcompletedsuccessfully >",
"> create_account_values = acc.CreateAccount.CreateAccountValues.values() >",
"instance_object . method_name . valuemap_parameter Values . constant_value_name",
"instance_object . method_name . valuemap_parameter Values . value (\" constant_value_name \")",
"instance_object . method_name . valuemap_parameter Values . value_name (\" constant_value \")",
"> acc.CreateAccount.CreateAccountValues.Failed 2 > acc.CreateAccount.CreateAccountValues.value(\"Failed\") 2 >",
"> acc.CreateAccount.CreateAccountValues.value_name(2) u'Failed' >",
"instance_object . refresh ()",
"> device.refresh() LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='') >",
"instance_object . tomof ()",
"> device.tomof() instance of LMI_IPNetworkConnection { RequestedState = 12; HealthState = NULL; StatusDescriptions = NULL; TransitioningToState = 12;",
"class_object . instance_names ()",
"class_object . first_instance_name ()",
"class_object . instance_names ( criteria )",
"class_object . first_instance_name ( criteria )",
"> device_name = cls.first_instance_name({\"Name\": \"eth0\"}) >",
"instance_name_object . classname",
"instance_name_object . namespace",
"> device_name.classname u'LMI_IPNetworkConnection' > device_name.namespace 'root/cimv2' >",
"class_object . new_instance_name ( key_properties )",
"> instance_name = ns.LMI_Account.new_instance_name({ ... \"CreationClassName\" : \"LMI_Account\", ... \"Name\" : \"lmishell-user\", ... \"SystemCreationClassName\" : \"PG_ComputerSystem\", ... \"SystemName\" : \"server\"}) >",
"instance_name_object . print_key_properties ()",
"instance_name_object . key_properties ()",
"> device_name.print_key_properties() CreationClassName SystemName Name SystemCreationClassName >",
"> device_name_properties = device_name.key_properties() >",
"instance_name_object . key_property_name",
"> device_name.SystemName u'server.example.com' >",
"instance_name_object . to_instance ()",
"> device = device_name.to_instance() >",
"instance_object . associators ( AssocClass= class_name , ResultClass= class_name , ResultRole= role , IncludeQualifiers= include_qualifiers , IncludeClassOrigin= include_class_origin , PropertyList= property_list )",
"instance_object . first_associator ( AssocClass= class_name , ResultClass= class_name , ResultRole= role , IncludeQualifiers= include_qualifiers , IncludeClassOrigin= include_class_origin , PropertyList= property_list )",
"> vda = ns.LMI_StorageExtent.first_instance({ ... \"DeviceID\" : \"/dev/vda\"}) >",
"> vda_partitions = vda.associators(ResultClass=\"LMI_DiskPartition\") >",
"instance_object . associator_names ( AssocClass= class_name , ResultClass= class_name , Role= role , ResultRole= role )",
"instance_object . first_associator_name ( AssocClass= class_object , ResultClass= class_object , Role= role , ResultRole= role )",
"> vda_partitions = vda.associator_names(ResultClass=\"LMI_DiskPartition\") >",
"instance_object . references ( ResultClass= class_name , Role= role , IncludeQualifiers= include_qualifiers , IncludeClassOrigin= include_class_origin , PropertyList= property_list )",
"instance_object . first_reference ( ... ResultClass= class_name , ... Role= role , ... IncludeQualifiers= include_qualifiers , ... IncludeClassOrigin= include_class_origin , ... PropertyList= property_list ) >",
"> lan_endpoint = ns.LMI_LANEndpoint.first_instance({ ... \"Name\" : \"eth0\"}) >",
"> bind = lan_endpoint.first_reference( ... ResultClass=\"LMI_BindsToLANEndpoint\") >",
"> ip = bind.Dependent.to_instance() > print ip.IPv4Address 192.168.122.1 >",
"instance_object . reference_names ( ResultClass= class_name , Role= role )",
"instance_object . first_reference_name ( ResultClass= class_name , Role= role )",
"> bind = lan_endpoint.first_reference_name( ... ResultClass=\"LMI_BindsToLANEndpoint\")",
"> ip = bind.Dependent.to_instance() > print ip.IPv4Address 192.168.122.1 >",
"connection_object . subscribe_indication ( QueryLanguage= \"WQL\" , Query= 'SELECT * FROM CIM_InstModification' , Name= \"cpu\" , CreationNamespace= \"root/interop\" , SubscriptionCreationClassName= \"CIM_IndicationSubscription\" , FilterCreationClassName= \"CIM_IndicationFilter\" , FilterSystemCreationClassName= \"CIM_ComputerSystem\" , FilterSourceNamespace= \"root/cimv2\" , HandlerCreationClassName= \"CIM_IndicationHandlerCIMXML\" , HandlerSystemCreationClassName= \"CIM_ComputerSystem\" , Destination= \"http://host_name:5988\" )",
"connection_object . subscribe_indication ( Query= 'SELECT * FROM CIM_InstModification' , Name= \"cpu\" , Destination= \"http://host_name:5988\" )",
"> c.subscribe_indication( ... QueryLanguage=\"WQL\", ... Query='SELECT * FROM CIM_InstModification', ... Name=\"cpu\", ... CreationNamespace=\"root/interop\", ... SubscriptionCreationClassName=\"CIM_IndicationSubscription\", ... FilterCreationClassName=\"CIM_IndicationFilter\", ... FilterSystemCreationClassName=\"CIM_ComputerSystem\", ... FilterSourceNamespace=\"root/cimv2\", ... HandlerCreationClassName=\"CIM_IndicationHandlerCIMXML\", ... HandlerSystemCreationClassName=\"CIM_ComputerSystem\", ... Destination=\"http://server.example.com:5988\") LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='') >",
"connection_object . print_subscribed_indications ()",
"connection_object . subscribed_indications ()",
"> c.print_subscribed_indications() >",
"> indications = c.subscribed_indications() >",
"connection_object . unsubscribe_indication ( indication_name )",
"connection_object . unsubscribe_all_indications ()",
"> c.unsubscribe_indication('cpu') LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='') >",
"> def handler(ind, arg1, arg2, kwargs): ... exported_objects = ind.exported_objects() ... do_something_with(exported_objects) > listener = LmiIndicationListener(\"0.0.0.0\", listening_port) > listener.add_handler(\"indication-name-XXXXXXXX\", handler, arg1, arg2, kwargs) > listener.start() >",
"#!/usr/bin/lmishell import sys from time import sleep from lmi.shell.LMIUtil import LMIPassByRef from lmi.shell.LMIIndicationListener import LMIIndicationListener These are passed by reference to indication_callback var1 = LMIPassByRef(\"some_value\") var2 = LMIPassByRef(\"some_other_value\") def indication_callback(ind, var1, var2): # Do something with ind, var1 and var2 print ind.exported_objects() print var1.value print var2.value c = connect(\"hostname\", \"username\", \"password\") listener = LMIIndicationListener(\"0.0.0.0\", 65500) unique_name = listener.add_handler( \"demo-XXXXXXXX\", # Creates a unique name for me indication_callback, # Callback to be called var1, # Variable passed by ref var2 # Variable passed by ref ) listener.start() print c.subscribe_indication( Name=unique_name, Query=\"SELECT * FROM LMI_AccountInstanceCreationIndication WHERE SOURCEINSTANCE ISA LMI_Account\", Destination=\"192.168.122.1:65500\" ) try: while True: sleep(60) except KeyboardInterrupt: sys.exit(0)",
"c = connect(\"host_name\", \"user_name\", \"password\") ns = c.root.cimv2",
"for service in ns.LMI_Service.instances(): print \"%s:\\t%s\" % (service.Name, service.Status)",
"cls = ns.LMI_Service for service in cls.instances(): if service.EnabledDefault == cls.EnabledDefaultValues.Enabled: print service.Name",
"cups = ns.LMI_Service.first_instance({\"Name\": \"cups.service\"}) cups.doc()",
"cups = ns.LMI_Service.first_instance({\"Name\": \"cups.service\"}) cups.StartService() print cups.Status cups.StopService() print cups.Status",
"cups = ns.LMI_Service.first_instance({\"Name\": \"cups.service\"}) cups.TurnServiceOff() print cups.EnabledDefault cups.TurnServiceOn() print cups.EnabledDefault",
"device = ns.LMI_IPNetworkConnection.first_instance({'ElementName': 'eth0'}) for endpoint in device.associators(AssocClass=\"LMI_NetworkSAPSAPDependency\", ResultClass=\"LMI_IPProtocolEndpoint\"): if endpoint.ProtocolIFType == ns.LMI_IPProtocolEndpoint.ProtocolIFTypeValues.IPv4: print \"IPv4: %s/%s\" % (endpoint.IPv4Address, endpoint.SubnetMask) elif endpoint.ProtocolIFType == ns.LMI_IPProtocolEndpoint.ProtocolIFTypeValues.IPv6: print \"IPv6: %s/%d\" % (endpoint.IPv6Address, endpoint.IPv6SubnetPrefixLength)",
"for rsap in device.associators(AssocClass=\"LMI_NetworkRemoteAccessAvailableToElement\", ResultClass=\"LMI_NetworkRemoteServiceAccessPoint\"): if rsap.AccessContext == ns.LMI_NetworkRemoteServiceAccessPoint.AccessContextValues.DefaultGateway: print \"Default Gateway: %s\" % rsap.AccessInfo",
"dnsservers = set() for ipendpoint in device.associators(AssocClass=\"LMI_NetworkSAPSAPDependency\", ResultClass=\"LMI_IPProtocolEndpoint\"): for dnsedpoint in ipendpoint.associators(AssocClass=\"LMI_NetworkSAPSAPDependency\", ResultClass=\"LMI_DNSProtocolEndpoint\"): for rsap in dnsedpoint.associators(AssocClass=\"LMI_NetworkRemoteAccessAvailableToElement\", ResultClass=\"LMI_NetworkRemoteServiceAccessPoint\"): if rsap.AccessContext == ns.LMI_NetworkRemoteServiceAccessPoint.AccessContextValues.DNSServer: dnsservers.add(rsap.AccessInfo) print \"DNS:\", \", \".join(dnsservers)",
"capability = ns.LMI_IPNetworkConnectionCapabilities.first_instance({ 'ElementName': 'eth0' }) result = capability.LMI_CreateIPSetting(Caption='eth0 Static', IPv4Type=capability.LMI_CreateIPSetting.IPv4TypeValues.Static, IPv6Type=capability.LMI_CreateIPSetting.IPv6TypeValues.Stateless) setting = result.rparams[\"SettingData\"].to_instance() for settingData in setting.associators(AssocClass=\"LMI_OrderedIPAssignmentComponent\"): if setting.ProtocolIFType == ns.LMI_IPAssignmentSettingData.ProtocolIFTypeValues.IPv4: # Set static IPv4 address settingData.IPAddresses = [\"192.168.1.100\"] settingData.SubnetMasks = [\"255.255.0.0\"] settingData.GatewayAddresses = [\"192.168.1.1\"] settingData.push()",
"setting = ns.LMI_IPAssignmentSettingData.first_instance({ \"Caption\": \"eth0 Static\" }) port = ns.LMI_IPNetworkConnection.first_instance({ 'ElementName': 'ens8' }) service = ns.LMI_IPConfigurationService.first_instance() service.SyncApplySettingToIPNetworkConnection(SettingData=setting, IPNetworkConnection=port, Mode=32768)",
"MEGABYTE = 1024*1024 storage_service = ns.LMI_StorageConfigurationService.first_instance() filesystem_service = ns.LMI_FileSystemConfigurationService.first_instance()",
"Find the devices to add to the volume group (filtering the CIM_StorageExtent.instances() call would be faster, but this is easier to read): sda1 = ns.CIM_StorageExtent.first_instance({\"Name\": \"/dev/sda1\"}) sdb1 = ns.CIM_StorageExtent.first_instance({\"Name\": \"/dev/sdb1\"}) sdc1 = ns.CIM_StorageExtent.first_instance({\"Name\": \"/dev/sdc1\"}) Create a new volume group: (ret, outparams, err) = storage_service.SyncCreateOrModifyVG( ElementName=\"myGroup\", InExtents=[sda1, sdb1, sdc1]) vg = outparams['Pool'].to_instance() print \"VG\", vg.PoolID, \"with extent size\", vg.ExtentSize, \"and\", vg.RemainingExtents, \"free extents created.\"",
"Find the volume group: vg = ns.LMI_VGStoragePool.first_instance({\"Name\": \"/dev/mapper/myGroup\"}) Create the first logical volume: (ret, outparams, err) = storage_service.SyncCreateOrModifyLV( ElementName=\"Vol1\", InPool=vg, Size=100 * MEGABYTE) lv = outparams['TheElement'].to_instance() print \"LV\", lv.DeviceID, \"with\", lv.BlockSize * lv.NumberOfBlocks, \"bytes created.\" Create the second logical volume: (ret, outparams, err) = storage_service.SyncCreateOrModifyLV( ElementName=\"Vol2\", InPool=vg, Size=100 * MEGABYTE) lv = outparams['TheElement'].to_instance() print \"LV\", lv.DeviceID, \"with\", lv.BlockSize * lv.NumberOfBlocks, \"bytes created.\"",
"(ret, outparams, err) = filesystem_service.SyncLMI_CreateFileSystem( FileSystemType=filesystem_service.LMI_CreateFileSystem.FileSystemTypeValues.EXT3, InExtents=[lv])",
"Find the file system on the logical volume: fs = lv.first_associator(ResultClass=\"LMI_LocalFileSystem\") mount_service = ns.LMI_MountConfigurationService.first_instance() (rc, out, err) = mount_service.SyncCreateMount( FileSystemType='ext3', Mode=32768, # just mount FileSystem=fs, MountPoint='/mnt/test', FileSystemSpec=lv.Name)",
"devices = ns.CIM_StorageExtent.instances() for device in devices: if lmi_isinstance(device, ns.CIM_Memory): # Memory and CPU caches are StorageExtents too, do not print them continue print device.classname, print device.DeviceID, print device.Name, print device.BlockSize*device.NumberOfBlocks",
"cpu = ns.LMI_Processor.first_instance() cpu_cap = cpu.associators(ResultClass=\"LMI_ProcessorCapabilities\")[0] print cpu.Name print cpu_cap.NumberOfProcessorCores print cpu_cap.NumberOfHardwareThreads",
"mem = ns.LMI_Memory.first_instance() for i in mem.associators(ResultClass=\"LMI_PhysicalMemory\"): print i.Name",
"chassis = ns.LMI_Chassis.first_instance() print chassis.Manufacturer print chassis.Model",
"for pci in ns.LMI_PCIDevice.instances(): print pci.Name",
"easy_install --user openlmi-scripts",
"easy_install --user package_name"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/chap-openlmi |