4. Branding and Chroming the Graphical User Interface
4. Branding and Chroming the Graphical User Interface The following sections describe changing the appearance of the graphical user interface (GUI) of the Anaconda installer. There are several elements in the graphical user interface of Anaconda which can be changed to customize the look of the installer. To customize the installer's appearance, you must create a custom product.img file containing a custom installclass (to change the product name displayed in the installer) and your own branding material. The product.img file is not an installation image; it is used to supplement the full installation ISO image by loading your customizations and using them to overwrite files included on the boot image by default. See Section 2, "Working with ISO Images" for information about extracting boot images provided by Red Hat, creating a product.img file and adding this file to the ISO images. 4.1. Customizing Graphical Elements Graphical elements of the installer which can be changed are stored in the /usr/share/anaconda/pixmaps/ directory in the installer runtime file system. This directory contains the following files: Additionally, the /usr/share/anaconda/ directory contains a CSS stylesheet named anaconda-gtk.css , which determines the file names and parameters of the main UI elements - the logo and the backgrounds for the side bar and top bar. The file has the following contents: /* vendor-specific colors/images */ @define-color redhat #021519; /* logo and sidebar classes for RHEL */ .logo-sidebar { background-image: url('/usr/share/anaconda/pixmaps/sidebar-bg.png'); background-color: @redhat; background-repeat: no-repeat; } .logo { background-image: url('/usr/share/anaconda/pixmaps/sidebar-logo.png'); background-position: 50% 20px; background-repeat: no-repeat; background-color: transparent; } AnacondaSpokeWindow #nav-box { background-color: @redhat; background-image: url('/usr/share/anaconda/pixmaps/topbar-bg.png'); background-repeat: no-repeat; color: white; } AnacondaSpokeWindow #layout-indicator { color: black; } The most important part of the CSS file is the way it handles scaling based on resolution. The PNG image backgrounds do not scale; they are always displayed in their true dimensions. Instead, the images have a transparent background, and the style sheet defines a matching background color on the @define-color line. Therefore, the background images "fade" into the background color , which means that the backgrounds work on all resolutions without a need for image scaling. You could also change the background-repeat parameters to tile the background, or, if you are confident that every system you will be installing on will have the same display resolution, you can use background images which fill the entire bar. The rnotes/ directory contains a set of banners. During the installation, banner graphics cycle along the bottom of the screen, approximately once per minute. Any of the files listed above can be customized. Once you do so, follow the instructions in Section 2.2, "Creating a product.img File" to create your own product.img with custom graphics, and then Section 2.3, "Creating Custom Boot Images" to create a new bootable ISO image with your changes included. 4.2. Customizing the Product Name Apart from the graphical elements described in the previous section, you can also customize the product name displayed during the installation. This product name is shown in the top right corner in all screens. To change the product name, you must create a custom installation class .
Create a new file named custom.py with content similar to the example below: Example 1. Creating a Custom Installclass from pyanaconda.installclass import BaseInstallClass from pyanaconda.product import productName from pyanaconda import network from pyanaconda import nm class CustomBaseInstallClass(BaseInstallClass): name = "My Distribution" sortPriority = 30000 if not productName.startswith("My Distribution"): hidden = True defaultFS = "xfs" bootloaderTimeoutDefault = 5 bootloaderExtraArgs = [] ignoredPackages = ["ntfsprogs"] installUpdates = False _l10n_domain = "comps" efi_dir = "redhat" help_placeholder = "RHEL7Placeholder.html" help_placeholder_with_links = "RHEL7PlaceholderWithLinks.html" def configure(self, anaconda): BaseInstallClass.configure(self, anaconda) BaseInstallClass.setDefaultPartitioning(self, anaconda.storage) def setNetworkOnbootDefault(self, ksdata): if ksdata.method.method not in ("url", "nfs"): return if network.has_some_wired_autoconnect_device(): return dev = network.default_route_device() if not dev: return if nm.nm_device_type_is_wifi(dev): return network.update_onboot_value(dev, "yes", ksdata) def __init__(self): BaseInstallClass.__init__(self) The file above determines the installer defaults (such as the default file system), but the part relevant to this procedure is the following block: class CustomBaseInstallClass (BaseInstallClass): name = " My Distribution " sortPriority = 30000 if not productName.startswith("My Distribution"): hidden = True Change My Distribution to the name that you want to display in the installer. Also make sure that the sortPriority attribute is set to more than 20000 ; this ensures that the new installation class is loaded first. Warning Do not change any other attributes or class names in the file - otherwise you may cause the installer to behave unpredictably. After you create the custom installclass, follow the steps in Section 2.2, "Creating a product.img File" to create a new product.img file containing your customizations, and then Section 2.3, "Creating Custom Boot Images" to create a new bootable ISO file with your changes included.
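As a rough illustration of how these pieces fit together, the following shell sketch stages replacement graphics and the custom.py installclass in a local directory tree and packs it into a product.img file. The target path for the installclass and the cpio/gzip packaging step are assumptions based on the general workflow; Section 2.2, "Creating a product.img File" describes the authoritative layout and procedure.

mkdir -p product/usr/share/anaconda/pixmaps/rnotes/en
cp my-sidebar-logo.png product/usr/share/anaconda/pixmaps/sidebar-logo.png    # replacement logo
cp my-topbar-bg.png product/usr/share/anaconda/pixmaps/topbar-bg.png          # replacement top bar background
cp my-banner-*.png product/usr/share/anaconda/pixmaps/rnotes/en/              # replacement installation banners
mkdir -p product/run/install/product/pyanaconda/installclasses                # assumed installclass location
cp custom.py product/run/install/product/pyanaconda/installclasses/
cd product && find . | cpio -c -o | gzip -9c > ../product.img && cd ..        # pack the overlay (assumed format)

The resulting product.img is then added to a boot image as described in Section 2.3, "Creating Custom Boot Images".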
[ "pixmaps β”œβ”€ anaconda-selected-icon.svg β”œβ”€ dialog-warning-symbolic.svg β”œβ”€ right-arrow-icon.png β”œβ”€ rnotes β”‚ └─ en β”‚ β”œβ”€ RHEL_7_InstallerBanner_Andreas_750x120_11649367_1213jw.png β”‚ β”œβ”€ RHEL_7_InstallerBanner_Blog_750x120_11649367_1213jw.png β”‚ β”œβ”€ RHEL_7_InstallerBanner_CPAccess_CommandLine_750x120_11649367_1213jw.png β”‚ β”œβ”€ RHEL_7_InstallerBanner_CPAccess_Desktop_750x120_11649367_1213jw.png β”‚ β”œβ”€ RHEL_7_InstallerBanner_CPAccess_Help_750x120_11649367_1213jw.png β”‚ β”œβ”€ RHEL_7_InstallerBanner_Middleware_750x120_11649367_1213jw.png β”‚ β”œβ”€ RHEL_7_InstallerBanner_OPSEN_750x120_11649367_1213cd.png β”‚ β”œβ”€ RHEL_7_InstallerBanner_RHDev_Program_750x120_11649367_1213cd.png β”‚ β”œβ”€ RHEL_7_InstallerBanner_RHELStandardize_750x120_11649367_1213jw.png β”‚ └─ RHEL_7_InstallerBanner_Satellite_750x120_11649367_1213cd.png β”œβ”€ sidebar-bg.png β”œβ”€ sidebar-logo.png └─ topbar-bg.png", "/* vendor-specific colors/images */ @define-color redhat #021519; /* logo and sidebar classes for RHEL */ .logo-sidebar { background-image: url('/usr/share/anaconda/pixmaps/sidebar-bg.png'); background-color: @redhat; background-repeat: no-repeat; } .logo { background-image: url('/usr/share/anaconda/pixmaps/sidebar-logo.png'); background-position: 50% 20px; background-repeat: no-repeat; background-color: transparent; } AnacondaSpokeWindow #nav-box { background-color: @redhat; background-image: url('/usr/share/anaconda/pixmaps/topbar-bg.png'); background-repeat: no-repeat; color: white; } AnacondaSpokeWindow #layout-indicator { color: black; }", "from pyanaconda.installclass import BaseInstallClass from pyanaconda.product import productName from pyanaconda import network from pyanaconda import nm class CustomBaseInstallClass(BaseInstallClass): name = \"My Distribution\" sortPriority = 30000 if not productName.startswith(\"My Distribution\"): hidden = True defaultFS = \"xfs\" bootloaderTimeoutDefault = 5 bootloaderExtraArgs = [] ignoredPackages = [\"ntfsprogs\"] installUpdates = False _l10n_domain = \"comps\" efi_dir = \"redhat\" help_placeholder = \"RHEL7Placeholder.html\" help_placeholder_with_links = \"RHEL7PlaceholderWithLinks.html\" def configure(self, anaconda): BaseInstallClass.configure(self, anaconda) BaseInstallClass.setDefaultPartitioning(self, anaconda.storage) def setNetworkOnbootDefault(self, ksdata): if ksdata.method.method not in (\"url\", \"nfs\"): return if network.has_some_wired_autoconnect_device(): return dev = network.default_route_device() if not dev: return if nm.nm_device_type_is_wifi(dev): return network.update_onboot_value(dev, \"yes\", ksdata) def __init__(self): BaseInstallClass.__init__(self)", "class CustomBaseInstallClass (BaseInstallClass): name = \" My Distribution \" sortPriority = 30000 if not productName.startswith(\"My Distribution\"): hidden = True" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/anaconda_customization_guide/sect-anaconda-visuals
Chapter 4. Managing Users and Groups
Chapter 4. Managing Users and Groups The control of users and groups is a core element of Red Hat Enterprise Linux system administration. This chapter explains how to add, manage, and delete users and groups in the graphical user interface and on the command line, and covers advanced topics, such as creating group directories. 4.1. Introduction to Users and Groups While users can be either people (meaning accounts tied to physical users) or accounts that exist for specific applications to use, groups are logical expressions of organization, tying users together for a common purpose. Users within a group share the same permissions to read, write, or execute files owned by that group. Each user is associated with a unique numerical identification number called a user ID ( UID ). Likewise, each group is associated with a group ID ( GID ). A user who creates a file is also the owner and group owner of that file. The file is assigned separate read, write, and execute permissions for the owner, the group, and everyone else. The file owner can be changed only by root , and access permissions can be changed by both the root user and file owner. Additionally, Red Hat Enterprise Linux supports access control lists ( ACLs ) for files and directories which allow permissions for specific users outside of the owner to be set. For more information about this feature, see Chapter 5, Access Control Lists . Reserved User and Group IDs Red Hat Enterprise Linux reserves user and group IDs below 1000 for system users and groups. By default, the User Manager does not display the system users. Reserved user and group IDs are documented in the setup package. To view the documentation, use this command: The recommended practice is to assign IDs starting at 5,000 that were not already reserved, as the reserved range can increase in the future. To make the IDs assigned to new users by default start at 5,000, change the UID_MIN and GID_MIN directives in the /etc/login.defs file: Note For users created before you changed UID_MIN and GID_MIN directives, UIDs will still start at the default 1000. Even with new user and group IDs beginning with 5,000, it is recommended not to raise IDs reserved by the system above 1000 to avoid conflict with systems that retain the 1000 limit. 4.1.1. User Private Groups Red Hat Enterprise Linux uses a user private group ( UPG ) scheme, which makes UNIX groups easier to manage. A user private group is created whenever a new user is added to the system. It has the same name as the user for which it was created and that user is the only member of the user private group. User private groups make it safe to set default permissions for a newly created file or directory, allowing both the user and the group of that user to make modifications to the file or directory. The setting which determines what permissions are applied to a newly created file or directory is called a umask and is configured in the /etc/bashrc file. Traditionally on UNIX-based systems, the umask is set to 022 , which allows only the user who created the file or directory to make modifications. Under this scheme, all other users, including members of the creator's group , are not allowed to make any modifications. However, under the UPG scheme, this "group protection" is not necessary since every user has their own private group. See Section 4.3.5, "Setting Default Permissions for New Files Using umask " for more information. A list of all groups is stored in the /etc/group configuration file. 4.1.2. 
Shadow Passwords In environments with multiple users, it is very important to use shadow passwords provided by the shadow-utils package to enhance the security of system authentication files. For this reason, the installation program enables shadow passwords by default. The following is a list of the advantages shadow passwords have over the traditional way of storing passwords on UNIX-based systems: Shadow passwords improve system security by moving encrypted password hashes from the world-readable /etc/passwd file to /etc/shadow , which is readable only by the root user. Shadow passwords store information about password aging. Shadow passwords make it possible to enforce some of the security policies set in the /etc/login.defs file. Most utilities provided by the shadow-utils package work properly whether or not shadow passwords are enabled. However, since password aging information is stored exclusively in the /etc/shadow file, some utilities and commands do not work without first enabling shadow passwords: The chage utility for setting password aging parameters. For details, see the Password Security section in the Red Hat Enterprise Linux 7 Security Guide . The gpasswd utility for administering the /etc/group file. The usermod command with the -e, --expiredate or -f, --inactive option. The useradd command with the -e, --expiredate or -f, --inactive option. 4.2. Managing Users in a Graphical Environment The Users utility allows you to view, modify, add, and delete local users in the graphical user interface. 4.2.1. Using the Users Settings Tool Press the Super key to enter the Activities Overview, type Users and then press Enter . The Users settings tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Space bar. Alternatively, you can open the Users utility from the Settings menu after clicking your user name in the top right corner of the screen. To make changes to the user accounts, first select the Unlock button and authenticate yourself as indicated by the dialog box that appears. Note that unless you have superuser privileges, the application will prompt you to authenticate as root . To add and remove users, select the + and - buttons, respectively. To add a user to the administrative group wheel , change the Account Type from Standard to Administrator . To edit a user's language setting, select the language; a drop-down menu appears. Figure 4.1. The Users Settings Tool When a new user is created, the account is disabled until a password is set. The Password drop-down menu, shown in Figure 4.2, "The Password Menu" , contains the options to set a password by the administrator immediately, choose a password by the user at the first login, or create a guest account with no password required to log in. You can also disable or enable an account from this menu. Figure 4.2. The Password Menu 4.3. Using Command-Line Tools Apart from the Users settings tool described in Section 4.2, "Managing Users in a Graphical Environment" , which is designed for basic management of users, you can use the command-line tools for managing users and groups that are listed in Table 4.1, "Command line utilities for managing users and groups" . Table 4.1. Command line utilities for managing users and groups Utilities Description id Displays user and group IDs. useradd , usermod , userdel Standard utilities for adding, modifying, and deleting user accounts.
groupadd , groupmod , groupdel Standard utilities for adding, modifying, and deleting groups. gpasswd Utility primarily used for modification of the group password in the /etc/gshadow file, which is used by the newgrp command. pwck , grpck Utilities that can be used for verification of the password, group, and associated shadow files. pwconv , pwunconv Utilities that can be used for the conversion of passwords to shadow passwords, or back from shadow passwords to standard passwords. grpconv , grpunconv Similar to pwconv and pwunconv , these utilities can be used for conversion of shadowed information for group accounts. 4.3.1. Adding a New User To add a new user to the system, type the following at a shell prompt as root : ...where options are command-line options as described in Table 4.2, "Common useradd command-line options" . By default, the useradd command creates a locked user account. To unlock the account, run the following command as root to assign a password: Optionally, you can set a password aging policy. See the Password Security section in the Red Hat Enterprise Linux 7 Security Guide . Table 4.2. Common useradd command-line options Option Description -c ' comment ' comment can be replaced with any string. This option is generally used to specify the full name of a user. -d home_directory Home directory to be used instead of the default /home/ username / . -e date Date for the account to be disabled in the format YYYY-MM-DD. -f days Number of days after the password expires until the account is disabled. If 0 is specified, the account is disabled immediately after the password expires. If -1 is specified, the account is not disabled after the password expires. -g group_name Group name or group number for the user's default (primary) group. The group must exist prior to being specified here. -G group_list List of additional (supplementary, other than default) group names or group numbers, separated by commas, of which the user is a member. The groups must exist prior to being specified here. -m Create the home directory if it does not exist. -M Do not create the home directory. -N Do not create a user private group for the user. -p password The password encrypted with crypt . -r Create a system account with a UID less than 1000 and without a home directory. -s User's login shell, which defaults to /bin/bash . -u uid User ID for the user, which must be unique and greater than 999. Important The default range of IDs for system and normal users has been changed in Red Hat Enterprise Linux 7 from earlier releases. Previously, UID 1-499 was used for system users and values above 499 for normal users. The default range for system users is now 1-999. This change might cause problems when migrating to Red Hat Enterprise Linux 7 with existing users having UIDs and GIDs between 500 and 999. The default ranges of UID and GID can be changed in the /etc/login.defs file. Explaining the Process The following steps illustrate what happens if the command useradd juan is issued on a system that has shadow passwords enabled: A new line for juan is created in /etc/passwd : The line has the following characteristics: It begins with the user name juan . There is an x for the password field indicating that the system is using shadow passwords. A UID greater than 999 is created. Under Red Hat Enterprise Linux 7, UIDs below 1000 are reserved for system use and should not be assigned to users. A GID greater than 999 is created. Under Red Hat Enterprise Linux 7, GIDs below 1000 are reserved for system use and should not be assigned to users.
The optional GECOS information is left blank. The GECOS field can be used to provide additional information about the user, such as their full name or phone number. The home directory for juan is set to /home/juan/ . The default shell is set to /bin/bash . A new line for juan is created in /etc/shadow : The line has the following characteristics: It begins with the user name juan . Two exclamation marks ( !! ) appear in the password field of the /etc/shadow file, which locks the account. Note If an encrypted password is passed using the -p flag, it is placed in the /etc/shadow file on the new line for the user. The password is set to never expire. A new line for a group named juan is created in /etc/group : A group with the same name as a user is called a user private group . For more information on user private groups, see Section 4.1.1, "User Private Groups" . The line created in /etc/group has the following characteristics: It begins with the group name juan . An x appears in the password field indicating that the system is using shadow group passwords. The GID matches the one listed for juan 's primary group in /etc/passwd . A new line for a group named juan is created in /etc/gshadow : The line has the following characteristics: It begins with the group name juan . An exclamation mark ( ! ) appears in the password field of the /etc/gshadow file, which locks the group. All other fields are blank. A directory for user juan is created in the /home directory: This directory is owned by user juan and group juan . It has read , write , and execute privileges only for the user juan . All other permissions are denied. The files within the /etc/skel/ directory (which contain default user settings) are copied into the new /home/juan/ directory: At this point, a locked account called juan exists on the system. To activate it, the administrator must assign a password to the account using the passwd command and, optionally, set password aging guidelines (see the Password Security section in the Red Hat Enterprise Linux 7 Security Guide for details). 4.3.2. Adding a New Group To add a new group to the system, type the following at a shell prompt as root : ...where options are command-line options as described in Table 4.3, "Common groupadd command-line options" . Table 4.3. Common groupadd command-line options Option Description -f , --force When used with -g gid and gid already exists, groupadd will choose another unique gid for the group. -g gid Group ID for the group, which must be unique and greater than 999. -K , --key key = value Override /etc/login.defs defaults. -o , --non-unique Allows creating groups with duplicate GID. -p , --password password Use this encrypted password for the new group. -r Create a system group with a GID less than 1000. 4.3.3. Adding an Existing User to an Existing Group Use the usermod utility to add an already existing user to an already existing group. Various options of usermod have different impact on user's primary group and on his or her supplementary groups. To override user's primary group, run the following command as root : To override user's supplementary groups, run the following command as root : Note that in this case all supplementary groups of the user are replaced by the new group or several new groups. To add one or more groups to user's supplementary groups, run one of the following commands as root : Note that in this case the new group is added to user's current supplementary groups. 4.3.4. 
Creating Group Directories System administrators usually like to create a group for each major project and assign people to the group when they need to access that project's files. With this traditional scheme, file management is difficult; when someone creates a file, it is associated with the primary group to which they belong. When a single person works on multiple projects, it becomes difficult to associate the right files with the right group. However, with the UPG scheme, groups are automatically assigned to files created within a directory with the setgid bit set. The setgid bit makes managing group projects that share a common directory very simple because any files a user creates within the directory are owned by the group that owns the directory. For example, a group of people need to work on files in the /opt/myproject/ directory. Some people are trusted to modify the contents of this directory, but not everyone. As root , create the /opt/myproject/ directory by typing the following at a shell prompt: Add the myproject group to the system: Associate the contents of the /opt/myproject/ directory with the myproject group: Allow users in the group to create files within the directory and set the setgid bit: At this point, all members of the myproject group can create and edit files in the /opt/myproject/ directory without the administrator having to change file permissions every time users write new files. To verify that the permissions have been set correctly, run the following command: Add users to the myproject group: 4.3.5. Setting Default Permissions for New Files Using umask When a process creates a file, the file has certain default permissions, for example, -rw-rw-r-- . These initial permissions are partially defined by the file mode creation mask , also called file permission mask or umask . Every process has its own umask, for example, bash has umask 0022 by default. Process umask can be changed. What umask consists of A umask consists of bits corresponding to standard file permissions. For example, for umask 0137 , the digits mean that: 0 = no meaning, it is always 0 (umask does not affect special bits) 1 = for owner permissions, the execute bit is set 3 = for group permissions, the execute and write bits are set 7 = for others permissions, the execute, write, and read bits are set Umasks can be represented in binary, octal, or symbolic notation. For example, the octal representation 0137 equals symbolic representation u=rw-,g=r--,o=--- . Symbolic notation specification is the reverse of the octal notation specification: it shows the allowed permissions, not the prohibited permissions. How umask works Umask prohibits permissions from being set for a file: When a bit is set in umask , it is unset in the file. When a bit is not set in umask , it can be set in the file, depending on other factors. The following figure shows how umask 0137 affects creating a new file. Figure 4.3. Applying umask when creating a file Important For security reasons, a regular file cannot have execute permissions by default. Therefore, even if umask is 0000 , which does not prohibit any permissions, a new regular file still does not have execute permissions. However, directories can be created with execute permissions: 4.3.5.1. Managing umask in Shells For popular shells, such as bash , ksh , zsh and tcsh , umask is managed using the umask shell builtin . Processes started from shell inherit its umask. 
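Because processes inherit the umask of the shell that starts them, tightening the mask in an interactive session immediately affects the permissions of any files that session creates. A minimal illustration (the file name and the listing output are only an example):

~]$ umask 0077
~]$ touch private-file
~]$ ls -l private-file
-rw-------. 1 john john 0 Nov  2 13:17 private-file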
Displaying the current mask To show the current umask in octal notation: To show the current umask in symbolic notation: Setting mask in shell using umask To set umask for the current shell session using octal notation, run: Substitute octal_mask with four or fewer digits from 0 to 7 . When three or fewer digits are provided, permissions are set as if the command contained leading zeros. For example, umask 7 translates to 0007 . Example 4.1. Setting umask Using Octal Notation To prohibit new files from having write and execute permissions for owner and group, and from having any permissions for others: Or simply: To set umask for the current shell session using symbolic notation: Example 4.2. Setting umask Using Symbolic Notation To set umask 0337 using symbolic notation: Working with the default shell umask Shells usually have a configuration file where their default umask is set. For bash , it is /etc/bashrc . To show the default bash umask: The output shows if umask is set, either using the umask command or the UMASK variable. In the following example, umask is set to 022 using the umask command: To change the default umask for bash , change the umask command call or the UMASK variable assignment in /etc/bashrc . This example changes the default umask to 0227 : Working with the default shell umask of a specific user By default, the bash umask of a new user is the one defined in /etc/bashrc . To change the bash umask for a particular user, add a call to the umask command in the $HOME/.bashrc file of that user. For example, to change the bash umask of user john to 0227 : Setting default permissions for newly created home directories To change the permissions with which user home directories are created, change the UMASK variable in the /etc/login.defs file: 4.4. Additional Resources For more information on how to manage users and groups on Red Hat Enterprise Linux, see the resources listed below. Installed Documentation For information about various utilities for managing users and groups, see the following manual pages: useradd (8) - The manual page for the useradd command documents how to use it to create new users. userdel (8) - The manual page for the userdel command documents how to use it to delete users. usermod (8) - The manual page for the usermod command documents how to use it to modify users. groupadd (8) - The manual page for the groupadd command documents how to use it to create new groups. groupdel (8) - The manual page for the groupdel command documents how to use it to delete groups. groupmod (8) - The manual page for the groupmod command documents how to use it to modify group membership. gpasswd (1) - The manual page for the gpasswd command documents how to manage the /etc/group file. grpck (8) - The manual page for the grpck command documents how to use it to verify the integrity of the /etc/group file. pwck (8) - The manual page for the pwck command documents how to use it to verify the integrity of the /etc/passwd and /etc/shadow files. pwconv (8) - The manual page for the pwconv , pwunconv , grpconv , and grpunconv commands documents how to convert shadowed information for passwords and groups. id (1) - The manual page for the id command documents how to display user and group IDs. umask (2) - The manual page for the umask command documents how to work with the file mode creation mask. For information about related configuration files, see: group (5) - The manual page for the /etc/group file documents how to use this file to define system groups.
passwd (5) - The manual page for the /etc/passwd file documents how to use this file to define user information. shadow (5) - The manual page for the /etc/shadow file documents how to use this file to set passwords and account expiration information for the system. Online Documentation Red Hat Enterprise Linux 7 Security Guide - The Security Guide for Red Hat Enterprise Linux 7 provides additional information how to ensure password security and secure the workstation by enabling password aging and user account locking. See Also Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands.
[ "cat /usr/share/doc/setup*/uidgid", "[file contents truncated] UID_MIN 5000 [file contents truncated] GID_MIN 5000 [file contents truncated]", "useradd options username", "passwd username", "juan:x:1001:1001::/home/juan:/bin/bash", "juan:!!:14798:0:99999:7:::", "juan:x:1001:", "juan:!::", "~]# ls -ld /home/juan drwx------. 4 juan juan 4096 Mar 3 18:23 /home/juan", "~]# ls -la /home/juan total 28 drwx------. 4 juan juan 4096 Mar 3 18:23 . drwxr-xr-x. 5 root root 4096 Mar 3 18:23 .. -rw-r--r--. 1 juan juan 18 Jun 22 2010 .bash_logout -rw-r--r--. 1 juan juan 176 Jun 22 2010 .bash_profile -rw-r--r--. 1 juan juan 124 Jun 22 2010 .bashrc drwxr-xr-x. 4 juan juan 4096 Nov 23 15:09 .mozilla", "groupadd options group_name", "~]# usermod -g group_name user_name", "~]# usermod -G group_name1 , group_name2 ,... user_name", "~]# usermod -aG group_name1 , group_name2 ,... user_name", "~]# usermod --append -G group_name1 , group_name2 ,... user_name", "mkdir /opt/myproject", "groupadd myproject", "chown root:myproject /opt/myproject", "chmod 2775 /opt/myproject", "~]# ls -ld /opt/myproject drwxrwsr-x. 3 root myproject 4096 Mar 3 18:31 /opt/myproject", "usermod -aG myproject username", "[john@server tmp]USD umask 0000 [john@server tmp]USD touch file [john@server tmp]USD mkdir directory [john@server tmp]USD ls -lh . total 0 drwxrwxrwx. 2 john john 40 Nov 2 13:17 directory -rw-rw-rw-. 1 john john 0 Nov 2 13:17 file", "~]USD umask 0022", "~]USD umask -S u=rwx,g=rx,o=rx", "~]USD umask octal_mask", "~]USD umask 0337", "~]USD umask 337", "~]USD umask -S symbolic_mask", "~]USD umask -S u=r,g=r,o=", "~]USD grep -i -B 1 umask /etc/bashrc", "~]USD grep -i -B 1 umask /etc/bashrc # By default, we want umask to get set. This sets it for non-login shell. -- if [ USDUID -gt 199 ] && [ \"id -gn\" = \"id -un\" ]; then umask 002 else umask 022", "if [ USDUID -gt 199 ] && [ \"id -gn\" = \"id -un\" ]; then umask 002 else umask 227", "john@server ~]USD echo 'umask 227' >> /home/john/.bashrc", "The permission mask is initialized to this value. If not specified, the permission mask will be initialized to 022. UMASK 077" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-Managing_Users_and_Groups
Chapter 2. Windows Container Support for Red Hat OpenShift release notes
Chapter 2. Windows Container Support for Red Hat OpenShift release notes 2.1. About Windows Container Support for Red Hat OpenShift Red Hat OpenShift support for Windows Containers enables running Windows compute nodes in an OpenShift Container Platform cluster. Running Windows workloads is possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes. With Windows nodes available, you can run Windows container workloads in OpenShift Container Platform. The release notes for Red Hat OpenShift support for Windows Containers tracks the development of the WMCO, which provides all Windows container workload capabilities in OpenShift Container Platform. 2.2. Getting support Windows Container Support for Red Hat OpenShift is provided and available as an optional, installable component. Windows Container Support for Red Hat OpenShift is not part of the OpenShift Container Platform subscription. It requires an additional Red Hat subscription and is supported according to the Scope of coverage and Service level agreements . You must have a separate subscription to receive support for Windows Container Support for Red Hat OpenShift. Without this additional Red Hat subscription, deploying Windows container workloads in production clusters is not supported. You can request support through the Red Hat Customer Portal . For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy document for Red Hat OpenShift support for Windows Containers . If you do not have this additional Red Hat subscription, you can use the Community Windows Machine Config Operator, which is a distribution that lacks official support. 2.3. Windows Machine Config Operator prerequisites The following information details the supported cloud provider versions, Windows Server versions, and networking configurations for the Windows Machine Config Operator. See the vSphere documentation for any information that is relevant to only that platform. 2.3.1. Supported cloud providers based on OpenShift Container Platform and WMCO versions Cloud provider Supported OpenShift Container Platform version Supported WMCO version Amazon Web Services (AWS) 4.6+ WMCO 1.0+ Microsoft Azure 4.6+ WMCO 1.0+ VMware vSphere 4.7+ WMCO 2.0+ 2.3.2. Supported Windows Server versions The following table lists the supported Windows Server version based on the applicable cloud provider. Any unlisted Windows Server version is not supported and will cause errors. To prevent these errors, only use the appropriate version according to the cloud provider in use. Cloud provider Supported Windows Server version Amazon Web Services (AWS) Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 Microsoft Azure Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 VMware vSphere Windows Server Semi-Annual Channel (SAC): Windows Server 20H2 2.3.3. Supported networking Hybrid networking with OVN-Kubernetes is the only supported networking configuration. See the additional resources below for more information on this functionality. The following tables outline the type of networking configuration and Windows Server versions to use based on your cloud provider. You must specify the network configuration when you install the cluster. Be aware that OpenShift SDN networking is the default network for OpenShift Container Platform clusters. However, OpenShift SDN is not supported by WMCO. Table 2.1. 
Cloud provider networking support Cloud provider Supported networking Amazon Web Services (AWS) Hybrid networking with OVN-Kubernetes Microsoft Azure Hybrid networking with OVN-Kubernetes VMware vSphere Hybrid networking with OVN-Kubernetes with a custom VXLAN port Table 2.2. Hybrid OVN-Kubernetes Windows Server support Hybrid networking with OVN-Kubernetes Supported Windows Server version Default VXLAN port Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 Custom VXLAN port Windows Server Semi-Annual Channel (SAC): Windows Server 20H2 2.3.4. Supported installation method The installer-provisioned infrastructure installation method is the only supported installation method. This is consistent across all supported cloud providers. The user-provisioned infrastructure installation method is not supported. 2.4. Release notes for Red Hat Windows Machine Config Operator 2.0.5 Issued: 2022-05-02 The WMCO 2.0.5 is now available with bug fixes. The components of the WMCO were released in RHSA-2022:93074 . 2.5. Release notes for Red Hat Windows Machine Config Operator 2.0.4 Issued: 2022-02-16 The WMCO 2.0.4 is now available with bug fixes. The components of the WMCO were released in RHBA-2022:0241 . 2.5.1. Bug fixes For clusters installed on VMware vSphere, the WMCO ignored the Deleting phase notification event, leaving incorrect node information in the windows-exporter metrics endpoint. This resulted in an invalid mapping for the Prometheus metrics endpoint. This bug has been fixed; the WMCO now recognizes the Deleting phase notification event and maps the Prometheus metrics endpoint appropriately. ( BZ#1995340 ) 2.6. Release notes for Red Hat Windows Machine Config Operator 2.0.3 Issued: 2021-07-28 The WMCO 2.0.3 is now available with bug fixes. The components of the WMCO were released in RHBA-2021:2926 . 2.6.1. Bug fixes This WMCO release fixes a bug that prevented users from upgrading to WMCO 3.0.0. Users should upgrade to WMCO 2.0.3 before upgrading to OpenShift Container Platform 4.8, which only supports WMCO 3.0.0. ( BZ#1985349 ) 2.7. Release notes for Red Hat Windows Machine Config Operator 2.0.2 Issued: 2021-07-08 The WMCO 2.0.2 is now available with bug fixes. The components of the WMCO were released in RHBA-2021:2671 . Important Users who are running a version of WMCO prior to 2.0.3 should first upgrade to WMCO 2.0.3 prior to upgrading to WMCO 3.0.0. ( BZ#1983153 ) 2.7.1. Bug fixes OpenShift Container Platform 4.8 enables the BoundServiceAccountTokenVolume option by default. This option attaches the projected volumes to all of the pods. In addition, OpenShift Container Platform 4.8 adds the RunAsUser option to the SecurityContext . This combination results in Windows pods being stuck in the ContainerCreating status. To work around this issue, you should upgrade to WMCO 2.0.2 before upgrading your cluster to OpenShift Container Platform 4.8. ( BZ#1975553 ) 2.8. Release notes for Red Hat Windows Machine Config Operator 2.0.1 Issued: 2021-06-23 The WMCO 2.0.1 is now available with bug fixes. The components of the WMCO were released in RHSA-2021:2130 . 2.8.1. New features and improvements This release adds the following new features and improvements. 2.8.1.1. Increased image pull time-out duration Image pull time-out has been increased to 30 minutes. 2.8.1.2. Autoscaling for Windows instances Cluster autoscaling is now supported for Windows instances. You can complete the following actions for Windows nodes: Define and deploy a cluster autoscaler . 
Create a Windows node using a Windows machine set . Define and deploy a machine autoscaler , referencing a Windows machine set. 2.8.2. Bug fixes Previously, when using the Windows kube-proxy component on an AWS installation, when you created a LoadBalancer service, packets would be misrouted and reached an unintended destination. Now, packets are no longer wrongly routed to unintended destinations. ( BZ#1946538 ) Previously, Windows nodes were not reporting some key node-level metrics via telemetry monitoring. The windows_exporter reports various metrics as windows_* instead of the node_exporter equivalent of node_* . Now, the telemetry results cover all of the expected metrics. ( BZ#1955319 ) Previously, when the WMCO configured Windows instances, if the hybrid-overlay or kube-proxy components failed, the node might report itself as Ready . Now, the error is detected and the node reports itself as NotReady . ( BZ#1956412 ) Previously, the kube-proxy service would terminate unexpectedly after the load balancer is created if you created the load balancer after the Windows pods begin running. Now, the kube-proxy service does not crash when recreating the load balancer service. ( BZ#1939968 ) 2.8.3. RHSA-2021:2130 - Windows Container support for OpenShift Container Platform security update As part of the previously noted bug fix ( BZ#1946538 ), an update for Windows kube-proxy is now available for Red Hat Windows Machine Config Operator 2.0.1. Details of the update are documented in the RHSA-2021:2130 advisory. 2.9. Release notes for Red Hat Windows Machine Config Operator 2.0.0 This release of the WMCO provides bug fixes and enhancements for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 2.0.0 were released in RHBA-2021:0440 . Important Running Windows container workloads is not supported for clusters in a restricted network or disconnected environment. Version 2.x of the WMCO is only compatible with OpenShift Container Platform 4.7. 2.9.1. New features and improvements This release adds the following new features and improvements. 2.9.1.1. Support for clusters running on VMware vSphere You can now run Windows nodes on a cluster installed on VMware vSphere version 6.5, 6.7, or 7.0. You can create a Windows MachineSet object on vSphere to host Windows Server compute nodes. For more information, see Creating a Windows MachineSet object on vSphere . 2.9.1.2. Enhanced Windows node monitoring Windows nodes are now fully integrated with most of the monitoring capabilities provided by the web console. However, it is not possible to view workload graphs for pods running on Windows nodes in this release. 2.10. Known issues When you create Windows pods with RunAsUserName set in its "SecurityContext" with a projected volume associated with these pods, the file ownership permissions for the projected entities are ignored, resulting in incorrectly configured ownership permissions. The filesystem graphs available in the web console do not display for Windows nodes. This is caused by changes in the filesystem queries. This will be fixed in a future release of WMCO. ( BZ#1930347 ) The Prometheus windows_exporter used by the WMCO currently collects metrics through HTTP, so it is considered unsafe. You must ensure that only trusted users can retrieve metrics from the endpoint. The windows_exporter feature recently added support for HTTPS configuration, but this configuration has not been implemented for WMCO. 
Support for HTTPS configuration in the WMCO will be added in a future release. When the RunAsUser permission is set in the security context of a Linux-based pod, the projected files have the correct permissions set, including container user ownership. However, when the Windows equivalent RunAsUsername permission is set in a Windows pod, the kubelet is prevented from setting correct ownership on the files in the projected volume. This problem can get exacerbated when used in conjunction with a hostPath volume where best practices are not followed. For example, giving a pod access to the C:\var\lib\kubelet\pods\ folder results in that pod being able to access service account tokens from other pods. By default, the projected files will have the following ownership, as shown in this example Windows projected volume file: Path : Microsoft.PowerShell.Core\FileSystem::C:\var\run\secrets\kubernetes.io\serviceaccount\..2021_08_31_22_22_18.318230061\ca.crt Owner : BUILTIN\Administrators Group : NT AUTHORITY\SYSTEM Access : NT AUTHORITY\SYSTEM Allow FullControl BUILTIN\Administrators Allow FullControl BUILTIN\Users Allow ReadAndExecute, Synchronize Audit : Sddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU) This indicates all administrator users, such as someone with the ContainerAdministrator role, have read, write, and execute access, while non-administrator users have read and execute access. Important OpenShift Container Platform applies the RunAsUser security context to all pods irrespective of its operating system. This means Windows pods automatically have the RunAsUser permission applied to its security context. In addition, if a Windows pod is created with a projected volume with the default RunAsUser permission set, the pod remains in the ContainerCreating phase. To handle these issues, OpenShift Container Platform forces the file permission handling in projected service account volumes set in the security context of the pod to not be honored for projected volumes on Windows. Note that this behavior for Windows pods is how file permission handling used to work for all pod types prior to OpenShift Container Platform 4.7. ( BZ#1971745 ) 2.11. Known limitations Note the following limitations when working with Windows nodes managed by the WMCO (Windows nodes): The following OpenShift Container Platform features are not supported on Windows nodes: Red Hat OpenShift Developer CLI (odo) Image builds OpenShift Pipelines OpenShift Service Mesh OpenShift monitoring of user-defined projects OpenShift Serverless Horizontal Pod Autoscaling Vertical Pod Autoscaling The following Red Hat features are not supported on Windows nodes: Red Hat cost management Red Hat OpenShift Local Windows nodes do not support pulling container images from private registries. You can use images from public registries or pre-pull the images. Windows nodes do not support workloads created by using deployment configs. You can use a deployment or other method to deploy workloads. Windows nodes are not supported in clusters that use a cluster-wide proxy. This is because the WMCO is not able to route traffic through the proxy connection for the workloads. Windows nodes are not supported in clusters that are in a disconnected environment. Red Hat OpenShift support for Windows Containers supports only in-tree storage drivers for all cloud providers. Kubernetes has identified the following node feature limitations : Huge pages are not supported for Windows containers. 
Privileged containers are not supported for Windows containers. Pod termination grace periods require the containerd container runtime to be installed on the Windows node. Kubernetes has identified several API compatibility issues .
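When checking which hosts these limitations apply to, it can be useful to list the Windows nodes in a cluster explicitly. The following sketch uses the standard Kubernetes operating-system node label, which is a general Kubernetes convention rather than something specific to the WMCO:

$ oc get nodes -l kubernetes.io/os=windows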
[ "Path : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt Owner : BUILTIN\\Administrators Group : NT AUTHORITY\\SYSTEM Access : NT AUTHORITY\\SYSTEM Allow FullControl BUILTIN\\Administrators Allow FullControl BUILTIN\\Users Allow ReadAndExecute, Synchronize Audit : Sddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/windows_container_support_for_openshift/windows-containers-release-notes-2-x
Chapter 2. Understanding disconnected installation mirroring
Chapter 2. Understanding disconnected installation mirroring You can use a mirror registry for disconnected installations and to ensure that your clusters only use container images that satisfy your organization's controls on external content. Before you install a cluster on infrastructure that you provision in a disconnected environment, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. 2.1. Mirroring images for a disconnected installation through the Agent-based Installer You can use one of the following procedures to mirror your OpenShift Container Platform image repository to your mirror registry: Mirroring images for a disconnected installation Mirroring images for a disconnected installation using the oc-mirror plugin 2.2. About mirroring the OpenShift Container Platform image repository for a disconnected registry To use mirror images for a disconnected installation with the Agent-based Installer, you must modify the install-config.yaml file. You can mirror the release image by using the output of either the oc adm release mirror or oc mirror command. This is dependent on which command you used to set up the mirror registry. The following example shows the output of the oc adm release mirror command. USD oc adm release mirror Example output To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release The following example shows part of the imageContentSourcePolicy.yaml file generated by the oc-mirror plugin. The file can be found in the results directory, for example oc-mirror-workspace/results-1682697932/ . Example imageContentSourcePolicy.yaml file spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release 2.2.1. Configuring the Agent-based Installer to use mirrored images You must use the output of either the oc adm release mirror command or the oc-mirror plugin to configure the Agent-based Installer to use mirrored images. Procedure If you used the oc-mirror plugin to mirror your release images: Open the imageContentSourcePolicy.yaml located in the results directory, for example oc-mirror-workspace/results-1682697932/ . Copy the text in the repositoryDigestMirrors section of the yaml file. If you used the oc adm release mirror command to mirror your release images: Copy the text in the imageContentSources section of the command output. Paste the copied text into the imageContentSources field of the install-config.yaml file. Add the certificate file used for the mirror registry to the additionalTrustBundle field of the yaml file. Important The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. 
Example install-config.yaml file additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- If you are using GitOps ZTP manifests: add the registries.conf and ca-bundle.crt files to the mirror path to add the mirror configuration in the agent ISO image. Note You can create the registries.conf file from the output of either the oc adm release mirror command or the oc mirror plugin. The format of the /etc/containers/registries.conf file has changed. It is now version 2 and in TOML format. Example registries.conf file [[registry]] location = "registry.ci.openshift.org/ocp/release" mirror-by-digest-only = true [[registry.mirror]] location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image" [[registry]] location = "quay.io/openshift-release-dev/ocp-v4.0-art-dev" mirror-by-digest-only = true [[registry.mirror]] location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image"
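For reference, output such as the imageContentSources section shown earlier is produced when the release payload is mirrored. The following is only a hedged sketch of such an invocation; the pull secret path, release version, and target repository are placeholders, and the exact flags you need depend on how you set up your mirror registry:

$ oc adm release mirror \
    -a ~/pull-secret.json \
    --from=quay.io/openshift-release-dev/ocp-release:<version>-x86_64 \
    --to=virthost.ostest.test.metalkube.org:5000/localimages/local-release-image \
    --to-release-image=virthost.ostest.test.metalkube.org:5000/localimages/local-release-image:<version>-x86_64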
[ "oc adm release mirror", "To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release", "spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "[[registry]] location = \"registry.ci.openshift.org/ocp/release\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\" [[registry]] location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_an_on-premise_cluster_with_the_agent-based_installer/understanding-disconnected-installation-mirroring
8.138. pam
8.138. pam 8.138.1. RHEA-2013:1734 - pam enhancement update Updated pam packages that add one enhancement are now available for Red Hat Enterprise Linux 6. Pluggable Authentication Modules (PAM) provide a system to set up authentication policies without the need to recompile programs to handle authentication. Enhancement BZ# 976033 During TTY auditing, it is usually neither necessary nor desirable to log passwords that are being entered by the audited operator. This update adds an enhancement to the pam_tty_audit PAM module, so that passwords entered in the TTY console are logged only if the "log_passwd" option is used. As a result, passwords are no longer logged, unless the "log_passwd" option of pam_tty_audit is used. Note that this option is not available in kernel versions shipped prior to Red Hat Enterprise Linux 6.5. Users of pam are advised to upgrade to these updated packages, which add this enhancement.
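As an illustration, TTY auditing is enabled by adding the pam_tty_audit module to a PAM service file; the exact file and the set of audited users depend on your PAM configuration, so the following line is only an example of the syntax with password logging explicitly enabled:

session required pam_tty_audit.so enable=root log_passwd

Without the log_passwd option, keystrokes typed while the terminal has echo disabled (such as passwords) are not recorded in the audit log.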
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/pam
Web console
Web console OpenShift Container Platform 4.11 Getting started with the web console in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/web_console/index
Chapter 18. NetworkAttachmentDefinition [k8s.cni.cncf.io/v1]
Chapter 18. NetworkAttachmentDefinition [k8s.cni.cncf.io/v1] Description NetworkAttachmentDefinition is a CRD schema specified by the Network Plumbing Working Group to express the intent for attaching pods to one or more logical or physical networks. More information available at: https://github.com/k8snetworkplumbingwg/multi-net-spec Type object 18.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NetworkAttachmentDefinition spec defines the desired state of a network attachment 18.1.1. .spec Description NetworkAttachmentDefinition spec defines the desired state of a network attachment Type object Property Type Description config string NetworkAttachmentDefinition config is a JSON-formatted CNI configuration 18.2. API endpoints The following API endpoints are available: /apis/k8s.cni.cncf.io/v1/network-attachment-definitions GET : list objects of kind NetworkAttachmentDefinition /apis/k8s.cni.cncf.io/v1/namespaces/{namespace}/network-attachment-definitions DELETE : delete collection of NetworkAttachmentDefinition GET : list objects of kind NetworkAttachmentDefinition POST : create a NetworkAttachmentDefinition /apis/k8s.cni.cncf.io/v1/namespaces/{namespace}/network-attachment-definitions/{name} DELETE : delete a NetworkAttachmentDefinition GET : read the specified NetworkAttachmentDefinition PATCH : partially update the specified NetworkAttachmentDefinition PUT : replace the specified NetworkAttachmentDefinition 18.2.1. /apis/k8s.cni.cncf.io/v1/network-attachment-definitions HTTP method GET Description list objects of kind NetworkAttachmentDefinition Table 18.1. HTTP responses HTTP code Reponse body 200 - OK NetworkAttachmentDefinitionList schema 401 - Unauthorized Empty 18.2.2. /apis/k8s.cni.cncf.io/v1/namespaces/{namespace}/network-attachment-definitions HTTP method DELETE Description delete collection of NetworkAttachmentDefinition Table 18.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind NetworkAttachmentDefinition Table 18.3. HTTP responses HTTP code Reponse body 200 - OK NetworkAttachmentDefinitionList schema 401 - Unauthorized Empty HTTP method POST Description create a NetworkAttachmentDefinition Table 18.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.5. Body parameters Parameter Type Description body NetworkAttachmentDefinition schema Table 18.6. HTTP responses HTTP code Response body 200 - OK NetworkAttachmentDefinition schema 201 - Created NetworkAttachmentDefinition schema 202 - Accepted NetworkAttachmentDefinition schema 401 - Unauthorized Empty 18.2.3. /apis/k8s.cni.cncf.io/v1/namespaces/{namespace}/network-attachment-definitions/{name} Table 18.7. Global path parameters Parameter Type Description name string name of the NetworkAttachmentDefinition HTTP method DELETE Description delete a NetworkAttachmentDefinition Table 18.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 18.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified NetworkAttachmentDefinition Table 18.10. HTTP responses HTTP code Response body 200 - OK NetworkAttachmentDefinition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified NetworkAttachmentDefinition Table 18.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.12.
HTTP responses HTTP code Response body 200 - OK NetworkAttachmentDefinition schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified NetworkAttachmentDefinition Table 18.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.14. Body parameters Parameter Type Description body NetworkAttachmentDefinition schema Table 18.15. HTTP responses HTTP code Response body 200 - OK NetworkAttachmentDefinition schema 201 - Created NetworkAttachmentDefinition schema 401 - Unauthorized Empty
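Because NetworkAttachmentDefinition is a namespaced custom resource, the endpoints above can be driven from any Kubernetes client library. The following is a minimal, hypothetical sketch using the Python kubernetes client's CustomObjectsApi; the namespace, the definition name, and the bridge CNI configuration are illustrative assumptions, not values taken from this chapter.
# Hypothetical sketch: create and list NetworkAttachmentDefinition objects with
# the Python "kubernetes" client. Namespace, name, and CNI config are examples only.
import json
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

GROUP, VERSION, PLURAL = "k8s.cni.cncf.io", "v1", "network-attachment-definitions"
NAMESPACE = "demo"  # assumed namespace

nad = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "bridge-net"},  # assumed name
    "spec": {
        # spec.config carries a JSON-formatted CNI configuration, per the schema above.
        "config": json.dumps({
            "cniVersion": "0.3.1",
            "type": "bridge",
            "bridge": "br0",
            "ipam": {"type": "host-local", "subnet": "10.10.0.0/24"},
        })
    },
}

# POST /apis/k8s.cni.cncf.io/v1/namespaces/{namespace}/network-attachment-definitions
api.create_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, nad)

# GET (list) the definitions in the namespace and print their names.
for item in api.list_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL).get("items", []):
    print(item["metadata"]["name"])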
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_apis/networkattachmentdefinition-k8s-cni-cncf-io-v1
Chapter 5. Overriding the cluster-wide default node selector for OpenShift Data Foundation post deployment
Chapter 5. Overriding the cluster-wide default node selector for OpenShift Data Foundation post deployment When a cluster-wide default node selector is used for OpenShift Data Foundation, the pods generated by container storage interface (CSI) DaemonSets are able to start only on the nodes that match the selector. To be able to use OpenShift Data Foundation from nodes which do not match the selector, override the cluster-wide default node selector by performing the following steps in the command-line interface: Procedure Specify a blank node selector for the openshift-storage namespace. Delete the original pods generated by the DaemonSets.
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "delete pod -l app=csi-cephfsplugin -n openshift-storage delete pod -l app=csi-rbdplugin -n openshift-storage" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/troubleshooting_openshift_data_foundation/overriding-the-cluster-wide-default-node-selector-for-openshift-data-foundation-post-deployment_rhodf
Chapter 26. Managing Certificates and Certificate Authorities
Chapter 26. Managing Certificates and Certificate Authorities 26.1. Lightweight Sub-CAs If your IdM installation is configured with the integrated Certificate System (CS) certificate authority (CA), you are able to create lightweight sub-CAs. They enable you to configure services, like virtual private network (VPN) gateways, to accept only certificates issued by one sub-CA. At the same time, you can configure other services to accept only certificates issued by a different sub-CA or the root CA. If you revoke the intermediate certificate of a sub-CA, all certificates issued by this sub-CA are automatically invalid. If you set up IdM using the integrated CA, the automatically created ipa CA is the root CA of the certificate system. All sub-CAs you create are subordinated to this root CA. 26.1.1. Creating a Lightweight Sub-CA For details on creating a sub-CA, see the section called "Creating a Sub-CA from the Web UI" and the section called "Creating a Sub-CA from the Command Line" Creating a Sub-CA from the Web UI To create a new sub-CA named vpn-ca : Open the Authentication tab, and select the Certificates subtab. Select Certificate Authorities and click Add . Enter the name and subject DN for the CA. Figure 26.1. Adding a CA The subject DN must be unique in the IdM CA infrastructure. Creating a Sub-CA from the Command Line To create a new sub-CA named vpn-ca , enter: Name Name of the CA. Authority ID Automatically created, individual ID for the CA. Subject DN Subject distinguished name (DN). The subject DN must be unique in the IdM CA infrastructure. Issuer DN Parent CA that issued the sub-CA certificate. All sub-CAs are created as a child of the IdM root CA. To verify that the new CA signing certificate has been successfully added to the IdM database, run: Note The new CA certificate is automatically transferred to all replicas when they have a certificate system instance installed. 26.1.2. Removing a Lightweight Sub-CA For details on deleting a sub-CA, see the section called "Removing a Sub-CA from the Web UI" and the section called "Removing a Sub-CA from the Command Line" Removing a Sub-CA from the Web UI Open the Authentication tab, and select the Certificates subtab. Select Certificate Authorities . Select the sub-CA to remove and click Delete . Click Delete to confirm. Removing a Sub-CA from the Command Line To delete a sub-CA, enter:
[ "ipa ca-add vpn-ca --subject=\" CN=VPN,O=IDM.EXAMPLE.COM \" ------------------- Created CA \"vpn-ca\" ------------------- Name: vpn-ca Authority ID: ba83f324-5e50-4114-b109-acca05d6f1dc Subject DN: CN=VPN,O=IDM.EXAMPLE.COM Issuer DN: CN=Certificate Authority,O=IDM.EXAMPLE.COM", "certutil -d /etc/pki/pki-tomcat/alias/ -L Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI caSigningCert cert-pki-ca CTu,Cu,Cu Server-Cert cert-pki-ca u,u,u auditSigningCert cert-pki-ca u,u,Pu caSigningCert cert-pki-ca ba83f324-5e50-4114-b109-acca05d6f1dc u,u,u ocspSigningCert cert-pki-ca u,u,u subsystemCert cert-pki-ca u,u,u", "ipa ca-del vpn-ca ------------------- Deleted CA \"vpn-ca\" -------------------" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/config-certificates
Chapter 3. Supported configurations
Chapter 3. Supported configurations Command-line interface Red Hat Enterprise Linux 8 x86-64 and aarch64 Red Hat Enterprise Linux 9 x86-64 and aarch64 Linux x86-64 and aarch64 macOS x86-64 Windows x86-64 IBM Z and IBM LinuxONE (s390x) Router For use in Kubernetes-based sites and as a gateway for containers or machines. Red Hat Enterprise Linux 8 x86-64 and aarch64 Red Hat Enterprise Linux 9 x86-64 and aarch64 IBM Z and IBM LinuxONE (s390x) for containers. Note Red Hat Service Interconnect is not supported for standalone use as a messaging router. Red Hat Service Interconnect Operator The operator is supported with OpenShift 4.x only. OpenShift versions OpenShift 3.11 OpenShift 4.14, 4.15 and 4.16 ROSA and ARO OpenShift Container Platform and OpenShift Dedicated Installing Red Hat Service Interconnect in a disconnected network by mirroring the required components to the cluster is supported. Ingress types LoadBalancer OpenShift Routes CPU architecture x86-64, aarch64, and s390x Kubernetes distributions Red Hat provides assistance running Red Hat Service Interconnect on any CNCF-certified distribution of Kubernetes . Note, however, that Red Hat Service Interconnect is tested only on OpenShift. Ingress types Contour Nginx - This requires configuration for TLS passthrough NodePort Upgrades Red Hat supports upgrades from one downstream minor version to the next, with no jumps. While Red Hat aims to have compatibility across minor versions, we recommend upgrading all sites to the latest version. Note If you have applications that require long-lived connections, for example Kafka clients, consider using a load balancer as ingress instead of a proxy ingress such as OpenShift route. If you use an OpenShift route as ingress, expect interruptions whenever routes are configured. For information about the latest release, see Red Hat Service Interconnect Supported Configurations .
null
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/release_notes/supported-configurations
probe::tcp.disconnect.return
probe::tcp.disconnect.return Name probe::tcp.disconnect.return - TCP socket disconnection complete Synopsis tcp.disconnect.return Values name Name of this probe ret Error code (0: no error) Context The process which disconnects tcp
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tcp-disconnect-return
function::probe_type
function::probe_type Name function::probe_type - The low level probe handler type of the current probe. Synopsis Arguments None Description Returns a short string describing the low level probe handler type for the current probe point. This is for informational purposes only. Depending on the low level probe handler different context functions can or cannot provide information about the current event (for example some probe handlers only trigger in user space and have no associated kernel context). High-level probes might map to the same or different low-level probes (depending on systemtap version and/or kernel used).
[ "probe_type:string()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-probe-type
B.83. ruby
B.83. ruby B.83.1. RHBA-2011:0005 - ruby bug fix update Updated ruby packages that fix a bug are now available for Red Hat Enterprise Linux 6. Ruby is an extensible, interpreted, object-oriented, scripting language. It has features to process text files and to do system management tasks. Bug Fix BZ# 653824 Under some circumstances on the PowerPC 64 architecture, Ruby did not save the context correctly before switching threads. Consequently, when a thread was restored, it had stale context whose use would result in a segmentation fault. This affected nearly any thread-using program on PowerPC 64. With this update, the underlying source code has been modified to address this issue, and the context is now saved correctly. All PowerPC 64 ruby users are advised to upgrade to these updated packages, which resolve this issue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/ruby
Chapter 5. Tools for application migration
Chapter 5. Tools for application migration Before you migrate your applications from Red Hat build of OpenJDK version 8 or 11 to Red Hat build of OpenJDK 17, you can use tools to test the suitability of your applications to run on Red Hat build of OpenJDK 17. You can use the following steps to enhance your testing process: Update third-party libraries. Compile your application code. Run jdeps on your application's code. Use the migration toolkit for applications (MTA) tool to migrate Java applications from Red Hat build of OpenJDK version 8 or 11 to Red Hat build of OpenJDK 17. Additional resources For more information about the MTA tool, see the Introduction to the Migration Toolkit for Applications guide.
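As an illustration of the jdeps step, the following sketch runs jdeps in summary mode over a set of application JARs and prints the output; the JAR location is a placeholder and the -s flag is the standard jdeps summary option.
# Hypothetical helper that runs "jdeps -s" over application JARs before migration.
import subprocess
from pathlib import Path

def jdeps_summary(jar: Path) -> str:
    # Summarizes each JAR's dependencies; JDK-internal or removed modules reported
    # here are the most likely sources of breakage on a newer JDK.
    result = subprocess.run(
        ["jdeps", "-s", str(jar)], capture_output=True, text=True, check=True
    )
    return result.stdout

if __name__ == "__main__":
    for jar in Path("build/libs").glob("*.jar"):  # placeholder build output directory
        print(jdeps_summary(jar))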
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/migrating_to_red_hat_build_of_openjdk_17_from_earlier_versions/assembly_steps-for-application-migration_openjdk
Chapter 3. Use conventions in the API
Chapter 3. Use conventions in the API Automation controller uses a standard REST API, rooted at /api/ on the server. The API is versioned for compatibility reasons. You can see what API versions are available by querying /api/ . You might have to specify the content type on POST or PUT requests: PUT : Update a specific resource (by an identifier) or a collection of resources. You can also use PUT to create a specific resource if you know the resource identifier beforehand. POST : Create a new resource. Also acts as a catch-all verb for operations that do not fit into the other categories. All URIs not ending with "/" receive a 301 redirect. Note The formatting of extra_vars attached to Job Template records is preserved. YAML is returned as YAML with formatting and comments preserved, and JSON is returned as JSON.
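A minimal sketch of these conventions, using the Python requests library, is shown below. The controller hostname, token, endpoint, and payload are placeholders; the points being illustrated are the versioned /api/ root, the JSON content type on POST and PUT bodies, and the trailing slash that avoids the 301 redirect.
# Hypothetical sketch of the API conventions above (hostname, token, endpoint,
# and payload are placeholders, not values from this chapter).
import requests

BASE = "https://controller.example.com"
HEADERS = {
    "Authorization": "Bearer <token>",
    "Content-Type": "application/json",  # content type for POST/PUT bodies
}

# GET /api/ to discover the available API versions.
print(requests.get(f"{BASE}/api/", headers=HEADERS).json())

# POST creates a new resource; note the trailing "/" on the collection URI,
# since URIs without it are answered with a 301 redirect.
resp = requests.post(f"{BASE}/api/v2/inventories/",
                     json={"name": "demo-inventory", "organization": 1},
                     headers=HEADERS)
resp.raise_for_status()

# PUT updates (or replaces) a specific resource identified by its id.
inv_id = resp.json()["id"]
requests.put(f"{BASE}/api/v2/inventories/{inv_id}/",
             json={"name": "demo-inventory", "organization": 1},
             headers=HEADERS)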
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_execution_api_overview/controller-api-conventions
Chapter 81. KIE sessions
Chapter 81. KIE sessions In Red Hat Decision Manager, a KIE session stores and executes runtime data. The KIE session is created from a KIE base or directly from a KIE container if you have defined the KIE session in the KIE module descriptor file ( kmodule.xml ) for your project. Example KIE session configuration in a kmodule.xml file <kmodule> ... <kbase> ... <ksession name="KSession2_1" type="stateless" default="true" clockType="realtime"> ... </kbase> ... </kmodule> A KIE base is a repository that you define in the KIE module descriptor file ( kmodule.xml ) for your project and contains all rules and other business assets in Red Hat Decision Manager, but does not contain any runtime data. Example KIE base configuration in a kmodule.xml file <kmodule> ... <kbase name="KBase2" default="false" eventProcessingMode="stream" equalsBehavior="equality" declarativeAgenda="enabled" packages="org.domain.pkg2, org.domain.pkg3" includes="KBase1"> ... </kbase> ... </kmodule> A KIE session can be stateless or stateful. In a stateless KIE session, data from an invocation of the KIE session (the session state) is discarded between session invocations. In a stateful KIE session, that data is retained. The type of KIE session you use depends on your project requirements and how you want data from different asset invocations to be persisted. 81.1. Stateless KIE sessions A stateless KIE session is a session that does not use inference to make iterative changes to facts over time. In a stateless KIE session, data from an invocation of the KIE session (the session state) is discarded between session invocations, whereas in a stateful KIE session, that data is retained. A stateless KIE session behaves similarly to a function in that the results that it produces are determined by the contents of the KIE base and by the data that is passed into the KIE session for execution at a specific point in time. The KIE session has no memory of any data that was passed into the KIE session previously. Stateless KIE sessions are commonly used for the following use cases: Validation , such as validating that a person is eligible for a mortgage Calculation , such as computing a mortgage premium Routing and filtering , such as sorting incoming emails into folders or sending incoming emails to a destination For example, consider the following driver's license data model and sample DRL rule: Data model for driver's license application public class Applicant { private String name; private int age; private boolean valid; // Getter and setter methods } Sample DRL rule for driver's license application The Is of valid age rule disqualifies any applicant younger than 18 years old. When the Applicant object is inserted into the decision engine, the decision engine evaluates the constraints for each rule and searches for a match. The "objectType" constraint is always implied, after which any number of explicit field constraints are evaluated. The variable $a is a binding variable that references the matched object in the rule consequence. Note The dollar sign ( $ ) is optional and helps to differentiate between variable names and field names.
In this example, the sample rule and all other files in the ~/resources folder of the Red Hat Decision Manager project are built with the following code: Create the KIE container KieServices kieServices = KieServices.Factory.get(); KieContainer kContainer = kieServices.getKieClasspathContainer(); This code compiles all the rule files found on the class path and adds the result of this compilation, a KieModule object, in the KieContainer . Finally, the StatelessKieSession object is instantiated from the KieContainer and is executed against specified data: Instantiate the stateless KIE session and enter data StatelessKieSession ksession = kContainer.newStatelessKieSession(); Applicant applicant = new Applicant("Mr John Smith", 16); assertTrue(applicant.isValid()); ksession.execute(applicant); assertFalse(applicant.isValid()); In a stateless KIE session configuration, the execute() call acts as a combination method that instantiates the KieSession object, adds all the user data and executes user commands, calls fireAllRules() , and then calls dispose() . Therefore, with a stateless KIE session, you do not need to call fireAllRules() or call dispose() after session invocation as you do with a stateful KIE session. In this case, the specified applicant is under the age of 18, so the application is declined. For a more complex use case, see the following example. This example uses a stateless KIE session and executes rules against an iterable list of objects, such as a collection. Expanded data model for driver's license application public class Applicant { private String name; private int age; // Getter and setter methods } public class Application { private Date dateApplied; private boolean valid; // Getter and setter methods } Expanded DRL rule set for driver's license application Expanded Java source with iterable execution in a stateless KIE session StatelessKieSession ksession = kbase.newStatelessKnowledgeSession(); Applicant applicant = new Applicant("Mr John Smith", 16); Application application = new Application(); assertTrue(application.isValid()); ksession.execute(Arrays.asList(new Object[] { application, applicant })); 1 assertFalse(application.isValid()); ksession.execute (CommandFactory.newInsertIterable(new Object[] { application, applicant })); 2 List<Command> cmds = new ArrayList<Command>(); 3 cmds.add(CommandFactory.newInsert(new Person("Mr John Smith"), "mrSmith")); cmds.add(CommandFactory.newInsert(new Person("Mr John Doe"), "mrDoe")); BatchExecutionResults results = ksession.execute(CommandFactory.newBatchExecution(cmds)); assertEquals(new Person("Mr John Smith"), results.getValue("mrSmith")); 1 Method for executing rules against an iterable collection of objects produced by the Arrays.asList() method. Every collection element is inserted before any matched rules are executed. The execute(Object object) and execute(Iterable objects) methods are wrappers around the execute(Command command) method that comes from the BatchExecutor interface. 2 Execution of the iterable collection of objects using the CommandFactory interface. 3 BatchExecutor and CommandFactory configurations for working with many different commands or result output identifiers. The CommandFactory interface supports other commands that you can use in the BatchExecutor , such as StartProcess , Query , and SetGlobal . 81.1.1.
Global variables in stateless KIE sessions The StatelessKieSession object supports global variables (globals) that you can configure to be resolved as session-scoped globals, delegate globals, or execution-scoped globals. Session-scoped globals: For session-scoped globals, you can use the method getGlobals() to return a Globals instance that provides access to the KIE session globals. These globals are used for all execution calls. Use caution with mutable globals because execution calls can be executing simultaneously in different threads. Session-scoped global import org.kie.api.runtime.StatelessKieSession; StatelessKieSession ksession = kbase.newStatelessKieSession(); // Set a global `myGlobal` that can be used in the rules. ksession.setGlobal("myGlobal", "I am a global"); // Execute while resolving the `myGlobal` identifier. ksession.execute(collection); Delegate globals: For delegate globals, you can assign a value to a global (with setGlobal(String, Object) ) so that the value is stored in an internal collection that maps identifiers to values. Identifiers in this internal collection have priority over any supplied delegate. If an identifier cannot be found in this internal collection, the delegate global (if any) is used. Execution-scoped globals: For execution-scoped globals, you can use the Command object to set a global that is passed to the CommandExecutor interface for execution-specific global resolution. The CommandExecutor interface also enables you to export data using out identifiers for globals, inserted facts, and query results: Out identifiers for globals, inserted facts, and query results import org.kie.api.runtime.ExecutionResults; // Set up a list of commands. List cmds = new ArrayList(); cmds.add(CommandFactory.newSetGlobal("list1", new ArrayList(), true)); cmds.add(CommandFactory.newInsert(new Person("jon", 102), "person")); cmds.add(CommandFactory.newQuery("Get People", "getPeople")); // Execute the list. ExecutionResults results = ksession.execute(CommandFactory.newBatchExecution(cmds)); // Retrieve the `ArrayList`. results.getValue("list1"); // Retrieve the inserted `Person` fact. results.getValue("person"); // Retrieve the query as a `QueryResults` instance. results.getValue("Get People"); 81.2. Stateful KIE sessions A stateful KIE session is a session that uses inference to make iterative changes to facts over time. In a stateful KIE session, data from an invocation of the KIE session (the session state) is retained between session invocations, whereas in a stateless KIE session, that data is discarded. Warning Ensure that you call the dispose() method after running a stateful KIE session so that no memory leaks occur between session invocations.
Stateful KIE sessions are commonly used for the following use cases: Monitoring , such as monitoring a stock market and automating the buying process Diagnostics , such as running fault-finding processes or medical diagnostic processes Logistics , such as parcel tracking and delivery provisioning Ensuring compliance , such as verifying the legality of market trades For example, consider the following fire alarm data model and sample DRL rules: Data model for sprinklers and fire alarm public class Room { private String name; // Getter and setter methods } public class Sprinkler { private Room room; private boolean on; // Getter and setter methods } public class Fire { private Room room; // Getter and setter methods } public class Alarm { } Sample DRL rule set for activating sprinklers and alarm For the When there is a fire turn on the sprinkler rule, when a fire occurs, the instances of the Fire class are created for that room and inserted into the KIE session. The rule adds a constraint for the specific room matched in the Fire instance so that only the sprinkler for that room is checked. When this rule is executed, the sprinkler activates. The other sample rules determine when the alarm is activated or deactivated accordingly. Whereas a stateless KIE session relies on standard Java syntax to modify a field, a stateful KIE session relies on the modify statement in rules to notify the decision engine of changes. The decision engine then reasons over the changes and assesses impact on subsequent rule executions. This process is part of the decision engine ability to use inference and truth maintenance and is essential in stateful KIE sessions. In this example, the sample rules and all other files in the ~/resources folder of the Red Hat Decision Manager project are built with the following code: Create the KIE container KieServices kieServices = KieServices.Factory.get(); KieContainer kContainer = kieServices.getKieClasspathContainer(); This code compiles all the rule files found on the class path and adds the result of this compilation, a KieModule object, in the KieContainer . Finally, the KieSession object is instantiated from the KieContainer and is executed against specified data: Instantiate the stateful KIE session and enter data KieSession ksession = kContainer.newKieSession(); String[] names = new String[]{"kitchen", "bedroom", "office", "livingroom"}; Map<String,Room> name2room = new HashMap<String,Room>(); for( String name: names ){ Room room = new Room( name ); name2room.put( name, room ); ksession.insert( room ); Sprinkler sprinkler = new Sprinkler( room ); ksession.insert( sprinkler ); } ksession.fireAllRules(); Console output With the data added, the decision engine completes all pattern matching but no rules have been executed, so the configured verification message appears. As new data triggers rule conditions, the decision engine executes rules to activate the alarm and later to cancel the alarm that has been activated: Enter new data to trigger rules Fire kitchenFire = new Fire( name2room.get( "kitchen" ) ); Fire officeFire = new Fire( name2room.get( "office" ) ); FactHandle kitchenFireHandle = ksession.insert( kitchenFire ); FactHandle officeFireHandle = ksession.insert( officeFire ); ksession.fireAllRules(); Console output ksession.delete( kitchenFireHandle ); ksession.delete( officeFireHandle ); ksession.fireAllRules(); Console output In this case, a reference is kept for the returned FactHandle object. 
A fact handle is an internal engine reference to the inserted instance and enables instances to be retracted or modified later. As this example illustrates, the data and results from stateful KIE sessions (the activated alarm) affect the invocation of subsequent sessions (alarm cancellation). 81.3. KIE session pools In use cases with large amounts of KIE runtime data and high system activity, KIE sessions might be created and disposed very frequently. A high turnover of KIE sessions is not always time consuming, but when the turnover is repeated millions of times, the process can become a bottleneck and require substantial clean-up effort. For these high-volume cases, you can use KIE session pools instead of many individual KIE sessions. To use a KIE session pool, you obtain a KIE session pool from a KIE container, define the initial number of KIE sessions in the pool, and create the KIE sessions from that pool as usual: Example KIE session pool // Obtain a KIE session pool from the KIE container KieContainerSessionsPool pool = kContainer.newKieSessionsPool(10); // Create KIE sessions from the KIE session pool KieSession kSession = pool.newKieSession(); In this example, the KIE session pool starts with 10 KIE sessions in it, but you can specify the number of KIE sessions that you need. This integer value is only the number of KIE sessions that are initially created in the pool. If required by the running application, the number of KIE sessions in the pool can dynamically grow beyond that value. After you define a KIE session pool, the next time you use the KIE session as usual and call dispose() on it, the KIE session is reset and pushed back into the pool instead of being destroyed. KIE session pools typically apply to stateful KIE sessions, but KIE session pools can also affect stateless KIE sessions that you reuse with multiple execute() calls. When you create a stateless KIE session directly from a KIE container, the KIE session continues to internally create a new KIE session for each execute() invocation. Conversely, when you create a stateless KIE session from a KIE session pool, the KIE session internally uses only the specific KIE sessions provided by the pool. When you finish using a KIE session pool, you can call the shutdown() method on it to avoid memory leaks. Alternatively, you can call dispose() on the KIE container to shut down all the pools created from the KIE container.
[ "<kmodule> <kbase> <ksession name=\"KSession2_1\" type=\"stateless\" default=\"true\" clockType=\"realtime\"> </kbase> </kmodule>", "<kmodule> <kbase name=\"KBase2\" default=\"false\" eventProcessingMode=\"stream\" equalsBehavior=\"equality\" declarativeAgenda=\"enabled\" packages=\"org.domain.pkg2, org.domain.pkg3\" includes=\"KBase1\"> </kbase> </kmodule>", "public class Applicant { private String name; private int age; private boolean valid; // Getter and setter methods }", "package com.company.license rule \"Is of valid age\" when USDa : Applicant(age < 18) then USDa.setValid(false); end", "KieServices kieServices = KieServices.Factory.get(); KieContainer kContainer = kieServices.getKieClasspathContainer();", "StatelessKieSession kSession = kContainer.newStatelessKieSession(); Applicant applicant = new Applicant(\"Mr John Smith\", 16); assertTrue(applicant.isValid()); ksession.execute(applicant); assertFalse(applicant.isValid());", "public class Applicant { private String name; private int age; // Getter and setter methods } public class Application { private Date dateApplied; private boolean valid; // Getter and setter methods }", "package com.company.license rule \"Is of valid age\" when Applicant(age < 18) USDa : Application() then USDa.setValid(false); end rule \"Application was made this year\" when USDa : Application(dateApplied > \"01-jan-2009\") then USDa.setValid(false); end", "StatelessKieSession ksession = kbase.newStatelessKnowledgeSession(); Applicant applicant = new Applicant(\"Mr John Smith\", 16); Application application = new Application(); assertTrue(application.isValid()); ksession.execute(Arrays.asList(new Object[] { application, applicant })); 1 assertFalse(application.isValid()); ksession.execute (CommandFactory.newInsertIterable(new Object[] { application, applicant })); 2 List<Command> cmds = new ArrayList<Command>(); 3 cmds.add(CommandFactory.newInsert(new Person(\"Mr John Smith\"), \"mrSmith\")); cmds.add(CommandFactory.newInsert(new Person(\"Mr John Doe\"), \"mrDoe\")); BatchExecutionResults results = ksession.execute(CommandFactory.newBatchExecution(cmds)); assertEquals(new Person(\"Mr John Smith\"), results.getValue(\"mrSmith\"));", "import org.kie.api.runtime.StatelessKieSession; StatelessKieSession ksession = kbase.newStatelessKieSession(); // Set a global `myGlobal` that can be used in the rules. ksession.setGlobal(\"myGlobal\", \"I am a global\"); // Execute while resolving the `myGlobal` identifier. ksession.execute(collection);", "import org.kie.api.runtime.ExecutionResults; // Set up a list of commands. List cmds = new ArrayList(); cmds.add(CommandFactory.newSetGlobal(\"list1\", new ArrayList(), true)); cmds.add(CommandFactory.newInsert(new Person(\"jon\", 102), \"person\")); cmds.add(CommandFactory.newQuery(\"Get People\" \"getPeople\")); // Execute the list. ExecutionResults results = ksession.execute(CommandFactory.newBatchExecution(cmds)); // Retrieve the `ArrayList`. results.getValue(\"list1\"); // Retrieve the inserted `Person` fact. results.getValue(\"person\"); // Retrieve the query as a `QueryResults` instance. 
results.getValue(\"Get People\");", "public class Room { private String name; // Getter and setter methods } public class Sprinkler { private Room room; private boolean on; // Getter and setter methods } public class Fire { private Room room; // Getter and setter methods } public class Alarm { }", "rule \"When there is a fire turn on the sprinkler\" when Fire(USDroom : room) USDsprinkler : Sprinkler(room == USDroom, on == false) then modify(USDsprinkler) { setOn(true) }; System.out.println(\"Turn on the sprinkler for room \"+USDroom.getName()); end rule \"Raise the alarm when we have one or more fires\" when exists Fire() then insert( new Alarm() ); System.out.println( \"Raise the alarm\" ); end rule \"Cancel the alarm when all the fires have gone\" when not Fire() USDalarm : Alarm() then delete( USDalarm ); System.out.println( \"Cancel the alarm\" ); end rule \"Status output when things are ok\" when not Alarm() not Sprinkler( on == true ) then System.out.println( \"Everything is ok\" ); end", "KieServices kieServices = KieServices.Factory.get(); KieContainer kContainer = kieServices.getKieClasspathContainer();", "KieSession ksession = kContainer.newKieSession(); String[] names = new String[]{\"kitchen\", \"bedroom\", \"office\", \"livingroom\"}; Map<String,Room> name2room = new HashMap<String,Room>(); for( String name: names ){ Room room = new Room( name ); name2room.put( name, room ); ksession.insert( room ); Sprinkler sprinkler = new Sprinkler( room ); ksession.insert( sprinkler ); } ksession.fireAllRules();", "> Everything is ok", "Fire kitchenFire = new Fire( name2room.get( \"kitchen\" ) ); Fire officeFire = new Fire( name2room.get( \"office\" ) ); FactHandle kitchenFireHandle = ksession.insert( kitchenFire ); FactHandle officeFireHandle = ksession.insert( officeFire ); ksession.fireAllRules();", "> Raise the alarm > Turn on the sprinkler for room kitchen > Turn on the sprinkler for room office", "ksession.delete( kitchenFireHandle ); ksession.delete( officeFireHandle ); ksession.fireAllRules();", "> Cancel the alarm > Turn off the sprinkler for room office > Turn off the sprinkler for room kitchen > Everything is ok", "// Obtain a KIE session pool from the KIE container KieContainerSessionsPool pool = kContainer.newKieSessionsPool(10); // Create KIE sessions from the KIE session pool KieSession kSession = pool.newKieSession();" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/kie-sessions-con_decision-engine
Chapter 6. SubjectAccessReview [authorization.openshift.io/v1]
Chapter 6. SubjectAccessReview [authorization.openshift.io/v1] Description SubjectAccessReview is an object for requesting information about whether a user or group can perform an action Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required namespace verb resourceAPIGroup resourceAPIVersion resource resourceName path isNonResourceURL user groups scopes 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources content RawExtension Content is the actual content of the request for create and update groups array (string) GroupsSlice is optional. Groups is the list of groups to which the User belongs. isNonResourceURL boolean IsNonResourceURL is true if this is a request for a non-resource URL (outside of the resource hierarchy) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces path string Path is the path of a non resource URL resource string Resource is one of the existing resource types resourceAPIGroup string Group is the API group of the resource Serialized as resourceAPIGroup to avoid confusion with the 'groups' field when inlined resourceAPIVersion string Version is the API version of the resource Serialized as resourceAPIVersion to avoid confusion with TypeMeta.apiVersion and ObjectMeta.resourceVersion when inlined resourceName string ResourceName is the name of the resource being requested for a "get" or deleted for a "delete" scopes array (string) Scopes to use for the evaluation. Empty means "use the unscoped (full) permissions of the user/groups". Nil for a self-SAR, means "use the scopes on this request". Nil for a regular SAR, means the same as empty. user string User is optional. If both User and Groups are empty, the current authenticated user is used. verb string Verb is one of: get, list, watch, create, update, delete 6.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/subjectaccessreviews POST : create a SubjectAccessReview 6.2.1. /apis/authorization.openshift.io/v1/subjectaccessreviews Table 6.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a SubjectAccessReview Table 6.2. Body parameters Parameter Type Description body SubjectAccessReview schema Table 6.3. HTTP responses HTTP code Response body 200 - OK SubjectAccessReview schema 201 - Created SubjectAccessReview schema 202 - Accepted SubjectAccessReview schema 401 - Unauthorized Empty
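As a hedged illustration of the single POST endpoint above, the following Python kubernetes-client sketch submits a SubjectAccessReview that asks whether a given user can create pods in a namespace; the user, namespace, and resource values are placeholders, and the response fields read at the end are assumptions about the review response.
# Hypothetical sketch: POST an OpenShift SubjectAccessReview with the Python
# "kubernetes" client. The user, namespace, and resource below are placeholders.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

review = {
    "apiVersion": "authorization.openshift.io/v1",
    "kind": "SubjectAccessReview",
    "namespace": "demo",          # namespace of the action being requested
    "verb": "create",
    "resourceAPIGroup": "",
    "resourceAPIVersion": "v1",
    "resource": "pods",
    "resourceName": "",
    "path": "",
    "isNonResourceURL": False,
    "user": "developer",          # if user and groups are empty, the current user is used
    "groups": [],
    "scopes": [],
}

# POST /apis/authorization.openshift.io/v1/subjectaccessreviews
result = api.create_cluster_custom_object(
    "authorization.openshift.io", "v1", "subjectaccessreviews", review
)
print(result.get("allowed"), result.get("reason"))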
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/authorization_apis/subjectaccessreview-authorization-openshift-io-v1
Chapter 8. Template-based broker deployment examples
Chapter 8. Template-based broker deployment examples Prerequisites These procedures assume an OpenShift Container Platform instance similar to that created in OpenShift Container Platform Getting Started . In the AMQ Broker application templates, the values of the AMQ_USER, AMQ_PASSWORD, AMQ_CLUSTER_USER, AMQ_CLUSTER_PASSWORD, AMQ_TRUSTSTORE_PASSWORD, and AMQ_KEYSTORE_PASSWORD environment variables are stored in a secret. To learn more about using and modifying these environment variables when you deploy a template in any of the tutorials that follow, see About sensitive credentials . The following procedures show how to use application templates to create various deployments of brokers. 8.1. Deploying a basic broker with SSL Deploy a basic broker that is ephemeral and supports SSL. 8.1.1. Deploying the image and template Prerequisites This tutorial builds upon Preparing a template-based broker deployment . Completion of the Deploying a basic broker tutorial is recommended. Procedure Navigate to the OpenShift web console and log in. Select the amq-demo project space. Click Add to Project > Browse Catalog to list all of the default image streams and templates. Use the Filter search bar to limit the list to those that match amq . You might need to click See all to show the desired application template. Select the amq-broker-78-ssl template which is labeled Red Hat AMQ Broker 7.8 (Ephemeral, with SSL) . Set the following values in the configuration and click Create . Table 8.1. Example template Environment variable Display Name Value Description AMQ_PROTOCOL AMQ Protocols openwire,amqp,stomp,mqtt,hornetq The protocols to be accepted by the broker AMQ_QUEUES Queues demoQueue Creates an anycast queue called demoQueue AMQ_ADDRESSES Addresses demoTopic Creates an address (or topic) called demoTopic. By default, this address has no assigned routing type. AMQ_USER AMQ Username amq-demo-user The username the client uses AMQ_PASSWORD AMQ Password password The password the client uses with the username AMQ_TRUSTSTORE Trust Store Filename broker.ts The SSL truststore file name AMQ_TRUSTSTORE_PASSWORD Truststore Password password The password used when creating the Truststore AMQ_KEYSTORE AMQ Keystore Filename broker.ks The SSL keystore file name AMQ_KEYSTORE_PASSWORD AMQ Keystore Password password The password used when creating the Keystore 8.1.2. Deploying the application After creating the application, deploy it to create a Pod and start the broker. Procedure Click Deployments in the OpenShift Container Platform web console. Click the broker-amq deployment. Click Deploy to deploy the application. Click the broker Pod and then click the Logs tab to verify the state of the broker. If the broker logs have not loaded, and the Pod status shows ErrImagePull or ImagePullBackOff , your deployment configuration was not able to directly pull the specified broker image from the Red Hat Container Registry. In this case, edit your deployment configuration to reference the correct broker image name and the image pull secret name associated with the account used for authentication in the Red Hat Container Registry. Then, you can import the broker image and start the broker. To do this, complete steps similar to those in Deploying and starting the broker application . 8.1.3. Creating a Route Create a Route for the broker so that clients outside of OpenShift Container Platform can connect using SSL. By default, the secured broker protocols are available through the 61617/TCP port. 
In addition, there are SSL and non-SSL ports exposed on the broker Pod for each messaging protocol that the broker supports. However, external clients cannot connect directly to these ports on the broker. Instead, external clients connect to OpenShift via the OpenShift router, which determines how to forward traffic to the appropriate port on the broker Pod. Note If you scale your deployment up to multiple brokers in a cluster, you must manually create a Service and a Route for each broker, and then use each Service-and-Route combination to direct a given client to a given broker, or broker list. For an example of configuring multiple Services and Routes to connect clustered brokers to their own instances of the AMQ Broker management console, see Creating Routes for the AMQ Broker management console . Prerequisites Before creating an SSL Route, you should understand how external clients use this Route to connect to the broker. For more information, see Creating an SSL Route . Procedure Click Services broker-amq-tcp-ssl . Click Actions Create a route . To display the TLS parameters, select the Secure route check box. From the TLS Termination drop-down menu, choose Passthrough . This selection relays all communication to AMQ Broker without the OpenShift router decrypting and resending it. To view the Route, click Routes . For example: This hostname will be used by external clients to connect to the broker using SSL with SNI. Additional resources For more information about creating SSL Routes, see Creating an SSL Route . For more information on Routes in the OpenShift Container Platform, see Routes . 8.2. Deploying a basic broker with persistence and SSL Deploy a persistent broker that supports SSL. When a broker needs persistence, the broker is deployed as a StatefulSet and stores messaging data on a persistent volume associated with the broker Pod via a persistent volume claim. When a broker Pod is created, it uses storage that remains in the event that you shut down the Pod, or if the Pod shuts down unexpectedly. This configuration means that messages are not lost, as they would be with a standard deployment. Prerequisites This tutorial builds upon Preparing a template-based broker deployment . Completion of the Deploying a basic broker tutorial is recommended. You must have sufficient persistent storage provisioned to your OpenShift cluster to associate with your broker Pod via a persistent volume claim. For more information, see Understanding persistent storage (OpenShift Container Platform 4.5) 8.2.1. Deploy the image and template Procedure Navigate to the OpenShift web console and log in. Select the amq-demo project space. Click Add to Project > Browse catalog to list all of the default image streams and templates. Use the Filter search bar to limit the list to those that match amq . You might need to click See all to show the desired application template. Select the amq-broker-78-persistence-ssl template, which is labelled Red Hat AMQ Broker 7.8 (Persistence, with SSL) . Set the following values in the configuration and click create . Table 8.2. Example template Environment variable Display Name Value Description AMQ_PROTOCOL AMQ Protocols openwire,amqp,stomp,mqtt,hornetq The protocols to be accepted by the broker AMQ_QUEUES Queues demoQueue Creates an anycast queue called demoQueue AMQ_ADDRESSES Addresses demoTopic Creates an address (or topic) called demoTopic. By default, this address has no assigned routing type. 
VOLUME_CAPACITY AMQ Volume Size 1Gi The persistent volume size created for the journal AMQ_USER AMQ Username amq-demo-user The username the client uses AMQ_PASSWORD AMQ Password password The password the client uses with the username AMQ_TRUSTSTORE Trust Store Filename broker.ts The SSL truststore file name AMQ_TRUSTSTORE_PASSWORD Truststore Password password The password used when creating the Truststore AMQ_KEYSTORE AMQ Keystore Filename broker.ks The SSL keystore file name AMQ_KEYSTORE_PASSWORD AMQ Keystore Password password The password used when creating the Keystore 8.2.2. Deploy the application Once the application has been created it needs to be deployed. Deploying the application creates a Pod and starts the broker. Procedure Click StatefulSets in the OpenShift Container Platform web console. Click the broker-amq deployment. Click Deploy to deploy the application. Click the broker Pod and then click the Logs tab to verify the state of the broker. You should see the queue created via the template. If the broker logs have not loaded, and the Pod status shows ErrImagePull or ImagePullBackOff , your configuration was not able to directly pull the specified broker image from the Red Hat Container Registry. In this case, edit your deployment configuration to reference the correct broker image name and the image pull secret name associated with the account used for authentication in the Red Hat Container Registry. Then, you can import the broker image and start the broker. To do this, complete steps similar to those in Deploying and starting the broker application . Click the Terminal tab to access a shell where you can use the CLI to send some messages. Alternatively, use the OpenShift client to access the shell using the Pod name, as shown in the following example. Now scale down the broker using the oc command. You can use the console to check that the Pod count is 0 Now scale the broker back up to 1 . Consume the messages again by using the terminal. For example: Additional resources For more information on managing stateful applications, see StatefulSets (external). 8.2.3. Creating a Route Create a Route for the broker so that clients outside of OpenShift Container Platform can connect using SSL. By default, the broker protocols are available through the 61617/TCP port. Note If you scale your deployment up to multiple brokers in a cluster, you must manually create a Service and a Route for each broker, and then use each Service-and-Route combination to direct a given client to a given broker, or broker list. For an example of configuring multiple Services and Routes to connect clustered brokers to their own instances of the AMQ Broker management console, see Creating Routes for the AMQ Broker management console . Prerequisites Before creating an SSL Route, you should understand how external clients use this Route to connect to the broker. For more information, see Creating an SSL Route . Procedure Click Services broker-amq-tcp-ssl . Click Actions Create a route . To display the TLS parameters, select the Secure route check box. From the TLS Termination drop-down menu, choose Passthrough . This selection relays all communication to AMQ Broker without the OpenShift router decrypting and resending it. To view the Route, click Routes . For example: This hostname will be used by external clients to connect to the broker using SSL with SNI. Additional resources For more information on Routes in the OpenShift Container Platform, see Routes . 8.3. 
Deploying a set of clustered brokers Deploy a clustered set of brokers where each broker runs in its own Pod. 8.3.1. Distributing messages Message distribution is configured to use ON_DEMAND . This means that when messages arrive at a clustered broker, the messages are distributed in a round-robin fashion to any broker that has consumers. This message distribution policy safeguards against messages getting stuck on a specific broker while a consumer, connected either directly or through the OpenShift router, is connected to a different broker. The redistribution delay is zero by default. If a message is on a queue that has no consumers, it will be redistributed to another broker. Note When redistribution is enabled, messages can be delivered out of order. 8.3.2. Deploy the image and template Prerequisites This procedure builds upon Preparing a template-based broker deployment . Completion of the Deploying a basic broker tutorial is recommended. Procedure Navigate to the OpenShift web console and log in. Select the amq-demo project space. Click Add to Project > Browse catalog to list all of the default image streams and templates Use the Filter search bar to limit the list to those that match amq . Click See all to show the desired application template. Select the amq-broker-78-persistence-clustered template which is labeled Red Hat AMQ Broker 7.8 (no SSL, clustered) . Set the following values in the configuration and click create . Table 8.3. Example template Environment variable Display Name Value Description AMQ_PROTOCOL AMQ Protocols openwire,amqp,stomp,mqtt,hornetq The protocols to be accepted by the broker AMQ_QUEUES Queues demoQueue Creates an anycast queue called demoQueue AMQ_ADDRESSES Addresses demoTopic Creates an address (or topic) called demoTopic. By default, this address has no assigned routing type. VOLUME_CAPACITY AMQ Volume Size 1Gi The persistent volume size created for the journal AMQ_CLUSTERED Clustered true This needs to be true to ensure the brokers cluster AMQ_CLUSTER_USER cluster user generated The username the brokers use to connect with each other AMQ_CLUSTER_PASSWORD cluster password generated The password the brokers use to connect with each other AMQ_USER AMQ Username amq-demo-user The username the client uses AMQ_PASSWORD AMQ Password password The password the client uses with the username 8.3.3. Deploying the application Once the application has been created it needs to be deployed. Deploying the application creates a Pod and starts the broker. Procedure Click StatefulSets in the OpenShift Container Platform web console. Click the broker-amq deployment. Click Deploy to deploy the application. Note The default number of replicas for a clustered template is 0. You should not see any Pods. Scale up the Pods to three to create a cluster of brokers. Check that there are three Pods running. If the Pod status shows ErrImagePull or ImagePullBackOff , your deployment was not able to directly pull the specified broker image from the Red Hat Container Registry. In this case, edit your StatefulSet to reference the correct broker image name and the image pull secret name associated with the account used for authentication in the Red Hat Container Registry. Then, you can import the broker image and start the brokers. To do this, complete steps similar to those in Deploying and starting the broker application . Verify that the brokers have clustered with the new Pod by checking the logs. 
This shows the logs of the new broker and an entry for a clustered bridge created between the brokers: 8.3.4. Creating Routes for the AMQ Broker management console The clustering templates do not expose the AMQ Broker management console by default. This is because the OpenShift proxy performs load balancing across each broker in the cluster and it would not be possible to control which broker console is connected at a given time. The following example procedure shows how to configure each broker in the cluster to connect to its own management console instance. You do this by creating a dedicated Service-and-Route combination for each broker Pod in the cluster. Prerequisites You have already deployed a clustered set of brokers, where each broker runs in its own Pod. See Deploying a set of clustered brokers . Procedure Create a regular Service for each Pod in the cluster, using a StatefulSet selector to select between Pods. To do this, deploy a Service template, in .yaml format, that looks like the following: apiVersion: v1 kind: Service metadata: annotations: description: 'Service for the management console of broker pod XXXX' labels: app: application2 application: application2 template: amq-broker-78-persistence-clustered name: amq2-amq-console-XXXX namespace: amq75-p-c-ssl-2 spec: ports: - name: console-jolokia port: 8161 protocol: TCP targetPort: 8161 selector: deploymentConfig: application2-amq statefulset.kubernetes.io/pod-name: application2-amq-XXXX type: ClusterIP In the preceding template, replace XXXX with the ordinal value of the broker Pod you want to associate with the Service. For example, to associate the Service with the first Pod in the cluster, set XXXX to 0 . To associate the Service with the second Pod, set XXXX to 1 , and so on. Save and deploy an instance of the template for each broker Pod in your cluster. Note In the example template shown above, the selector uses the Kubernetes-defined Pod name. Create a Route for each broker Pod, so that the AMQ Broker management console can connect to the Pod. Click Routes Create Route . The Edit Route page opens. In the Services drop-down menu, select the previously created broker Service that you want to associate the Route with, for example, amq2-amq-console-0 . Set Target Port to 8161 , to enable access for the AMQ Broker management console. To display the TLS parameters, select the Secure route check box. From the TLS Termination drop-down menu, choose Passthrough . This selection relays all communication to AMQ Broker without the OpenShift router decrypting and resending it. Click Create . When you create a Route associated with one of broker Pods, the resulting .yaml file includes lines that look like the following: spec: host: amq2-amq-console-0-amq75-p-c-2.apps-ocp311.example.com port: targetPort: console-jolokia tls: termination: passthrough to: kind: Service name: amq2-amq-console-0 weight: 100 wildcardPolicy: None To access the management console for a specific broker instance, copy the host URL shown above to a web browser. Additional resources For more information on the clustering of brokers see Configuring message redistribution . 8.4. Deploying a set of clustered SSL brokers Deploy a clustered set of brokers, where each broker runs in its own Pod and the broker is configured to accept connections using SSL. 8.4.1. Distributing messages Message distribution is configured to use ON_DEMAND . 
This means that when messages arrive at a clustered broker, the messages are distributed in a round-robin fashion to any broker that has consumers. This message distribution policy safeguards against messages getting stuck on a specific broker while a consumer, connected either directly or through the OpenShift router, is connected to a different broker. The redistribution delay is non-zero by default. If a message is on a queue that has no consumers, it will be redistributed to another broker. Note When redistribution is enabled, messages can be delivered out of order. 8.4.2. Deploying the image and template Prerequisites This procedure builds upon Preparing a template-based broker deployment . Completion of the Deploying a basic broker example is recommended. Procedure Navigate to the OpenShift web console and log in. Select the amq-demo project space. Click Add to Project > Browse catalog to list all of the default image streams and templates. Use the Filter search bar to limit the list to those that match amq . Click See all to show the desired application template. Select the amq-broker-78-persistence-clustered-ssl template which is labeled Red Hat AMQ Broker 7.8 (SSL, clustered) . Set the following values in the configuration and click create . Table 8.4. Example template Environment variable Display Name Value Description AMQ_PROTOCOL AMQ Protocols openwire,amqp,stomp,mqtt,hornetq The protocols to be accepted by the broker AMQ_QUEUES Queues demoQueue Creates an anycast queue called demoQueue AMQ_ADDRESSES Addresses demoTopic Creates an address (or topic) called demoTopic. By default, this address has no assigned routing type. VOLUME_CAPACITY AMQ Volume Size 1Gi The persistent volume size created for the journal AMQ_CLUSTERED Clustered true This needs to be true to ensure the brokers cluster AMQ_CLUSTER_USER cluster user generated The username the brokers use to connect with each other AMQ_CLUSTER_PASSWORD cluster password generated The password the brokers use to connect with each other AMQ_USER AMQ Username amq-demo-user The username the client uses AMQ_PASSWORD AMQ Password password The password the client uses with the username AMQ_TRUSTSTORE Trust Store Filename broker.ts The SSL truststore file name AMQ_TRUSTSTORE_PASSWORD Truststore Password password The password used when creating the Truststore AMQ_KEYSTORE AMQ Keystore Filename broker.ks The SSL keystore file name AMQ_KEYSTORE_PASSWORD AMQ Keystore Password password The password used when creating the Keystore 8.4.3. Deploying the application Deploy after creating the application. Deploying the application creates a Pod and starts the broker. Procedure Click StatefulSets in the OpenShift Container Platform web console. Click the broker-amq deployment. Click Deploy to deploy the application. Note The default number of replicas for a clustered template is 0 , so you will not see any Pods. Scale up the Pods to three to create a cluster of brokers. Check that there are three Pods running. If the Pod status shows ErrImagePull or ImagePullBackOff , your deployment was not able to directly pull the specified broker image from the Red Hat Container Registry. In this case, edit your StatefulSet to reference the correct broker image name and the image pull secret name associated with the account used for authentication in the Red Hat Container Registry. Then, you can import the broker image and start the brokers. To do this, complete steps similar to those in Deploy and start the broker application . 
Verify the brokers have clustered with the new Pod by checking the logs. This shows all the logs of the new broker and an entry for a clustered bridge created between the brokers, for example: Additional resources To learn how to configure each broker in the cluster to connect to its own management console instance, see Creating Routes for the AMQ Broker management console . For more information about messaging in a broker cluster, see Enabling Message Redistribution . 8.5. Deploying a broker with custom configuration Deploy a broker with custom configuration. Although the templates provide the required functionality out of the box, you can customize the broker configuration if needed. Prerequisites This tutorial builds upon Preparing a template-based broker deployment . Completion of the Deploying a basic broker tutorial is recommended. 8.5.1. Deploy the image and template Procedure Navigate to the OpenShift web console and log in. Select the amq-demo project space. Click Add to Project > Browse catalog to list all of the default image streams and templates. Use the Filter search bar to limit results to those that match amq . Click See all to show the desired application template. Select the amq-broker-78-custom template which is labeled Red Hat AMQ Broker 7.8 (Ephemeral, no SSL) . In the configuration, update broker.xml with the custom configuration you would like to use. Click Create . Note Use a text editor to create the broker's XML configuration. Then, cut and paste configuration details into the broker.xml field. Note OpenShift Container Platform does not use a ConfigMap object to store the custom configuration that you specify in the broker.xml field, as is common for many applications deployed on this platform. Instead, OpenShift temporarily stores the specified configuration in an environment variable, before transferring the configuration to a standalone file when the broker container starts. 8.5.2. Deploy the application Once the application has been created, it needs to be deployed. Deploying the application creates a Pod and starts the broker. Procedure Click Deployments in the OpenShift Container Platform web console. Click the broker-amq deployment. Click Deploy to deploy the application. 8.6. Basic SSL client example Implement a client that sends and receives messages from a broker configured to use SSL, using the Qpid JMS client. Prerequisites This tutorial builds upon Preparing a template-based broker deployment . Completion of the Deploying a basic broker with SSL tutorial is recommended. AMQ JMS Examples 8.6.1. Configuring the client Create a sample client that can be updated to connect to the SSL broker. The following procedure builds upon AMQ JMS Examples . Procedure Add an entry into your /etc/hosts file to map the route name onto the IP address of the OpenShift cluster: Update the jndi.properties configuration file to use the route, truststore, and keystore created previously, for example: Update the jndi.properties configuration file to use the queue created earlier. Execute the sender client to send a text message. Execute the receiver client to receive the text message. You should see: 8.7. External clients using sub-domains example Expose a clustered set of brokers through OpenShift routes and connect to them using the core JMS client. This enables clients to connect to a set of brokers which are configured using the amq-broker-78-persistence-clustered-ssl template. 8.7.1.
Exposing the brokers Configure the brokers so that the cluster of brokers is externally available and can be connected to directly, bypassing the OpenShift router. This is done by creating a route that exposes each pod using its own hostname. Prerequisites Deploying a set of clustered brokers Procedure Choose Import YAML/JSON from the Add to Project drop-down menu. Enter the following and click Create. Note The important configuration here is the wildcard policy of Subdomain . This allows each broker to be accessible through its own hostname. 8.7.2. Connecting the clients Create a sample client that can be updated to connect to the SSL broker. The steps in this procedure build upon the AMQ JMS Examples . Procedure Add entries into the /etc/hosts file to map the route name onto the actual IP addresses of the brokers: Update the jndi.properties configuration file to use the route, truststore, and keystore created previously, for example: Update the jndi.properties configuration file to use the queue created earlier. Execute the sender client code to send a text message. Execute the receiver client code to receive the text message. You should see: Additional resources For more information on using the AMQ JMS client, see AMQ JMS Examples . 8.8. External clients using port binding example Expose a clustered set of brokers through a NodePort and connect to it using the core JMS client. This enables connections from clients that do not support SNI or SSL. It is used with clusters configured using the amq-broker-78-persistence-clustered template. 8.8.1. Exposing the brokers Configure the brokers so that the cluster of brokers is externally available and can be connected to directly, bypassing the OpenShift router. This is done by creating a service that uses a NodePort to load balance across the cluster. Prerequisites Deploying a set of clustered brokers Procedure Choose Import YAML/JSON from the Add to Project drop-down menu. Enter the following and click Create. Note The NodePort configuration is important. The NodePort is the port on which the client accesses the brokers, and the type is NodePort . 8.8.2. Connecting the clients Create consumers that are round-robinned around the brokers in the cluster using the AMQ Broker CLI. Procedure In a terminal, create a consumer and attach it to the IP address where OpenShift is running. Repeat step 1 twice to start another two consumers. Note You should now have three consumers load balanced across the three brokers. Create a producer to send messages. Verify each consumer receives messages.
[ "https://broker-amq-tcp-amq-demo.router.default.svc.cluster.local", "sh-4.2USD ./broker/bin/artemis producer --destination queue://demoQueue Producer ActiveMQQueue[demoQueue], thread=0 Started to calculate elapsed time Producer ActiveMQQueue[demoQueue], thread=0 Produced: 1000 messages Producer ActiveMQQueue[demoQueue], thread=0 Elapsed time in second : 4 s Producer ActiveMQQueue[demoQueue], thread=0 Elapsed time in milli second : 4584 milli seconds sh-4.2USD ./broker/bin/artemis consumer --destination queue://demoQueue Consumer:: filter = null Consumer ActiveMQQueue[demoQueue], thread=0 wait until 1000 messages are consumed Received 1000 Consumer ActiveMQQueue[demoQueue], thread=0 Consumed: 1000 messages Consumer ActiveMQQueue[demoQueue], thread=0 Consumer thread finished", "// Get the Pod names and internal IP Addresses get pods -o wide // Access a broker Pod by name rsh <broker-pod-name>", "oc scale statefulset broker-amq --replicas=0 statefulset \"broker-amq\" scaled", "oc scale statefulset broker-amq --replicas=1 statefulset \"broker-amq\" scaled", "sh-4.2USD broker/bin/artemis consumer --destination queue://demoQueue Consumer:: filter = null Consumer ActiveMQQueue[demoQueue], thread=0 wait until 1000 messages are consumed Received 1000 Consumer ActiveMQQueue[demoQueue], thread=0 Consumed: 1000 messages Consumer ActiveMQQueue[demoQueue], thread=0 Consumer thread finished", "https://broker-amq-tcp-amq-demo.router.default.svc.cluster.local", "oc scale statefulset broker-amq --replicas=3 statefulset \"broker-amq\" scaled", "oc get pods NAME READY STATUS RESTARTS AGE broker-amq-0 1/1 Running 0 33m broker-amq-1 1/1 Running 0 33m broker-amq-2 1/1 Running 0 29m", "oc logs broker-amq-2", "2018-08-29 07:43:55,779 INFO [org.apache.activemq.artemis.core.server] AMQ221027: Bridge ClusterConnectionBridge@1b0e9e9d [name=USD.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, queue=QueueImpl[name=USD.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c], temp=false]@5e0c0398 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@1b0e9e9d [name=USD.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, queue=QueueImpl[name=USD.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c], temp=false]@5e0c0398 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-110], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@806813022[nodeUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c, connector=TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-108, address=, server=ActiveMQServerImpl::serverUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c])) [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-110], discoveryGroupConfiguration=null]] is connected", "apiVersion: v1 kind: Service metadata: annotations: description: 'Service for the management console of broker pod XXXX' labels: app: application2 application: application2 template: amq-broker-78-persistence-clustered 
name: amq2-amq-console-XXXX namespace: amq75-p-c-ssl-2 spec: ports: - name: console-jolokia port: 8161 protocol: TCP targetPort: 8161 selector: deploymentConfig: application2-amq statefulset.kubernetes.io/pod-name: application2-amq-XXXX type: ClusterIP", "spec: host: amq2-amq-console-0-amq75-p-c-2.apps-ocp311.example.com port: targetPort: console-jolokia tls: termination: passthrough to: kind: Service name: amq2-amq-console-0 weight: 100 wildcardPolicy: None", "oc scale statefulset broker-amq --replicas=3 statefulset \"broker-amq\" scaled", "oc get pods NAME READY STATUS RESTARTS AGE broker-amq-0 1/1 Running 0 33m broker-amq-1 1/1 Running 0 33m broker-amq-2 1/1 Running 0 29m", "oc logs broker-amq-2", "2018-08-29 07:43:55,779 INFO [org.apache.activemq.artemis.core.server] AMQ221027: Bridge ClusterConnectionBridge@1b0e9e9d [name=USD.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, queue=QueueImpl[name=USD.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c], temp=false]@5e0c0398 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@1b0e9e9d [name=USD.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, queue=QueueImpl[name=USD.artemis.internal.sf.my-cluster.4333c830-ab5f-11e8-afb8-0a580a82006e, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c], temp=false]@5e0c0398 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-110], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@806813022[nodeUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c, connector=TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-108, address=, server=ActiveMQServerImpl::serverUUID=9cedb69d-ab5e-11e8-87a4-0a580a82006c])) [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-110], discoveryGroupConfiguration=null]] is connected", "10.0.0.1 broker-amq-tcp-amq-demo.router.default.svc.cluster.local", "connectionfactory.myFactoryLookup = amqps://broker-amq-tcp-amq-demo.router.default.svc.cluster.local:8443?transport.keyStoreLocation=<keystore-path>client.ks&transport.keyStorePassword=password&transport.trustStoreLocation=<truststore-path>/client.ts&transport.trustStorePassword=password&transport.verifyHost=false", "queue.myDestinationLookup = demoQueue", "Received message: Message Text!", "apiVersion: v1 kind: Route metadata: labels: app: broker-amq application: broker-amq name: tcp-ssl spec: port: targetPort: ow-multi-ssl tls: termination: passthrough to: kind: Service name: broker-amq-headless weight: 100 wildcardPolicy: Subdomain host: star.broker-ssl-amq-headless.amq-demo.svc", "10.0.0.1 broker-amq-0.broker-ssl-amq-headless.amq-demo.svc broker-amq-1.broker-ssl-amq-headless.amq-demo.svc broker-amq-2.broker-ssl-amq-headless.amq-demo.svc", "connectionfactory.myFactoryLookup = 
amqps://broker-amq-0.broker-ssl-amq-headless.amq-demo.svc:443?transport.keyStoreLocation=/home/ataylor/projects/jboss-amq-7-broker-openshift-image/client.ks&transport.keyStorePassword=password&transport.trustStoreLocation=/home/ataylor/projects/jboss-amq-7-broker-openshift-image/client.ts&transport.trustStorePassword=password&transport.verifyHost=false", "queue.myDestinationLookup = demoQueue", "Received message: Message Text!", "apiVersion: v1 kind: Service metadata: annotations: description: The broker's OpenWire port. service.alpha.openshift.io/dependencies: >- [{\"name\": \"broker-amq-amqp\", \"kind\": \"Service\"},{\"name\": \"broker-amq-mqtt\", \"kind\": \"Service\"},{\"name\": \"broker-amq-stomp\", \"kind\": \"Service\"}] creationTimestamp: '2018-08-29T14:46:33Z' labels: application: broker template: amq-broker-78-statefulset-clustered name: broker-external-tcp namespace: amq-demo resourceVersion: '2450312' selfLink: /api/v1/namespaces/amq-demo/services/broker-amq-tcp uid: 52631fa0-ab9a-11e8-9380-c280f77be0d0 spec: externalTrafficPolicy: Cluster ports: - nodePort: 30001 port: 61616 protocol: TCP targetPort: 61616 selector: deploymentConfig: broker-amq sessionAffinity: None type: NodePort status: loadBalancer: {}", "artemis consumer --url tcp://<IP_ADDRESS>:30001 --message-count 100 --destination queue://demoQueue", "artemis producer --url tcp://<IP_ADDRESS>:30001 --message-count 300 --destination queue://demoQueue", "Consumer:: filter = null Consumer ActiveMQQueue[demoQueue], thread=0 wait until 100 messages are consumed Consumer ActiveMQQueue[demoQueue], thread=0 Consumed: 100 messages Consumer ActiveMQQueue[demoQueue], thread=0 Consumer thread finished" ]
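The distributing-messages sections above note that messages on a queue with no consumers are redistributed to another broker after the redistribution delay. If you need to tune that delay in a custom broker.xml (for example, when using the amq-broker-78-custom template), it is controlled per address through the address-settings element. The snippet below is a sketch only; the match pattern and the 5000 ms value are assumptions to adapt to your own addresses, not values taken from the templates:

<address-settings>
   <!-- match every address; narrow the pattern for production use -->
   <address-setting match="#">
      <!-- wait 5 seconds before redistributing messages from a queue with no consumers -->
      <redistribution-delay>5000</redistribution-delay>
   </address-setting>
</address-settings>

A value of 0 redistributes messages immediately, and a negative value disables redistribution for the matched addresses.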
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_amq_broker_on_openshift/template-based-broker-deployment-examples_broker-ocp
5.6. Load Balancing Policy: None
5.6. Load Balancing Policy: None If no load balancing policy is selected, virtual machines are started on the host in the cluster that has the lowest CPU utilization and available memory. To determine CPU utilization, a combined metric is used that takes into account the virtual CPU count and the CPU usage percent. This approach is the least dynamic, as the only host selection point is when a new virtual machine is started. Virtual machines are not automatically migrated to reflect increased demand on a host. An administrator must decide which host is an appropriate migration target for a given virtual machine. Virtual machines can also be associated with a particular host using pinning. Pinning prevents a virtual machine from being automatically migrated to other hosts. For environments where resources are highly consumed, manual migration is the best approach.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/load_balancing_policy_none
Chapter 10. CodeReady Workspaces architectural elements
Chapter 10. CodeReady Workspaces architectural elements This section contains information about the CodeReady Workspaces architecture. Section starts with a High-level CodeReady Workspaces architecture overview and continues with providing information about the CodeReady Workspaces workspace controller and CodeReady Workspaces workspace architecture. High-level CodeReady Workspaces architecture CodeReady Workspaces workspace controller CodeReady Workspaces workspaces architecture 10.1. High-level CodeReady Workspaces architecture Figure 10.1. High-level CodeReady Workspaces architecture At a high-level, CodeReady Workspaces is composed of one central workspace controller that manages the CodeReady Workspaces workspaces through the OpenShift API. When CodeReady Workspaces is installed on a OpenShift cluster, the workspace controller is the only component that is deployed. A CodeReady Workspaces workspace is created immediately after a user requests it. This section describes the different services that create the workspaces controller and the CodeReady Workspaces workspaces. CodeReady Workspaces workspace controller CodeReady Workspaces workspaces architecture 10.2. CodeReady Workspaces workspace controller The workspaces controller manages the container-based development environments: CodeReady Workspaces workspaces. It can be deployed in the following distinct configurations: Single-user : No authentication service is set up. Development environments are not secured. This configuration requires fewer resources. It is more adapted for local installations, such as when using Minikube. Multi-user : This is a multi-tenant configuration. Development environments are secured, and this configuration requires more resources. Appropriate for cloud installations. The different services that are a part of the CodeReady Workspaces workspaces controller are shown in the following diagram. Note that RH-SSO and PostgreSQL are only needed in the multi-user configuration. Figure 10.2. CodeReady Workspaces workspaces controller 10.2.1. CodeReady Workspaces server The CodeReady Workspaces server, also known as wsmaster , is the central service of the workspaces controller. It is a Java web service that exposes an HTTP REST API to manage CodeReady Workspaces workspaces and, in multi-user mode, CodeReady Workspaces users. Source code Red Hat CodeReady Workspaces GitHub Container image eclipse/che-server Environment variables Advanced configuration options for the Che server component 10.2.2. CodeReady Workspaces user dashboard The user dashboard is the landing page of Red Hat CodeReady Workspaces. It is an Angular front-end application. CodeReady Workspaces users create, start, and manage CodeReady Workspaces workspaces from their browsers through the user dashboard. Source code CodeReady Workspaces Dashboard Container image eclipse/che-server 10.2.3. Devfile registry The CodeReady Workspaces devfile registry is a service that provides a list of CodeReady Workspaces stacks to create ready-to-use workspaces. This list of stacks is used in the Dashboard Create Workspace window. The devfile registry runs in a container and can be deployed wherever the user dashboard can connect. For more information about devfile registry customization, see the Customizing devfile registry section. Source code CodeReady Workspaces Devfile registry Container image quay.io/crw/che-devfile-registry 10.2.4. 
CodeReady Workspaces plug-in registry The CodeReady Workspaces plug-in registry is a service that provides the list of plug-ins and editors for the CodeReady Workspaces workspaces. A devfile only references a plug-in that is published in a CodeReady Workspaces plug-in registry. It runs in a container and can be deployed wherever wsmaster connects. For more information about plug-in registry customization, see the Chapter 1, Customizing the devfile and plug-in registries section. Source code CodeReady Workspaces plug-in registry Container image quay.io/crw/che-plugin-registry 10.2.5. CodeReady Workspaces and PostgreSQL The PostgreSQL database is a prerequisite to configure CodeReady Workspaces in multi-user mode. The CodeReady Workspaces administrator can choose to connect CodeReady Workspaces to an existing PostgreSQL instance or let the CodeReady Workspaces deployment start a new dedicated PostgreSQL instance. The CodeReady Workspaces server uses the database to persist user configurations (workspaces metadata, Git credentials). RH-SSO uses the database as its back end to persist user information. Source code CodeReady Workspaces Postgres Container image eclipse/che-postgres 10.2.6. CodeReady Workspaces and RH-SSO RH-SSO is a prerequisite to configure CodeReady Workspaces in multi-user mode. The CodeReady Workspaces administrator can choose to connect CodeReady Workspaces to an existing RH-SSO instance or let the CodeReady Workspaces deployment start a new dedicated RH-SSO instance. The CodeReady Workspaces server uses RH-SSO as an OpenID Connect (OIDC) provider to authenticate CodeReady Workspaces users and secure access to CodeReady Workspaces resources. Source code CodeReady Workspaces Keycloak Container image eclipse/che-keycloak 10.3. CodeReady Workspaces workspaces architecture A CodeReady Workspaces deployment on the cluster consists of the CodeReady Workspaces server component, a database for storing user profile and preferences, and a number of additional deployments hosting workspaces. The CodeReady Workspaces server orchestrates the creation of workspaces, which consist of a deployment containing the workspace containers and enabled plug-ins, plus related components, such as: configmaps services endpoints ingresses/routes secrets PVs The CodeReady Workspaces workspace is a web application. It is composed of microservices running in containers that provide all the services of a modern IDE (an editor, language auto-completion, debugging tools). The IDE services are deployed with the development tools, packaged in containers and user runtime applications, which are defined as OpenShift resources. The source code of the projects of a CodeReady Workspaces workspace is persisted in a OpenShift PersistentVolume . Microservices run in containers that have read-write access to the source code (IDE services, development tools), and runtime applications have read-write access to this shared directory. The following diagram shows the detailed components of a CodeReady Workspaces workspace. Figure 10.3. CodeReady Workspaces workspace components In the diagram, there are three running workspaces: two belonging to User A and one to User C . A fourth workspace is getting provisioned where the plug-in broker is verifying and completing the workspace configuration. Use the devfile format to specify the tools and runtime applications of a CodeReady Workspaces workspace. 10.3.1. CodeReady Workspaces workspace components This section describes the components of a CodeReady Workspaces workspace. 
10.3.1.1. Che Plugin plug-ins Che Plugin plug-ins are special services that extend CodeReady Workspaces workspace capabilities. Che Plugin plug-ins are packaged as containers. Packaging plug-ins into a container has the following benefits: It isolates the plug-ins from the main IDE, therefore limiting the resources that a plug-in has access to. It uses the consolidated standard of container registries to publish and distribute plug-ins (as with any container image). The containers that plug-ins are packaged into run as sidecars of the CodeReady Workspaces workspace editor and augment its capabilities. Visual Studio Code extensions packaged in containers are CodeReady Workspaces plug-ins for the Che-Theia editor. Multiple CodeReady Workspaces plug-ins can run in the same container (for better resource use), or a Che Plugin can run in its dedicated container (for better isolation). 10.3.1.2. Che Editor plug-in A Che Editor plug-in is a CodeReady Workspaces workspace plug-in. It defines the web application that is used as an editor in a workspace. The default CodeReady Workspaces workspace editor is Che-Theia. The Che-Theia source-code repository is at Che-Theia Github . It is based on the Eclipse Theia open-source project . Che-Theia is written in TypeScript and is built on the Microsoft Monaco editor . It is a web-based source-code editor similar to Visual Studio Code (VS Code). It has a plug-in system that supports VS Code extensions. Source code Che-Theia Container image eclipse/che-theia Endpoints theia , webviews , theia-dev , theia-redirect-1 , theia-redirect-2 , theia-redirect-3 10.3.1.3. CodeReady Workspaces user runtimes Use any non-terminating user container as a user runtime. An application that can be defined as a container image or as a set of OpenShift resources can be included in a CodeReady Workspaces workspace. This makes it easy to test applications in the CodeReady Workspaces workspace. To test an application in the CodeReady Workspaces workspace, include the application YAML definition used in stage or production in the workspace specification. It is a 12-factor app dev/prod parity. Examples of user runtimes are Node.js, SpringBoot or MongoDB, and MySQL. 10.3.1.4. CodeReady Workspaces workspace JWT proxy The JWT proxy is responsible for securing the communication of the CodeReady Workspaces workspace services. The CodeReady Workspaces workspace JWT proxy is included in a CodeReady Workspaces workspace only if the CodeReady Workspaces server is configured in multi-user mode. An HTTP proxy is used to sign outgoing requests from a workspace service to the CodeReady Workspaces server and to authenticate incoming requests from the IDE client running on a browser. Source code JWT proxy Container image eclipse/che-jwtproxy 10.3.1.5. CodeReady Workspaces plug-ins broker Plug-in brokers are special services that, given a plug-in meta.yaml file: Gather all the information to provide a plug-in definition that the CodeReady Workspaces server knows. Perform preparation actions in the workspace namespace (download, unpack files, process configuration). The main goal of the plug-in broker is to decouple the CodeReady Workspaces plug-ins definitions from the actual plug-ins that CodeReady Workspaces can support. With brokers, CodeReady Workspaces can support different plug-ins without updating the CodeReady Workspaces server. The CodeReady Workspaces server starts the plug-in broker. The plug-in broker runs in the same OpenShift namespace as the workspace. 
It has access to the plug-ins and project persistent volumes. A plug-in broker is defined as a container image (for example, eclipse/che-plugin-broker ). The plug-in type determines the type of the broker that is started. Two types of plug-ins are supported: Che Plugin and Che Editor . Source code CodeReady Workspaces Plug-in broker Container image eclipse/che-init-plugin-broker eclipse/che-unified-plugin-broker 10.3.2. CodeReady Workspaces workspace configuration This section describes the properties of the CodeReady Workspaces server that affect the provisioning of a CodeReady Workspaces workspace. 10.3.2.1. Storage strategies for codeready-workspaces workspaces Workspace Pods use Persistent Volume Claims (PVCs), which are bound to the physical Persistent Volumes (PVs) with ReadWriteOnce access mode . It is possible to configure how the CodeReady Workspaces server uses PVCs for workspaces. The individual methods for this configuration are called PVC strategies: strategy details pros cons unique One PVC per workspace volume or user-defined PVC Storage isolation An undefined number of PVs is required per-workspace (default) One PVC for one workspace Easier to manage and control storage compared to unique strategy PV count is still not known and depends on the number of workspaces common One PVC for all workspaces in one OpenShift namespace Easy to manage and control storage If PV does not support ReadWriteMany (RWX) access mode then workspaces must be in separate OpenShift namespaces, or there must not be more than 1 running workspace per namespace at the same time See how to configure namespace strategy Red Hat CodeReady Workspaces uses the common PVC strategy in combination with the "one namespace per user" namespace strategy when all CodeReady Workspaces workspaces operate in the user's namespace, sharing one PVC. 10.3.2.1.1. The common PVC strategy All workspaces inside an OpenShift-native namespace use the same Persistent Volume Claim (PVC) as the default data storage when storing data such as the following in their declared volumes: projects workspace logs additional Volumes defined by a user When the common PVC strategy is in use, user-defined PVCs are ignored and volumes that refer to these user-defined PVCs are replaced with a volume that refers to the common PVC. In this strategy, all CodeReady Workspaces workspaces use the same PVC. When the user runs one workspace, it only binds to one node in the cluster at a time. The corresponding containers' volume mounts link to a common volume, and sub-paths are prefixed with '{workspaceID}/{originalPVCName}' . For more details, see Section 10.3.2.1.4, "How subpaths are used in PVCs" . The CodeReady Workspaces Volume name is identical to the name of the user-defined PVC. This means that if a machine is configured to use a CodeReady Workspaces volume with the same name as the user-defined PVC has, they will use the same shared folder in the common PVC. When a workspace is deleted, a corresponding subdirectory ( ${ws-id} ) is deleted in the PV directory. Restrictions on using the common PVC strategy When the common strategy is used and a workspace PVC access mode is ReadWriteOnce (RWO), only one OpenShift node can simultaneously use the PVC. If there are several nodes, you can use the common strategy, but: The workspace PVC access mode must be reconfigured to ReadWriteMany (RWX), so multiple nodes can use this PVC simultaneously. Only one workspace in the same namespace may be running. See Configuring namespace strategies .
The common PVC strategy is not suitable for large multi-node clusters. Therefore, it is best to use it in single-node clusters. However, in combination with the per-workspace namespace strategy, the common PVC strategy is usable for clusters with around 75 nodes. The PVC used in this strategy must be large enough to accommodate all projects since there is a risk that one project depletes the resources of the others. 10.3.2.1.2. The per-workspace PVC strategy The per-workspace strategy is similar to the common PVC strategy. The only difference is that all workspace Volumes, but not all the workspaces, use the same PVC as the default data storage for: projects workspace logs additional Volumes defined by a user With this strategy, CodeReady Workspaces keeps its workspace data in assigned PVs that are allocated by a single PVC. The per-workspace PVC strategy is the most universal strategy out of the PVC strategies available and is a suitable option for large multi-node clusters with a higher number of users. Using the per-workspace PVC strategy, users can run multiple workspaces simultaneously, which results in more PVCs being created. 10.3.2.1.3. The unique PVC strategy When using the unique PVC strategy, every CodeReady Workspaces Volume of a workspace has its own PVC. This means that workspace PVCs are: Created when a workspace starts for the first time. Deleted when a corresponding workspace is deleted. User-defined PVCs are created with the following specifics: They are provisioned with generated names to prevent naming conflicts with other PVCs in a namespace. Subpaths of the mounted physical persistent volumes that reference user-defined PVCs are prefixed with {workspace id}/{PVC name} . This ensures that the same PV data structure is set up with different PVC strategies. For details, see Section 10.3.2.1.4, "How subpaths are used in PVCs" . The unique PVC strategy is suitable for larger multi-node clusters with a smaller number of users. Since this strategy operates with separate PVCs for each volume in a workspace, vastly more PVCs are created. 10.3.2.1.4. How subpaths are used in PVCs Subpaths illustrate the folder hierarchy in the Persistent Volumes (PV). When a user defines volumes for components in the devfile, all components that define the volume of the same name will be backed by the same directory in the PV as ${PV}/${ws-id}/${volume-name} . Each component can have this location mounted on a different path in its containers. Example Using the common PVC strategy, user-defined PVCs are replaced with subpaths on the common PVC. When the user references a volume as my-volume , it is mounted in the common-pvc with the /workspace-id/my-volume subpath. 10.3.2.2. Configuring a CodeReady Workspaces workspace with a persistent volume strategy A persistent volume (PV) acts as a virtual storage instance that adds a volume to a cluster. A persistent volume claim (PVC) is a request to provision persistent storage of a specific type and configuration, available in the following CodeReady Workspaces storage configuration strategies: Common Per-workspace Unique The mounted PVC is displayed as a folder in a container file system. 10.3.2.2.1. Configuring a PVC strategy using the Operator The following section describes how to configure workspace persistent volume claim (PVC) strategies of a CodeReady Workspaces server using the Operator. Warning It is not recommended to reconfigure PVC strategies on an existing CodeReady Workspaces cluster with existing workspaces.
Doing so causes data loss. Operators are software extensions to OpenShift that use custom resources to manage applications and their components. When deploying CodeReady Workspaces using the Operator, configure the intended strategy by modifying the spec.storage.pvcStrategy property of the CheCluster Custom Resource object YAML file. Prerequisites A OpenShift orchestration tool, the OpenShift command-line tool, oc , is installed. Procedure The following procedure steps are available for: OpenShift command-line tool, oc To do changes to the CheCluster YAML file, choose one of the following: Create a new cluster by executing the oc apply command. For example: Update the YAML file properties of an already running cluster by executing the oc patch command. For example: Depending on the strategy used, replace the <per-workspace> option in the above example with unique or common . 10.3.2.3. Workspace namespaces configuration The OpenShift namespace where a new workspace Pod is deployed depends on the CodeReady Workspaces server configuration. By default, every workspace is deployed in a distinct namespace, but the user can configure the CodeReady Workspaces server to deploy all workspaces in one specific namespace. The name of a namespace must be provided as a CodeReady Workspaces server configuration property and cannot be changed at runtime. 10.3.3. CodeReady Workspaces workspace creation flow The following is a CodeReady Workspaces workspace creation flow: A user starts a CodeReady Workspaces workspace defined by: An editor (the default is Che-Theia) A list of plug-ins (for example, Java and OpenShift tools) A list of runtime applications wsmaster retrieves the editor and plug-in metadata from the plug-in registry. For every plug-in type, wsmaster starts a specific plug-in broker. The CodeReady Workspaces plug-ins broker transforms the plug-in metadata into a Che Plugin definition. It executes the following steps: Downloads a plug-in and extracts its content. Processes the plug-in meta.yaml file and sends it back to wsmaster in the format of a Che Plugin. wsmaster starts the editor and the plug-in sidecars. The editor loads the plug-ins from the plug-in persistent volume.
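The creation flow above starts from a devfile that names an editor, plug-ins, and runtime containers. A rough sketch of such a devfile is shown below; the plug-in IDs, the image reference, and the memory limit are illustrative assumptions rather than values taken from this guide:

apiVersion: 1.0.0
metadata:
  name: my-sample-workspace
components:
  # Editor component; Che-Theia is the default, so this entry is optional
  - type: cheEditor
    id: eclipse/che-theia/latest
  # A plug-in published in the plug-in registry
  - type: chePlugin
    id: redhat/java/latest
  # A user runtime container
  - type: dockerimage
    alias: runtime
    image: registry.example.com/my-runtime:latest
    memoryLimit: 512Mi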
[ "/pv0001 /workspaceID1 /workspaceID2 /workspaceIDn /che-logs /projects /<volume1> /<volume2> /<User-defined PVC name 1 | volume 3>", "oc apply -f <my-cluster.yaml>", "oc patch checluster codeready-workspaces --type=json -p '[{\"op\": \"replace\", \"path\": \"/spec/storage/pvcStrategy\", \"value\": \" <per-workspace> \"}]'" ]
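The oc patch command above changes the spec.storage.pvcStrategy field of the CheCluster Custom Resource. For reference, the same setting expressed directly in the Custom Resource YAML might look like the following sketch; the apiVersion and the pvcClaimSize value are assumptions to verify against the CRD installed in your cluster:

apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: codeready-workspaces
spec:
  storage:
    # one of: common, per-workspace, unique
    pvcStrategy: 'per-workspace'
    pvcClaimSize: 1Gi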
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/administration_guide/codeready-workspaces-architectural-elements_crw
1.5. Starting and Stopping a Directory Server Instance
1.5. Starting and Stopping a Directory Server Instance 1.5.1. Starting and Stopping a Directory Server Instance Using the Command Line Use the dsctl utility to start, stop, or restart an instance: To start the instance: To stop the instance: To restart the instance: Optionally, you can enable Directory Server instances to automatically start when the system boots: For a single instance: For all instances on a server: For further details, see the Managing System Services section in the Red Hat System Administrator's Guide . 1.5.2. Starting and Stopping a Directory Server Instance Using the Web Console As an alternative to the command line, you can use the web console to start, stop, or restart instances. To start, stop, or restart a Directory Server instance: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Click the Actions button and select the action to execute: Start Instance Stop Instance Restart Instance
[ "dsctl instance_name start", "dsctl instance_name stop", "dsctl instance_name restart", "systemctl enable dirsrv@ instance_name", "systemctl enable dirsrv.target" ]
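After starting or restarting an instance, it can be useful to confirm that it is running. The commands below are a sketch; instance_name is a placeholder, and the status subcommand and the systemd unit name follow the same naming used in the commands above:

dsctl instance_name status
systemctl status dirsrv@instance_name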
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Starting_and_Stopping-DS
function::task_fd_lookup
function::task_fd_lookup Name function::task_fd_lookup - get the file struct for a task's fd Synopsis Arguments task task_struct pointer. fd file descriptor number. Description Returns the file struct pointer for a task's file descriptor.
[ "task_fd_lookup:long(task:long,fd:long)" ]
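As a small illustration of how this function can be used (a sketch only; the choice of the syscall.read probe point and the output format are assumptions, not part of this reference), the following script resolves the file struct behind the file descriptor passed to read():

probe syscall.read {
  # task_current() returns the task_struct of the current process
  file = task_fd_lookup(task_current(), fd)
  if (file)
    printf("%s (pid %d) read() on fd %d -> file struct %p\n",
           execname(), pid(), fd, file)
}

Run it with stap and stop it with Ctrl+C.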
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-task-fd-lookup
Chapter 2. Installing Red Hat OpenShift GitOps
Chapter 2. Installing Red Hat OpenShift GitOps Red Hat OpenShift GitOps uses Argo CD to manage specific cluster-scoped resources, including cluster Operators, optional Operator Lifecycle Manager (OLM) Operators, and user management. 2.1. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. You are logged in to the OpenShift Container Platform cluster as an administrator. Your cluster has the Marketplace capability enabled or the Red Hat Operator catalog source configured manually. Warning If you have already installed the Community version of the Argo CD Operator, remove the Argo CD Community Operator before you install the Red Hat OpenShift GitOps Operator. This guide explains how to install the Red Hat OpenShift GitOps Operator on an OpenShift Container Platform cluster and log in to the Argo CD instance. Important The latest channel enables installation of the most recent stable version of the Red Hat OpenShift GitOps Operator. Currently, it is the default channel for installing the Red Hat OpenShift GitOps Operator. To install a specific version of the Red Hat OpenShift GitOps Operator, cluster administrators can use the corresponding gitops-<version> channel. For example, to install the Red Hat OpenShift GitOps Operator version 1.8.x, you can use the gitops-1.8 channel. 2.2. Installing Red Hat OpenShift GitOps Operator in web console You can install the Red Hat OpenShift GitOps Operator from the OperatorHub by using the web console. Procedure Open the Administrator perspective of the web console and go to Operators OperatorHub . Search for OpenShift GitOps , click the Red Hat OpenShift GitOps tile, and then click Install . On the Install Operator page: Select an Update channel . Select a GitOps Version to install. Choose an Installed Namespace . The default installation namespace is openshift-gitops-operator . Note For the GitOps version 1.10 and later, the default namespace changed from openshift-operators to openshift-gitops-operator . Select the Enable Operator recommended cluster monitoring on this Namespace checkbox to enable cluster monitoring. Note You can enable cluster monitoring on any namespace by applying the openshift.io/cluster-monitoring=true label: $ oc label namespace <namespace> openshift.io/cluster-monitoring=true Example output namespace/<namespace> labeled Click Install to make the GitOps Operator available on the OpenShift Container Platform cluster. Red Hat OpenShift GitOps is installed in all namespaces of the cluster. Verify that the Red Hat OpenShift GitOps Operator is listed in Operators Installed Operators . The Status should resolve to Succeeded . After the Red Hat OpenShift GitOps Operator is installed, it automatically sets up a ready-to-use Argo CD instance that is available in the openshift-gitops namespace, and an Argo CD icon is displayed in the console toolbar. You can create subsequent Argo CD instances for your applications under your projects. 2.3. Installing Red Hat OpenShift GitOps Operator using CLI You can install the Red Hat OpenShift GitOps Operator from the OperatorHub by using the CLI. Note For the GitOps version 1.10 and later, the default namespace changed from openshift-operators to openshift-gitops-operator .
Procedure Create a openshift-gitops-operator namespace: USD oc create ns openshift-gitops-operator Example output namespace/openshift-gitops-operator created Note You can enable cluster monitoring on openshift-gitops-operator , or any namespace, by applying the openshift.io/cluster-monitoring=true label: USD oc label namespace <namespace> openshift.io/cluster-monitoring=true Example output namespace/<namespace> labeled Create a OperatorGroup object YAML file, for example, gitops-operator-group.yaml : Example OperatorGroup apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-gitops-operator namespace: openshift-gitops-operator spec: upgradeStrategy: Default Apply the OperatorGroup to the cluster: USD oc apply -f gitops-operator-group.yaml Example output operatorgroup.operators.coreos.com/openshift-gitops-operator created Create a Subscription object YAML file to subscribe a namespace to the Red Hat OpenShift GitOps Operator, for example, openshift-gitops-sub.yaml : Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator namespace: openshift-gitops-operator spec: channel: latest 1 installPlanApproval: Automatic name: openshift-gitops-operator 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4 1 Specify the channel name from where you want to subscribe the Operator. 2 Specify the name of the Operator to subscribe to. 3 Specify the name of the CatalogSource that provides the Operator. 4 The namespace of the CatalogSource. Use openshift-marketplace for the default OperatorHub CatalogSources. Apply the Subscription to the cluster: USD oc apply -f openshift-gitops-sub.yaml Example output subscription.operators.coreos.com/openshift-gitops-operator created After the installation is complete, verify that all the pods in the openshift-gitops namespace are running: USD oc get pods -n openshift-gitops Example output NAME READY STATUS RESTARTS AGE cluster-b5798d6f9-zr576 1/1 Running 0 65m kam-69866d7c48-8nsjv 1/1 Running 0 65m openshift-gitops-application-controller-0 1/1 Running 0 53m openshift-gitops-applicationset-controller-6447b8dfdd-5ckgh 1/1 Running 0 65m openshift-gitops-dex-server-569b498bd9-vf6mr 1/1 Running 0 65m openshift-gitops-redis-74bd8d7d96-49bjf 1/1 Running 0 65m openshift-gitops-repo-server-c999f75d5-l4rsg 1/1 Running 0 65m openshift-gitops-server-5785f7668b-wj57t 1/1 Running 0 53m Verify that the pods in the openshift-gitops-operator namespace are running: USD oc get pods -n openshift-gitops-operator Example output NAME READY STATUS RESTARTS AGE openshift-gitops-operator-controller-manager-664966d547-vr4vb 2/2 Running 0 65m 2.4. Logging in to the Argo CD instance by using the Argo CD admin account Red Hat OpenShift GitOps automatically creates a ready-to-use Argo CD instance that is available in the openshift-gitops namespace. Optionally, you can create a new Argo CD instance to manage cluster configurations or deploy applications. Use the Argo CD admin account to log in to the default ready-to-use Argo CD instance or the newly installed and deployed Argo CD instance. Prerequisites You have installed the Red Hat OpenShift GitOps Operator in your cluster. Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators to verify that the Red Hat OpenShift GitOps Operator is installed. Navigate to the menu OpenShift GitOps Cluster Argo CD . The login page of the Argo CD UI is displayed in a new window. 
Optional: To log in with your OpenShift Container Platform credentials, ensure you are a user of the cluster-admins group and then select the LOG IN VIA OPENSHIFT option in the Argo CD user interface. Note To be a user of the cluster-admins group, use the oc adm groups new cluster-admins <user> command, where <user> is the default cluster role that you can bind to users and groups cluster-wide or locally. Obtain the password for the Argo CD instance: Use the navigation panel to go to the Workloads Secrets page. Use the Project drop-down list and select the namespace where the Argo CD instance is created. Select the <argo_CD_instance_name>-cluster instance to display the password. On the Details tab, copy the password under Data admin.password . Use admin as the Username and the copied password as the Password to log in to the Argo CD UI in the new window. Note You cannot create two Argo CD CRs in the same namespace. 2.5. Additional resources Setting up an Argo CD instance
[ "oc label namespace <namespace> openshift.io/cluster-monitoring=true", "namespace/<namespace> labeled", "oc create ns openshift-gitops-operator", "namespace/openshift-gitops-operator created", "oc label namespace <namespace> openshift.io/cluster-monitoring=true", "namespace/<namespace> labeled", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-gitops-operator namespace: openshift-gitops-operator spec: upgradeStrategy: Default", "oc apply -f gitops-operator-group.yaml", "operatorgroup.operators.coreos.com/openshift-gitops-operator created", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator namespace: openshift-gitops-operator spec: channel: latest 1 installPlanApproval: Automatic name: openshift-gitops-operator 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4", "oc apply -f openshift-gitops-sub.yaml", "subscription.operators.coreos.com/openshift-gitops-operator created", "oc get pods -n openshift-gitops", "NAME READY STATUS RESTARTS AGE cluster-b5798d6f9-zr576 1/1 Running 0 65m kam-69866d7c48-8nsjv 1/1 Running 0 65m openshift-gitops-application-controller-0 1/1 Running 0 53m openshift-gitops-applicationset-controller-6447b8dfdd-5ckgh 1/1 Running 0 65m openshift-gitops-dex-server-569b498bd9-vf6mr 1/1 Running 0 65m openshift-gitops-redis-74bd8d7d96-49bjf 1/1 Running 0 65m openshift-gitops-repo-server-c999f75d5-l4rsg 1/1 Running 0 65m openshift-gitops-server-5785f7668b-wj57t 1/1 Running 0 53m", "oc get pods -n openshift-gitops-operator", "NAME READY STATUS RESTARTS AGE openshift-gitops-operator-controller-manager-664966d547-vr4vb 2/2 Running 0 65m" ]
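As an alternative to copying the admin password from the Secrets page in the web console, the same admin.password key can be read from the command line. This sketch assumes the default Argo CD instance, whose secret is typically named openshift-gitops-cluster in the openshift-gitops namespace; adjust both names if you deployed your own instance:

oc get secret openshift-gitops-cluster -n openshift-gitops \
  -o jsonpath='{.data.admin\.password}' | base64 -d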
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.11/html/installing_gitops/installing-openshift-gitops
2.2. Fencing GNBD Server Nodes
2.2. Fencing GNBD Server Nodes GNBD server nodes must be fenced using a fencing method that physically removes the nodes from the network. To physically remove a GNBD server node, you can use any fencing device except the following: fence_brocade fence agent, fence_vixel fence agent, fence_mcdata fence agent, fence_sanbox2 fence agent, and fence_scsi fence agent. In addition, you cannot use the GNBD fencing device ( fence_gnbd fence agent) to fence a GNBD server node. For information about configuring fencing for GNBD server nodes, refer to the Global File System manual.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_network_block_device/s1-gnbd-mp-sn
Deploying OpenShift Data Foundation using IBM Cloud
Deploying OpenShift Data Foundation using IBM Cloud Red Hat OpenShift Data Foundation 4.9 Instructions on deploying Red Hat OpenShift Data Foundation using IBM Cloud Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on IBM cloud clusters.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_ibm_cloud/index
probe::tty.register
probe::tty.register Name probe::tty.register - Called when a tty device is registered Synopsis tty.register Values name the driver .dev_name name. module the module name. index the tty index requested. driver_name the driver name.
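A minimal SystemTap script that prints these values each time a tty device is registered might look like the following sketch (the message format is an assumption, not part of the tapset):

probe tty.register {
  # report which driver registered the device and under which name
  printf("tty registered: driver=%s name=%s index=%d module=%s\n",
         driver_name, name, index, module)
}

Run it with stap and stop it with Ctrl+C.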
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tty-register
Chapter 1. Working with kernel modules
Chapter 1. Working with kernel modules This chapter explains: What is a kernel module. How to use the kmod utilities to manage modules and their dependencies. How to configure module parameters to control behavior of the kernel modules. How to load modules at boot time. Note In order to use the kernel module utilities described in this chapter, first ensure the kmod package is installed on your system by running, as root: 1.1. What is a kernel module? The Linux kernel is monolithic by design. However, it is compiled with optional or additional modules as required by each use case. This means that you can extend the kernel's capabilities through the use of dynamically-loaded kernel modules . A kernel module can provide: A device driver which adds support for new hardware. Support for a file system such as GFS2 or NFS . Like the kernel itself, modules can take parameters that customize their behavior, though the default parameters work well in most cases. In relation to kernel modules, user-space tools can do the following operations: Listing modules currently loaded into a running kernel. Querying all available modules for their parameters and module-specific information. Loading or unloading (removing) modules dynamically into or from a running kernel. Many of these utilities, which are provided by the kmod package, take module dependencies into account when performing operations. As a result, manual dependency-tracking is rarely necessary. On modern systems, kernel modules are automatically loaded by various mechanisms when needed. However, there are occasions when it is necessary to load or unload modules manually. For example, when one module is preferred over another although either is able to provide basic functionality, or when a module behaves unexpectedly. 1.2. Kernel module dependencies Certain kernel modules sometimes depend on one or more other kernel modules. The /lib/modules/<KERNEL_VERSION>/modules.dep file contains a complete list of kernel module dependencies for the respective kernel version. The dependency file is generated by the depmod program, which is a part of the kmod package. Many of the utilities provided by kmod take module dependencies into account when performing operations so that manual dependency-tracking is rarely necessary. Warning The code of kernel modules is executed in kernel-space in unrestricted mode. Because of this, you should be mindful of what modules you are loading. Additional resources For more information about /lib/modules/<KERNEL_VERSION>/modules.dep , refer to the modules.dep(5) manual page. For further details including the synopsis and options of depmod , see the depmod(8) manual page. 1.3. Listing currently-loaded modules You can list all kernel modules that are currently loaded into the kernel by running the lsmod command, for example: The lsmod output specifies three columns: Module The name of a kernel module currently loaded in memory. Size The amount of memory the kernel module uses in kilobytes. Used by A decimal number representing how many dependencies there are on the Module field. A comma-separated string of dependent Module names. Using this list, you can first unload all the modules depending on the module you want to unload. Finally, note that lsmod output is less verbose and considerably easier to read than the content of the /proc/modules pseudo-file. 1.4. Displaying information about a module You can display detailed information about a kernel module using the modinfo <MODULE_NAME> command.
Note When entering the name of a kernel module as an argument to one of the kmod utilities, do not append a .ko extension to the end of the name. Kernel module names do not have extensions; their corresponding files do. Example 1.1. Listing information about a kernel module with modinfo To display information about the e1000e module, which is the Intel PRO/1000 network driver, enter the following command as root : # modinfo e1000e filename: /lib/modules/3.10.0-121.el7.x86_64/kernel/drivers/net/ethernet/intel/e1000e/e1000e.ko version: 2.3.2-k license: GPL description: Intel(R) PRO/1000 Network Driver author: Intel Corporation, 1.5. Loading kernel modules at system runtime The optimal way to expand the functionality of the Linux kernel is by loading kernel modules. The following procedure describes how to use the modprobe command to find and load a kernel module into the currently running kernel. Prerequisites Root permissions. The kmod package is installed. The respective kernel module is not loaded. To ensure this is the case, see Listing Currently Loaded Modules . Procedure Select a kernel module you want to load. The modules are located in the /lib/modules/$(uname -r)/kernel/<SUBSYSTEM>/ directory. Load the relevant kernel module: Note When entering the name of a kernel module, do not append the .ko.xz extension to the end of the name. Kernel module names do not have extensions; their corresponding files do. Optionally, verify the relevant module was loaded: If the module was loaded correctly, this command displays the relevant kernel module. For example: Important The changes described in this procedure will not persist after rebooting the system. For information on how to load kernel modules to persist across system reboots, see Loading kernel modules automatically at system boot time . Additional resources For further details about modprobe , see the modprobe(8) manual page. 1.6. Unloading kernel modules at system runtime At times, you may find that you need to unload certain kernel modules from the running kernel. The following procedure describes how to use the modprobe command to find and unload a kernel module at system runtime from the currently loaded kernel. Prerequisites Root permissions. The kmod package is installed. Procedure Execute the lsmod command and select a kernel module you want to unload. If a kernel module has dependencies, unload those prior to unloading the kernel module. For details on identifying modules with dependencies, see Listing Currently Loaded Modules and Kernel module dependencies . Unload the relevant kernel module: When entering the name of a kernel module, do not append the .ko.xz extension to the end of the name. Kernel module names do not have extensions; their corresponding files do. Warning Do not unload kernel modules when they are used by the running system. Doing so can lead to an unstable or non-operational system. Optionally, verify the relevant module was unloaded: If the module was unloaded successfully, this command does not display any output. Important After finishing this procedure, the kernel modules that are defined to be automatically loaded on boot will not stay unloaded after rebooting the system. For information on how to counter this outcome, see Preventing kernel modules from being automatically loaded at system boot time . Additional resources For further details about modprobe , see the modprobe(8) manual page.
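The procedures in sections 1.5 and 1.6 refer to loading, verifying, and unloading a module. As a minimal sketch (with <MODULE_NAME> as a placeholder rather than a real module name), the commands typically look like the following, run as root:

# modprobe <MODULE_NAME>
# lsmod | grep <MODULE_NAME>
# modprobe -r <MODULE_NAME>

The lsmod step simply filters the loaded-module list to confirm the result of the preceding modprobe call.
1.7.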
Loading kernel modules automatically at system boot time The following procedure describes how to configure a kernel module so that it is loaded automatically during the boot process. Prerequisites Root permissions. The kmod package is installed. Procedure Select a kernel module you want to load during the boot process. The modules are located in the /lib/modules/USD(uname -r)/kernel/<SUBSYSTEM>/ directory. Create a configuration file for the module: Note When entering the name of a kernel module, do not append the .ko.xz extension to the end of the name. Kernel module names do not have extensions; their corresponding files do. Optionally, after reboot, verify the relevant module was loaded: The example command above should succeed and display the relevant kernel module. Important The changes described in this procedure will persist after rebooting the system. Additional resources For further details about loading kernel modules during the boot process, see the modules-load.d(5) manual page. 1.8. Preventing kernel modules from being automatically loaded at system boot time The following procedure describes how to add a kernel module to a denylist so that it will not be automatically loaded during the boot process. Prerequisites Root permissions. The kmod package is installed. Ensure that a kernel module in a denylist is not vital for your current system configuration. Procedure Select a kernel module that you want to put in a denylist: The lsmod command displays a list of modules loaded to the currently running kernel. Alternatively, identify an unloaded kernel module you want to prevent from potentially loading. All kernel modules are located in the /lib/modules/<KERNEL_VERSION>/kernel/<SUBSYSTEM>/ directory. Create a configuration file for a denylist: The example shows the contents of the blacklist.conf file, edited by the vim editor. The blacklist line ensures that the relevant kernel module will not be automatically loaded during the boot process. The blacklist command, however, does not prevent the module from being loaded as a dependency for another kernel module that is not in a denylist. Therefore the install line causes the /bin/false to run instead of installing a module. The lines starting with a hash sign are comments to make the file more readable. Note When entering the name of a kernel module, do not append the .ko.xz extension to the end of the name. Kernel module names do not have extensions; their corresponding files do. Create a backup copy of the current initial ramdisk image before rebuilding: The command above creates a backup initramfs image in case the new version has an unexpected problem. Alternatively, create a backup copy of other initial ramdisk image which corresponds to the kernel version for which you want to put kernel modules in a denylist: Generate a new initial ramdisk image to reflect the changes: If you are building an initial ramdisk image for a different kernel version than you are currently booted into, specify both target initramfs and kernel version: Reboot the system: Important The changes described in this procedure will take effect and persist after rebooting the system. If you improperly put a key kernel module in a denylist, you can face an unstable or non-operational system. Additional resources For further details concerning the dracut utility, refer to the dracut(8) manual page. 
For more information on preventing automatic loading of kernel modules at system boot time on Red Hat Enterprise Linux 8 and earlier versions, see How do I prevent a kernel module from loading automatically? 1.9. Signing kernel modules for secure boot Red Hat Enterprise Linux 7 includes support for the UEFI Secure Boot feature, which means that Red Hat Enterprise Linux 7 can be installed and run on systems where UEFI Secure Boot is enabled. Note that Red Hat Enterprise Linux 7 does not require the use of Secure Boot on UEFI systems. If Secure Boot is enabled, the UEFI operating system boot loaders, the Red Hat Enterprise Linux kernel, and all kernel modules must be signed with a private key and authenticated with the corresponding public key. If they are not signed and authenticated, the system will not be allowed to finish the booting process. The Red Hat Enterprise Linux 7 distribution includes: Signed boot loaders Signed kernels Signed kernel modules In addition, the signed first-stage boot loader and the signed kernel include embedded Red Hat public keys. These signed executable binaries and embedded keys enable Red Hat Enterprise Linux 7 to install, boot, and run with the Microsoft UEFI Secure Boot Certification Authority keys that are provided by the UEFI firmware on systems that support UEFI Secure Boot. Note Not all UEFI-based systems include support for Secure Boot. The information provided in the following sections describes the steps to self-sign privately built kernel modules for use with Red Hat Enterprise Linux 7 on UEFI-based build systems where Secure Boot is enabled. These sections also provide an overview of available options for importing your public key into a target system where you want to deploy your kernel modules. To sign and load kernel modules, you need to: Have the relevant utilities installed on your system . Authenticate a kernel module . Generate a public and private key pair . Import the public key on the target system . Sign the kernel module with the private key . Load the signed kernel module . 1.9.1. Prerequisites To be able to sign externally built kernel modules, install the utilities listed in the following table on the build system. Table 1.1. Required utilities Utility Provided by package Used on Purpose openssl openssl Build system Generates public and private X.509 key pair sign-file kernel-devel Build system Perl script used to sign kernel modules perl perl Build system Perl interpreter used to run the signing script mokutil mokutil Target system Optional utility used to manually enroll the public key keyctl keyutils Target system Optional utility used to display public keys in the system key ring Note The build system, where you build and sign your kernel module, does not need to have UEFI Secure Boot enabled and does not even need to be a UEFI-based system. 1.9.2. Kernel module authentication In Red Hat Enterprise Linux 7, when a kernel module is loaded, the module's signature is checked using the public X.509 keys on the kernel's system key ring, excluding keys on the kernel's system black-list key ring. The following sections provide an overview of sources of keys/keyrings, examples of loaded keys from different sources in the system. Also, the user can see what it takes to authenticate a kernel module. 1.9.2.1. Sources for public keys used to authenticate kernel modules During boot, the kernel loads X.509 keys into the system key ring or the system black-list key ring from a set of persistent key stores as shown in the table below. 
Table 1.2. Sources for system key rings Source of X.509 keys User ability to add keys UEFI Secure Boot state Keys loaded during boot Embedded in kernel No - .system_keyring UEFI Secure Boot "db" Limited Not enabled No Enabled .system_keyring UEFI Secure Boot "dbx" Limited Not enabled No Enabled .system_keyring Embedded in shim.efi boot loader No Not enabled No Enabled .system_keyring Machine Owner Key (MOK) list Yes Not enabled No Enabled .system_keyring If the system is not UEFI-based or if UEFI Secure Boot is not enabled, then only the keys that are embedded in the kernel are loaded onto the system key ring. In that case you have no ability to augment that set of keys without rebuilding the kernel. The system black list key ring is a list of X.509 keys which have been revoked. If your module is signed by a key on the black list then it will fail authentication even if your public key is in the system key ring. You can display information about the keys on the system key rings using the keyctl utility. The following is a shortened example output from a Red Hat Enterprise Linux 7 system where UEFI Secure Boot is not enabled. The following is a shortened example output from a Red Hat Enterprise Linux 7 system where UEFI Secure Boot is enabled. The above output shows the addition of two keys from the UEFI Secure Boot "db" keys as well as the Red Hat Secure Boot (CA key 1) , which is embedded in the shim.efi boot loader. You can also look for the kernel console messages that identify the keys with an UEFI Secure Boot related source. These include UEFI Secure Boot db, embedded shim, and MOK list. 1.9.2.2. Kernel module authentication requirements This section explains what conditions have to be met for loading kernel modules on systems with enabled UEFI Secure Boot functionality. If UEFI Secure Boot is enabled or if the module.sig_enforce kernel parameter has been specified, you can only load signed kernel modules that are authenticated using a key on the system key ring. In addition, the public key must not be on the system black list key ring. If UEFI Secure Boot is disabled and if the module.sig_enforce kernel parameter has not been specified, you can load unsigned kernel modules and signed kernel modules without a public key. This is summarized in the table below. Table 1.3. Kernel module authentication requirements for loading Module signed Public key found and signature valid UEFI Secure Boot state sig_enforce Module load Kernel tainted Unsigned - Not enabled Not enabled Succeeds Yes Not enabled Enabled Fails - Enabled - Fails - Signed No Not enabled Not enabled Succeeds Yes Not enabled Enabled Fails - Enabled - Fails - Signed Yes Not enabled Not enabled Succeeds No Not enabled Enabled Succeeds No Enabled - Succeeds No 1.9.3. Generating a public and private X.509 key pair You need to generate a public and private X.509 key pair to succeed in your efforts of using kernel modules on a Secure Boot-enabled system. You will later use the private key to sign the kernel module. You will also have to add the corresponding public key to the Machine Owner Key (MOK) for Secure Boot to validate the signed module. For instructions to do so, see Section 1.9.4.2, "System administrator manually adding public key to the MOK list" . Some of the parameters for this key pair generation are best specified with a configuration file. 
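The procedure below creates a short OpenSSL configuration file and runs a single openssl command, along the lines of the following sketch (condensed from the full listing at the end of this chapter; the organization name, email address, and file names are placeholders):

# configuration_file.config (excerpt)
[ req ]
default_bits = 4096
distinguished_name = req_distinguished_name
prompt = no
string_mask = utf8only
x509_extensions = myexts
[ req_distinguished_name ]
O = Example Organization
CN = Example Organization signing key
emailAddress = security@example.com
[ myexts ]
basicConstraints=critical,CA:FALSE
keyUsage=digitalSignature
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid

# Generate a DER-encoded public key and a PEM private key, valid for roughly 100 years
openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch \
    -config configuration_file.config \
    -outform DER -out my_signing_key_pub.der \
    -keyout my_signing_key.priv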
Create a configuration file with parameters for the key pair generation: Create an X.509 public and private key pair as shown in the following example: The public key will be written to the my_signing_key_pub .der file and the private key will be written to the my_signing_key .priv file. Enroll your public key on all systems where you want to authenticate and load your kernel module. For details, see Section 1.9.4, "Enrolling public key on target system" . Warning Apply strong security measures and access policies to guard the contents of your private key. In the wrong hands, the key could be used to compromise any system which is authenticated by the corresponding public key. 1.9.4. Enrolling public key on target system When Red Hat Enterprise Linux 7 boots on a UEFI-based system with Secure Boot enabled, the kernel loads onto the system key ring all public keys that are in the Secure Boot db key database, but not in the dbx database of revoked keys. The sections below describe different ways of importing a public key on a target system so that the system key ring is able to use the public key to authenticate a kernel module. 1.9.4.1. Factory firmware image including public key To facilitate authentication of your kernel module on your systems, consider requesting your system vendor to incorporate your public key into the UEFI Secure Boot key database in their factory firmware image. 1.9.4.2. System administrator manually adding public key to the MOK list The Machine Owner Key (MOK) facility feature can be used to expand the UEFI Secure Boot key database. When Red Hat Enterprise Linux 7 boots on a UEFI-enabled system with Secure Boot enabled, the keys on the MOK list are also added to the system key ring in addition to the keys from the key database. The MOK list keys are also stored persistently and securely in the same fashion as the Secure Boot database keys, but these are two separate facilities. The MOK facility is supported by shim.efi , MokManager.efi , grubx64.efi , and the Red Hat Enterprise Linux 7 mokutil utility. Enrolling a MOK key requires manual interaction by a user at the UEFI system console on each target system. Nevertheless, the MOK facility provides a convenient method for testing newly generated key pairs and testing kernel modules signed with them. To add your public key to the MOK list: Request the addition of your public key to the MOK list: You will be asked to enter and confirm a password for this MOK enrollment request. Reboot the machine. The pending MOK key enrollment request will be noticed by shim.efi and it will launch MokManager.efi to allow you to complete the enrollment from the UEFI console. Enter the password you previously associated with this request and confirm the enrollment. Your public key is added to the MOK list, which is persistent. Once a key is on the MOK list, it will be automatically propagated to the system key ring on this and subsequent boots when UEFI Secure Boot is enabled. 1.9.5. Signing kernel module with the private key Assuming you have your kernel module ready: Use a Perl script to sign your kernel module with your private key: Note The Perl script requires that you provide both the files that contain your private and the public key as well as the kernel module file that you want to sign. Your kernel module is in ELF image format and the Perl script computes and appends the signature directly to the ELF image in your kernel module file. 
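As a concrete sketch, using the illustrative key and module file names from earlier in this chapter, the signing step is a single invocation of the script shipped with the kernel sources:

# Sign my_module.ko in place; the signature is appended to the module file
perl /usr/src/kernels/$(uname -r)/scripts/sign-file sha256 \
    my_signing_key.priv my_signing_key_pub.der my_module.ko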
The modinfo utility can be used to display information about the kernel module's signature, if it is present. For information on using modinfo, see Section 1.4, "Displaying information about a module". The appended signature is not contained in an ELF image section and is not a formal part of the ELF image. Therefore, utilities such as readelf will not be able to display the signature on your kernel module. Your kernel module is now ready for loading. Note that your signed kernel module is also loadable on systems where UEFI Secure Boot is disabled or on a non-UEFI system. That means you do not need to provide both a signed and unsigned version of your kernel module. 1.9.6. Loading signed kernel module Before your signed kernel module can be loaded, your public key must be on the system key ring. Use mokutil to request that your public key is added to the MOK list, complete the enrollment at the UEFI console after a reboot, and then load your kernel module manually with the modprobe command. Optionally, verify that your kernel module will not load before you have enrolled your public key. For details on how to list currently loaded kernel modules, see Section 1.3, "Listing currently-loaded modules". Verify what keys have been added to the system key ring on the current boot: Since your public key has not been enrolled yet, it should not be displayed in the output of the command. Request enrollment of your public key: Reboot, and complete the enrollment at the UEFI console: Verify the keys on the system key ring again: Copy the module into the /extra/ directory of the kernel you want: Update the module dependency list: Load the kernel module and verify that it was successfully loaded: Optionally, to load the module on boot, add it to the /etc/modules-load.d/my_module.conf file:
[ "# yum install kmod", "# lsmod Module Size Used by tcp_lp 12663 0 bnep 19704 2 bluetooth 372662 7 bnep rfkill 26536 3 bluetooth fuse 87661 3 ebtable_broute 12731 0 bridge 110196 1 ebtable_broute stp 12976 1 bridge llc 14552 2 stp,bridge ebtable_filter 12827 0 ebtables 30913 3 ebtable_broute,ebtable_nat,ebtable_filter ip6table_nat 13015 1 nf_nat_ipv6 13279 1 ip6table_nat iptable_nat 13011 1 nf_conntrack_ipv4 14862 4 nf_defrag_ipv4 12729 1 nf_conntrack_ipv4 nf_nat_ipv4 13263 1 iptable_nat nf_nat 21798 4 nf_nat_ipv4,nf_nat_ipv6,ip6table_nat,iptable_nat [output truncated]", "modinfo e1000e filename: /lib/modules/3.10.0-121.el7.x86_64/kernel/drivers/net/ethernet/intel/e1000e/e1000e.ko version: 2.3.2-k license: GPL description: Intel(R) PRO/1000 Network Driver author: Intel Corporation,", "modprobe < MODULE_NAME >", "lsmod | grep < MODULE_NAME >", "lsmod | grep serio_raw serio_raw 16384 0", "modprobe -r < MODULE_NAME >", "lsmod | grep < MODULE_NAME >", "echo < MODULE_NAME > > /etc/modules-load.d/< MODULE_NAME >.conf", "lsmod | grep < MODULE_NAME >", "lsmod Module Size Used by fuse 126976 3 xt_CHECKSUM 16384 1 ipt_MASQUERADE 16384 1 uinput 20480 1 xt_conntrack 16384 1 ...", "vim /etc/modprobe.d/blacklist.conf # Blacklists < KERNEL_MODULE_1 > blacklist < MODULE_NAME_1 > install < MODULE_NAME_1 > /bin/false # Blacklists < KERNEL_MODULE_2 > blacklist < MODULE_NAME_2 > install < MODULE_NAME_2 > /bin/false # Blacklists < KERNEL_MODULE_n > blacklist < MODULE_NAME_n > install < MODULE_NAME_n > /bin/false ...", "cp /boot/initramfs-USD(uname -r).img /boot/initramfs-USD(uname -r).bak.USD(date +%m-%d-%H%M%S).img", "cp /boot/initramfs-< SOME_VERSION >.img /boot/initramfs-< SOME_VERSION >.img.bak.USD(date +%m-%d-%H%M%S)", "dracut -f -v", "dracut -f -v /boot/initramfs-< TARGET_VERSION >.img < CORRESPONDING_TARGET_KERNEL_VERSION >", "reboot", "keyctl list %:.system_keyring 3 keys in keyring: ...asymmetric: Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87 ...asymmetric: Red Hat Enterprise Linux kernel signing key: 4249689eefc77e95880b ...asymmetric: Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b7", "keyctl list %:.system_keyring 6 keys in keyring: ...asymmetric: Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87 ...asymmetric: Red Hat Secure Boot (CA key 1): 4016841644ce3a810408050766e8f8a29 ...asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4309bd82709c8cd54f316ed ...asymmetric: Microsoft Windows Production PCA 2011: a92902398e16c49778cd90f99e ...asymmetric: Red Hat Enterprise Linux kernel signing key: 4249689eefc77e95880b ...asymmetric: Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b7", "dmesg | grep 'EFI: Loaded cert' [5.160660] EFI: Loaded cert 'Microsoft Windows Production PCA 2011: a9290239 [5.160674] EFI: Loaded cert 'Microsoft Corporation UEFI CA 2011: 13adbf4309b [5.165794] EFI: Loaded cert 'Red Hat Secure Boot (CA key 1): 4016841644ce3a8", "cat << EOF > configuration_file.config [ req ] default_bits = 4096 distinguished_name = req_distinguished_name prompt = no string_mask = utf8only x509_extensions = myexts [ req_distinguished_name ] O = Organization CN = Organization signing key emailAddress = E-mail address [ myexts ] basicConstraints=critical,CA:FALSE keyUsage=digitalSignature subjectKeyIdentifier=hash authorityKeyIdentifier=keyid EOF", "openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -config configuration_file.config -outform DER -out my_signing_key_pub.der -keyout my_signing_key.priv", "mokutil --import 
my_signing_key_pub.der", "perl /usr/src/kernels/USD(uname -r)/scripts/sign-file sha256 my_signing_key.priv my_signing_key_pub.der my_module.ko", "keyctl list %:.system_keyring", "mokutil --import my_signing_key_pub.der", "reboot", "keyctl list %:.system_keyring", "cp my_module.ko /lib/modules/USD(uname -r)/extra/", "depmod -a", "modprobe -v my_module lsmod | grep my_module", "echo \"my_module\" > /etc/modules-load.d/my_module.conf" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/kernel_administration_guide/chap-Documentation-Kernel_Administration_Guide-Working_with_kernel_modules
23.21. A Sample Virtual Machine XML Configuration
23.21. A Sample Virtual Machine XML Configuration The following table shows a sample XML configuration of a guest virtual machine (VM), also referred to as domain XML , and explains the content of the configuration. To obtain the XML configuration of a VM, use the virsh dumpxml command. For information about editing VM configuration, see the Virtualization Getting Started Guide . Table 23.33. A Sample Domain XML Configuration Domain XML section Description <domain type='kvm'> <name>Testguest1</name> <uuid>ec6fbaa1-3eb4-49da-bf61-bb02fbec4967</uuid> <memory unit='KiB'>1048576</memory> <currentMemory unit='KiB'>1048576</currentMemory> <vcpu placement='static'>1</vcpu> This is a KVM called Testguest1 with 1024 MiB allocated RAM. For information about configuring general VM parameters, see Section 23.1, "General Information and Metadata" . <vcpu placement='static'>1</vcpu> The guest VM has 1 allocated vCPU. For information about CPU allocation, see Section 23.4, "CPU allocation" . <os> <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type> <boot dev='hd'/> </os> The machine architecture is set to AMD64 and Intel 64 architecture, and uses the Intel 440FX machine type to determine feature compatibility. The OS is booted from the hard drive. For information about modifying OS parameters, see Section 23.2, "Operating System Booting" . <features> <acpi/> <apic/> <vmport state='off'/> </features> The hypervisor features acpi and apic are disabled and the VMWare IO port is turned off. For information about modifying Hypervisor features, see - Section 23.14, "Hypervisor Features" . <cpu mode='host-passthrough' check='none'/> The guest CPU features are set to be the same as those on the host CPU. For information about modifying CPU features, see - Section 23.12, "CPU Models and Topology" . <clock offset='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> The guest's virtual hardware clock uses the UTC time zone. In addition, three different timers are set up for synchronization with the QEMU hypervisor. For information about modifying time-keeping settings, see - Section 23.15, "Timekeeping" . <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> When the VM powers off, or its OS terminates unexpectedly, libvirt terminates the guest and releases all its allocated resources. When the guest is rebooted, it is restarted with the same configuration. For more information about configuring these settings, see - Section 23.13, "Events Configuration" . <pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm> The S3 and S4 ACPI sleep states for this guest VM are disabled. " />. <devices> <emulator>/usr/bin/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/var/lib/libvirt/images/Testguest.qcow2'/> <target dev='hda' bus='ide'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdb' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> The VM uses the /usr/bin/qemu-kvm binary file for emulation. In addition, it has two disks attached. The first disk is a virtualized hard-drive based on the /var/lib/libvirt/images/Testguest.qcow2 stored on the host, and its logical device name is set to hda . 
For more information about managing disks, see - Section 23.17.1, "Hard Drives, Floppy Disks, and CD-ROMs" . <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/> </controller> <controller type='pci' index='0' model='pci-root'/> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </controller> The VM uses four controllers for attaching USB devices, and a root controller for PCI-Express (PCIe) devices. In addition, a virtio-serial controller is available, which enables the VM to interact with the host in a variety of ways, such as the serial console. For more information about configuring controllers, see - Section 23.17.3, "Controllers" . <interface type='network'> <mac address='52:54:00:65:29:21'/> <source network='default'/> <model type='rtl8139'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> A network interface is set up in the VM that uses the default virtual network and the rtl8139 network device model. For more information about configuring network interfaces, see - Section 23.17.8, "Network Interfaces" . <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> A pty serial console is set up on the VM, which enables the most rudimentary VM communication with the host. The console uses the paravirtualized SPICE channel. This is set up automatically and changing these settings is not recommended. For an overview of character devices, see - Section 23.17.8, "Network Interfaces" . For detailed information about serial ports and consoles , see Section 23.17.14, "Guest Virtual Machine Interfaces" . For more information about channels , see Section 23.17.15, "Channel" . <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> The VM uses a virtual ps2 port which is set up to receive mouse and keyboard input. This is set up automatically and changing these settings is not recommended. For more information, see Section 23.17.9, "Input Devices" . <graphics type='spice' autoport='yes'> <listen type='address'/> <image compression='off'/> </graphics> The VM uses the SPICE protocol for rendering its graphical output with auto-allocated port numbers and image compression turned off. For information about configuring graphic devices, see Section 23.17.11, "Graphical Framebuffers" . 
<sound model='ich6'> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </sound> <video> <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> An ICH6 HDA sound device is set up for the VM, and the QEMU QXL paravirtualized framebuffer device is set up as the video accelerator. This is set up automatically and changing these settings is not recommended. For information about configuring sound devices , see Section 23.17.17, "Sound Devices" . For configuring video devices , see Section 23.17.12, "Video Devices" . <redirdev bus='usb' type='spicevmc'> <address type='usb' bus='0' port='1'/> </redirdev> <redirdev bus='usb' type='spicevmc'> <address type='usb' bus='0' port='2'/> </redirdev> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </memballoon> </devices> </domain> The VM has two redirectors for attaching USB devices remotely, and memory ballooning is turned on. This is set up automatically and changing these settings is not recommended. For detailed information, see Section 23.17.6, "Redirected devices"
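For reference, the following minimal sketch shows how such a configuration is typically retrieved and edited with virsh; Testguest1 is the domain name from the sample above, and the output file name is illustrative:

# Dump the current XML configuration of the guest to a file
virsh dumpxml Testguest1 > Testguest1.xml
# Edit the persistent configuration; virsh checks the XML for errors when the editor exits
virsh edit Testguest1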
[ "<domain type='kvm'> <name>Testguest1</name> <uuid>ec6fbaa1-3eb4-49da-bf61-bb02fbec4967</uuid> <memory unit='KiB'>1048576</memory> <currentMemory unit='KiB'>1048576</currentMemory> <vcpu placement='static'>1</vcpu>", "<vcpu placement='static'>1</vcpu>", "<os> <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type> <boot dev='hd'/> </os>", "<features> <acpi/> <apic/> <vmport state='off'/> </features>", "<cpu mode='host-passthrough' check='none'/>", "<clock offset='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock>", "<on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash>", "<pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm>", "<devices> <emulator>/usr/bin/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/var/lib/libvirt/images/Testguest.qcow2'/> <target dev='hda' bus='ide'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdb' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk>", "<controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/> </controller> <controller type='pci' index='0' model='pci-root'/> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </controller>", "<interface type='network'> <mac address='52:54:00:65:29:21'/> <source network='default'/> <model type='rtl8139'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface>", "<serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel>", "<input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/>", "<graphics type='spice' autoport='yes'> <listen type='address'/> <image compression='off'/> </graphics>", "<sound model='ich6'> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </sound> <video> <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video>", "<redirdev bus='usb' type='spicevmc'> <address type='usb' bus='0' port='1'/> </redirdev> <redirdev bus='usb' type='spicevmc'> <address type='usb' bus='0' port='2'/> </redirdev> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </memballoon> </devices> </domain>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-a_sample_configuration_file
Chapter 133. KafkaConnector schema reference
Chapter 133. KafkaConnector schema reference
Property: spec. Property type: KafkaConnectorSpec. Description: The specification of the Kafka Connector.
Property: status. Property type: KafkaConnectorStatus. Description: The status of the Kafka Connector.
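For context only, a KafkaConnector resource is typically created with a manifest similar to the following sketch. The API version, connector class, cluster label, and config keys shown here are illustrative assumptions and are not part of this schema reference:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    # Kafka Connect cluster that runs this connector (assumed label)
    strimzi.io/cluster: my-connect-cluster
spec:
  # Fully qualified connector class provided by a Kafka Connect plugin (example value)
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 2
  config:
    file: "/opt/kafka/LICENSE"
    topic: my-topic

Once the resource is reconciled, the Operator populates the status section described above.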
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkaconnector-reference
Chapter 5. Cluster extensions
Chapter 5. Cluster extensions 5.1. Managing cluster extensions Important Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . After a catalog has been added to your cluster, you have access to the versions, patches, and over-the-air updates of the extensions and Operators that are published to the catalog. You can manage extensions declaratively from the CLI using custom resources (CRs). Important Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. ( OCPBUGS-36364 ) 5.1.1. Supported extensions Currently, Operator Lifecycle Manager (OLM) v1 supports installing cluster extensions that meet all of the following criteria: The extension must use the registry+v1 bundle format introduced in existing OLM. The extension must support installation via the AllNamespaces install mode. The extension must not use webhooks. The extension must not declare dependencies by using any of the following file-based catalog properties: olm.gvk.required olm.package.required olm.constraint OLM v1 checks that the extension you want to install meets these constraints. If the extension that you want to install does not meet these constraints, an error message is printed in the cluster extension's conditions. Important Operator Lifecycle Manager (OLM) v1 does not support the OperatorConditions API introduced in existing OLM. If an extension relies on only the OperatorConditions API to manage updates, the extension might not install correctly. Most extensions that rely on this API fail at start time, but some might fail during reconciliation. As a workaround, you can pin your extension to a specific version. When you want to update your extension, consult the extension's documentation to find out when it is safe to pin the extension to a new version. Additional resources Operator conditions 5.1.2. Finding Operators to install from a catalog After you add a catalog to your cluster, you can query the catalog to find Operators and extensions to install. Before you can query catalogs, you must port forward the catalog server service. Prerequisites You have added a catalog to your cluster. You have installed the jq CLI tool. Procedure Port forward the catalog server service in the openshift-catalogd namespace by running the following command: USD oc -n openshift-catalogd port-forward svc/catalogd-catalogserver 8080:443 In a new terminal window or tab, download the catalog's JSON file locally by running the following command: USD curl -L -k https://localhost:8080/catalogs/<catalog_name>/all.json \ -C - -o /<path>/<catalog_name>.json Example 5.1. Example command USD curl -L -k https://localhost:8080/catalogs/redhat-operators/all.json \ -C - -o /home/username/catalogs/rhoc.json Run one of the following commands to return a list of Operators and extensions in a catalog. 
Important Currently, Operator Lifecycle Manager (OLM) v1 supports installing cluster extensions that meet all of the following criteria: The extension must use the registry+v1 bundle format introduced in existing OLM. The extension must support installation via the AllNamespaces install mode. The extension must not use webhooks. The extension must not declare dependencies by using any of the following file-based catalog properties: olm.gvk.required olm.package.required olm.constraint OLM v1 checks that the extension you want to install meets these constraints. If the extension that you want to install does not meet these constraints, an error message is printed in the cluster extension's conditions. Get a list of all the Operators and extensions from the local catalog file by running the following command: USD jq -s '.[] | select(.schema == "olm.package") | .name' \ /<path>/<filename>.json Example 5.2. Example command USD jq -s '.[] | select(.schema == "olm.package") | .name' \ /home/username/catalogs/rhoc.json Example 5.3. Example output NAME AGE "3scale-operator" "advanced-cluster-management" "amq-broker-rhel8" "amq-online" "amq-streams" "amq7-interconnect-operator" "ansible-automation-platform-operator" "ansible-cloud-addons-operator" "apicast-operator" "aws-efs-csi-driver-operator" "aws-load-balancer-operator" "bamoe-businessautomation-operator" "bamoe-kogito-operator" "bare-metal-event-relay" "businessautomation-operator" ... Get list of packages that support AllNamespaces install mode and do not use webhooks from the local catalog file by running the following command: USD jq -c 'select(.schema == "olm.bundle") | \ {"package":.package, "version":.properties[] | \ select(.type == "olm.bundle.object").value.data | @base64d | fromjson | \ select(.kind == "ClusterServiceVersion" and (.spec.installModes[] | \ select(.type == "AllNamespaces" and .supported == true) != null) \ and .spec.webhookdefinitions == null).spec.version}' \ /<path>/<catalog_name>.json Example 5.4. Example output {"package":"3scale-operator","version":"0.10.0-mas"} {"package":"3scale-operator","version":"0.10.5"} {"package":"3scale-operator","version":"0.11.0-mas"} {"package":"3scale-operator","version":"0.11.1-mas"} {"package":"3scale-operator","version":"0.11.2-mas"} {"package":"3scale-operator","version":"0.11.3-mas"} {"package":"3scale-operator","version":"0.11.5-mas"} {"package":"3scale-operator","version":"0.11.6-mas"} {"package":"3scale-operator","version":"0.11.7-mas"} {"package":"3scale-operator","version":"0.11.8-mas"} {"package":"amq-broker-rhel8","version":"7.10.0-opr-1"} {"package":"amq-broker-rhel8","version":"7.10.0-opr-2"} {"package":"amq-broker-rhel8","version":"7.10.0-opr-3"} {"package":"amq-broker-rhel8","version":"7.10.0-opr-4"} {"package":"amq-broker-rhel8","version":"7.10.1-opr-1"} {"package":"amq-broker-rhel8","version":"7.10.1-opr-2"} {"package":"amq-broker-rhel8","version":"7.10.2-opr-1"} {"package":"amq-broker-rhel8","version":"7.10.2-opr-2"} ... Inspect the contents of an Operator or extension's metadata by running the following command: USD jq -s '.[] | select( .schema == "olm.package") | \ select( .name == "<package_name>")' /<path>/<catalog_name>.json Example 5.5. Example command USD jq -s '.[] | select( .schema == "olm.package") | \ select( .name == "openshift-pipelines-operator-rh")' \ /home/username/rhoc.json Example 5.6. Example output { "defaultChannel": "stable", "icon": { "base64data": "PHN2ZyB4bWxu..." 
"mediatype": "image/png" }, "name": "openshift-pipelines-operator-rh", "schema": "olm.package" } 5.1.2.1. Common catalog queries You can query catalogs by using the jq CLI tool. Table 5.1. Common package queries Query Request Available packages in a catalog USD jq -s '.[] | select( .schema == "olm.package") | \ .name' <catalog_name>.json Packages that support AllNamespaces install mode and do not use webhooks USD jq -c 'select(.schema == "olm.bundle") | \ {"package":.package, "version":.properties[] | \ select(.type == "olm.bundle.object").value.data | \ @base64d | fromjson | \ select(.kind == "ClusterServiceVersion" and (.spec.installModes[] | \ select(.type == "AllNamespaces" and .supported == true) != null) \ and .spec.webhookdefinitions == null).spec.version}' \ <catalog_name>.json Package metadata USD jq -s '.[] | select( .schema == "olm.package") | \ select( .name == "<package_name>")' <catalog_name>.json Catalog blobs in a package USD jq -s '.[] | select( .package == "<package_name>")' \ <catalog_name>.json Table 5.2. Common channel queries Query Request Channels in a package USD jq -s '.[] | select( .schema == "olm.channel" ) | \ select( .package == "<package_name>") | .name' \ <catalog_name>.json Versions in a channel USD jq -s '.[] | select( .package == "<package_name>" ) | \ select( .schema == "olm.channel" ) | \ select( .name == "<channel_name>" ) | \ .entries | .[] | .name' <catalog_name>.json Latest version in a channel Upgrade path USD jq -s '.[] | select( .schema == "olm.channel" ) | \ select ( .name == "<channel>") | \ select( .package == "<package_name>")' \ <catalog_name>.json Table 5.3. Common bundle queries Query Request Bundles in a package USD jq -s '.[] | select( .schema == "olm.bundle" ) | \ select( .package == "<package_name>") | .name' \ <catalog_name>.json Bundle dependencies Available APIs USD jq -s '.[] | select( .schema == "olm.bundle" ) | \ select ( .name == "<bundle_name>") | \ select( .package == "<package_name>")' \ <catalog_name>.json 5.1.3. Creating a service account to manage cluster extensions Unlike existing Operator Lifecycle Manager (OLM), OLM v1 does not have permissions to install, update, and manage cluster extensions. Cluster administrators must create a service account and assign the role-based access controls (RBAC) required to install, update, and manage cluster extensions. Important There is a known issue in OLM v1. If you do not assign the correct role-based access controls (RBAC) to an extension's service account, OLM v1 gets stuck and reconciliation stops. Currently, OLM v1 does not have tools to help extension administrators find the correct RBAC for a service account. Because OLM v1 is a Technology Preview feature and must not be used on production clusters, you can avoid this issue by using the more permissive RBAC included in the documentation. This RBAC is intended for testing purposes only. Do not use it on production clusters. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Create a service account, similar to the following example: apiVersion: v1 kind: ServiceAccount metadata: name: <extension>-installer namespace: <namespace> Example 5.7. 
Example extension-service-account.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: pipelines-installer namespace: pipelines Apply the service account by running the following command: USD oc apply -f extension-service-account.yaml Create a cluster role and assign RBAC, similar to the following example: Warning The following cluster role does not follow the principle of least privilege. This cluster role is intended for testing purposes only. Do not use it on production clusters. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <extension>-installer-clusterrole rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] Example 5.8. Example pipelines-cluster-role.yaml file apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-installer-clusterrole rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] Add the cluster role to the cluster by running the following command: USD oc apply -f pipelines-role.yaml Bind the permissions granted by the cluster role to the service account by creating a cluster role binding, similar to the following example: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <extension>-installer-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <extension>-installer-clusterrole subjects: - kind: ServiceAccount name: <extension>-installer namespace: <namespace> Example 5.9. Example pipelines-cluster-role-binding.yaml file apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: pipelines-installer-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: pipelines-installer-clusterrole subjects: - kind: ServiceAccount name: pipelines-installer namespace: pipelines Apply the cluster role binding by running the following command: USD oc apply -f pipelines-cluster-role-binding.yaml 5.1.4. Installing a cluster extension from a catalog You can install an extension from a catalog by creating a custom resource (CR) and applying it to the cluster. Operator Lifecycle Manager (OLM) v1 supports installing cluster extensions, including existing OLM Operators via the registry+v1 bundle format, that are scoped to the cluster. For more information, see Supported extensions . Important Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. ( OCPBUGS-36364 ) Prerequisites You have added a catalog to your cluster. You have downloaded a local copy of the catalog file. You have installed the jq CLI tool. You have created a service account and assigned enough role-based access controls (RBAC) to install, update, and manage the extension you want to install. For more information, see Creating a service account . Procedure Inspect a package for channel and version information from a local copy of your catalog file by completing the following steps: Get a list of channels from a selected package by running the following command: USD jq -s '.[] | select( .schema == "olm.channel" ) | \ select( .package == "<package_name>") | \ .name' /<path>/<catalog_name>.json Example 5.10. Example command USD jq -s '.[] | select( .schema == "olm.channel" ) | \ select( .package == "openshift-pipelines-operator-rh") | \ .name' /home/username/rhoc.json Example 5.11. 
Example output "latest" "pipelines-1.11" "pipelines-1.12" "pipelines-1.13" "pipelines-1.14" Get a list of the versions published in a channel by running the following command: USD jq -s '.[] | select( .package == "<package_name>" ) | \ select( .schema == "olm.channel" ) | \ select( .name == "<channel_name>" ) | .entries | \ .[] | .name' /<path>/<catalog_name>.json Example 5.12. Example command USD jq -s '.[] | select( .package == "openshift-pipelines-operator-rh" ) | \ select( .schema == "olm.channel" ) | select( .name == "latest" ) | \ .entries | .[] | .name' /home/username/rhoc.json Example 5.13. Example output "openshift-pipelines-operator-rh.v1.12.0" "openshift-pipelines-operator-rh.v1.12.1" "openshift-pipelines-operator-rh.v1.12.2" "openshift-pipelines-operator-rh.v1.13.0" "openshift-pipelines-operator-rh.v1.13.1" "openshift-pipelines-operator-rh.v1.11.1" "openshift-pipelines-operator-rh.v1.12.0" "openshift-pipelines-operator-rh.v1.12.1" "openshift-pipelines-operator-rh.v1.12.2" "openshift-pipelines-operator-rh.v1.13.0" "openshift-pipelines-operator-rh.v1.14.1" "openshift-pipelines-operator-rh.v1.14.2" "openshift-pipelines-operator-rh.v1.14.3" "openshift-pipelines-operator-rh.v1.14.4" If you want to install your extension into a new namespace, run the following command: USD oc adm new-project <new_namespace> Create a CR, similar to the following example: Example pipelines-operator.yaml CR apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> serviceAccount: name: <service_account> channel: <channel> version: "<version>" where: <namespace> Specifies the namespace where you want the bundle installed, such as pipelines or my-extension . Extensions are still cluster-scoped and might contain resources that are installed in different namespaces. <service_account> Specifies the name of the service account you created to install, update, and manage your extension. <channel> Optional: Specifies the channel, such as pipelines-1.11 or latest , for the package you want to install or update. <version> Optional: Specifies the version or version range, such as 1.11.1 , 1.12.x , or >=1.12.1 , of the package you want to install or update. For more information, see "Example custom resources (CRs) that specify a target version" and "Support for version ranges". Important If you try to install an Operator or extension that does not have unique name, the installation might fail or lead to an unpredictable result. This occurs for the following reasons: If mulitple catalogs are installed on a cluster, Operator Lifecycle Manager (OLM) v1 does not include a mechanism to specify a catalog when you install an Operator or extension. OLM v1 requires that all of the Operators and extensions that are available to install on a cluster use a unique name for their bundles and packages. Apply the CR to the cluster by running the following command: USD oc apply -f pipeline-operator.yaml Example output clusterextension.olm.operatorframework.io/pipelines-operator created Verification View the Operator or extension's CR in the YAML format by running the following command: USD oc get clusterextension pipelines-operator -o yaml Example 5.14. 
Example output apiVersion: v1 items: - apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"olm.operatorframework.io/v1alpha1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","installNamespace":"pipelines","packageName":"openshift-pipelines-operator-rh","serviceAccount":{"name":"pipelines-installer"},"pollInterval":"30m"}} creationTimestamp: "2024-06-10T17:50:51Z" finalizers: - olm.operatorframework.io/cleanup-unpack-cache generation: 1 name: pipelines-operator resourceVersion: "53324" uid: c54237be-cde4-46d4-9b31-d0ec6acc19bf spec: channel: latest installNamespace: pipelines packageName: openshift-pipelines-operator-rh serviceAccount: name: pipelines-installer upgradeConstraintPolicy: Enforce status: conditions: - lastTransitionTime: "2024-06-10T17:50:58Z" message: resolved to "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec" observedGeneration: 1 reason: Success status: "True" type: Resolved - lastTransitionTime: "2024-06-10T17:51:11Z" message: installed from "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec" observedGeneration: 1 reason: Success status: "True" type: Installed - lastTransitionTime: "2024-06-10T17:50:58Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: Deprecated - lastTransitionTime: "2024-06-10T17:50:58Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: PackageDeprecated - lastTransitionTime: "2024-06-10T17:50:58Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: ChannelDeprecated - lastTransitionTime: "2024-06-10T17:50:58Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: BundleDeprecated - lastTransitionTime: "2024-06-10T17:50:58Z" message: 'unpack successful: observedGeneration: 1 reason: UnpackSuccess status: "True" type: Unpacked installedBundle: name: openshift-pipelines-operator-rh.v1.14.4 version: 1.14.4 resolvedBundle: name: openshift-pipelines-operator-rh.v1.14.4 version: 1.14.4 where: spec.channel Displays the channel defined in the CR of the extension. spec.version Displays the version or version range defined in the CR of the extension. status.conditions Displays information about the status and health of the extension. type: Deprecated Displays whether one or more of following are deprecated: type: PackageDeprecated Displays whether the resolved package is deprecated. type: ChannelDeprecated Displays whether the resolved channel is deprecated. type: BundleDeprecated Displays whether the resolved bundle is deprecated. The value of False in the status field indicates that the reason: Deprecated condition is not deprecated. The value of True in the status field indicates that the reason: Deprecated condition is deprecated. installedBundle.name Displays the name of the bundle installed. installedBundle.version Displays the version of the bundle installed. resolvedBundle.name Displays the name of the resolved bundle. resolvedBundle.version Displays the version of the resolved bundle. Additional resources Supported extensions Creating a service account Example custom resources (CRs) that specify a target version Support for version ranges 5.1.5. 
Updating a cluster extension You can update your cluster extension or Operator by manually editing the custom resource (CR) and applying the changes. Prerequisites You have a catalog installed. You have downloaded a local copy of the catalog file. You have an Operator or extension installed. You have installed the jq CLI tool. Procedure Inspect a package for channel and version information from a local copy of your catalog file by completing the following steps: Get a list of channels from a selected package by running the following command: USD jq -s '.[] | select( .schema == "olm.channel" ) | \ select( .package == "<package_name>") | \ .name' /<path>/<catalog_name>.json Example 5.15. Example command USD jq -s '.[] | select( .schema == "olm.channel" ) | \ select( .package == "openshift-pipelines-operator-rh") | \ .name' /home/username/rhoc.json Example 5.16. Example output "latest" "pipelines-1.11" "pipelines-1.12" "pipelines-1.13" "pipelines-1.14" Get a list of the versions published in a channel by running the following command: USD jq -s '.[] | select( .package == "<package_name>" ) | \ select( .schema == "olm.channel" ) | \ select( .name == "<channel_name>" ) | .entries | \ .[] | .name' /<path>/<catalog_name>.json Example 5.17. Example command USD jq -s '.[] | select( .package == "openshift-pipelines-operator-rh" ) | \ select( .schema == "olm.channel" ) | select( .name == "latest" ) | \ .entries | .[] | .name' /home/username/rhoc.json Example 5.18. Example output "openshift-pipelines-operator-rh.v1.11.1" "openshift-pipelines-operator-rh.v1.12.0" "openshift-pipelines-operator-rh.v1.12.1" "openshift-pipelines-operator-rh.v1.12.2" "openshift-pipelines-operator-rh.v1.13.0" "openshift-pipelines-operator-rh.v1.14.1" "openshift-pipelines-operator-rh.v1.14.2" "openshift-pipelines-operator-rh.v1.14.3" "openshift-pipelines-operator-rh.v1.14.4" Find out what version or channel is specified in your Operator or extension's CR by running the following command: USD oc get clusterextension <operator_name> -o yaml Example command USD oc get clusterextension pipelines-operator -o yaml Example 5.19. 
Example output apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"olm.operatorframework.io/v1alpha1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","installNamespace":"openshift-operators","packageName":"openshift-pipelines-operator-rh","pollInterval":"30m","version":"\u003c1.12"}} creationTimestamp: "2024-06-11T15:55:37Z" generation: 1 name: pipelines-operator resourceVersion: "69776" uid: 6a11dff3-bfa3-42b8-9e5f-d8babbd6486f spec: channel: latest installNamespace: openshift-operators packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: <1.12 status: conditions: - lastTransitionTime: "2024-06-11T15:56:09Z" message: installed from "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280" observedGeneration: 1 reason: Success status: "True" type: Installed - lastTransitionTime: "2024-06-11T15:55:50Z" message: resolved to "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280" observedGeneration: 1 reason: Success status: "True" type: Resolved - lastTransitionTime: "2024-06-11T15:55:50Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: Deprecated - lastTransitionTime: "2024-06-11T15:55:50Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: PackageDeprecated - lastTransitionTime: "2024-06-11T15:55:50Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: ChannelDeprecated - lastTransitionTime: "2024-06-11T15:55:50Z" message: "" observedGeneration: 1 reason: Deprecated status: "False" type: BundleDeprecated installedBundle: name: openshift-pipelines-operator-rh.v1.11.1 version: 1.11.1 resolvedBundle: name: openshift-pipelines-operator-rh.v1.11.1 version: 1.11.1 Edit your CR by using one of the following methods: If you want to pin your Operator or extension to specific version, such as 1.12.1 , edit your CR similar to the following example: Example pipelines-operator.yaml CR apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> version: "1.12.1" 1 1 Update the version from 1.11.1 to 1.12.1 If you want to define a range of acceptable update versions, edit your CR similar to the following example: Example CR with a version range specified apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> version: ">1.11.1, <1.13" 1 1 Specifies that the desired version range is greater than version 1.11.1 and less than 1.13 . For more information, see "Support for version ranges" and "Version comparison strings". If you want to update to the latest version that can be resolved from a channel, edit your CR similar to the following example: Example CR with a specified channel apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> channel: pipelines-1.13 1 1 Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed. 
If you want to specify a channel and version or version range, edit your CR similar to the following example: Example CR with a specified channel and version range apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> channel: latest version: "<1.13" For more information, see "Example custom resources (CRs) that specify a target version". Apply the update to the cluster by running the following command: USD oc apply -f pipelines-operator.yaml Example output clusterextension.olm.operatorframework.io/pipelines-operator configured Tip You can patch and apply the changes to your CR from the CLI by running the following command: USD oc patch clusterextension/pipelines-operator -p \ '{"spec":{"version":"<1.13"}}' \ --type=merge Example output clusterextension.olm.operatorframework.io/pipelines-operator patched Verification Verify that the channel and version updates have been applied by running the following command: USD oc get clusterextension pipelines-operator -o yaml Example 5.20. Example output apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"olm.operatorframework.io/v1alpha1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","installNamespace":"openshift-operators","packageName":"openshift-pipelines-operator-rh","pollInterval":"30m","version":"\u003c1.13"}} creationTimestamp: "2024-06-11T18:23:26Z" generation: 2 name: pipelines-operator resourceVersion: "66310" uid: ce0416ba-13ea-4069-a6c8-e5efcbc47537 spec: channel: latest installNamespace: openshift-operators packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: <1.13 status: conditions: - lastTransitionTime: "2024-06-11T18:23:33Z" message: resolved to "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:814742c8a7cc7e2662598e114c35c13993a7b423cfe92548124e43ea5d469f82" observedGeneration: 2 reason: Success status: "True" type: Resolved - lastTransitionTime: "2024-06-11T18:23:52Z" message: installed from "registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:814742c8a7cc7e2662598e114c35c13993a7b423cfe92548124e43ea5d469f82" observedGeneration: 2 reason: Success status: "True" type: Installed - lastTransitionTime: "2024-06-11T18:23:33Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: Deprecated - lastTransitionTime: "2024-06-11T18:23:33Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: PackageDeprecated - lastTransitionTime: "2024-06-11T18:23:33Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: ChannelDeprecated - lastTransitionTime: "2024-06-11T18:23:33Z" message: "" observedGeneration: 2 reason: Deprecated status: "False" type: BundleDeprecated installedBundle: name: openshift-pipelines-operator-rh.v1.12.2 version: 1.12.2 resolvedBundle: name: openshift-pipelines-operator-rh.v1.12.2 version: 1.12.2 Troubleshooting If you specify a target version or channel that is deprecated or does not exist, you can run the following command to check the status of your extension: USD oc get clusterextension <operator_name> -o yaml Example 5.21. 
Example output for a version that does not exist apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"olm.operatorframework.io/v1alpha1","kind":"ClusterExtension","metadata":{"annotations":{},"name":"pipelines-operator"},"spec":{"channel":"latest","installNamespace":"openshift-operators","packageName":"openshift-pipelines-operator-rh","pollInterval":"30m","version":"3.0"}} creationTimestamp: "2024-06-11T18:23:26Z" generation: 3 name: pipelines-operator resourceVersion: "71852" uid: ce0416ba-13ea-4069-a6c8-e5efcbc47537 spec: channel: latest installNamespace: openshift-operators packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: "3.0" status: conditions: - lastTransitionTime: "2024-06-11T18:29:02Z" message: 'error upgrading from currently installed version "1.12.2": no package "openshift-pipelines-operator-rh" matching version "3.0" found in channel "latest"' observedGeneration: 3 reason: ResolutionFailed status: "False" type: Resolved - lastTransitionTime: "2024-06-11T18:29:02Z" message: installation has not been attempted as resolution failed observedGeneration: 3 reason: InstallationStatusUnknown status: Unknown type: Installed - lastTransitionTime: "2024-06-11T18:29:02Z" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: Deprecated - lastTransitionTime: "2024-06-11T18:29:02Z" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: PackageDeprecated - lastTransitionTime: "2024-06-11T18:29:02Z" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: ChannelDeprecated - lastTransitionTime: "2024-06-11T18:29:02Z" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: BundleDeprecated Additional resources Upgrade edges 5.1.6. Deleting an Operator You can delete an Operator and its custom resource definitions (CRDs) by deleting the ClusterExtension custom resource (CR). Prerequisites You have a catalog installed. You have an Operator installed. Procedure Delete an Operator and its CRDs by running the following command: USD oc delete clusterextension <operator_name> Example output clusterextension.olm.operatorframework.io "<operator_name>" deleted Verification Run the following commands to verify that your Operator and its resources were deleted: Verify the Operator is deleted by running the following command: USD oc get clusterextensions Example output No resources found Verify that the Operator's system namespace is deleted by running the following command: USD oc get ns <operator_name>-system Example output Error from server (NotFound): namespaces "<operator_name>-system" not found 5.2. Upgrade edges Important Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . When determining upgrade edges, also known as upgrade paths or upgrade constraints, for an installed cluster extension, Operator Lifecycle Manager (OLM) v1 supports existing OLM semantics starting in OpenShift Container Platform 4.16. This support follows the behavior from existing OLM, including replaces , skips , and skipRange directives, with a few noted differences. By supporting existing OLM semantics, OLM v1 now honors the upgrade graph from catalogs accurately. Important Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate with private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. ( OCPBUGS-36364 ) Differences from the existing OLM implementation If there are multiple possible successors, OLM v1 behavior differs in the following ways: In existing OLM, the successor closest to the channel head is chosen. In OLM v1, the successor with the highest semantic version (semver) is chosen. Consider the following set of file-based catalog (FBC) channel entries: # ... - name: example.v3.0.0 skips: ["example.v2.0.0"] - name: example.v2.0.0 skipRange: >=1.0.0 <2.0.0 If 1.0.0 is installed, OLM v1 behavior differs in the following ways: Existing OLM does not detect an upgrade edge to v2.0.0 because v2.0.0 is skipped and is not on the replaces chain. OLM v1 detects the upgrade edge because OLM v1 does not have a concept of a replaces chain. OLM v1 finds all entries that have a replace , skip , or skipRange value that covers the currently installed version. Additional resources Existing OLM upgrade semantics 5.2.1. Support for version ranges In Operator Lifecycle Manager (OLM) v1, you can specify a version range by using a comparison string in an Operator or extension's custom resource (CR). If you specify a version range in the CR, OLM v1 installs or updates to the latest version of the Operator that can be resolved within the version range. Resolved version workflow The resolved version is the latest version of the Operator that satisfies the constraints of the Operator and the environment. An Operator update within the specified range is automatically installed if it is resolved successfully. An update is not installed if it is outside of the specified range or if it cannot be resolved successfully. 5.2.2. Version comparison strings You can define a version range by adding a comparison string to the spec.version field in an Operator or extension's custom resource (CR). A comparison string is a list of space- or comma-separated values and one or more comparison operators enclosed in double quotation marks ( " ). You can add another comparison string by including an OR , or double vertical bar ( || ), comparison operator between the strings. Table 5.4.
Basic comparisons Comparison operator Definition = Equal to != Not equal to > Greater than < Less than >= Greater than or equal to <= Less than or equal to You can specify a version range in an Operator or extension's CR by using a range comparison similar to the following example: Example version range comparison apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> version: ">=1.11, <1.13" You can use wildcard characters in all types of comparison strings. OLM v1 accepts x , X , and asterisks ( * ) as wildcard characters. When you use a wildcard character with the equal sign ( = ) comparison operator, you define a comparison at the patch or minor version level. Table 5.5. Example wildcard characters in comparison strings Wildcard comparison Matching string 1.11.x >=1.11.0, <1.12.0 >=1.12.X >=1.12.0 <=2.x <3 * >=0.0.0 You can make patch release comparisons by using the tilde ( ~ ) comparison operator. Patch release comparisons specify a minor version up to the major version. Table 5.6. Example patch release comparisons Patch release comparison Matching string ~1.11.0 >=1.11.0, <1.12.0 ~1 >=1, <2 ~1.12 >=1.12, <1.13 ~1.12.x >=1.12.0, <1.13.0 ~1.x >=1, <2 You can use the caret ( ^ ) comparison operator to make a comparison for a major release. If you make a major release comparison before the first stable release is published, the minor versions define the API's level of stability. In the semantic versioning (semver) specification, the first stable release is published as the 1.0.0 version. Table 5.7. Example major release comparisons Major release comparison Matching string ^0 >=0.0.0, <1.0.0 ^0.0 >=0.0.0, <0.1.0 ^0.0.3 >=0.0.3, <0.0.4 ^0.2 >=0.2.0, <0.3.0 ^0.2.3 >=0.2.3, <0.3.0 ^1.2.x >= 1.2.0, < 2.0.0 ^1.2.3 >= 1.2.3, < 2.0.0 ^2.x >= 2.0.0, < 3 ^2.3 >= 2.3, < 3 5.2.3. Example custom resources (CRs) that specify a target version In Operator Lifecycle Manager (OLM) v1, cluster administrators can declaratively set the target version of an Operator or extension in the custom resource (CR). You can define a target version by specifying any of the following fields: Channel Version number Version range If you specify a channel in the CR, OLM v1 installs the latest version of the Operator or extension that can be resolved within the specified channel. When updates are published to the specified channel, OLM v1 automatically updates to the latest release that can be resolved from the channel. Example CR with a specified channel apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> channel: latest 1 1 Installs the latest release that can be resolved from the specified channel. Updates to the channel are automatically installed. If you specify the Operator or extension's target version in the CR, OLM v1 installs the specified version. When the target version is specified in the CR, OLM v1 does not change the target version when updates are published to the catalog. If you want to update the version of the Operator that is installed on the cluster, you must manually edit the Operator's CR. Specifying an Operator's target version pins the Operator's version to the specified release. 
Example CR with the target version specified apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> version: "1.11.1" 1 1 Specifies the target version. If you want to update the version of the Operator or extension that is installed, you must manually update this field in the CR to the desired target version. If you want to define a range of acceptable versions for an Operator or extension, you can specify a version range by using a comparison string. When you specify a version range, OLM v1 installs the latest version of an Operator or extension that can be resolved by the Operator Controller. Example CR with a version range specified apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> version: ">1.11.1" 1 1 Specifies that the desired version range is greater than version 1.11.1 . For more information, see "Support for version ranges". After you create or update a CR, apply the configuration file by running the following command: Command syntax USD oc apply -f <extension_name>.yaml 5.2.4. Forcing an update or rollback OLM v1 does not support automatic updates to the major version or rollbacks to an earlier version. If you want to perform a major version update or rollback, you must verify and force the update manually. Warning You must verify the consequences of forcing a manual update or rollback. Failure to verify a forced update or rollback might have catastrophic consequences such as data loss. Prerequisites You have a catalog installed. You have an Operator or extension installed. You have created a service account and assigned enough role-based access controls (RBAC) to install, update, and manage the extension you want to install. For more information, see Creating a service account . Procedure Edit the custom resource (CR) of your Operator or extension as shown in the following example: Example CR apiVersion: olm.operatorframework.io/v1alpha1 kind: Operator metadata: name: <operator_name> 1 spec: packageName: <package_name> 2 installNamespace: <namespace_name> serviceAccount: name: <service_account> version: <version> 3 upgradeConstraintPolicy: Ignore 4 1 Specifies the name of the Operator or extension, such as pipelines-operator . 2 Specifies the package name, such as openshift-pipelines-operator-rh . 3 Specifies the blocked update or rollback version. 4 Optional: Specifies the upgrade constraint policy. To force an update or rollback, set the field to Ignore . If unspecified, the default setting is Enforce . Apply the changes to your Operator or extension's CR by running the following command: USD oc apply -f <extension_name>.yaml Additional resources Support for version ranges 5.3. Custom resource definition (CRD) upgrade safety Important Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . When you update a custom resource definition (CRD) that is provided by a cluster extension, Operator Lifecycle Manager (OLM) v1 runs a CRD upgrade safety preflight check to ensure backwards compatibility with versions of that CRD. The CRD update must pass the validation checks before the change is allowed to progress on a cluster. Additional resources Updating a cluster extension 5.3.1. Prohibited CRD upgrade changes The following changes to an existing custom resource definition (CRD) are caught by the CRD upgrade safety preflight check and prevent the upgrade: A new required field is added to an existing version of the CRD An existing field is removed from an existing version of the CRD An existing field type is changed in an existing version of the CRD A new default value is added to a field that did not previously have a default value The default value of a field is changed An existing default value of a field is removed New enum restrictions are added to an existing field which did not previously have enum restrictions Existing enum values from an existing field are removed The minimum value of an existing field is increased in an existing version The maximum value of an existing field is decreased in an existing version Minimum or maximum field constraints are added to a field that did not previously have constraints Note The rules for changes to minimum and maximum values apply to minimum , minLength , minProperties , minItems , maximum , maxLength , maxProperties , and maxItems constraints. The following changes to an existing CRD are reported by the CRD upgrade safety preflight check and prevent the upgrade, though the operations are technically handled by the Kubernetes API server: The scope changes from Cluster to Namespace or from Namespace to Cluster An existing stored version of the CRD is removed If the CRD upgrade safety preflight check encounters one of the prohibited upgrade changes, it logs an error for each prohibited change detected in the CRD upgrade. Tip In cases where a change to the CRD does not fall into one of the prohibited change categories, but is also unable to be properly detected as allowed, the CRD upgrade safety preflight check will prevent the upgrade and log an error for an "unknown change". 5.3.2. Allowed CRD upgrade changes The following changes to an existing custom resource definition (CRD) are safe for backwards compatibility and will not cause the CRD upgrade safety preflight check to halt the upgrade: Adding new enum values to the list of allowed enum values in a field An existing required field is changed to optional in an existing version The minimum value of an existing field is decreased in an existing version The maximum value of an existing field is increased in an existing version A new version of the CRD is added with no modifications to existing versions 5.3.3. Disabling CRD upgrade safety preflight check The custom resource definition (CRD) upgrade safety preflight check can be disabled by adding the preflight.crdUpgradeSafety.disabled field with a value of true to the ClusterExtension object that provides the CRD. Warning Disabling the CRD upgrade safety preflight check could break backwards compatibility with stored versions of the CRD and cause other unintended consequences on the cluster. You cannot disable individual field validators. 
If you disable the CRD upgrade safety preflight check, all field validators are disabled. Note The following checks are handled by the Kubernetes API server: The scope changes from Cluster to Namespace or from Namespace to Cluster An existing stored version of the CRD is removed After disabling the CRD upgrade safety preflight check via Operator Lifecycle Manager (OLM) v1, these two operations are still prevented by Kubernetes. Prerequisites You have a cluster extension installed. Procedure Edit the ClusterExtension object of the CRD: USD oc edit clusterextension <clusterextension_name> Set the preflight.crdUpgradeSafety.disabled field to true : Example 5.22. Example ClusterExtension object apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: clusterextension-sample spec: installNamespace: default packageName: argocd-operator version: 0.6.0 preflight: crdUpgradeSafety: disabled: true 1 1 Set to true . 5.3.4. Examples of unsafe CRD changes The following examples demonstrate specific changes to sections of an example custom resource definition (CRD) that would be caught by the CRD upgrade safety preflight check. For the following examples, consider a CRD object in the following starting state: Example 5.23. Example CRD object apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: controller-gen.kubebuilder.io/version: v0.13.0 name: example.test.example.com spec: group: test.example.com names: kind: Sample listKind: SampleList plural: samples singular: sample scope: Namespaced versions: - name: v1alpha1 schema: openAPIV3Schema: properties: apiVersion: type: string kind: type: string metadata: type: object spec: type: object status: type: object pollInterval: type: string type: object served: true storage: true subresources: status: {} 5.3.4.1. Scope change In the following custom resource definition (CRD) example, the scope field is changed from Namespaced to Cluster : Example 5.24. Example scope change in a CRD spec: group: test.example.com names: kind: Sample listKind: SampleList plural: samples singular: sample scope: Cluster versions: - name: v1alpha1 Example 5.25. Example error output validating upgrade for CRD "test.example.com" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. "NoScopeChange" validation failed: scope changed from "Namespaced" to "Cluster" 5.3.4.2. Removal of a stored version In the following custom resource definition (CRD) example, the existing stored version, v1alpha1 , is removed: Example 5.26. Example removal of a stored version in a CRD versions: - name: v1alpha2 schema: openAPIV3Schema: properties: apiVersion: type: string kind: type: string metadata: type: object spec: type: object status: type: object pollInterval: type: string type: object Example 5.27. Example error output validating upgrade for CRD "test.example.com" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. "NoStoredVersionRemoved" validation failed: stored version "v1alpha1" removed 5.3.4.3. Removal of an existing field In the following custom resource definition (CRD) example, the pollInterval property field is removed from the v1alpha1 schema: Example 5.28. Example removal of an existing field in a CRD versions: - name: v1alpha1 schema: openAPIV3Schema: properties: apiVersion: type: string kind: type: string metadata: type: object spec: type: object status: type: object type: object Example 5.29. 
Example error output validating upgrade for CRD "test.example.com" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. "NoExistingFieldRemoved" validation failed: crd/test.example.com version/v1alpha1 field/^.spec.pollInterval may not be removed 5.3.4.4. Addition of a required field In the following custom resource definition (CRD) example, the pollInterval property has been changed to a required field: Example 5.30. Example addition of a required field in a CRD versions: - name: v1alpha2 schema: openAPIV3Schema: properties: apiVersion: type: string kind: type: string metadata: type: object spec: type: object status: type: object pollInterval: type: string type: object required: - pollInterval Example 5.31. Example error output validating upgrade for CRD "test.example.com" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. "ChangeValidator" validation failed: version "v1alpha1", field "^": new required fields added: [pollInterval]
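For contrast with the preceding examples of prohibited changes, the following sketch shows one of the changes listed in "Allowed CRD upgrade changes": a new version is added to the example CRD with no modifications to the existing v1alpha1 version. This snippet is illustrative only and is not taken from a shipped product manifest; the field layout simply mirrors the example CRD object shown earlier.

versions:
- name: v1alpha1  # existing version, unchanged, still served and stored
  served: true
  storage: true
  schema:
    openAPIV3Schema:
      properties:
        spec:
          type: object
        pollInterval:
          type: string
      type: object
- name: v1alpha2  # new version added alongside v1alpha1; passes the preflight check
  served: true
  storage: false
  schema:
    openAPIV3Schema:
      properties:
        spec:
          type: object
        pollInterval:
          type: string
      type: object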
[ "oc -n openshift-catalogd port-forward svc/catalogd-catalogserver 8080:443", "curl -L -k https://localhost:8080/catalogs/<catalog_name>/all.json -C - -o /<path>/<catalog_name>.json", "curl -L -k https://localhost:8080/catalogs/redhat-operators/all.json -C - -o /home/username/catalogs/rhoc.json", "jq -s '.[] | select(.schema == \"olm.package\") | .name' /<path>/<filename>.json", "jq -s '.[] | select(.schema == \"olm.package\") | .name' /home/username/catalogs/rhoc.json", "NAME AGE \"3scale-operator\" \"advanced-cluster-management\" \"amq-broker-rhel8\" \"amq-online\" \"amq-streams\" \"amq7-interconnect-operator\" \"ansible-automation-platform-operator\" \"ansible-cloud-addons-operator\" \"apicast-operator\" \"aws-efs-csi-driver-operator\" \"aws-load-balancer-operator\" \"bamoe-businessautomation-operator\" \"bamoe-kogito-operator\" \"bare-metal-event-relay\" \"businessautomation-operator\"", "jq -c 'select(.schema == \"olm.bundle\") | {\"package\":.package, \"version\":.properties[] | select(.type == \"olm.bundle.object\").value.data | @base64d | fromjson | select(.kind == \"ClusterServiceVersion\" and (.spec.installModes[] | select(.type == \"AllNamespaces\" and .supported == true) != null) and .spec.webhookdefinitions == null).spec.version}' /<path>/<catalog_name>.json", "{\"package\":\"3scale-operator\",\"version\":\"0.10.0-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.10.5\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.0-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.1-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.2-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.3-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.5-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.6-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.7-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.8-mas\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-2\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-3\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-4\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.1-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.1-opr-2\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.2-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.2-opr-2\"}", "jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"<package_name>\")' /<path>/<catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"openshift-pipelines-operator-rh\")' /home/username/rhoc.json", "{ \"defaultChannel\": \"stable\", \"icon\": { \"base64data\": \"PHN2ZyB4bWxu...\" \"mediatype\": \"image/png\" }, \"name\": \"openshift-pipelines-operator-rh\", \"schema\": \"olm.package\" }", "jq -s '.[] | select( .schema == \"olm.package\") | .name' <catalog_name>.json", "jq -c 'select(.schema == \"olm.bundle\") | {\"package\":.package, \"version\":.properties[] | select(.type == \"olm.bundle.object\").value.data | @base64d | fromjson | select(.kind == \"ClusterServiceVersion\" and (.spec.installModes[] | select(.type == \"AllNamespaces\" and .supported == true) != null) and .spec.webhookdefinitions == null).spec.version}' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"<package_name>\")' <catalog_name>.json", "jq -s '.[] | select( .package == \"<package_name>\")' <catalog_name>.json", "jq -s '.[] | select( .schema == 
\"olm.channel\" ) | select( .package == \"<package_name>\") | .name' <catalog_name>.json", "jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select ( .name == \"<channel>\") | select( .package == \"<package_name>\")' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.bundle\" ) | select( .package == \"<package_name>\") | .name' <catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.bundle\" ) | select ( .name == \"<bundle_name>\") | select( .package == \"<package_name>\")' <catalog_name>.json", "apiVersion: v1 kind: ServiceAccount metadata: name: <extension>-installer namespace: <namespace>", "apiVersion: v1 kind: ServiceAccount metadata: name: pipelines-installer namespace: pipelines", "oc apply -f extension-service-account.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <extension>-installer-clusterrole rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-installer-clusterrole rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"]", "oc apply -f pipelines-role.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <extension>-installer-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <extension>-installer-clusterrole subjects: - kind: ServiceAccount name: <extension>-installer namespace: <namespace>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: pipelines-installer-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: pipelines-installer-clusterrole subjects: - kind: ServiceAccount name: pipelines-installer namespace: pipelines", "oc apply -f pipelines-cluster-role-binding.yaml", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"<package_name>\") | .name' /<path>/<catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"openshift-pipelines-operator-rh\") | .name' /home/username/rhoc.json", "\"latest\" \"pipelines-1.11\" \"pipelines-1.12\" \"pipelines-1.13\" \"pipelines-1.14\"", "jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' /<path>/<catalog_name>.json", "jq -s '.[] | select( .package == \"openshift-pipelines-operator-rh\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"latest\" ) | .entries | .[] | .name' /home/username/rhoc.json", "\"openshift-pipelines-operator-rh.v1.12.0\" \"openshift-pipelines-operator-rh.v1.12.1\" \"openshift-pipelines-operator-rh.v1.12.2\" \"openshift-pipelines-operator-rh.v1.13.0\" \"openshift-pipelines-operator-rh.v1.13.1\" \"openshift-pipelines-operator-rh.v1.11.1\" \"openshift-pipelines-operator-rh.v1.12.0\" \"openshift-pipelines-operator-rh.v1.12.1\" \"openshift-pipelines-operator-rh.v1.12.2\" \"openshift-pipelines-operator-rh.v1.13.0\" \"openshift-pipelines-operator-rh.v1.14.1\" \"openshift-pipelines-operator-rh.v1.14.2\" \"openshift-pipelines-operator-rh.v1.14.3\" \"openshift-pipelines-operator-rh.v1.14.4\"", "oc adm new-project <new_namespace>", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh 
installNamespace: <namespace> serviceAccount: name: <service_account> channel: <channel> version: \"<version>\"", "oc apply -f pipeline-operator.yaml", "clusterextension.olm.operatorframework.io/pipelines-operator created", "oc get clusterextension pipelines-operator -o yaml", "apiVersion: v1 items: - apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"olm.operatorframework.io/v1alpha1\",\"kind\":\"ClusterExtension\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"installNamespace\":\"pipelines\",\"packageName\":\"openshift-pipelines-operator-rh\",\"serviceAccount\":{\"name\":\"pipelines-installer\"},\"pollInterval\":\"30m\"}} creationTimestamp: \"2024-06-10T17:50:51Z\" finalizers: - olm.operatorframework.io/cleanup-unpack-cache generation: 1 name: pipelines-operator resourceVersion: \"53324\" uid: c54237be-cde4-46d4-9b31-d0ec6acc19bf spec: channel: latest installNamespace: pipelines packageName: openshift-pipelines-operator-rh serviceAccount: name: pipelines-installer upgradeConstraintPolicy: Enforce status: conditions: - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec\" observedGeneration: 1 reason: Success status: \"True\" type: Resolved - lastTransitionTime: \"2024-06-10T17:51:11Z\" message: installed from \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec\" observedGeneration: 1 reason: Success status: \"True\" type: Installed - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: Deprecated - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: PackageDeprecated - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: ChannelDeprecated - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: BundleDeprecated - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: 'unpack successful: observedGeneration: 1 reason: UnpackSuccess status: \"True\" type: Unpacked installedBundle: name: openshift-pipelines-operator-rh.v1.14.4 version: 1.14.4 resolvedBundle: name: openshift-pipelines-operator-rh.v1.14.4 version: 1.14.4", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"<package_name>\") | .name' /<path>/<catalog_name>.json", "jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"openshift-pipelines-operator-rh\") | .name' /home/username/rhoc.json", "\"latest\" \"pipelines-1.11\" \"pipelines-1.12\" \"pipelines-1.13\" \"pipelines-1.14\"", "jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' /<path>/<catalog_name>.json", "jq -s '.[] | select( .package == \"openshift-pipelines-operator-rh\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"latest\" ) | .entries | .[] | .name' /home/username/rhoc.json", "\"openshift-pipelines-operator-rh.v1.11.1\" \"openshift-pipelines-operator-rh.v1.12.0\" \"openshift-pipelines-operator-rh.v1.12.1\" 
\"openshift-pipelines-operator-rh.v1.12.2\" \"openshift-pipelines-operator-rh.v1.13.0\" \"openshift-pipelines-operator-rh.v1.14.1\" \"openshift-pipelines-operator-rh.v1.14.2\" \"openshift-pipelines-operator-rh.v1.14.3\" \"openshift-pipelines-operator-rh.v1.14.4\"", "oc get clusterextension <operator_name> -o yaml", "oc get clusterextension pipelines-operator -o yaml", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"olm.operatorframework.io/v1alpha1\",\"kind\":\"ClusterExtension\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"installNamespace\":\"openshift-operators\",\"packageName\":\"openshift-pipelines-operator-rh\",\"pollInterval\":\"30m\",\"version\":\"\\u003c1.12\"}} creationTimestamp: \"2024-06-11T15:55:37Z\" generation: 1 name: pipelines-operator resourceVersion: \"69776\" uid: 6a11dff3-bfa3-42b8-9e5f-d8babbd6486f spec: channel: latest installNamespace: openshift-operators packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: <1.12 status: conditions: - lastTransitionTime: \"2024-06-11T15:56:09Z\" message: installed from \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280\" observedGeneration: 1 reason: Success status: \"True\" type: Installed - lastTransitionTime: \"2024-06-11T15:55:50Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280\" observedGeneration: 1 reason: Success status: \"True\" type: Resolved - lastTransitionTime: \"2024-06-11T15:55:50Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: Deprecated - lastTransitionTime: \"2024-06-11T15:55:50Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: PackageDeprecated - lastTransitionTime: \"2024-06-11T15:55:50Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: ChannelDeprecated - lastTransitionTime: \"2024-06-11T15:55:50Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: BundleDeprecated installedBundle: name: openshift-pipelines-operator-rh.v1.11.1 version: 1.11.1 resolvedBundle: name: openshift-pipelines-operator-rh.v1.11.1 version: 1.11.1", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> version: \"1.12.1\" 1", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> version: \">1.11.1, <1.13\" 1", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> channel: pipelines-1.13 1", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> channel: latest version: \"<1.13\"", "oc apply -f pipelines-operator.yaml", "clusterextension.olm.operatorframework.io/pipelines-operator configured", "oc patch clusterextension/pipelines-operator -p '{\"spec\":{\"version\":\"<1.13\"}}' --type=merge", 
"clusterextension.olm.operatorframework.io/pipelines-operator patched", "oc get clusterextension pipelines-operator -o yaml", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"olm.operatorframework.io/v1alpha1\",\"kind\":\"ClusterExtension\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"installNamespace\":\"openshift-operators\",\"packageName\":\"openshift-pipelines-operator-rh\",\"pollInterval\":\"30m\",\"version\":\"\\u003c1.13\"}} creationTimestamp: \"2024-06-11T18:23:26Z\" generation: 2 name: pipelines-operator resourceVersion: \"66310\" uid: ce0416ba-13ea-4069-a6c8-e5efcbc47537 spec: channel: latest installNamespace: openshift-operators packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: <1.13 status: conditions: - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:814742c8a7cc7e2662598e114c35c13993a7b423cfe92548124e43ea5d469f82\" observedGeneration: 2 reason: Success status: \"True\" type: Resolved - lastTransitionTime: \"2024-06-11T18:23:52Z\" message: installed from \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:814742c8a7cc7e2662598e114c35c13993a7b423cfe92548124e43ea5d469f82\" observedGeneration: 2 reason: Success status: \"True\" type: Installed - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: \"\" observedGeneration: 2 reason: Deprecated status: \"False\" type: Deprecated - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: \"\" observedGeneration: 2 reason: Deprecated status: \"False\" type: PackageDeprecated - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: \"\" observedGeneration: 2 reason: Deprecated status: \"False\" type: ChannelDeprecated - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: \"\" observedGeneration: 2 reason: Deprecated status: \"False\" type: BundleDeprecated installedBundle: name: openshift-pipelines-operator-rh.v1.12.2 version: 1.12.2 resolvedBundle: name: openshift-pipelines-operator-rh.v1.12.2 version: 1.12.2", "oc get clusterextension <operator_name> -o yaml", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"olm.operatorframework.io/v1alpha1\",\"kind\":\"ClusterExtension\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"installNamespace\":\"openshift-operators\",\"packageName\":\"openshift-pipelines-operator-rh\",\"pollInterval\":\"30m\",\"version\":\"3.0\"}} creationTimestamp: \"2024-06-11T18:23:26Z\" generation: 3 name: pipelines-operator resourceVersion: \"71852\" uid: ce0416ba-13ea-4069-a6c8-e5efcbc47537 spec: channel: latest installNamespace: openshift-operators packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: \"3.0\" status: conditions: - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: 'error upgrading from currently installed version \"1.12.2\": no package \"openshift-pipelines-operator-rh\" matching version \"3.0\" found in channel \"latest\"' observedGeneration: 3 reason: ResolutionFailed status: \"False\" type: Resolved - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: installation has not been attempted as resolution failed observedGeneration: 3 reason: InstallationStatusUnknown status: Unknown 
type: Installed - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: Deprecated - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: PackageDeprecated - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: ChannelDeprecated - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: BundleDeprecated", "oc delete clusterextension <operator_name>", "clusterextension.olm.operatorframework.io \"<operator_name>\" deleted", "oc get clusterextensions", "No resources found", "oc get ns <operator_name>-system", "Error from server (NotFound): namespaces \"<operator_name>-system\" not found", "- name: example.v3.0.0 skips: [\"example.v2.0.0\"] - name: example.v2.0.0 skipRange: >=1.0.0 <2.0.0", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> version: \">=1.11, <1.13\"", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> channel: latest 1", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> version: \"1.11.1\" 1", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> version: \">1.11.1\" 1", "oc apply -f <extension_name>.yaml", "apiVersion: olm.operatorframework.io/v1alpha1 kind: Operator metadata: name: <operator_name> 1 spec: packageName: <package_name> 2 installNamespace: <namespace_name> serviceAccount: name: <service_account> version: <version> 3 upgradeConstraintPolicy: Ignore 4", "oc apply -f <extension_name>.yaml", "oc edit clusterextension <clusterextension_name>", "apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: clusterextension-sample spec: installNamespace: default packageName: argocd-operator version: 0.6.0 preflight: crdUpgradeSafety: disabled: true 1", "apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: controller-gen.kubebuilder.io/version: v0.13.0 name: example.test.example.com spec: group: test.example.com names: kind: Sample listKind: SampleList plural: samples singular: sample scope: Namespaced versions: - name: v1alpha1 schema: openAPIV3Schema: properties: apiVersion: type: string kind: type: string metadata: type: object spec: type: object status: type: object pollInterval: type: string type: object served: true storage: true subresources: status: {}", "spec: group: test.example.com names: kind: Sample listKind: SampleList plural: samples singular: sample scope: Cluster versions: - name: v1alpha1", "validating upgrade for CRD 
\"test.example.com\" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. \"NoScopeChange\" validation failed: scope changed from \"Namespaced\" to \"Cluster\"", "versions: - name: v1alpha2 schema: openAPIV3Schema: properties: apiVersion: type: string kind: type: string metadata: type: object spec: type: object status: type: object pollInterval: type: string type: object", "validating upgrade for CRD \"test.example.com\" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. \"NoStoredVersionRemoved\" validation failed: stored version \"v1alpha1\" removed", "versions: - name: v1alpha1 schema: openAPIV3Schema: properties: apiVersion: type: string kind: type: string metadata: type: object spec: type: object status: type: object type: object", "validating upgrade for CRD \"test.example.com\" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. \"NoExistingFieldRemoved\" validation failed: crd/test.example.com version/v1alpha1 field/^.spec.pollInterval may not be removed", "versions: - name: v1alpha2 schema: openAPIV3Schema: properties: apiVersion: type: string kind: type: string metadata: type: object spec: type: object status: type: object pollInterval: type: string type: object required: - pollInterval", "validating upgrade for CRD \"test.example.com\" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. \"ChangeValidator\" validation failed: version \"v1alpha1\", field \"^\": new required fields added: [pollInterval]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/extensions/cluster-extensions
Chapter 12. Pacemaker Cluster Properties
Chapter 12. Pacemaker Cluster Properties Cluster properties control how the cluster behaves when confronted with situations that may occur during cluster operation. Table 12.1, "Cluster Properties" describes the cluster properties options. Section 12.2, "Setting and Removing Cluster Properties" describes how to set cluster properties. Section 12.3, "Querying Cluster Property Settings" describes how to list the currently set cluster properties. 12.1. Summary of Cluster Properties and Options Table 12.1, "Cluster Properties" summarizes the Pacemaker cluster properties, showing the default values of the properties and the possible values you can set for those properties. Note In addition to the properties described in this table, there are additional cluster properties that are exposed by the cluster software. For these properties, it is recommended that you not change their values from their defaults. Table 12.1. Cluster Properties Option Default Description batch-limit 0 The number of resource actions that the cluster is allowed to execute in parallel. The "correct" value will depend on the speed and load of your network and cluster nodes. migration-limit -1 (unlimited) The number of migration jobs that the cluster is allowed to execute in parallel on a node. no-quorum-policy stop What to do when the cluster does not have quorum. Allowed values: * ignore - continue all resource management * freeze - continue resource management, but do not recover resources from nodes not in the affected partition * stop - stop all resources in the affected cluster partition * suicide - fence all nodes in the affected cluster partition symmetric-cluster true Indicates whether resources can run on any node by default. stonith-enabled true Indicates that failed nodes and nodes with resources that cannot be stopped should be fenced. Protecting your data requires that you set this to true . If true , or unset, the cluster will refuse to start resources unless one or more STONITH resources have also been configured. stonith-action reboot Action to send to the STONITH device. Allowed values: reboot , off . The value poweroff is also allowed, but is only used for legacy devices. cluster-delay 60s Round trip delay over the network (excluding action execution). The "correct" value will depend on the speed and load of your network and cluster nodes. stop-orphan-resources true Indicates whether deleted resources should be stopped. stop-orphan-actions true Indicates whether deleted actions should be canceled. start-failure-is-fatal true Indicates whether a failure to start a resource on a particular node prevents further start attempts on that node. When set to false , the cluster will decide whether to try starting on the same node again based on the resource's current failure count and migration threshold. For information on setting the migration-threshold option for a resource, see Section 8.2, "Moving Resources Due to Failure" . Setting start-failure-is-fatal to false incurs the risk that this will allow one faulty node that is unable to start a resource to hold up all dependent actions. This is why start-failure-is-fatal defaults to true . The risk of setting start-failure-is-fatal=false can be mitigated by setting a low migration threshold so that other actions can proceed after that many failures. pe-error-series-max -1 (all) The number of PE inputs resulting in ERRORs to save. Used when reporting problems. pe-warn-series-max -1 (all) The number of PE inputs resulting in WARNINGs to save. Used when reporting problems.
pe-input-series-max -1 (all) The number of "normal" PE inputs to save. Used when reporting problems. cluster-infrastructure The messaging stack on which Pacemaker is currently running. Used for informational and diagnostic purposes; not user-configurable. dc-version Version of Pacemaker on the cluster's Designated Controller (DC). Used for diagnostic purposes; not user-configurable. last-lrm-refresh Last refresh of the Local Resource Manager, given in units of seconds since the epoch. Used for diagnostic purposes; not user-configurable. cluster-recheck-interval 15 minutes Polling interval for time-based changes to options, resource parameters and constraints. Allowed values: Zero disables polling; positive values are an interval in seconds (unless other SI units are specified, such as 5min). Note that this value is the maximum time between checks; if a cluster event occurs sooner than the time specified by this value, the check will be done sooner. maintenance-mode false Maintenance Mode tells the cluster to go to a "hands off" mode, and not start or stop any services until told otherwise. When maintenance mode is completed, the cluster does a sanity check of the current state of any services, and then stops or starts any that need it. shutdown-escalation 20min The time after which to give up trying to shut down gracefully and just exit. Advanced use only. stonith-timeout 60s How long to wait for a STONITH action to complete. stop-all-resources false Indicates whether the cluster should stop all resources. enable-acl false (Red Hat Enterprise Linux 7.1 and later) Indicates whether the cluster can use access control lists, as set with the pcs acl command. placement-strategy default Indicates whether and how the cluster will take utilization attributes into account when determining resource placement on cluster nodes. For information on utilization attributes and placement strategies, see Section 9.6, "Utilization and Placement Strategy" . fence-reaction stop (Red Hat Enterprise Linux 7.8 and later) Determines how a cluster node should react if notified of its own fencing. A cluster node may receive notification of its own fencing if fencing is misconfigured, or if fabric fencing is in use that does not cut cluster communication. Allowed values are stop to attempt to immediately stop Pacemaker and stay stopped, or panic to attempt to immediately reboot the local node, falling back to stop on failure.
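Although the procedures themselves are described in Section 12.2, "Setting and Removing Cluster Properties" and Section 12.3, "Querying Cluster Property Settings" , the following is a brief sketch of the pcs syntax those sections cover. The property names come from Table 12.1, "Cluster Properties" ; the values shown are illustrative only.

# Set a cluster property
pcs property set stonith-timeout=120s

# Remove a cluster property so that it reverts to its default value
pcs property unset stonith-timeout

# Display the cluster properties that have been explicitly set
pcs property list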
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-clusteropts-HAAR
Chapter 5. Configuring JDBC data sources for KIE Server
Chapter 5. Configuring JDBC data sources for KIE Server A data source is an object that enables a Java Database Connectivity (JDBC) client, such as an application server, to establish a connection with a database. Applications look up the data source on the Java Naming and Directory Interface (JNDI) tree or in the local application context and request a database connection to retrieve data. You must configure data sources for KIE Server to ensure correct data exchange between the servers and the designated database. Typically, solutions using Red Hat Process Automation Manager manage several resources within a single transaction, for example, JMS for asynchronous jobs, events, and timers. Red Hat Process Automation Manager requires an XA driver in the datasource when possible to ensure data atomicity and consistent results. If transactional code for different schemas exists inside listeners or derives from hooks provided by the jBPM engine, an XA driver is also required. Do not use non-XA datasources unless you are positive you do not have multiple resources participating in single transactions. Note For production environments, specify an actual data source. Do not use the example data source in production environments. Prerequisites The JDBC providers that you want to use to create database connections are configured on all servers on which you want to deploy KIE Server, as described in the "Creating Datasources" and "JDBC Drivers" sections of the Red Hat JBoss Enterprise Application Platform Configuration Guide . The Red Hat Process Automation Manager 7.13.5 Add Ons ( rhpam-7.13.5-add-ons.zip ) file is downloaded from the Software Downloads page in the Red Hat Customer Portal. Procedure Complete the following steps to prepare your database: Extract rhpam-7.13.5-add-ons.zip in a temporary directory, for example TEMP_DIR . Extract TEMP_DIR/rhpam-7.13.5-migration-tool.zip . Change your current directory to the TEMP_DIR/rhpam-7.13.5-migration-tool/ddl-scripts directory. This directory contains DDL scripts for several database types. Import the DDL script for your database type into the database that you want to use. The following example creates jBPM database structures in PostgreSQL: psql jbpm < /ddl-scripts/postgresql/postgresql-jbpm-schema.sql Note If you are using PostgreSQL or Oracle in conjunction with Spring Boot, you must import the respective Spring Boot DDL script, for example /ddl-scripts/oracle/oracle-springboot-jbpm-schema.sql or /ddl-scripts/postgresql/postgresql-springboot-jbpm-schema.sql . Note The PostgreSQL DDL scripts create the PostgreSQL schema with auto-incrementing integer value (OID) columns for entity attributes annotated with @LOB . To use other binary column types such as BYTEA instead of OID, you must create the PostgreSQL schema with the postgresql-bytea-jbpm-schema.sql script and set the Red Hat Process Automation Manager org.kie.persistence.postgresql.useText=true and org.kie.persistence.postgresql.useBytea=true flags. Do not use the postgresql-jbpm-lo-trigger-clob.sql script when creating a BYTEA-based schema. Red Hat Process Automation Manager does not provide a migration tool to change from an OID-based to a BYTEA-based schema. Open EAP_HOME /standalone/configuration/standalone-full.xml in a text editor and locate the <system-properties> tag. Add the following properties to the <system-properties> tag where <DATASOURCE> is the JNDI name of your data source and <HIBERNATE_DIALECT> is the hibernate dialect for your database.
Note The default value of the org.kie.server.persistence.ds property is java:jboss/datasources/ExampleDS . The default value of the org.kie.server.persistence.dialect property is org.hibernate.dialect.H2Dialect . <property name="org.kie.server.persistence.ds" value="<DATASOURCE>"/> <property name="org.kie.server.persistence.dialect" value="<HIBERNATE_DIALECT>"/> The following example shows how to configure a datasource for the PostgreSQL hibernate dialect: <system-properties> <property name="org.kie.server.repo" value="USD{jboss.server.data.dir}"/> <property name="org.kie.example" value="true"/> <property name="org.jbpm.designer.perspective" value="full"/> <property name="designerdataobjects" value="false"/> <property name="org.kie.server.user" value="rhpamUser"/> <property name="org.kie.server.pwd" value="rhpam123!"/> <property name="org.kie.server.location" value="http://localhost:8080/kie-server/services/rest/server"/> <property name="org.kie.server.controller" value="http://localhost:8080/business-central/rest/controller"/> <property name="org.kie.server.controller.user" value="kieserver"/> <property name="org.kie.server.controller.pwd" value="kieserver1!"/> <property name="org.kie.server.id" value="local-server-123"/> <!-- Data source properties. --> <property name="org.kie.server.persistence.ds" value="java:jboss/datasources/KieServerDS"/> <property name="org.kie.server.persistence.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/> </system-properties> The following dialects are supported: DB2: org.hibernate.dialect.DB2Dialect MSSQL: org.hibernate.dialect.SQLServer2012Dialect MySQL: org.hibernate.dialect.MySQL5InnoDBDialect MariaDB: org.hibernate.dialect.MySQL5InnoDBDialect Oracle: org.hibernate.dialect.Oracle10gDialect PostgreSQL: org.hibernate.dialect.PostgreSQL82Dialect PostgreSQL plus: org.hibernate.dialect.PostgresPlusDialect Sybase: org.hibernate.dialect.SybaseASE157Dialect
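As an alternative to editing standalone-full.xml by hand, you can set the same persistence properties with the JBoss EAP management CLI. The following is a minimal sketch only, assuming a running server and an already-defined data source bound at java:jboss/datasources/KieServerDS; both the JNDI name and the PostgreSQL dialect are illustrative and must be replaced with your own values:

# Add the KIE Server persistence properties through the management CLI
# (a sketch; substitute your data source JNDI name and Hibernate dialect).
$EAP_HOME/bin/jboss-cli.sh --connect --commands='/system-property=org.kie.server.persistence.ds:add(value="java:jboss/datasources/KieServerDS"),/system-property=org.kie.server.persistence.dialect:add(value="org.hibernate.dialect.PostgreSQLDialect")'

# Reload the server so that the new system properties are picked up.
$EAP_HOME/bin/jboss-cli.sh --connect --command="reload"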
[ "psql jbpm < /ddl-scripts/postgresql/postgresql-jbpm-schema.sql", "<property name=\"org.kie.server.persistence.ds\" value=\"<DATASOURCE>\"/> <property name=\"org.kie.server.persistence.dialect\" value=\"<HIBERNATE_DIALECT>\"/>", "<system-properties> <property name=\"org.kie.server.repo\" value=\"USD{jboss.server.data.dir}\"/> <property name=\"org.kie.example\" value=\"true\"/> <property name=\"org.jbpm.designer.perspective\" value=\"full\"/> <property name=\"designerdataobjects\" value=\"false\"/> <property name=\"org.kie.server.user\" value=\"rhpamUser\"/> <property name=\"org.kie.server.pwd\" value=\"rhpam123!\"/> <property name=\"org.kie.server.location\" value=\"http://localhost:8080/kie-server/services/rest/server\"/> <property name=\"org.kie.server.controller\" value=\"http://localhost:8080/business-central/rest/controller\"/> <property name=\"org.kie.server.controller.user\" value=\"kieserver\"/> <property name=\"org.kie.server.controller.pwd\" value=\"kieserver1!\"/> <property name=\"org.kie.server.id\" value=\"local-server-123\"/> <!-- Data source properties. --> <property name=\"org.kie.server.persistence.ds\" value=\"java:jboss/datasources/KieServerDS\"/> <property name=\"org.kie.server.persistence.dialect\" value=\"org.hibernate.dialect.PostgreSQLDialect\"/> </system-properties>" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/eap-data-source-add-proc_execution-server
9.5. Managing Storage Controllers in a Guest Virtual Machine
9.5. Managing Storage Controllers in a Guest Virtual Machine Starting from Red Hat Enterprise Linux 6.4, you can add SCSI and virtio-SCSI devices to guest virtual machines that are running Red Hat Enterprise Linux 6.4 or later. Unlike virtio disks, SCSI devices require the presence of a controller in the guest virtual machine. Virtio-SCSI provides the ability to connect directly to SCSI LUNs and significantly improves scalability compared to virtio-blk. The advantage of virtio-SCSI is that it is capable of handling hundreds of devices, compared to virtio-blk, which can only handle 28 devices and exhausts PCI slots. Virtio-SCSI is now capable of inheriting the feature set of the target device with the ability to: attach a virtual hard drive or CD through the virtio-scsi controller, pass through a physical SCSI device from the host to the guest via the QEMU scsi-block device, and allow the usage of hundreds of devices per guest, an improvement over the 28-device limit of virtio-blk. This section details the necessary steps to create a virtual SCSI controller (also known as "Host Bus Adapter", or HBA) and to add SCSI storage to the guest virtual machine. Procedure 9.10. Creating a virtual SCSI controller Display the configuration of the guest virtual machine ( Guest1 ) and look for a pre-existing SCSI controller: If a device controller is present, the command will output one or more lines similar to the following: If the previous step did not show a device controller, create the description for one in a new file and add it to the virtual machine, using the following steps: Create the device controller by writing a <controller> element in a new file and save this file with an XML extension, for example virtio-scsi-controller.xml . Associate the device controller you just created in virtio-scsi-controller.xml with your guest virtual machine (Guest1, for example): In this example, the --config option behaves the same as it does for disks. Refer to Procedure 13.2, "Adding physical block devices to guests" for more information. Add a new SCSI disk or CD-ROM. The new disk can be added using the methods in Section 13.3.1, "Adding File-based Storage to a Guest" and Section 13.3.2, "Adding Hard Drives and Other Block Devices to a Guest" . In order to create a SCSI disk, specify a target device name that starts with sd . Depending on the version of the driver in the guest virtual machine, the new disk may not be detected immediately by a running guest virtual machine. Follow the steps in the Red Hat Enterprise Linux Storage Administration Guide .
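The following is a condensed sketch of the procedure above, run as root on the host. The guest name Guest1 and the image path /var/lib/libvirt/images/FileName.img are illustrative:

# Check for an existing SCSI controller in the guest definition.
virsh dumpxml Guest1 | grep "controller.*scsi"

# If none is present, describe a virtio-SCSI controller in a new XML file.
cat > virtio-scsi-controller.xml << 'EOF'
<controller type='scsi' model='virtio-scsi'/>
EOF

# Attach the controller to the guest; --config makes the change persistent.
virsh attach-device --config Guest1 virtio-scsi-controller.xml

# Add a SCSI disk; a target name starting with "sd" selects the SCSI bus.
virsh attach-disk Guest1 /var/lib/libvirt/images/FileName.img sdb --cache none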
[ "# virsh dumpxml Guest1 | grep controller.*scsi", "<controller type='scsi' model='virtio-scsi' index='0'/>", "<controller type='scsi' model='virtio-scsi'/>", "# virsh attach-device --config Guest1 ~/virtio-scsi-controller.xml", "# virsh attach-disk Guest1 /var/lib/libvirt/images/FileName.img sdb --cache none" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-managing_storage_controllers_in_a_guest
Chapter 4. Custom User Attributes
Chapter 4. Custom User Attributes You can add custom user attributes to the registration page and account management console with a custom theme. This chapter describes how to add attributes to a custom theme, but you should refer to the Themes chapter on how to create a custom theme. 4.1. Registration Page To be able to enter custom attributes in the registration page, copy the template themes/base/login/register.ftl to the login type of your custom theme. Then open the copy in an editor. As an example, to add a mobile number to the registration page, add the following snippet to the form: <div class="form-group"> <div class="USD{properties.kcLabelWrapperClass!}"> <label for="user.attributes.mobile" class="USD{properties.kcLabelClass!}">Mobile number</label> </div> <div class="USD{properties.kcInputWrapperClass!}"> <input type="text" class="USD{properties.kcInputClass!}" id="user.attributes.mobile" name="user.attributes.mobile" value="USD{(register.formData['user.attributes.mobile']!'')}"/> </div> </div> Ensure the name of the input HTML element starts with user.attributes. . In the example above, the attribute will be stored by Keycloak with the name mobile . To see the changes, make sure your realm is using your custom theme for the login theme and open the registration page. 4.2. Account Management Console To be able to manage custom attributes in the user profile page in the account management console, copy the template themes/base/account/account.ftl to the account type of your custom theme. Then open the copy in an editor. As an example, to add a mobile number to the account page, add the following snippet to the form: <div class="form-group"> <div class="col-sm-2 col-md-2"> <label for="user.attributes.mobile" class="control-label">Mobile number</label> </div> <div class="col-sm-10 col-md-10"> <input type="text" class="form-control" id="user.attributes.mobile" name="user.attributes.mobile" value="USD{(account.attributes.mobile!'')}"/> </div> </div> Ensure the name of the input HTML element starts with user.attributes. . To see the changes, make sure your realm is using your custom theme for the account theme and open the user profile page in the account management console.
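For reference, the following shell sketch shows one way to lay out such a custom theme on a standalone server. The theme name mytheme and the RHSSO_HOME variable are illustrative, and the theme.properties contents assume the theme simply inherits everything else from the base theme:

# Create the login and account types for a custom theme named "mytheme".
mkdir -p $RHSSO_HOME/themes/mytheme/login $RHSSO_HOME/themes/mytheme/account

# Copy the base templates that will be customized.
cp $RHSSO_HOME/themes/base/login/register.ftl $RHSSO_HOME/themes/mytheme/login/
cp $RHSSO_HOME/themes/base/account/account.ftl $RHSSO_HOME/themes/mytheme/account/

# Declare the parent theme so everything not overridden is inherited from "base".
echo "parent=base" > $RHSSO_HOME/themes/mytheme/login/theme.properties
echo "parent=base" > $RHSSO_HOME/themes/mytheme/account/theme.properties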
[ "<div class=\"form-group\"> <div class=\"USD{properties.kcLabelWrapperClass!}\"> <label for=\"user.attributes.mobile\" class=\"USD{properties.kcLabelClass!}\">Mobile number</label> </div> <div class=\"USD{properties.kcInputWrapperClass!}\"> <input type=\"text\" class=\"USD{properties.kcInputClass!}\" id=\"user.attributes.mobile\" name=\"user.attributes.mobile\" value=\"USD{(register.formData['user.attributes.mobile']!'')}\"/> </div> </div>", "<div class=\"form-group\"> <div class=\"col-sm-2 col-md-2\"> <label for=\"user.attributes.mobile\" class=\"control-label\">Mobile number</label> </div> <div class=\"col-sm-10 col-md-10\"> <input type=\"text\" class=\"form-control\" id=\"user.attributes.mobile\" name=\"user.attributes.mobile\" value=\"USD{(account.attributes.mobile!'')}\"/> </div> </div>" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_developer_guide/custom_user_attributes
Chapter 37. Configuring an operating system to optimize memory access
Chapter 37. Configuring an operating system to optimize memory access You can configure the operating system to optimize memory access across workloads with the tools that are included in RHEL. 37.1. Tools for monitoring and diagnosing system memory issues The following tools are available in Red Hat Enterprise Linux 9 for monitoring system performance and diagnosing performance problems related to system memory: vmstat tool, provided by the procps-ng package, displays reports of a system's processes, memory, paging, block I/O, traps, disks, and CPU activity. It provides an instantaneous report of the average of these events since the machine was last turned on, or since the previous report. valgrind framework provides instrumentation to user-space binaries. Install this tool using the dnf install valgrind command. It includes a number of tools that you can use to profile and analyze program performance, such as: memcheck option is the default valgrind tool. It detects and reports on a number of memory errors that can be difficult to detect and diagnose, such as: Memory access that should not occur Undefined or uninitialized value use Incorrectly freed heap memory Pointer overlap Memory leaks Note Memcheck can only report these errors; it cannot prevent them from occurring. However, memcheck logs an error message immediately before the error occurs. cachegrind option simulates application interaction with a system's cache hierarchy and branch predictor. It gathers statistics for the duration of the application's execution and outputs a summary to the console. massif option measures the heap space used by a specified application. It measures both useful space and any additional space allocated for bookkeeping and alignment purposes. Additional resources vmstat(8) and valgrind(1) man pages on your system /usr/share/doc/valgrind-version/valgrind_manual.pdf file 37.2. Overview of a system's memory The Linux kernel is designed to maximize the utilization of a system's memory resources (RAM). Due to these design characteristics, and depending on the memory requirements of the workload, part of the system's memory is in use within the kernel on behalf of the workload, while a small part of the memory is free. This free memory is reserved for special system allocations, and for other low or high priority system services. The rest of the system's memory is dedicated to the workload itself, and divided into the following two categories: File memory Pages added in this category represent parts of files in permanent storage. These pages, from the page cache, can be mapped or unmapped in an application's address spaces. You can use applications to map files into their address space using the mmap system calls, or to operate on files via the buffered I/O read or write system calls. Buffered I/O system calls, as well as applications that map pages directly, can re-utilize unmapped pages. As a result, these pages are stored in the cache by the kernel, especially when the system is not running any memory intensive tasks, to avoid re-issuing costly I/O operations over the same set of pages. Anonymous memory Pages in this category are in use by a dynamically allocated process, or are not related to files in permanent storage. This set of pages backs up the in-memory control structures of each task, such as the application stack and heap areas. Figure 37.1. Memory usage patterns 37.3. Virtual memory parameters The virtual memory parameters are listed in the /proc/sys/vm directory.
The following are the available virtual memory parameters: vm.dirty_ratio Is a percentage value. When this percentage of the total system memory is modified, the system begins writing the modifications to the disk. The default value is 20 percent. vm.dirty_background_ratio A percentage value. When this percentage of total system memory is modified, the system begins writing the modifications to the disk in the background. The default value is 10 percent. vm.overcommit_memory Defines the conditions that determine whether a large memory request is accepted or denied.The default value is 0 . By default, the kernel performs checks if a virtual memory allocation request fits into the present amount of memory (total + swap) and rejects only large requests. Otherwise virtual memory allocations are granted, and this means they allow memory overcommitment. Setting the overcommit_memory parameter's value: When this parameter is set to 1 , the kernel performs no memory overcommit handling. This increases the possibility of memory overload, but improves performance for memory-intensive tasks. When this parameter is set to 2 , the kernel denies requests for memory equal to or larger than the sum of the total available swap space and the percentage of physical RAM specified in the overcommit_ratio . This reduces the risk of overcommitting memory, but is recommended only for systems with swap areas larger than their physical memory. vm.overcommit_ratio Specifies the percentage of physical RAM considered when overcommit_memory is set to 2 . The default value is 50 . vm.max_map_count Defines the maximum number of memory map areas that a process can use. The default value is 65530 . Increase this value if your application needs more memory map areas. vm.min_free_kbytes Sets the size of the reserved free pages pool. It is also responsible for setting the min_page , low_page , and high_page thresholds that govern the behavior of the Linux kernel's page reclaim algorithms. It also specifies the minimum number of kilobytes to keep free across the system. This calculates a specific value for each low memory zone, each of which is assigned a number of reserved free pages in proportion to their size. Setting the vm.min_free_kbytes parameter's value: Increasing the parameter value effectively reduces the application working set usable memory. Therefore, you might want to use it for only kernel-driven workloads, where driver buffers need to be allocated in atomic contexts. Decreasing the parameter value might render the kernel unable to service system requests, if memory becomes heavily contended in the system. Warning Extreme values can be detrimental to the system's performance. Setting the vm.min_free_kbytes to an extremely low value prevents the system from reclaiming memory effectively, which can result in system crashes and failure to service interrupts or other kernel services. However, setting vm.min_free_kbytes too high considerably increases system reclaim activity, causing allocation latency due to a false direct reclaim state. This might cause the system to enter an out-of-memory state immediately. The vm.min_free_kbytes parameter also sets a page reclaim watermark, called min_pages . This watermark is used as a factor when determining the two other memory watermarks, low_pages , and high_pages , that govern page reclaim algorithms. 
/proc/ PID /oom_adj In the event that a system runs out of memory, and the panic_on_oom parameter is set to 0 , the oom_killer function kills processes, starting with the process that has the highest oom_score , until the system recovers. The oom_adj parameter determines the oom_score of a process. This parameter is set per process identifier. A value of -17 disables the oom_killer for that process. Other valid values range from -16 to 15 . Note Processes created by an adjusted process inherit the oom_score of that process. vm.swappiness The swappiness value, ranging from 0 to 200 , controls the degree to which the system favors reclaiming memory from the anonymous memory pool, or the page cache memory pool. Setting the swappiness parameter's value: Higher values favor file-mapped driven workloads while swapping out the less actively accessed processes' anonymous mapped memory of RAM. This is useful for file-servers or streaming applications that depend on data, from files in the storage, to reside on memory to reduce I/O latency for the service requests. Low values favor anonymous-mapped driven workloads while reclaiming the page cache (file mapped memory). This setting is useful for applications that do not depend heavily on the file system information, and heavily utilize dynamically allocated and private memory, such as mathematical and number crunching applications, and few hardware virtualization supervisors like QEMU. The default value of the vm.swappiness parameter is 60 . Warning Setting the vm.swappiness to 0 aggressively avoids swapping anonymous memory out to a disk, this increases the risk of processes being killed by the oom_killer function when under memory or I/O intensive workloads. Additional resources sysctl(8) man page on your system Setting memory-related kernel parameters 37.4. File system parameters The file system parameters are listed in the /proc/sys/fs directory. The following are the available file system parameters: aio-max-nr Defines the maximum allowed number of events in all active asynchronous input/output contexts. The default value is 65536 , and modifying this value does not pre-allocate or resize any kernel data structures. file-max Determines the maximum number of file handles for the entire system. The default value on Red Hat Enterprise Linux 9 is either 8192 or one tenth of the free memory pages available at the time the kernel starts, whichever is higher. Raising this value can resolve errors caused by a lack of available file handles. Additional resources sysctl(8) man page on your system 37.5. Kernel parameters The default values for the kernel parameters are located in the /proc/sys/kernel/ directory. These are set default values provided by the kernel or values specified by a user via sysctl . The following are the available kernel parameters used to set up limits for the msg* and shm* System V IPC ( sysvipc ) system calls: msgmax Defines the maximum allowed size in bytes of any single message in a message queue. This value must not exceed the size of the queue ( msgmnb ). Use the sysctl msgmax command to determine the current msgmax value on your system. msgmnb Defines the maximum size in bytes of a single message queue. Use the sysctl msgmnb command to determine the current msgmnb value on your system. msgmni Defines the maximum number of message queue identifiers, and therefore the maximum number of queues. Use the sysctl msgmni command to determine the current msgmni value on your system. 
shmall Defines the total amount of shared memory pages that can be used on the system at one time. For example, a page is 4096 bytes on the AMD64 and Intel 64 architecture. Use the sysctl shmall command to determine the current shmall value on your system. shmmax Defines the maximum size in bytes of a single shared memory segment allowed by the kernel. Shared memory segments up to 1Gb are now supported in the kernel. Use the sysctl shmmax command to determine the current shmmax value on your system. shmmni Defines the system-wide maximum number of shared memory segments. The default value is 4096 on all systems. Additional resources sysvipc(7) and sysctl(8) man pages on your system 37.6. Setting memory-related kernel parameters Setting a parameter temporarily is useful for determining the effect the parameter has on a system. You can later set the parameter persistently when you are sure that the parameter value has the desired effect. This procedure describes how to set a memory-related kernel parameter temporarily and persistently. Procedure To temporarily set the memory-related kernel parameters, edit the respective files in the /proc file system or the sysctl tool. For example, to temporarily set the vm.overcommit_memory parameter to 1 : To persistently set the memory-related kernel parameter, edit the /etc/sysctl.conf file and reload the settings. For example, to persistently set the vm.overcommit_memory parameter to 1 : Add the following content in the /etc/sysctl.conf file: Reload the sysctl settings from the /etc/sysctl.conf file: Additional resources sysctl(8) and proc(5) man pages on your system Additional resources Tuning Red Hat Enterprise Linux for IBM DB2 (Red Hat Knowledgebase)
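As a quick reference, the following sketch shows how you might inspect several of the parameters described in this chapter and persist a change with a drop-in file instead of editing /etc/sysctl.conf directly; the file name and the value are illustrative:

# Inspect current values of some virtual memory and kernel IPC parameters.
sysctl vm.dirty_ratio vm.dirty_background_ratio vm.overcommit_memory vm.swappiness
sysctl kernel.msgmax kernel.msgmnb kernel.shmmax

# Persist a change with a drop-in file and reload all sysctl settings.
echo "vm.overcommit_memory = 1" > /etc/sysctl.d/90-overcommit.conf
sysctl --system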
[ "echo 1 > /proc/sys/vm/overcommit_memory sysctl -w vm.overcommit_memory= 1", "vm.overcommit_memory= 1", "sysctl -p" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/configuring-an-operating-system-to-optimize-memory-access_monitoring-and-managing-system-status-and-performance
function::real_mount
function::real_mount Name function::real_mount - get the 'struct mount' pointer Synopsis Arguments vfsmnt Pointer to 'struct vfsmount' Description Returns the 'struct mount' pointer value for a 'struct vfsmount' pointer.
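A minimal usage sketch, assuming kernel debuginfo is installed so that target variables can be resolved; the probe point mntput and its mnt argument are used purely for illustration and are not part of the tapset itself:

# Print the 'struct mount' pointer derived from the 'struct vfsmount' argument
# each time the kernel releases a mount reference.
stap -e 'probe kernel.function("mntput") { printf("%s: mount=0x%x\n", probefunc(), real_mount($mnt)) }'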
[ "real_mount:long(vfsmnt:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-real-mount
Chapter 1. Recommended host practices
Chapter 1. Recommended host practices This topic provides recommended host practices for OpenShift Container Platform. Important These guidelines apply to OpenShift Container Platform with software-defined networking (SDN), not Open Virtual Network (OVN). 1.1. Recommended node host practices The OpenShift Container Platform node configuration file contains important options. For example, two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node. Exhausting the pool of IP addresses. Resource overcommitting, leading to poor user application performance. Important In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running. Note Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when there are large number of I/O intensive pods running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload. podsPerCore sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40 . kubeletConfig: podsPerCore: 10 Setting podsPerCore to 0 disables this limit. The default is 0 . podsPerCore cannot exceed maxPods . maxPods sets the number of pods the node can run to a fixed value, regardless of the properties of the node. kubeletConfig: maxPods: 250 1.2. Creating a KubeletConfig CRD to edit kubelet parameters The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This lets you use a KubeletConfig custom resource (CR) to edit the kubelet parameters. Note As the fields in the kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig object might cause cluster nodes to become unavailable. For valid values, see the Kubernetes documentation . Consider the following guidance: Create one KubeletConfig CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all of the pools, you need only one KubeletConfig CR for all of the pools. Edit an existing KubeletConfig CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes. As needed, create multiple KubeletConfig CRs with a limit of 10 per cluster. For the first KubeletConfig CR, the Machine Config Operator (MCO) creates a machine config appended with kubelet . With each subsequent CR, the controller creates another kubelet machine config with a numeric suffix. 
For example, if you have a kubelet machine config with a -2 suffix, the kubelet machine config is appended with -3 . If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3 machine config before deleting the kubelet-2 machine config. Note If you have a machine config with a kubelet-9 suffix, and you create another KubeletConfig CR, a new machine config is not created, even if there are fewer than 10 kubelet machine configs. Example KubeletConfig CR USD oc get kubeletconfig NAME AGE set-max-pods 15m Example showing a KubeletConfig machine config USD oc get mc | grep kubelet ... 99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m ... The following procedure is an example to show how to configure the maximum number of pods per node on the worker nodes. Prerequisites Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1 1 If a label has been added it appears under labels . If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=set-max-pods Procedure View the available machine configuration objects that you can select: USD oc get machineconfig By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet . Check the current value for the maximum pods per node: USD oc describe node <node_name> For example: USD oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94 Look for value: pods: <value> in the Allocatable stanza: Example output Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250 Set the maximum pods per node on the worker nodes by creating a custom resource file that contains the kubelet configuration: Important Kubelet configurations that target a specific machine config pool also affect any dependent pools. For example, creating a kubelet configuration for the pool containing worker nodes will also apply to any subset pools, including the pool containing infrastructure nodes. To avoid this, you must create a new machine config pool with a selection expression that only includes worker nodes, and have your kubelet configuration target this new pool. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2 1 Enter the label from the machine config pool. 2 Add the kubelet configuration. In this example, use maxPods to set the maximum pods per node. Note The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst , are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node. 
apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS> Update the machine config pool for workers with the label: USD oc label machineconfigpool worker custom-kubelet=set-max-pods Create the KubeletConfig object: USD oc create -f change-maxPods-cr.yaml Verify that the KubeletConfig object is created: USD oc get kubeletconfig Example output NAME AGE set-max-pods 15m Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes. Verify that the changes are applied to the node: Check on a worker node that the maxPods value changed: USD oc describe node <node_name> Locate the Allocatable stanza: ... Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1 ... 1 In this example, the pods parameter should report the value you set in the KubeletConfig object. Verify the change in the KubeletConfig object: USD oc get kubeletconfigs set-max-pods -o yaml This should show a status of True and type:Success , as shown in the following example: spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: "2021-06-30T17:04:07Z" message: Success status: "True" type: Success 1.3. Modifying the number of unavailable worker nodes By default, only one machine is allowed to be unavailable when applying the kubelet-related configuration to the available worker nodes. For a large cluster, it can take a long time for the configuration change to be reflected. At any time, you can adjust the number of machines that are updating to speed up the process. Procedure Edit the worker machine config pool: USD oc edit machineconfigpool worker Add the maxUnavailable field and set the value: spec: maxUnavailable: <node_count> Important When setting the value, consider the number of worker nodes that can be unavailable without affecting the applications running on the cluster. 1.4. Control plane node sizing The control plane node resource requirements depend on the number and type of nodes and objects in the cluster. The following control plane node size recommendations are based on the results of a control plane density focused testing, or Cluster-density . This test creates the following objects across a given number of namespaces: 1 image stream 1 build 5 deployments, with 2 pod replicas in a sleep state, mounting 4 secrets, 4 config maps, and 1 downward API volume each 5 services, each one pointing to the TCP/8080 and TCP/8443 ports of one of the deployments 1 route pointing to the first of the services 10 secrets containing 2048 random string characters 10 config maps containing 2048 random string characters Number of worker nodes Cluster-density (namespaces) CPU cores Memory (GB) 24 500 4 16 120 1000 8 32 252 4000 16 64 501 4000 16 96 On a large and dense cluster with three masters or control plane nodes, the CPU and memory usage will spike up when one of the nodes is stopped, rebooted or fails. The failures can be due to unexpected issues with power, network or underlying infrastructure in addition to intentional cases where the cluster is restarted after shutting it down to save costs. 
The remaining two control plane nodes must handle the load in order to be highly available which leads to increase in the resource usage. This is also expected during upgrades because the masters are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operators update. To avoid cascading failures, keep the overall CPU and memory resource usage on the control plane nodes to at most 60% of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly to avoid potential downtime due to lack of resources. Important The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the running phase. Operator Lifecycle Manager (OLM ) runs on the control plane nodes and it's memory footprint depends on the number of namespaces and user installed operators that OLM needs to manage on the cluster. Control plane nodes need to be sized accordingly to avoid OOM kills. Following data points are based on the results from cluster maximums testing. Number of namespaces OLM memory at idle state (GB) OLM memory with 5 user operators installed (GB) 500 0.823 1.7 1000 1.2 2.5 1500 1.7 3.2 2000 2 4.4 3000 2.7 5.6 4000 3.8 7.6 5000 4.2 9.02 6000 5.8 11.3 7000 6.6 12.9 8000 6.9 14.8 9000 8 17.7 10,000 9.9 21.6 Important You can modify the control plane node size in a running OpenShift Container Platform 4.11 cluster for the following configurations only: Clusters installed with a user-provisioned installation method. AWS clusters installed with an installer-provisioned infrastructure installation method. For all other configurations, you must estimate your total node count and use the suggested control plane node size during installation. Important The recommendations are based on the data points captured on OpenShift Container Platform clusters with OpenShift SDN as the network plugin. Note In OpenShift Container Platform 4.11, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and versions. The sizes are determined taking that into consideration. 1.4.1. Selecting a larger Amazon Web Services instance type for control plane machines If the control plane machines in an Amazon Web Services (AWS) cluster require more resources, you can select a larger AWS instance type for the control plane machines to use. 1.4.1.1. Changing the Amazon Web Services instance type by using the AWS console You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the instance type in the AWS console. Prerequisites You have access to the AWS console with the permissions required to modify the EC2 Instance for your cluster. You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. Procedure Open the AWS console and fetch the instances for the control plane machines. Choose one control plane machine instance. For the selected control plane machine, back up the etcd data by creating an etcd snapshot. For more information, see "Backing up etcd". In the AWS console, stop the control plane machine instance. Select the stopped instance, and click Actions Instance Settings Change instance type . 
Change the instance to a larger type, ensuring that the type is the same base as the selection, and apply changes. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge . Start the instance. If your OpenShift Container Platform cluster has a corresponding Machine object for the instance, update the instance type of the object to match the instance type set in the AWS console. Repeat this process for each control plane machine. Additional resources Backing up etcd 1.5. Recommended etcd practices Because etcd writes data to disk and persists proposals on disk, its performance depends on disk performance. Although etcd is not particularly I/O intensive, it requires a low latency block device for optimal performance and stability. Because etcd's consensus protocol depends on persistently storing metadata to a log (WAL), etcd is sensitive to disk-write latency. Slow disks and disk activity from other processes can cause long fsync latencies. Those latencies can cause etcd to miss heartbeats, not commit new proposals to the disk on time, and ultimately experience request timeouts and temporary leader loss. High write latencies also lead to OpenShift API slowness, which affects cluster performance. For these reasons, avoid colocating other workloads that are I/O sensitive or intensive on the control-plane nodes, where they share the same underlying I/O infrastructure as etcd. In terms of latency, run etcd on top of a block device that can write at least 50 IOPS of 8000 bytes long sequentially. That is, with a latency of 20 ms, keep in mind that etcd uses fdatasync to synchronize each write in the WAL. For heavily loaded clusters, sequential 500 IOPS of 8000 bytes (2 ms) are recommended. To measure those numbers, you can use a benchmarking tool, such as fio. To achieve such performance, run etcd on machines that are backed by SSD or NVMe disks with low latency and high throughput. Consider single-level cell (SLC) solid-state drives (SSDs), which provide 1 bit per memory cell, are durable and reliable, and are ideal for write-intensive workloads. Note The load on etcd arises from static factors, such as the number of nodes and pods, and dynamic factors, including changes in endpoints due to pod autoscaling, pod restarts, job executions, and other workload-related events. To accurately size your etcd setup, you must analyze the specific requirements of your workload. Consider the number of nodes, pods, and other relevant factors that impact the load on etcd. The following hard disk features provide optimal etcd performance: Low latency to support fast read operations. High-bandwidth writes for faster compactions and defragmentation. High-bandwidth reads for faster recovery from failures. Solid state drives as a minimum selection; however, NVMe drives are preferred. Server-grade hardware from various manufacturers for increased reliability. RAID 0 technology for increased performance. Dedicated etcd drives. Do not place log files or other heavy workloads on etcd drives. Note Avoid NAS or SAN setups and spinning drives. Ceph Rados Block Device (RBD) and other types of network-attached storage can result in unpredictable network latency. To provide fast storage to etcd nodes at scale, use PCI passthrough to pass NVM devices directly to the nodes. Always benchmark by using utilities such as fio. You can use such utilities to continuously monitor the cluster performance as it increases. Note Avoid using the Network File System (NFS) protocol or other network based file systems.
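In addition to the containerized test shown later in this section, you can run fio directly against the disk that will back the etcd data directory, preferably before the node hosts a live etcd member or on equivalent test hardware. The following job is a rough sketch only; the parameters mirror the 8000-byte sequential-write guidance above and are not an official benchmark profile:

# Sequential 8000-byte writes with an fdatasync after each write, against the
# disk that backs /var/lib/etcd. Review the reported fdatasync percentiles.
fio --name=etcd-probe --directory=/var/lib/etcd --rw=write --bs=8000 --size=100m --ioengine=sync --fdatasync=1

# Remove the test file afterwards.
rm -f /var/lib/etcd/etcd-probe.0.0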
Some key metrics to monitor on a deployed OpenShift Container Platform cluster are p99 of etcd disk write ahead log duration and the number of etcd leader changes. Use Prometheus to track these metrics. Note The etcd member database sizes can vary in a cluster during normal operations. This difference does not affect cluster upgrades, even if the leader size is different from the other members. To validate the hardware for etcd before or after you create the OpenShift Container Platform cluster, you can use fio. Prerequisites Container runtimes such as Podman or Docker are installed on the machine that you're testing. Data is written to the /var/lib/etcd path. Procedure Run fio and analyze the results: If you use Podman, run this command: USD sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf If you use Docker, run this command: USD sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf The output reports whether the disk is fast enough to host etcd by comparing the 99th percentile of the fsync metric captured from the run to see if it is less than 20 ms. A few of the most important etcd metrics that might affected by I/O performance are as follow: etcd_disk_wal_fsync_duration_seconds_bucket metric reports the etcd's WAL fsync duration etcd_disk_backend_commit_duration_seconds_bucket metric reports the etcd backend commit latency duration etcd_server_leader_changes_seen_total metric reports the leader changes Because etcd replicates the requests among all the members, its performance strongly depends on network input/output (I/O) latency. High network latencies result in etcd heartbeats taking longer than the election timeout, which results in leader elections that are disruptive to the cluster. A key metric to monitor on a deployed OpenShift Container Platform cluster is the 99th percentile of etcd network peer latency on each etcd cluster member. Use Prometheus to track the metric. The histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[2m])) metric reports the round trip time for etcd to finish replicating the client requests between the members. Ensure that it is less than 50 ms. Additional resources How to use fio to check etcd disk performance in OpenShift Container Platform etcd performance troubleshooting guide for OpenShift Container Platform 1.6. Moving etcd to a different disk You can move etcd from a shared disk to a separate disk to prevent or resolve performance issues. The Machine Config Operator (MCO) is responsible for mounting a secondary disk for OpenShift Container Platform 4.11 container storage. Note This procedure does not move parts of the root file system, such as /var/ , to another disk or partition on an installed node. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster with cluster-admin privileges. The MachineConfigPool must match metadata.labels[machineconfiguration.openshift.io/role] . This applies to a controller, worker, or a custom pool. Procedure Attach the new disk to the cluster and verify that the disk is detected in the node by using the lsblk command in a debug shell: USD oc debug node/<node_name> # lsblk Note the device name of the new disk reported by the lsblk command. 
Create a MachineConfig YAML file named etcd-mc.yml with contents such as the following, replacing instances of <new_disk_name> with the noted device name: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Make File System on /dev/<new_disk_name> DefaultDependencies=no BindsTo=dev-<new_disk_name>.device After=dev-<new_disk_name>.device var.mount Before=systemd-fsck@dev-<new_disk_name>.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/lib/systemd/systemd-makefs xfs /dev/<new_disk_name> TimeoutSec=0 [Install] WantedBy=var-lib-containers.mount enabled: true name: systemd-mkfs@dev-<new_disk_name>.service - contents: | [Unit] Description=Mount /dev/<new_disk_name> to /var/lib/etcd Before=local-fs.target Requires=systemd-mkfs@dev-<new_disk_name>.service After=systemd-mkfs@dev-<new_disk_name>.service var.mount [Mount] What=/dev/<new_disk_name> Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount - contents: | [Unit] Description=Sync etcd data if new mount is empty DefaultDependencies=no After=var-lib-etcd.mount var.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member ExecStart=semanage fcontext -a -e /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target enabled: true name: sync-var-lib-etcd-to-etcd.service - contents: | [Unit] Description=Restore recursive SELinux security contexts DefaultDependencies=no After=var-lib-etcd.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/restorecon -R /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target enabled: true name: restorecon-var-lib-etcd.service Log in to the cluster as a user with cluster-admin privileges and create the machine configuration: USD oc login -u <username> -p <password> USD oc create -f etcd-mc.yml The nodes are updated and rebooted. After the reboot completes, the following events occur: An XFS file system is created on the specified disk. The disk mounts to /var/lib/etcd . The content from /sysroot/ostree/deploy/rhcos/var/lib/etcd syncs to /var/lib/etcd . A restore of SELinux labels is forced for /var/lib/etcd . The old content is not removed. 
After the nodes are on a separate disk, update the etcd-mc.yml file with contents such as the following, replacing instances of <new_disk_name> with the noted device name: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Mount /dev/<new_disk_name> to /var/lib/etcd Before=local-fs.target Requires=systemd-mkfs@dev-<new_disk_name>.service After=systemd-mkfs@dev-<new_disk_name>.service var.mount [Mount] What=/dev/<new_disk_name> Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount Apply the modified version that removes the logic for creating and syncing the device to prevent the nodes from rebooting: USD oc replace -f etcd-mc.yml Verification steps Run the grep <new_disk_name> /proc/mounts command in a debug shell for the node to ensure that the disk mounted: USD oc debug node/<node_name> # grep <new_disk_name> /proc/mounts Example output /dev/nvme1n1 /var/lib/etcd xfs rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota 0 0 Additional resources Red Hat Enterprise Linux CoreOS (RHCOS) 1.7. Defragmenting etcd data For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes. Monitor these key metrics: etcd_server_quota_backend_bytes , which is the current quota limit etcd_mvcc_db_total_size_in_use_in_bytes , which indicates the actual database usage after a history compaction etcd_mvcc_db_total_size_in_bytes , which shows the database size, including free space waiting for defragmentation Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction. History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system. Defragmentation occurs automatically, but you can also trigger it manually. Note Automatic defragmentation is good for most cases, because the etcd operator uses cluster information to determine the most efficient operation for the user. 1.7.1. Automatic defragmentation The etcd Operator automatically defragments disks. No manual intervention is needed. Verify that the defragmentation process is successful by viewing one of these logs: etcd logs cluster-etcd-operator pod operator status error log Warning Automatic defragmentation can cause leader election failure in various OpenShift core components, such as the Kubernetes controller manager, which triggers a restart of the failing component. The restart is harmless and either triggers failover to the running instance or the component resumes work again after the restart. Example log output for successful defragmentation etcd member has been defragmented: <member_name> , memberID: <member_id> Example log output for unsuccessful defragmentation failed defrag on member: <member_name> , memberID: <member_id> : <error_message> 1.7.2. 
Manual defragmentation A Prometheus alert indicates when you need to use manual defragmentation. The alert is displayed in two cases: When etcd uses more than 50% of its available space for more than 10 minutes When etcd is actively using less than 50% of its total database size for more than 10 minutes You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the PromQL expression: (etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024 Warning Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover. Follow this procedure to defragment etcd data on each etcd member. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Determine which etcd member is the leader, because the leader should be defragmented last. Get the list of etcd pods: USD oc -n openshift-etcd get pods -l k8s-app=etcd -o wide Example output etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none> Choose a pod and run the following command to determine which etcd member is the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table Example output Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com . Defragment an etcd member. Connect to the running etcd container, passing in the name of a pod that is not the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com Unset the ETCDCTL_ENDPOINTS environment variable: sh-4.4# unset ETCDCTL_ENDPOINTS Defragment the etcd member: sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag Example output Finished defragmenting etcd member[https://localhost:2379] If a timeout error occurs, increase the value for --command-timeout until the command succeeds. 
Verify that the database size was reduced: sh-4.4# etcdctl endpoint status -w table --cluster Example output +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last. Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them. Check if there are any NOSPACE alarms: sh-4.4# etcdctl alarm list Example output memberID:12345678912345678912 alarm:NOSPACE Clear the alarms: sh-4.4# etcdctl alarm disarm 1.8. OpenShift Container Platform infrastructure components The following infrastructure workloads do not incur OpenShift Container Platform worker subscriptions: Kubernetes and OpenShift Container Platform control plane services that run on masters The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Service brokers Red Hat Quay Red Hat OpenShift Data Foundation Red Hat Advanced Cluster Manager Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information on infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. 1.9. Moving the monitoring solution The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map. 
Procedure Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute 1 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrasructure node, also add a matching toleration. Watch the monitoring pods move to the new machines: USD watch 'oc get pod -n openshift-monitoring -o wide' If a component has not moved to the infra node, delete the pod with this component: USD oc delete pod -n openshift-monitoring <pod> The component from the deleted pod is re-created on the infra node. 1.10. Moving the default registry You configure the registry Operator to deploy its pods to different nodes. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster. 
Procedure View the config/instance object: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: "56174" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status: ... Edit the config/instance object: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Verify the registry pod has been moved to the infrastructure node. Run the following command to identify the node where the registry pod is located: USD oc get pods -o wide -n openshift-image-registry Confirm the node has the label you specified: USD oc describe node <node_name> Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list. 1.11. Moving the router You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster.
Procedure View the IngressController custom resource for the router Operator: USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml The command output resembles the following text: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: "11341" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: "True" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default Edit the ingresscontroller resource and change the nodeSelector to use the infra label: USD oc edit ingresscontroller default -n openshift-ingress-operator spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Confirm that the router pod is running on the infra node. View the list of router pods and note the node name of the running pod: USD oc get pod -n openshift-ingress -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none> In this example, the running pod is on the ip-10-0-217-226.ec2.internal node. View the node status of the running pod: USD oc get node <node_name> 1 1 Specify the <node_name> that you obtained from the pod list. Example output NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.24.0 Because the role list includes infra , the pod is running on the correct node. 1.12. Infrastructure node sizing Infrastructure nodes are nodes that are labeled to run pieces of the OpenShift Container Platform environment. The infrastructure node resource requirements depend on the cluster age, nodes, and objects in the cluster, as these factors can lead to an increase in the number of metrics or time series in Prometheus. The following infrastructure node size recommendations are based on the results of cluster maximums and control plane density focused testing.
Number of worker nodes | Cluster density, or number of namespaces | CPU cores | Memory (GB)
27 | 500 | 4 | 24
120 | 1000 | 8 | 48
252 | 4000 | 16 | 128
501 | 4000 | 32 | 128
In general, three infrastructure nodes are recommended per cluster. Important These sizing recommendations should be used as a guideline. Prometheus is a highly memory intensive application; the resource usage depends on various factors including the number of nodes, objects, the Prometheus metrics scraping interval, metrics or time series, and the age of the cluster.
In addition, the router resource usage can also be affected by the number of routes and the amount and type of inbound requests. These recommendations apply only to infrastructure nodes hosting Monitoring, Ingress, and Registry infrastructure components installed during cluster creation. Note In OpenShift Container Platform 4.11, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and earlier versions. This influences the stated sizing recommendations. 1.13. Additional resources OpenShift Container Platform cluster maximums Creating infrastructure machine sets
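The three preceding procedures each verify their own component. As a convenience, the following is a minimal shell sketch, not an additional documented procedure, that checks all three placements at once after the node selectors and tolerations are applied; it only combines the oc commands already shown in this chapter and assumes the default openshift-monitoring, openshift-image-registry, and openshift-ingress namespaces.

# Show where the monitoring, registry, and router pods are currently scheduled.
for ns in openshift-monitoring openshift-image-registry openshift-ingress; do
  echo "== ${ns} =="
  oc get pods -n "${ns}" -o wide
done

# For any node reported above, confirm that it carries the infra role label.
oc describe node <node_name> | grep node-role.kubernetes.io/infra

If a pod is still on a worker node, deleting it as described in the monitoring section lets the scheduler re-create it on an infrastructure node.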
[ "kubeletConfig: podsPerCore: 10", "kubeletConfig: maxPods: 250", "oc get kubeletconfig", "NAME AGE set-max-pods 15m", "oc get mc | grep kubelet", "99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1", "oc label machineconfigpool worker custom-kubelet=set-max-pods", "oc get machineconfig", "oc describe node <node_name>", "oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94", "Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>", "oc label machineconfigpool worker custom-kubelet=set-max-pods", "oc create -f change-maxPods-cr.yaml", "oc get kubeletconfig", "NAME AGE set-max-pods 15m", "oc describe node <node_name>", "Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1", "oc get kubeletconfigs set-max-pods -o yaml", "spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success", "oc edit machineconfigpool worker", "spec: maxUnavailable: <node_count>", "sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf", "sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/cloud-bulldozer/etcd-perf", "oc debug node/<node_name>", "lsblk", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Make File System on /dev/<new_disk_name> DefaultDependencies=no BindsTo=dev-<new_disk_name>.device After=dev-<new_disk_name>.device var.mount Before=systemd-fsck@dev-<new_disk_name>.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/lib/systemd/systemd-makefs xfs /dev/<new_disk_name> TimeoutSec=0 [Install] WantedBy=var-lib-containers.mount enabled: true name: systemd-mkfs@dev-<new_disk_name>.service - contents: | [Unit] Description=Mount /dev/<new_disk_name> to /var/lib/etcd Before=local-fs.target Requires=systemd-mkfs@dev-<new_disk_name>.service After=systemd-mkfs@dev-<new_disk_name>.service var.mount [Mount] What=/dev/<new_disk_name> Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount - contents: | [Unit] Description=Sync etcd data if new mount is empty DefaultDependencies=no After=var-lib-etcd.mount var.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/usr/bin/test ! 
-d /var/lib/etcd/member ExecStart=semanage fcontext -a -e /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target enabled: true name: sync-var-lib-etcd-to-etcd.service - contents: | [Unit] Description=Restore recursive SELinux security contexts DefaultDependencies=no After=var-lib-etcd.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/restorecon -R /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target enabled: true name: restorecon-var-lib-etcd.service", "oc login -u <username> -p <password>", "oc create -f etcd-mc.yml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Mount /dev/<new_disk_name> to /var/lib/etcd Before=local-fs.target Requires=systemd-mkfs@dev-<new_disk_name>.service After=systemd-mkfs@dev-<new_disk_name>.service var.mount [Mount] What=/dev/<new_disk_name> Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount", "oc replace -f etcd-mc.yml", "oc debug node/<node_name>", "grep <new_disk_name> /proc/mounts", "/dev/nvme1n1 /var/lib/etcd xfs rw,seclabel,relatime,attr2,inode64,logbufs=8,logbsize=32k,prjquota 0 0", "etcd member has been defragmented: <member_name> , memberID: <member_id>", "failed defrag on member: <member_name> , memberID: <member_id> : <error_message>", "oc -n openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table", "Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. 
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com", "sh-4.4# unset ETCDCTL_ENDPOINTS", "sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag", "Finished defragmenting etcd member[https://localhost:2379]", "sh-4.4# etcdctl endpoint status -w table --cluster", "+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "sh-4.4# etcdctl alarm list", "memberID:12345678912345678912 alarm:NOSPACE", "sh-4.4# etcdctl alarm disarm", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: 
\"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.24.0" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/scalability_and_performance/recommended-host-practices
Chapter 3. Certification lifecycle
Chapter 3. Certification lifecycle 3.1. Product certification lifecycle Starting with Red Hat OpenStack Platform (RHOSP) 16, certification is granted on a specific major and minor release of RHOSP. For example, in RHOSP 16.0, 16 is the major release and 0 is the minor release. While the certification remains valid for the life of the major release, in our example RHOSP 16, there are instances where recertification will be required. Those instances are described in Recertification . 3.2. Continual testing You are responsible for your own internal continual testing over the lifespan of your product and the Red Hat OpenStack Platform major version it is certified on. You are encouraged to use a CI system, such as DCI , that includes testing with the certification tests. Certification testing results from a CI system are not required to be submitted to Red Hat, but should be monitored by the partner for regressions and unexpected behaviors and to indicate when a recertification may be required. You may have access to pre-released software builds of Red Hat OpenStack Platform and are encouraged to begin your initial and CI testing and engagement with the Red Hat Certification team prior to the Red Hat OpenStack Platform version being made generally available to customers. Final testing and container builds must be conducted on the generally available (GA) released containers for that major release. 3.3. Recertification Red Hat will notify you of, and you are requested to recertify your product in, the following cases: A new major release of the Red Hat OpenStack Platform. A new minor release of the Red Hat OpenStack Platform that adds features or functionality not previously covered in an earlier certification that the partner wants to add to their certification. A new minor release of the Red Hat OpenStack Platform that updates the kernel, if the partner product relies on kernel modules. You will notify Red Hat of, and you are required to recertify, your product in the following cases: A new major update of the partner's product that invalidates the testing conducted in the original certification. A new minor update of the partner's product that would alter the original test plan of the certification. A new certification should be submitted for each of these cases. Where possible, in minor release updates of Red Hat and partner products, the certification efforts and test plan will focus on the new features and functionality not already tested in prior certifications, because the established feature functionality is expected to be maintained through the required continual testing. When a customized container image is provided as part of your OpenStack certification, it is important to rebuild this customized container image every time a Red Hat OpenStack Platform z-stream is released for a specific major-minor release. This ensures that your image takes advantage of the latest bug fixes and CVE fixes. Note For RHOSP container recertification, it is not required to revalidate the functionality of your product if it has not undergone any modification.
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_policy_guide/assembly-certification-lifecycle_rhosp-pol-certification-targets
3.7. Search: Determining Search Order
3.7. Search: Determining Search Order You can determine the sort order of the returned information by using sortby . Sort direction ( asc for ascending, desc for descending) can be included. For example: events: severity > normal sortby time desc This query returns all Events whose severity is higher than Normal, sorted by time (descending order).
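For comparison, the same query can use the ascending keyword described above; this is only an illustrative variation of the documented example, not an additional documented query: events: severity > normal sortby time asc This returns the same Events, sorted oldest first.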
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/search_determining_search_order
Chapter 12. Configuring the Key Recovery Authority
Chapter 12. Configuring the Key Recovery Authority This chapter describes recovering keys with the Key Recovery Authority (KRA). 12.1. Manually setting up key archival IMPORTANT This procedure is unnecessary if the CA and KRA are in the same security domain. This procedure is only required for CAs outside the security domain. Configuring key archival manually demands two things: Having a trusted relationship between a CA and a KRA. Having the enrollment form enabled for key archival, meaning it has key archival configured and the KRA transport certificate stored in the form. In the same security domain, both of these configuration steps are done automatically when the KRA is configured because it is configured to have a trusted relationship with any CA in its security domain. It is possible to create that trusted relationship with Certificate Managers outside its security domain by manually configuring the trust relationships and profile enrollment forms. If necessary, create a trusted manager to establish a relationship between the Certificate Manager and the KRA. For the CA to be able to request key archival of the KRA, the two subsystems must be configured to recognize, trust, and communicate with each other. Verify that the Certificate Manager has been set up as a privileged user, with an appropriate TLS mutual authentication certificate, in the internal database of the KRA. By default, the Certificate Manager uses its subsystem certificate for TLS mutual authentication to the KRA. Copy the base-64 encoded transport certificate for the KRA. The transport certificate is stored in the KRA's certificate database, which can be retrieved using the certutil utility. If the transport certificate is signed by a Certificate Manager, then a copy of the certificate is available through the Certificate Manager end-entities page in the Retrieval tab. Alternatively, download the transport certificate using the pki utility: List the certificate, using the pki ca-cert-find command and the KRA transport certificate's subject common name. For example: Show the details of the certificate and save it as a .pem file: Note For more information on the pki command options, run the pki --help command. Add the transport certificate to the CA's CS.cfg file. Then edit the enrollment form and add or replace the transport certificate value in the keyTransportCert method. 12.2. Encryption of KRA operations Certificate System encrypts the following key operations in the Key Recovery Authority (KRA): Archival: Encryption of keys to archive in a Certificate Request Message Format (CRMF) package for transport to the KRA. Encryption of the key for storage in the KRA LDAP database. Recovery: Encryption of a user-provided session key for transport to the key. Decryption of the secret and re-encryption using the user provided session key or creation of a PKCS#12 package. Generation: Encryption of a generated key for storage. 12.2.1. How clients manage key operation encryption The Certificate System client automatically uses the encryption algorithm set in the KRA's configuration and no further actions are required. 12.2.2. Configuring the encryption algorithm in the KRA Note Only AES CBC (in case when kra.allowEncDecrypt.archival=true and kra.allowEncDecrypt.recovery=true ) and AES Key Wrap (in case when kra.allowEncDecrypt.archival=false and kra.allowEncDecrypt.recovery=false ) are allowed in the following configuration. 
Any FIPS 140-2 validated HSM that supports either algorithm is allowed for the key archival and recovery feature provided by KRA. Certificate System defines groups of configuration parameters related to key operation encryption in the /var/lib/pki/pki-instance_name/conf/kra/CS.cfg file. We recommend the following set of parameters (see note above for other option): Each group ( kra.storageUnit.wrapping.0. * vs kra.storageUnit.wrapping.1. *) has individual settings, and the number defines which settings belong to the same configuration group. The current configuration group is set in the kra.storageUnit.wrapping.choice parameter in the /var/lib/pki/pki-instance_name/conf/kra/CS.cfg file. Ensure that kra.storageUnit.wrapping.choice=1 is set in the configuration file before continuing. Note Certificate System adds the information required to decrypt the data to the record in the KRA database. Therefore, even after changing the encryption algorithm, Certificate System is still able to decrypt data previously stored in the KRA using a different encryption algorithm. 12.2.2.1. Explanation of parameters and their values Each secret (a "payload") is encrypted with a session key. Parameters controlling this encryption are prefixed with payload . The set of parameters to be used depends on the value of kra.allowEncDecrypt.archival and kra.allowEncDecrypt.recovery . By default, both of these are false. See Section 12.2.2.2, "Solving limitations of HSMs when using AES encryption in KRAs" for the effect of these two parameters on HSMs. When kra.allowEncDecrypt.archival and kra.allowEncDecrypt.recovery are both false: payloadWrapAlgorithm determines the wrapping algorithm used. The only one valid choice is AES KeyWrap . When payloadWrapAlgorithm=AES/CBC/PKCS5Padding , then payloadWrapIVLength=16 has to be specified to tell PKI that we need to generate an IV (as CBC requires one). When kra.allowEncDecrypt.archival and kra.allowEncDecrypt.recovery are both true: payloadEncryptionAlgorithm determines the encryption algorithm used. The only valid choice is AES . payloadEncryptionMode determines the block chaining mode. The only valid choice is CBC . payloadEncryptionPadding determines the padding scheme. The only valid choice is PKCS5Padding . The session key is then wrapped with the KRA Storage Certificate, an RSA token. Parameters controlling the session key and its encryption are prefixed with sessionKey . sessionKeyType is the type of key to generate. The only valid choice is AES . sessionKeyLength is the length of the generated session key. Valid choices are 128 and 256 , to encrypt the payload with 128-bit AES or 256-bit AES respectively. sessionKeyWrapAlgorithm is the type of key the KRA Storage Certificate is. The only valid choice in this guide is RSA . 12.2.2.2. Solving limitations of HSMs when using AES encryption in KRAs If you run Certificate System with AES enabled in the KRA, but the Hardware Security Module (HSM) does not support the AES key wrapping feature, key archival fails. To solve the problem, the following solutions are supported: the section called "Selecting a different algorithm for the key wrapping" the section called "Setting the KRA into encryption mode" Selecting a different algorithm for the key wrapping Sometimes the KRA does not support the default key wrapping algorithm, but it supports other algorithms. 
For example, to use AES-128-CBC as the key wrapping algorithm: Set the following parameters in the /var/lib/pki/pki-instance_name/conf/kra/CS.cfg file: Restart the instance: OR if using the Nuxwdog watchdog: If the KRA runs in a different instance than the CA, you need to restart both instances. Selecting a different algorithm for the key wrapping has the benefit that if the HSM later adds support for AES key wrapping, you can revert the settings because the key records have the relevant information set. This configuration uses the kra.storageUnit.wrapping.1.payloadWrap{Algorithm,IVLen} and kra.storageUnit.wrapping.1.payloadEncryption{Algorithm,Mode,Padding} parameters. Setting the KRA into encryption mode If the HSM does not support any KeyWrap algorithms, on some occasions it is necessary to place the KRA into Encryption Mode. When setting the KRA into encryption mode, all keys will be stored using encryption algorithms rather than key wrapping algorithms. To set the KRA into encryption mode: Set the following parameters in the /var/lib/pki/pki-instance_name/conf/kra/CS.cfg file to true : Restart the service: OR if using the Nuxwdog watchdog: If the KRA runs in a different instance than the CA, you need to restart both instances. This configuration uses kra.storageUnit.wrapping.1.payloadEncryption{Algorithm,Mode,Padding} and kra.storageUnit.wrapping.1.payloadWrap{Algorithm,IVLen} parameters. Note If you later switch to a different algorithm for the key wrapping according to the section called "Selecting a different algorithm for the key wrapping" , you must manually add the appropriate metadata to records created before you set the KRA into encryption mode. 12.3. Setting up agent-approved key recovery schemes Key recovery agents collectively authorize and retrieve private encryption keys and associated certificates in a PKCS#12 package. To authorize key recovery, the required number of recovery agents access the KRA agent services page and use the Authorize Recovery area to enter each authorization separately. One of the agents initiates the key recovery process. For a synchronous recovery, each approving agent uses the reference number (which was returned with the initial request) to open the request and then authorizes key recovery separately. For an asynchronous recovery, the approving agents all search for the key recovery request and then authorize the key recovery. Either way, when all of the authorizations are entered, the KRA checks the information. If the information presented is correct, it retrieves the requested key and returns it along with the corresponding certificate in the form of a PKCS #12 package to the agent who initiated the key recovery process. The key recovery agent scheme configures the KRA to recognize to which group the key recovery agents belong and specifies how many of these agents are required to authorize a key recovery request before the archived key is restored. 12.3.1. Configuring agent-approved key recovery in the command line To set up agent-initiated key recovery, edit two parameters in the KRA configuration: Set the number of recovery managers required to approve a recovery. Set the group to which these users must belong. These parameters are set in the KRA's CS.cfg configuration file. Stop the server before editing the configuration file. OR if using the Nuxwdog watchdog: Open the KRA's CS.cfg file. Edit the two recovery scheme parameters. Restart the server.
OR Note For more information on how to configure agent-approved key recovery in the console, see 4.1 Configuring Agent-Approved Key Recovery in the Console in the Administration Guide (Common Criteria Edition) . 12.3.2. Customizing the Key Recovery Form The default key agent scheme requires a single agent from the Key Recovery Authority Agents group to be in charge of authorizing key recovery. It is also possible to customize the appearance of the key recovery form. Key recovery agents need an appropriate page to initiate the key recovery process. By default, the KRA's agent services page includes the appropriate HTML form to allow key recovery agents to initiate key recovery, authorize key recovery requests, and retrieve the encryption keys. This form is located in the /var/lib/pki/pki-tomcat/kra/webapps/kra/agent/kra/ directory, called confirmRecover.html . IMPORTANT If the key recovery confirmation form is customized, do not delete any of the information for generating the response. This is vital to the functioning of the form. Restrict any changes to the content and appearance of the form. 12.3.3. Rewrapping keys in a new private storage key Some private keys (mainly in older deployments) were wrapped in SHA-1, 1024-bit storage keys when they were archived in the KRA. These algorithms have become less secure as processor speeds improve and algorithms have been broken. As a security measure, it is possible to rewrap the private keys in a new, stronger storage key (SHA-256, 2048-bit keys). 12.3.3.1. About KRATool Rewrapping and moving keys and key enrollment and recovery requests is done using the KRATool utility (known in previous versions of Red Hat Certificate System as DRMTool ). The KRATool performs two operations: it can rewrap keys with a new private key, and it can renumber attributes in the LDIF file entries for key records, including enrollment and recovery requests. At least one operation (rewrap or renumber) must be performed and both can be performed in a single invocation. For rewrapping keys, the tool accesses the key entries in an exported LDIF file for the original KRA, unwraps the keys using the original KRA storage key, and then rewraps the keys in the new KRA's stronger storage key. Example 12.1. Rewrapping Keys When multiple KRA instances are being merged into a single instance, it is important to make sure that no key or request records have conflicting CNs, DNs, serial numbers, or request ID numbers. These values can be processed to append a new, larger number to the existing values. Example 12.2. Renumbering keys The KRATool options and its configuration file are discussed in more detail in the KRATool(1) man page. 12.3.3.2. Rewrapping and Merging Keys from One or More KRAs into a Single KRA This procedure rewraps the keys stored in one or more Certificate System KRA (for example, pki-tomcat on sourcekra.example.com ) and stores them into another Certificate System KRA (for example, pki-tomcat-2 on targetkra.example.com ). This is not the only use case; the tool can be run on the same instance as both the source and target, to rewrap existing keys, or it can be used simply to copy keys from multiple KRA instances into a single instance without rewrapping the keys at all. IMPORTANT The storage key size and type in the pki-tomcat-2 KRA must be set to 2048-bit and RSA. Log in to targetkra.example.com . Stop the pki-tomcat-2 KRA. Create a data directory to store the key data that will be imported from the pki-tomcat KRA instance residing on sourcekra.example.com .
Export the public storage certificate for the pki-tomcat-2 KRA to a flat file in the new data directory. Stop the Directory Server instance for the pki-tomcat-2 KRA, if it is on the same machine. Export the configuration information for the pki-tomcat-2 KRA. IMPORTANT Make sure that the LDIF file contains a single blank line at the end. Log in to sourcekra.example.com . Stop the pki-tomcat KRA. Create a data directory to store the key data that will be exported to the pki-tomcat-2 KRA instance residing on targetkra.example.com . Stop the Directory Server instance for the pki-tomcat KRA, if it is on the same machine. Export the configuration information for the pki-tomcat KRA. IMPORTANT Make sure that the LDIF file contains a single blank line at the end. Copy the pki-tomcat KRA NSS security databases to this directory. Copy the file with the public storage key from the pki-tomcat-2 KRA machine to this machine. For example: If necessary, edit the default KRATool.cfg file to use with the tool. The default file can also be used without changes. Run the KRATool ; all of these parameters should be on a single line: Note The command may prompt for the password to the token stored in the pki-tomcat KRA NSS security databases. When it is done, the command creates the file specified in the -target_ldif_file parameter, source2targetKRA.ldif . Copy this LDIF file over to the pki-tomcat-2 KRA machine. For example: Important Make sure that the LDIF file contains a single blank line at the end. If multiple KRA instances are being merged, their data can be merged into a single import operation. Simply perform the same procedure for every KRA that will be merged. Important Make sure to specify unique values for the -target_ldif_file parameter to create separate LDIF files, and to specify unique -append_id_offset values so that there are no conflicts when the LDIF files are concatenated. On the pki-tomcat-2 KRA machine, import the LDIF file(s) with the other key data by concatenating the pki-tomcat-2 KRA configuration LDIF file and every exported LDIF file for the other KRA instances. For example: Import this combined LDIF file into the Directory Server database for the pki-tomcat-2 KRA instance. Start the Directory Server instance for the pki-tomcat-2 KRA. Start the pki-tomcat-2 KRA.
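As a compact recap of the agent-approved recovery configuration described in Section 12.3.1, the following sketch gathers the two CS.cfg parameters and the stop and start commands in one place. The values shown (three required agents and the Key Recovery Authority Agents group) and the pki-tomcatd@instance_name.service unit name come from the examples in this chapter; substitute your own instance name, agent count, and group.

# Stop the KRA instance before editing its configuration.
systemctl stop pki-tomcatd@instance_name.service

# In /var/lib/pki/pki-tomcat/kra/conf/CS.cfg, set the recovery scheme parameters:
#   kra.noOfRequiredRecoveryAgents=3
#   kra.recoveryAgentGroup=Key Recovery Authority Agents

# Start the KRA again so that the new scheme takes effect.
systemctl start pki-tomcatd@instance_name.service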
[ "pki -p <Security Domain Port> -d <nss_db> -w <password> ca-cert-find --name \"DRM Transport Certificate\"", "pki ca-cert-show <serial_number> --output transport.pem", "ca.connector.KRA.enable=true ca.connector.KRA.host=server.example.com ca.connector.KRA.local=false ca.connector.KRA.nickName=subsystemCert cert-pki-ca ca.connector.KRA.port=8443 ca.connector.KRA.timeout=30 ca.connector.KRA.transportCert=MIIDbDCCAlSgAwIBAgIBDDANBgkqhkiG9w0BAQUFADA6MRgwFgYDVQQKEw9Eb21haW4gc28gbmFtZWQxHjAcBgNVBAMTFUNlcnRpZmljYXRlIEF1dGhvcml0eTAeFw0wNjExMTQxODI2NDdaFw0wODEwMTQxNzQwNThaMD4xGDAWBgNVBAoTD0RvbWFpbiBzbyBuYW1lZDEiMCAGA1UEAxMZRFJNIFRyYW5zcG9ydCBDZXJ0aWZpY2F0ZTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKnMGB3WkznueouwZjrWLFZBLpKt6TimNKV9iz5s0zrGUlpdt81/BTsU5A2sRUwNfoZSMs/d5KLuXOHPyGtmC6yVvaY719hr9EGYuv0Sw6jb3WnEKHpjbUO/vhFwTufJHWKXFN3V4pMbHTkqW/x5fu/3QyyUre/5IhG0fcEmfvYxIyvZUJx+aQBW437ATD99Kuh+I+FuYdW+SqYHznHY8BqOdJwJ1JiJMNceXYAuAdk+9t70RztfAhBmkK0OOP0vH5BZ7RCwE3Y/6ycUdSyPZGGc76a0HrKOz+lwVFulFStiuZIaG1pv0NNivzcj0hEYq6AfJ3hgxcC1h87LmCxgRWUCAwEAAaN5MHcwHwYDVR0jBBgwFoAURShCYtSg+Oh4rrgmLFB/Fg7X3qcwRAYIKwYBBQUHAQEEODA2MDQGCCsGAQUFBzABhihodHRwOi8vY2x5ZGUucmR1LnJlZGhhdC5jb206OTE4MC9jYS9vY3NwMA4GA1UdDwEB/wQEAwIE8DANBgkqhkiG9w0BAQUFAAOCAQEAFYz5ibujdIXgnJCbHSPWdKG0T+FmR67YqiOtoNlGyIgJ42fi5lsDPfCbIAe3YFqmF3wU472h8LDLGyBjy9RJxBj+aCizwHkuoH26KmPGntIayqWDH/UGsIL0mvTSOeLqI3KM0IuH7bxGXjlION83xWbxumW/kVLbT9RCbL4216tqq5jsjfOHNNvUdFhWyYdfEOjpp/UQZOhOM1d8GFiw8N8ClWBGc3mdlADQp6tviodXueluZ7UxJLNx3HXKFYLleewwIFhC82zqeQ1PbxQDL8QLjzca+IUzq6Cd/t7OAgvv3YmpXgNR0/xoWQGdM1/YwHxtcAcVlskXJw5ZR0Y2zA== ca.connector.KRA.uri=/kra/agent/kra/connector", "vim /var/lib/pki/pki-tomcat/ca/webapps/ca/ee/ca/ProfileSelect.template var keyTransportCert = MIIDbDCCAlSgAwIBAgIBDDANBgkqhkiG9w0BAQUFADA6MRgwFgYDVQQKEw9Eb21haW4gc28gbmFtZWQxHjAcBgNVBAMTFUNlcnRpZmljYXRlIEF1dGhvcml0eTAeFw0wNjExMTQxODI2NDdaFw0wODEwMTQxNzQwNThaMD4xGDAWBgNVBAoTD0RvbWFpbiBzbyBuYW1lZDEiMCAGA1UEAxMZRFJNIFRyYW5zcG9ydCBDZXJ0aWZpY2F0ZTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKnMGB3WkznueouwZjrWLFZBLpKt6TimNKV9iz5s0zrGUlpdt81/BTsU5A2sRUwNfoZSMs/d5KLuXOHPyGtmC6yVvaY719hr9EGYuv0Sw6jb3WnEKHpjbUO/vhFwTufJHWKXFN3V4pMbHTkqW/x5fu/3QyyUre/5IhG0fcEmfvYxIyvZUJx+aQBW437ATD99Kuh+I+FuYdW+SqYHznHY8BqOdJwJ1JiJMNceXYAuAdk+9t70RztfAhBmkK0OOP0vH5BZ7RCwE3Y/6ycUdSyPZGGc76a0HrKOz+lwVFulFStiuZIaG1pv0NNivzcj0hEYq6AfJ3hgxcC1h87LmCxgRWUCAwEAAaN5MHcwHwYDVR0jBBgwFoAURShCYtSg+Oh4rrgmLFB/Fg7X3qcwRAYIKwYBBQUHAQEEODA2MDQGCCsGAQUFBzABhihodHRwOi8vY2x5ZGUucmR1LnJlZGhhdC5jb206OTE4MC9jYS9vY3NwMA4GA1UdDwEB/wQEAwIE8DANBgkqhkiG9w0BAQUFAAOCAQEAFYz5ibujdIXgnJCbHSPWdKG0T+FmR67YqiOtoNlGyIgJ42fi5lsDPfCbIAe3YFqmF3wU472h8LDLGyBjy9RJxBj+aCizwHkuoH26KmPGntIayqWDH/UGsIL0mvTSOeLqI3KM0IuH7bxGXjlION83xWbxumW/kVLbT9RCbL4216tqq5jsjfOHNNvUdFhWyYdfEOjpp/UQZOhOM1d8GFiw8N8ClWBGc3mdlADQp6tviodXueluZ7UxJLNx3HXKFYLleewwIFhC82zqeQ1PbxQDL8QLjzca+IUzq6Cd/t7OAgvv3YmpXgNR0/xoWQGdM1/YwHxtcAcVlskXJw5ZR0Y2zA==;", "kra.allowEncDecrypt.archival=false kra.allowEncDecrypt.recovery=false kra.storageUnit.wrapping.1.payloadEncryptionAlgorithm=AES kra.storageUnit.wrapping.1.payloadEncryptionIVLen=16 kra.storageUnit.wrapping.1.payloadEncryptionMode=CBC kra.storageUnit.wrapping.1.payloadEncryptionPadding=PKCS5Padding kra.storageUnit.wrapping.1.payloadWrapAlgorithm=AES KeyWrap/Padding kra.storageUnit.wrapping.1.sessionKeyKeyGenAlgorithm=AES kra.storageUnit.wrapping.1.sessionKeyLength=128 kra.storageUnit.wrapping.1.sessionKeyType=AES kra.storageUnit.wrapping.1.sessionKeyWrapAlgorithm=RSA kra.storageUnit.wrapping.choice=1", 
"kra.storageUnit.wrapping.1.payloadWrapAlgorithm=AES KeyWrap/Padding kra.storageUnit.wrapping.1.payloadWrapIVLen=16 kra.storageUnit.wrapping.1.sessionKeyLength=128", "systemctl restart pki-tomcatd@instance_name.service", "systemctl restart pki-tomcatd-nuxwdog@instance_name.service", "kra.allowEncDecrypt.archival=true kra.allowEncDecrypt.recovery=true", "systemctl restart pki-tomcatd@instance_name.service", "systemctl restart pki-tomcatd-nuxwdog@instance_name.service", "systemctl stop pki-tomcatd@instance_name.service", "systemctl stop pki-tomcatd-nuxwdog@instance_name.service", "vim /var/lib/pki/pki-tomcat/kra/conf/CS.cfg", "kra.noOfRequiredRecoveryAgents=3 kra.recoveryAgentGroup=Key Recovery Authority Agents", "systemctl start pki-tomcatd@instance_name.service", "systemctl start pki-tomcatd-nuxwdog@instance_name.service", "KRATool -kratool_config_file \"/usr/share/pki/java-tools/KRATool.cfg\" -source_ldif_file \"/tmp/files/originalKRA.ldif\" -target_ldif_file \"/tmp/files/newKRA.ldif\" -log_file \"/tmp/kratool.log\" -source_pki_security_database_path \"/tmp/files/\" -source_storage_token_name \"Internal Key Storage Token\" -source_storage_certificate_nickname \"storageCert cert-pki-tomcat KRA\" -target_storage_certificate_file \"/tmp/files/omega.cert\"", "KRATool -kratool_config_file \"/usr/share/pki/java-tools/KRATool.cfg\" -source_ldif_file \"/tmp/files/originalKRA.ldif\" -target_ldif_file \"/tmp/files/newKRA.ldif\" -log_file \"/tmp/kratool.log\" -append_id_offset 100000000000 -source_kra_naming_context \"pki-tomcat-KRA\" -target_kra_naming_context \"pki-tomcat-2-KRA\" -process_requests_and_key_records_only", "systemctl stop [email protected]", "mkdir -p /export/pki", "certutil -L -d /var/lib/pki/pki-tomcat-2/alias -n \"storageCert cert-pki-tomcat-2 KRA\" -a > /export/pki/targetKRA.cert", "systemctl stop dirsrv.target", "grep nsslapd-localuser /etc/dirsrv/slapd-instanceName/dse.ldif nsslapd-localuser: dirsrv chown dirsrv:dirsrv /export/pki /usr/lib64/dirsrv/slapd-instanceName/db2ldif -n pki-tomcat-2-KRA -a /export/pki/pki-tomcat-2.ldif", "systemctl stop [email protected]", "mkdir -p /export/pki", "systemctl stop dirsrv.target", "grep nsslapd-localuser /etc/dirsrv/slapd-instanceName/dse.ldif nsslapd-localuser: dirsrv chown dirsrv:dirsrv /export/pki /usr/lib64/dirsrv/slapd-instanceName/db2ldif -n pki-tomcat-KRA -a /export/pki/pki-tomcat.ldif", "cp -p /var/lib/pki/pki-tomcat/alias/cert9.db /export/pki cp -p /var/lib/pki/pki-tomcat/alias/key4.db /export/pki cp -p /var/lib/pki/pki-tomcat/alias/pkcs11.txt /export/pki", "cd /export/pki sftp [email protected] sftp> cd /export/pki sftp> get targetKRA.cert sftp> quit", "KRATool -kratool_config_file \"/usr/share/pki/java-tools/KRATool.cfg\" -source_ldif_file /export/pki/pki-tomcat.ldif -target_ldif_file /export/pki/source2targetKRA.ldif -log_file /export/pki/kratool.log -source_pki_security_database_path /export/pki -source_storage_token_name 'Internal Key Storage Token' -source_storage_certificate_nickname 'storageCert cert-pki-tomcat KRA' -target_storage_certificate_file /export/pki/targetKRA.cert -append_id_offset 100000000000 -source_kra_naming_context \"pki-tomcat-KRA\" -target_kra_naming_context \"pki-tomcat-2-KRA\" -process_requests_and_key_records_only", "scp /export/pki/source2targetKRA.ldif [email protected]:/export/pki", "cd /export/pki cat pki-tomcat-2.ldif source2targetKRA.ldif > combined.ldif", "/usr/lib64/dirsrv/slapd-instanceName/ldif2db -n pki-tomcat-2-KRA -i /export/pki/combined.ldif", "systemctl start dirsrv.target", "systemctl 
start [email protected]" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide_common_criteria_edition/configuring_key_recovery_authority
5.247. policycoreutils
5.247. policycoreutils 5.247.1. RHBA-2012:0969 - policycoreutils bug fix update Updated policycoreutils packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The policycoreutils packages contain the core utilities that are required for the basic operation of a Security-Enhanced Linux (SELinux) system and its policies. These updated policycoreutils packages provide fixes for the following bugs: BZ# 784595 The semanage utility did not produce correct audit messages in the Common Criteria certified environment. This update modifies semanage so that it now sends correct audit events when the user is assigned to or removed from a new role. This update also modifies behavior of semanage concerning the user's SELinux Multi-Level Security (MLS) and Multi-Category Security (MCS) range. The utility now works with the user's default range of the MLS/MCS security level instead of the lowest. In addition, the semanage(8) manual page has been corrected to reflect the current semanage functionality. BZ# 751313 Prior to this update, the ppc and ppc64 versions of the policycoreutils package conflicted with each other when installed on the same system. This update fixes this bug; ppc and ppc64 versions of the package can now be installed simultaneously. BZ# 684015 The missing exit(1) function call in the underlying code of the sepolgen-ifgen utility could cause the restorecond daemon to access already freed memory when retrieving user's information. This would cause restorecond to terminate unexpectedly with a segmentation fault. With this update, restorecond has been modified to check the return value of the getpwuid() function to avoid this situation. BZ# 786191 When installing packages on the system in Federal Information Processing Standard (FIPS) mode, parsing errors could occur and installation failed. This was caused by the "/usr/lib64/python2.7/site-packages/sepolgen/yacc.py" parser, which used MD5 checksums that are not supported in FIPS mode. This update modifies the parser to use SHA-256 checksums and installation process is now successful. BZ# 786664 Due to a pam_namespace issue which caused a leak of mount points to the parent namespace, polyinstantiated directories could be seen by users other than the owner of that directory. With this update, the mount points no longer leak to the parent namespace, and users can only see directories they own. BZ# 806736 , BZ# 807011 When a user or a program ran the "semanage fcontext" command, a traceback error was returned. This was due to a typographical error in the source code of the semanage command. This updates fixes this error, and executing the semanage fcontext command works as expected. All users of policycoreutils are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/policycoreutils
2.4. Business and Technical Metadata
2.4. Business and Technical Metadata Metadata can include different types of information about a piece of data. Technical metadata describes the information required to access the data, such as where the data resides or the structure of the data in its native environment. Business metadata details other information about the data, such as keywords related to the meta object or notes about the meta object. Note The terms technical and business metadata, refer to the content of the metadata, namely what type of information is contained in the metadata. Do not confuse these with the terms physical and view metadata that indicate what the metadata represents.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/Business_and_Technical_Metadata
Chapter 3. Adding sources and credentials
Chapter 3. Adding sources and credentials To prepare Discovery to run scans, you must add the parts of your IT infrastructure that you want to scan as one or more sources. You must also add the authentication information, such as a username and password or SSH key, that is required to access those sources as one or more credentials. Because of differing configuration requirements, you add sources and credentials according to the type of source that you are going to scan. Learn more As part of the general process of adding sources and credentials that encompass the different parts of your IT infrastructure, you might need to complete a number of tasks. Add network sources and credentials to scan assets such as physical machines, virtual machines, or containers in your network. To learn more, see the following information: Adding Network sources and credentials Add satellite sources and credentials to scan your deployment of Red Hat Satellite Server to find the assets that it manages. To learn more, see the following information: Adding Satellite sources and credentials Add vcenter sources and credentials to scan your deployment of vCenter Server to find the assets that it manages. To learn more, see the following information: Adding vCenter sources and credentials Add OpenShift sources and credentials to scan your deployment of Red Hat OpenShift Container Platform clusters. To learn more, see the following information: Adding OpenShift sources and credentials Add Ansible sources and credentials to scan your deployment of Ansible Automation Platform to find the secured clusters that it manages. To learn more, see the following information: Adding Ansible sources and credentials Add RHACS sources and credentials to scan your deployment of Red Hat Advanced Cluster Security for Kubernetes to find the secured clusters that RHACS manages. To learn more, see the following information: Adding RHACS sources and credentials 3.1. Adding network sources and credentials To run a scan on one or more of the physical machines, virtual machines, or containers on your network, you must add a source that identifies each of the assets to scan. Then you must add credentials that contain the authentication data to access each asset. Learn more Add one or more network sources and credentials to provide the information needed to scan the assets in your network. To learn more, see the following information: To add a network source, see Adding network sources . To add a network credential, see Adding network credentials . To learn more about sources and credentials and how Red Hat Discovery uses them, see the following information: About sources and credentials To learn more about how Red Hat Discovery authenticates with assets on your network, see the following information. This information includes guidance about running commands with elevated privileges, a choice that you might need to make during network credential configuration: Network authentication Commands that are used in scans of remote network assets 3.1.1. Adding network sources You can add sources from the initial Welcome page or from the Sources view. Procedure Click the option to add a new source based on your location: From the Welcome page, click Add Source . From the Sources view, click Add . The Add Source wizard opens. On the Type page, select Network Range as the source type and click Next . On the Credentials page, enter the following information. In the Name field, enter a descriptive name.
In the Search Addresses field, enter one or more network identifiers separated by commas. You can enter hostnames, IP addresses, and IP ranges. Enter hostnames as DNS hostnames, for example, server1.example.com . Enter IP ranges in CIDR or Ansible notation, for example, 192.168.1.0/24 for CIDR notation or 192.168.1.[1:254] for Ansible notation. Optional: In the Port field, enter a different port if you do not want a scan for this source to run on the default port 22. In the Credentials list, select the credentials that are required to access the network resources for this source. If a required credential does not exist, click the Add a credential icon to open the Add Credential wizard. If your network resources require the Ansible connection method to be the Python SSH implementation, Paramiko, instead of the default OpenSSH implementation, select the Connect using Paramiko instead of OpenSSH check box. Click Save to save the source and then click Close to close the Add Source wizard. 3.1.2. Adding network credentials You can add credentials from the Credentials view or from the Add Source wizard during the creation of a source. You might need to add several credentials to authenticate to all of the assets that are included in a single source. Procedure Click the option to add a new credential based on your location: From the Credentials view, click Add Network Credential . From the Add Source wizard, click the Add a credential icon for the Credentials field. The Add Credential wizard opens. In the Credential Name field, enter a descriptive name. In the Authentication Type field, select the type of authentication that you want to use. You can select either Username and Password or SSH Key . Enter the authentication data in the appropriate fields, based on the authentication type. For username and password authentication, enter a username and password for a user. This user must have root-level access to your network or to the subset of your network that you want to scan. Alternatively, this user must be able to obtain root-level access with the selected become method. For SSH key authentication, enter a username and paste the contents of the ssh keyfile. Entering a passphrase is optional. Enter the become method for privilege elevation. Privilege elevation is required to run some commands during a network scan. Entering a username and password for the become method is optional. Click Save to save the credential and close the Add Credential wizard. 3.1.3. About sources and credentials To run a scan, you must configure data for two basic structures: sources and credentials. The type of source that you are going to inspect during the scan determines the type of data that is required for both source and credential configuration. A source contains a single asset or a set of multiple assets that are to be inspected during the scan. You can configure any of the following types of sources: Network source One or more physical machines, virtual machines, or containers. These assets can be expressed as hostnames, IP addresses, IP ranges, or subnets. vCenter source A vCenter Server systems management solution that is managing all or part of your IT infrastructure. Satellite source A Satellite systems management solution that is managing all or part of your IT infrastructure. Red Hat OpenShift source A Red Hat OpenShift Container Platform cluster that is managing all or part of your Red Hat OpenShift Container Platform nodes and workloads.
Ansible source An Ansible management solution that is managing your Ansible nodes and workloads. Red Hat Advanced Cluster Security for Kubernetes source A RHACS security platform solution that secures your Kubernetes environments. When you are working with network sources, you determine how many individual assets you should group within a single source. Currently, you can add multiple assets to a source only for network sources. The following list contains some of the other factors that you should consider when you are adding sources: Whether assets are part of a development, testing, or production environment, and if demands on computing power and similar concerns are a consideration for those assets. Whether you want to scan a particular entity or group of entities more often because of internal business practices such as frequent changes to the installed software. A credential contains data such as the username and password or SSH key of a user with sufficient authority to run the scan on all or part of the assets that are contained in that source. As with sources, credentials are configured as the network, vCenter, satellite, OpenShift, Ansible, or RHACS type. Typically, a network source might require multiple network credentials because it is expected that many credentials would be needed to access all of the assets in a broad IP range. Conversely, a vCenter or satellite source would typically use a single vCenter or satellite credential, as applicable, to access a particular system management solution server, and an OpenShift, Ansible, or RHACS source would use a single credential to access a single cluster. You can add new sources from the Sources view and you can add new credentials from the Credentials view. You can also add new or select previously existing credentials during source creation. It is during source creation that you associate a credential directly with a source. Because sources and credentials must have matching types, any credential that you add during source creation shares the same type as the source. In addition, if you want to use an existing credential during source creation, the list of available credentials contains only credentials of the same type. For example, during network source creation, only network credentials are available for selection. 3.1.4. Network authentication The Discovery application inspects the remote systems in a network scan by using the SSH remote connection capabilities of Ansible. When you add a network credential, you configure the SSH connection by using either a username and password or a username and SSH keyfile pair. If remote systems are accessed with SSH key authentication, you can also supply a passphrase for the SSH key. Also during network credential configuration, you can enable a become method. The become method is used during a scan to elevate privileges. These elevated privileges are needed to run commands and obtain data on the systems that you are scanning. For more information about the commands that do and do not require elevated privileges during a scan, see Commands that are used in scans of remote network assets . 3.1.4.1. Commands that are used in scans of remote network assets When you run a network scan, Discovery must use the credentials that you provide to run certain commands on the remote systems in your network. Some of those commands must run with elevated privileges. This access is typically acquired through the use of the sudo command or similar commands. 
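For example, if you use the sudo become method with a dedicated, non-root scan account, each remote host needs a sudoers policy that allows that account to elevate privileges. The following sketch is illustrative only: the account name discovery-scan and the choice of passwordless sudo are assumptions rather than Discovery requirements, so scope the rule to your own security standards.

# Hypothetical sudoers drop-in for a dedicated scan account; edit it with: visudo -f /etc/sudoers.d/discovery-scan
discovery-scan ALL=(ALL) NOPASSWD: ALL

# Confirm on the remote host that the scan account can elevate privileges:
sudo -l -U discovery-scan

If your policy requires a password for sudo, omit NOPASSWD and supply the become password in the network credential instead.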
The elevated privileges are required to gather the types of facts that Discovery uses to build the report about your installed products. Although it is possible to run a scan for a network source without elevated privileges, the results of that scan will be incomplete. The incomplete results from the network scan will affect the quality of the generated report for the scan. The following information lists the commands that Discovery runs on remote hosts during a network scan. The information includes the basic commands that can run without elevated privileges and the commands that must run with elevated privileges to gather the most accurate and complete information for the report. Note In addition to the following commands, Discovery also depends on standard shell facilities, such as those provided by the bash shell. 3.1.4.1.1. Basic commands that do not need elevated privileges The following commands do not require elevated privileges to gather facts during a scan: cat egrep sort uname ctime grep rpm virsh date id test whereis echo sed tune2fs xargs 3.1.4.1.2. Commands that need elevated privileges The following commands require elevated privileges to gather facts during a scan. Each command includes a list of individual facts or categories of facts that Discovery attempts to find during a scan. These facts cannot be included in reports if elevated privileges are not available for that command. awk cat chkconfig command df dirname dmidecode echo egrep fgrep find ifconfig ip java locate ls ps readlink sed sort stat subscription-manager systemctl tail test tr unzip virt-what xargs yum 3.2. Adding satellite sources and credentials To run a scan on a Red Hat Satellite Server deployment, you must add a source that identifies the Satellite Server server to scan. Then you must add a credential that contains the authentication data to access that server. Learn more Add a satellite source and credential to provide the information needed to scan Satellite Server. To learn more, see the following information: To add a satellite source, see Adding satellite sources . To add a satellite credential, see Adding satellite credentials . To learn more about sources and credentials and how Red Hat Discovery uses them, see the following information: About sources and credentials To learn more about how Discovery authenticates with your Satellite Server server, see the following information. This information includes guidance about certificate validation and SSL communication choices that you might need to make during satellite credential configuration. Satellite Server authentication 3.2.1. Adding satellite sources You can add sources from the initial Welcome page or from the Sources view. Procedure Click the option to add a new credential based on your location: From the Welcome page, click Add Source . From the Sources view, click Add . The Add Source wizard opens. On the Type page, select Satellite as the source type and click . On the Credentials page, enter the following information. In the Name field, enter a descriptive name. In the IP Address or Hostname field, enter the IP address or hostname of the Satellite server for this source. Enter a different port if you do not want a scan for this source to run on the default port 443. For example, if the IP address of the Satellite server is 192.0.2.15 and you want to change the port to 80, you would enter 192.0.2.15:80 . In the Credentials list, select the credential that is required to access the Satellite server for this source. 
If a required credential does not exist, click the Add a credential icon to open the Add Credential wizard. In the Connection list, select the SSL protocol to be used for a secure connection during a scan of this source. Note Satellite Server does not support the disabling of SSL. If you select the Disable SSL option, this option is ignored. If you need to upgrade the SSL validation for the Satellite server to check for a verified SSL certificate from a certificate authority, select the Verify SSL Certificate check box. Click Save to save the source and then click Close to close the Add Source wizard. 3.2.2. Adding satellite credentials You can add credentials from the Credentials view or from the Add Source wizard during the creation of a source. Procedure Click the option to add a new credential based on your location: From the Credentials view, click Add Satellite Credential . From the Add Source wizard, click the Add a credential icon for the Credentials field. The Add Credential wizard opens. In the Credential Name field, enter a descriptive name. Enter the username and password for a Satellite Server administrator. Click Save to save the credential and close the Add Credential wizard. 3.2.3. About sources and credentials To run a scan, you must configure data for two basic structures: sources and credentials. The type of source that you are going to inspect during the scan determines the type of data that is required for both source and credential configuration. A source contains a single asset or a set of multiple assets that are to be inspected during the scan. You can configure any of the following types of sources: Network source One or more physical machines, virtual machines, or containers. These assets can be expressed as hostnames, IP addresses, IP ranges, or subnets. vCenter source A vCenter Server systems management solution that is managing all or part of your IT infrastructure. Satellite source A Satellite systems management solution that is managing all or part of your IT infrastructure. Red Hat OpenShift source A Red Hat OpenShift Container Platform cluster that is managing all or part your Red Hat OpenShift Container Platform nodes and workloads. Ansible source An Ansible management solution that is managing your Ansible nodes and workloads. Red Hat Advanced Cluster Security for Kubernetes source A RHACS security platform solution that secures your Kubernetes environments. When you are working with network sources, you determine how many individual assets you should group within a single source. Currently, you can add multiple assets to a source only for network sources. The following list contains some of the other factors that you should consider when you are adding sources: Whether assets are part of a development, testing, or production environment, and if demands on computing power and similar concerns are a consideration for those assets. Whether you want to scan a particular entity or group of entities more often because of internal business practices such as frequent changes to the installed software. A credential contains data such as the username and password or SSH key of a user with sufficient authority to run the scan on all or part of the assets that are contained in that source. As with sources, credentials are configured as the network, vCenter, satellite, OpenShift, Ansible, or RHACS type. 
Typically, a network source might require multiple network credentials because it is expected that many credentials would be needed to access all of the assets in a broad IP range. Conversely, a vCenter or satellite source would typically use a single vCenter or satellite credential, as applicable, to access a particular system management solution server, and an OpenShift, Ansible, or RHACS source would use a single credential to access a single cluster. You can add new sources from the Sources view and you can add new credentials from the Credentials view. You can also add new or select previously existing credentials during source creation. It is during source creation that you associate a credential directly with a source. Because sources and credentials must have matching types, any credential that you add during source creation shares the same type as the source. In addition, if you want to use an existing credential during source creation, the list of available credentials contains only credentials of the same type. For example, during network source creation, only network credentials are available for selection. 3.2.4. Satellite Server authentication For a satellite scan, the connectivity and access to Satellite Server derives from basic authentication (username and password) that is encrypted over HTTPS. By default, the satellite scan runs with certificate validation and secure communication through the SSL (Secure Sockets Layer) protocol. During source creation, you can select from several different SSL and TLS (Transport Layer Security) protocols to use for the certificate validation and secure communication. Note The Satellite Server credentials that you use for a satellite scan must be a user with a role that contains the view permissions for hosts, subscriptions, and organizations. You might need to adjust the level of certificate validation to connect properly to the Satellite server during a scan. For example, your Satellite server might use a verified SSL certificate from a certificate authority. During source creation, you can upgrade SSL certificate validation to check for that certificate during a scan of that source. Conversely, your Satellite server might use self-signed certificates. During source creation, you can leave the SSL validation at the default so that a scan of that source does not check for a certificate. This choice, to leave the option at the default for a self-signed certificate, could possibly avoid scan errors. Although the option to disable SSL is currently available in the interface, Satellite Server does not support the disabling of SSL. If you select the Disable SSL option when you create a satellite source, this option is ignored. 3.3. Adding vcenter sources and credentials To run a scan on a vCenter Server deployment, you must add a source that identifies the vCenter Server server to scan. Then you must add a credential that contains the authentication data to access that server. Learn more Add a vcenter source and credential to provide the information needed to scan vCenter Server. To learn more, see the following information: To add a vcenter source, see Adding vcenter sources . To add a vcenter credential, see Adding vcenter credentials . To learn more about sources and credentials and how Discovery uses them, see the following information: About sources and credentials To learn more about how Red Hat Discovery authenticates with your vCenter Server server, see the following information. 
This information includes guidance about certificate validation and SSL communication choices that you might need to make during vcenter credential configuration: vCenter Server authentication 3.3.1. Adding vcenter sources You can add sources from the initial Welcome page or from the Sources view. Note A vCenter source is only compatible with a vCenter deployment. You cannot use this source to scan other virtualization infrastructures, even those that are supported by Red Hat. Procedure Click the option to add a new credential based on your location: From the Welcome page, click Add Source . From the Sources view, click Add . The Add Source wizard opens. On the Type page, select vCenter Server as the source type and click . On the Credentials page, enter the following information: In the Name field, enter a descriptive name. In the IP Address or Hostname field, enter the IP address or hostname of the vCenter Server for this source. Enter a different port if you do not want a scan for this source to run on the default port 443. For example, if the IP address of the vCenter Server is 192.0.2.15 and you want to change the port to 80, you would enter 192.0.2.15:80 . In the Credentials list, select the credential that is required to access the vCenter Server for this source. If a required credential does not exist, click the Add a credential icon to open the Add Credential wizard. In the Connection list, select the SSL protocol to be used for a secure connection during a scan of this source. Select Disable SSL to disable secure communication during a scan of this source. If you need to upgrade the SSL validation for the vCenter Server to check for a verified SSL certificate from a certificate authority, select the Verify SSL Certificate check box. Click Save to save the source and then click Close to close the Add Source wizard. 3.3.2. Adding vcenter credentials You can add credentials from the Credentials view or from the Add Source wizard during the creation of a source. Procedure Click the option to add a new credential based on your location: From the Credentials view, click Add VCenter Credential . From the Add Source wizard, click the Add a credential icon for the Credentials field. The Add Credential wizard opens. In the Credential Name field, enter a descriptive name. Enter the username and password for a vCenter Server administrator. Click Save to save the credential and close the Add Credential wizard. 3.3.3. About sources and credentials To run a scan, you must configure data for two basic structures: sources and credentials. The type of source that you are going to inspect during the scan determines the type of data that is required for both source and credential configuration. A source contains a single asset or a set of multiple assets that are to be inspected during the scan. You can configure any of the following types of sources: Network source One or more physical machines, virtual machines, or containers. These assets can be expressed as hostnames, IP addresses, IP ranges, or subnets. vCenter source A vCenter Server systems management solution that is managing all or part of your IT infrastructure. Satellite source A Satellite systems management solution that is managing all or part of your IT infrastructure. Red Hat OpenShift source A Red Hat OpenShift Container Platform cluster that is managing all or part your Red Hat OpenShift Container Platform nodes and workloads. Ansible source An Ansible management solution that is managing your Ansible nodes and workloads. 
Red Hat Advanced Cluster Security for Kubernetes source A RHACS security platform solution that secures your Kubernetes environments. When you are working with network sources, you determine how many individual assets you should group within a single source. Currently, you can add multiple assets to a source only for network sources. The following list contains some of the other factors that you should consider when you are adding sources: Whether assets are part of a development, testing, or production environment, and if demands on computing power and similar concerns are a consideration for those assets. Whether you want to scan a particular entity or group of entities more often because of internal business practices such as frequent changes to the installed software. A credential contains data such as the username and password or SSH key of a user with sufficient authority to run the scan on all or part of the assets that are contained in that source. As with sources, credentials are configured as the network, vCenter, satellite, OpenShift, Ansible, or RHACS type. Typically, a network source might require multiple network credentials because it is expected that many credentials would be needed to access all of the assets in a broad IP range. Conversely, a vCenter or satellite source would typically use a single vCenter or satellite credential, as applicable, to access a particular system management solution server, and an OpenShift, Ansible, or RHACS source would use a single credential to access a single cluster. You can add new sources from the Sources view and you can add new credentials from the Credentials view. You can also add new or select previously existing credentials during source creation. It is during source creation that you associate a credential directly with a source. Because sources and credentials must have matching types, any credential that you add during source creation shares the same type as the source. In addition, if you want to use an existing credential during source creation, the list of available credentials contains only credentials of the same type. For example, during network source creation, only network credentials are available for selection. 3.3.4. vCenter Server authentication For a vcenter scan, the connectivity and access to vCenter Server derives from basic authentication (username and password) that is encrypted over HTTPS. By default, the vcenter scan runs with certificate validation and secure communication through the SSL (Secure Sockets Layer) protocol. During source creation, you can select from several different SSL and TLS (Transport Layer Security) protocols to use for the certificate validation and secure communication. You might need to adjust the level of certificate validation to connect properly to the vCenter server during a scan. For example, your vCenter server might use a verified SSL certificate from a certificate authority. During source creation, you can upgrade SSL certificate validation to check for that certificate during a scan of that source. Conversely, your vCenter server might use self-signed certificates. During source creation, you can leave the SSL validation at the default so that scan of that source does not check for a certificate. This choice, to leave the option at the default for a self-signed certificate, could possibly avoid scan errors. 
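If you are unsure whether your vCenter server presents a certificate from a certificate authority or a self-signed certificate, you can inspect it before you choose a validation level. This is a minimal sketch that assumes OpenSSL is available on a host that can reach the vCenter server and uses vcenter.example.com as a placeholder hostname.

# Print the subject, issuer, and validity dates of the certificate that vCenter Server presents on port 443
openssl s_client -connect vcenter.example.com:443 -showcerts </dev/null 2>/dev/null | \
  openssl x509 -noout -subject -issuer -dates

If the issuer is a certificate authority that your systems trust, selecting the Verify SSL Certificate check box should succeed. If the subject and the issuer are identical, the certificate is self-signed and leaving the validation at the default is usually the safer choice.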
You might also need to disable SSL as the method of secure communication during the scan if the vCenter server is not configured to use SSL communication for web applications. For example, your vCenter server might be configured to communicate with web applications by using HTTP with port 80. If so, then during source creation you can disable SSL communication for scans of that source. 3.4. Adding OpenShift sources and credentials To run a scan on a Red Hat OpenShift Container Platform deployment, you must add a source that identifies the Red Hat OpenShift Container Platform cluster to scan. Then you must add a credential that contains the authentication data to access that cluster. Learn more Add an OpenShift source and credential to provide the information needed to scan a Red Hat OpenShift Container Platform cluster. To learn more, see the following information: To add an OpenShift source, see Add an OpenShift source . To add an OpenShift credential, see Add an OpenShift credential . To learn more about sources and credentials and how Red Hat Discovery uses them, see the following information: About sources and credentials To learn more about how Red Hat Discovery authenticates with your Red Hat OpenShift Container Platform cluster, see the following information. This information includes guidance about certificate validation and SSL communication choices that you might need to make during OpenShift credential configuration: Red Hat OpenShift Container Platform authentication 3.4.1. Adding Red Hat OpenShift Container Platform sources You can add sources from the initial Welcome page or from the Sources view. Prerequisites You will need access to the Red Hat OpenShift Container Platform web console administrator perspective to get the API address and token values. Procedure Click the option to add a new credential based on your location: From the Welcome page, click Add Source . From the Sources view, click Add . The Add Source wizard opens. On the Type page, select OpenShift as the source type and click . On the Credentials page, enter the following information: In the Name field, enter a descriptive name. In the IP Address or Hostname field, enter the Red Hat OpenShift Container Platform cluster API address for this source. You can find the cluster API address by viewing the overview details for the cluster in the web console In the Credentials list, select the credential that is required to access the cluster for this source. If a required credential does not exist, click the Add a credential icon to open the Add Credential wizard. In the Connection list, select the SSL protocol to be used for a secure connection during a scan of this source. Select Disable SSL to disable secure communication during a scan of this source. If you need to upgrade the SSL validation for the cluster to check for a verified SSL certificate from a certificate authority, select the Verify SSL Certificate check box. Click Save to save the source and then click Close to close the Add Source wizard. 3.4.2. Adding Red Hat OpenShift Container Platform credentials You can add credentials from the Credentials view or from the Add Source wizard during the creation of a source. Prerequisites You will need access to the Red Hat OpenShift Container Platform web console administrator perspective to get the API address and token values. Procedure Click the option to add a new credential based on your location: From the Credentials view, click Add OpenShift . 
From the Add Source wizard, click the Add a credential icon for the Credentials field. The Add Credential wizard opens. In the Credential Name field, enter a descriptive name. Enter the API token for the Red Hat OpenShift Container Platform cluster from your Administrator console. You can find the API token by clicking your username in the console, clicking the Display Token option and copying the value displayed for Your API token is . Click Save to save the credential and close the Add Credential wizard. 3.4.3. About sources and credentials To run a scan, you must configure data for two basic structures: sources and credentials. The type of source that you are going to inspect during the scan determines the type of data that is required for both source and credential configuration. A source contains a single asset or a set of multiple assets that are to be inspected during the scan. You can configure any of the following types of sources: Network source One or more physical machines, virtual machines, or containers. These assets can be expressed as hostnames, IP addresses, IP ranges, or subnets. vCenter source A vCenter Server systems management solution that is managing all or part of your IT infrastructure. Satellite source A Satellite systems management solution that is managing all or part of your IT infrastructure. Red Hat OpenShift source A Red Hat OpenShift Container Platform cluster that is managing all or part your Red Hat OpenShift Container Platform nodes and workloads. Ansible source An Ansible management solution that is managing your Ansible nodes and workloads. Red Hat Advanced Cluster Security for Kubernetes source A RHACS security platform solution that secures your Kubernetes environments. When you are working with network sources, you determine how many individual assets you should group within a single source. Currently, you can add multiple assets to a source only for network sources. The following list contains some of the other factors that you should consider when you are adding sources: Whether assets are part of a development, testing, or production environment, and if demands on computing power and similar concerns are a consideration for those assets. Whether you want to scan a particular entity or group of entities more often because of internal business practices such as frequent changes to the installed software. A credential contains data such as the username and password or SSH key of a user with sufficient authority to run the scan on all or part of the assets that are contained in that source. As with sources, credentials are configured as the network, vCenter, satellite, OpenShift, Ansible, or RHACS type. Typically, a network source might require multiple network credentials because it is expected that many credentials would be needed to access all of the assets in a broad IP range. Conversely, a vCenter or satellite source would typically use a single vCenter or satellite credential, as applicable, to access a particular system management solution server, and an OpenShift, Ansible, or RHACS source would use a single credential to access a single cluster. You can add new sources from the Sources view and you can add new credentials from the Credentials view. You can also add new or select previously existing credentials during source creation. It is during source creation that you associate a credential directly with a source. 
Because sources and credentials must have matching types, any credential that you add during source creation shares the same type as the source. In addition, if you want to use an existing credential during source creation, the list of available credentials contains only credentials of the same type. For example, during network source creation, only network credentials are available for selection. 3.4.4. Red Hat OpenShift Container Platform authentication For a OpenShift scan, the connectivity and access to OpenShift cluster API address derives from basic authentication with a cluster API address and an API token that is encrypted over HTTPS. By default, the OpenShift scan runs with certificate validation and secure communication through the SSL (Secure Sockets Layer) protocol. During source creation, you can select from several different SSL and TLS (Transport Layer Security) protocols to use for the certificate validation and secure communication. You might need to adjust the level of certificate validation to connect properly to the Red Hat OpenShift Container Platform cluster API address during a scan. For example, your OpenShift cluster API address might use a verified SSL certificate from a certificate authority. During source creation, you can upgrade SSL certificate validation to check for that certificate during a scan of that source. Conversely, your cluster API address might use self-signed certificates. During source creation, you can leave the SSL validation at the default so that scan of that source does not check for a certificate. This choice, to leave the option at the default for a self-signed certificate, could possibly avoid scan errors. You might also need to disable SSL as the method of secure communication during the scan if the OpenShift cluster API address is not configured to use SSL communication for web applications. For example, your OpenShift server might be configured to communicate with web applications by using HTTP with port 80. If so, then during source creation you can disable SSL communication for scans of that source. 3.5. Adding Ansible sources and credentials To run a scan on a Ansible deployment, you must add a source that identifies the Ansible Automation Platform to scan. Then, you must add a credential that contains the authentication data to access that cluster. Learn more Add an Ansible source and credential to provide the information needed to scan your Ansible Automation Platform deployment. To learn more, see the following information: To add an Ansible source, see Add an Ansible source . To add an Ansible credential, see Add an Ansible credential . To learn more about sources and credentials and how Discovery uses them, see the following information: About sources and credentials To learn more about how Discovery authenticates with your Ansible deployment, see the following information. This information includes guidance about certificate validation and SSL communication choices that you might need to make during Ansible credential configuration: Ansible Automation Platform 3.5.1. Adding Red Hat Ansible Automation Platform sources You can add sources from the initial Welcome page or from the Sources view. Procedure Click the option to add a new credential based on your location: From the Welcome page, click Add Source . From the Sources view, click Add Source . The Add Source wizard opens. On the Type page, select Ansible Controller as the source type and click . 
On the Credentials page, enter the following information: In the Name field, enter a descriptive name. In the IP Address or Hostname field, enter the Ansible host IP address for this source. You can find the host IP address by viewing the overview details for the controller in the portal. In the Credentials list, select the credential that is required to access the cluster for this source. If a required credential does not exist, click the Add a credential icon to open the Add Credential wizard. In the Connection list, select the SSL protocol to be used for a secure connection during a scan of this source. Select Disable SSL to disable secure communication during a scan of this source. If you need to upgrade the SSL validation for the cluster to check for a verified SSL certificate from a certificate authority, select the Verify SSL Certificate check box. Click Save to save the source and then click Close to close the Add Source wizard. 3.5.2. Adding Red Hat Ansible Automation Platform credentials You can add credentials from the Credentials view or from the Add Source wizard during the creation of a source. Procedure Click the option to add a new credential based on your location: From the Credentials view, click Add Ansible Credential . From the Add Source wizard, click the Add a credential icon for the Credentials field. The Add Credential wizard opens. In the Credential Name field, enter a descriptive name. In the User Name field, enter the username for your Ansible Controller instance. In the Password field, enter the password for your Ansible Controller instance. Click Save to save the credential. The Add credential wizard closes. 3.5.3. About sources and credentials To run a scan, you must configure data for two basic structures: sources and credentials. The type of source that you are going to inspect during the scan determines the type of data that is required for both source and credential configuration. A source contains a single asset or a set of multiple assets that are to be inspected during the scan. You can configure any of the following types of sources: Network source One or more physical machines, virtual machines, or containers. These assets can be expressed as hostnames, IP addresses, IP ranges, or subnets. vCenter source A vCenter Server systems management solution that is managing all or part of your IT infrastructure. Satellite source A Satellite systems management solution that is managing all or part of your IT infrastructure. Red Hat OpenShift source A Red Hat OpenShift Container Platform cluster that is managing all or part your Red Hat OpenShift Container Platform nodes and workloads. Ansible source An Ansible management solution that is managing your Ansible nodes and workloads. Red Hat Advanced Cluster Security for Kubernetes source A RHACS security platform solution that secures your Kubernetes environments. When you are working with network sources, you determine how many individual assets you should group within a single source. Currently, you can add multiple assets to a source only for network sources. The following list contains some of the other factors that you should consider when you are adding sources: Whether assets are part of a development, testing, or production environment, and if demands on computing power and similar concerns are a consideration for those assets. Whether you want to scan a particular entity or group of entities more often because of internal business practices such as frequent changes to the installed software. 
A credential contains data such as the username and password or SSH key of a user with sufficient authority to run the scan on all or part of the assets that are contained in that source. As with sources, credentials are configured as the network, vCenter, satellite, OpenShift, Ansible, or RHACS type. Typically, a network source might require multiple network credentials because it is expected that many credentials would be needed to access all of the assets in a broad IP range. Conversely, a vCenter or satellite source would typically use a single vCenter or satellite credential, as applicable, to access a particular system management solution server, and an OpenShift, Ansible, or RHACS source would use a single credential to access a single cluster. You can add new sources from the Sources view and you can add new credentials from the Credentials view. You can also add new or select previously existing credentials during source creation. It is during source creation that you associate a credential directly with a source. Because sources and credentials must have matching types, any credential that you add during source creation shares the same type as the source. In addition, if you want to use an existing credential during source creation, the list of available credentials contains only credentials of the same type. For example, during network source creation, only network credentials are available for selection. 3.5.4. Ansible authentication For an Ansible scan, the connectivity and access to Ansible host IP addresses derives from basic authentication with a host IP address and a password that is encrypted over HTTPS. By default, the Ansible scan runs with certificate validation and secure communication through the SSL (Secure Sockets Layer) protocol. During source creation, you can select from several different SSL and TLS (Transport Layer Security) protocols to use for the certificate validation and secure communication. You might need to adjust the level of certificate validation to connect properly to the Ansible host IP address during a scan. For example, your Ansible host IP address might use a verified SSL certificate from a certificate authority. During source creation, you can upgrade SSL certificate validation to check for that certificate during a scan of that source. Conversely, your host IP address might use self-signed certificates. During source creation, you can leave the SSL validation at the default so that a scan of that source does not check for a certificate. This choice, to leave the option at the default for a self-signed certificate, could possibly avoid scan errors. You might also need to disable SSL as the method of secure communication during the scan if the Ansible host IP address is not configured to use SSL communication for web applications. For example, your Ansible host IP address might be configured to communicate with web applications by using HTTP with port 80. If so, then during source creation you can disable SSL communication for scans of that source. 3.6. Adding Red Hat Advanced Cluster Security for Kubernetes sources and credentials To run a scan on a Red Hat Advanced Cluster Security for Kubernetes (RHACS) deployment, you must add a source that identifies the RHACS instance to scan. Then you must add a credential that contains the authentication data to access that instance. Learn more Add a RHACS source and credential to provide the information needed to scan a RHACS instance.
To learn more, see the following information: To add an RHACS source, see Add a RHACS source . To add an RHACS credential, see Add a RHACS credential . To learn more about sources and credentials and how Discovery uses them, see the following information: About sources and credentials To learn more about how Red Hat Discovery authenticates with your Red Hat Advanced Cluster Security for Kubernetes instance, see the following information. This information includes guidance about certificate validation and SSL communication choices that you might need to make during RHACS credential configuration: Red Hat Advanced Cluster Security for Kubernetes 3.6.1. Adding Red Hat Advanced Cluster Security for Kubernetes sources You can add sources from the initial Welcome page or from the Sources view. Prerequisites You will need access to the Red Hat Advanced Cluster Security for Kubernetes (RHACS) portal to generate admin API token values. You will need either access to the RHACS portal to find the RHACS Central endpoint or access to the RHACS Configuration Management Cloud Service instance details. Procedure Click the option to add a new source based on your location: From the Welcome page, click Add Source . From the Sources view, click Add . The Add Source wizard opens. On the Type page, select RHACS as the source type and click . On the Credentials page, enter the following information: In the Name field, enter a descriptive name. In the IP Address or Hostname field, enter the Red Hat Advanced Cluster Security for Kubernetes Central address for this source. You can find the address by viewing the network routes for the cluster if RHACS was deployed on OpenShift. If RHACS was deployed on the cloud, you can find this information in the instance details. In the Credentials list, select the credential that is required to access the cluster for this source. If a required credential does not exist, click the Add a credential icon to open the Add Credential wizard. In the Connection list, select the SSL protocol to be used for a secure connection during a scan of this source. Select Disable SSL to disable secure communication during a scan of this source. If you need to upgrade the SSL validation for the cluster to check for a verified SSL certificate from a certificate authority, select the Verify SSL Certificate check box. Click Save to save the source and then click Close to close the Add Source wizard. 3.6.2. Adding RHACS credentials You can add credentials from the Credentials view or from the Add Source wizard during the creation of a source. Prerequisites You will need access to the Red Hat Advanced Cluster Security for Kubernetes (RHACS) portal to generate admin API token values. You will need either access to the RHACS portal to find the RHACS Central endpoint or access to the RHACS Configuration Management Cloud Service instance details. Procedure Click the option to add a new credential based on your location: From the Credentials view, click Add RHACS . From the Add Source wizard, click the Add a credential icon for the Credentials field. The Add Credential wizard opens. In the Credential Name field, enter a descriptive name. Enter the API token for RHACS from your RHACS portal. If you do not already have a token, you can generate a token on the RHACS Configuration Management Cloud Service portal. Click Save to save the credential and close the Add Credential wizard. 3.6.3. About sources and credentials To run a scan, you must configure data for two basic structures: sources and credentials.
The type of source that you are going to inspect during the scan determines the type of data that is required for both source and credential configuration. A source contains a single asset or a set of multiple assets that are to be inspected during the scan. You can configure any of the following types of sources: Network source One or more physical machines, virtual machines, or containers. These assets can be expressed as hostnames, IP addresses, IP ranges, or subnets. vCenter source A vCenter Server systems management solution that is managing all or part of your IT infrastructure. Satellite source A Satellite systems management solution that is managing all or part of your IT infrastructure. Red Hat OpenShift source A Red Hat OpenShift Container Platform cluster that is managing all or part your Red Hat OpenShift Container Platform nodes and workloads. Ansible source An Ansible management solution that is managing your Ansible nodes and workloads. Red Hat Advanced Cluster Security for Kubernetes source A RHACS security platform solution that secures your Kubernetes environments. When you are working with network sources, you determine how many individual assets you should group within a single source. Currently, you can add multiple assets to a source only for network sources. The following list contains some of the other factors that you should consider when you are adding sources: Whether assets are part of a development, testing, or production environment, and if demands on computing power and similar concerns are a consideration for those assets. Whether you want to scan a particular entity or group of entities more often because of internal business practices such as frequent changes to the installed software. A credential contains data such as the username and password or SSH key of a user with sufficient authority to run the scan on all or part of the assets that are contained in that source. As with sources, credentials are configured as the network, vCenter, satellite, OpenShift, Ansible, or RHACS type. Typically, a network source might require multiple network credentials because it is expected that many credentials would be needed to access all of the assets in a broad IP range. Conversely, a vCenter or satellite source would typically use a single vCenter or satellite credential, as applicable, to access a particular system management solution server, and an OpenShift, Ansible, or RHACS source would use a single credential to access a single cluster. You can add new sources from the Sources view and you can add new credentials from the Credentials view. You can also add new or select previously existing credentials during source creation. It is during source creation that you associate a credential directly with a source. Because sources and credentials must have matching types, any credential that you add during source creation shares the same type as the source. In addition, if you want to use an existing credential during source creation, the list of available credentials contains only credentials of the same type. For example, during network source creation, only network credentials are available for selection. 3.6.4. Red Hat Advanced Cluster Security for Kubernetes authentication For a Red Hat Advanced Cluster Security for Kubernetes (RHACS) scan, the connectivity and access to the RHACS API derives from bearer token authentication with an API token that is encrypted over TLS (Transport Layer Security). 
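Before you add the RHACS source and credential, you can optionally confirm that the Central endpoint and the admin API token work together by calling the RHACS API directly. This is a hedged sketch: the address central.example.com:443 is a placeholder, and the /v1/auth/status path is an assumption based on the RHACS API, so adjust both for your deployment.

# Placeholder values; replace with your RHACS Central address and admin API token.
ROX_CENTRAL_ADDRESS=central.example.com:443
ROX_API_TOKEN=<api_token>

# An HTTP 200 response indicates that Central accepts the bearer token.
# Drop the -k option if Central presents a certificate that your system already trusts.
curl -k -H "Authorization: Bearer ${ROX_API_TOKEN}" "https://${ROX_CENTRAL_ADDRESS}/v1/auth/status"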
By default, the RHACS scan runs with certificate validation and secure communication through the TLS protocol. During source creation, you can select from several different SSL (Secure Sockets Layer) and TLS protocols to use for the certificate validation and secure communication. You might need to adjust the level of certificate validation to connect to the RHACS portal during a scan. For example, your RHACS instance might use a verified TLS certificate from a certificate authority. During source creation, you can upgrade TLS certificate validation to check for that certificate during a scan of that source. Conversely, your RHACS instance might use self-signed certificates. During source creation, you can leave the TLS validation at the default so that a scan of that source does not check for a certificate. This choice, to leave the option at the default for a self-signed certificate, could possibly avoid scan errors. You might also need to disable TLS as the method of secure communication during the scan if the RHACS instance is not configured to use TLS communication for web applications. For example, your RHACS instance might be configured to communicate with web applications by using HTTP with port 80. If so, then during source creation you can disable TLS communication for scans of that source.
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/using_red_hat_discovery/assembly-adding-sources-creds-gui-main
Chapter 2. Fault tolerant deployments using multiple Prism Elements
Chapter 2. Fault tolerant deployments using multiple Prism Elements By default, the installation program installs control plane and compute machines into a single Nutanix Prism Element (cluster). To improve the fault tolerance of your OpenShift Container Platform cluster, you can specify that these machines be distributed across multiple Nutanix clusters by configuring failure domains. A failure domain represents an additional Prism Element instance that is available to OpenShift Container Platform machine pools during and after installation. 2.1. Installation method and failure domain configuration The OpenShift Container Platform installation method determines how and when you configure failure domains: If you deploy using installer-provisioned infrastructure, you can configure failure domains in the installation configuration file before deploying the cluster. For more information, see Configuring failure domains . You can also configure failure domains after the cluster is deployed. For more information about configuring failure domains post-installation, see Adding failure domains to an existing Nutanix cluster . If you deploy using infrastructure that you manage (user-provisioned infrastructure) no additional configuration is required. After the cluster is deployed, you can manually distribute control plane and compute machines across failure domains. 2.2. Adding failure domains to an existing Nutanix cluster By default, the installation program installs control plane and compute machines into a single Nutanix Prism Element (cluster). After an OpenShift Container Platform cluster is deployed, you can improve its fault tolerance by adding additional Prism Element instances to the deployment using failure domains. A failure domain represents a single Prism Element instance where new control plane and compute machines can be deployed and existing control plane and compute machines can be distributed. 2.2.1. Failure domain requirements When planning to use failure domains, consider the following requirements: All Nutanix Prism Element instances must be managed by the same instance of Prism Central. A deployment that is comprised of multiple Prism Central instances is not supported. The machines that make up the Prism Element clusters must reside on the same Ethernet network for failure domains to be able to communicate with each other. A subnet is required in each Prism Element that will be used as a failure domain in the OpenShift Container Platform cluster. When defining these subnets, they must share the same IP address prefix (CIDR) and should contain the virtual IP addresses that the OpenShift Container Platform cluster uses. 2.2.2. Adding failure domains to the Infrastructure CR You add failure domains to an existing Nutanix cluster by modifying its Infrastructure custom resource (CR) ( infrastructures.config.openshift.io ). Tip It is recommended that you configure three failure domains to ensure high-availability. Procedure Edit the Infrastructure CR by running the following command: USD oc edit infrastructures.config.openshift.io cluster Configure the failure domains. Example Infrastructure CR with Nutanix failure domains spec: cloudConfig: key: config name: cloud-provider-config #... 
platformSpec: nutanix: failureDomains: - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> # ... where: <uuid> Specifies the universally unique identifier (UUID) of the Prism Element. <failure_domain_name> Specifies a unique name for the failure domain. The name is limited to 64 or fewer characters, which can include lower-case letters, digits, and a dash ( - ). The dash cannot be in the leading or ending position of the name. <network_uuid> Specifies the UUID of the Prism Element subnet object. The subnet's IP address prefix (CIDR) should contain the virtual IP addresses that the OpenShift Container Platform cluster uses. Only one subnet per failure domain (Prism Element) in an OpenShift Container Platform cluster is supported. Save the CR to apply the changes. 2.2.3. Distributing control planes across failure domains You distribute control planes across Nutanix failure domains by modifying the control plane machine set custom resource (CR). Prerequisites You have configured the failure domains in the cluster's Infrastructure custom resource (CR). The control plane machine set custom resource (CR) is in an active state. For more information on checking the control plane machine set custom resource state, see "Additional resources". Procedure Edit the control plane machine set CR by running the following command: USD oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api Configure the control plane machine set to use failure domains by adding a spec.template.machines_v1beta1_machine_openshift_io.failureDomains stanza. Example control plane machine set with Nutanix failure domains apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: cluster namespace: openshift-machine-api spec: # ... template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: Nutanix nutanix: - name: <failure_domain_name_1> - name: <failure_domain_name_2> - name: <failure_domain_name_3> # ... Save your changes. By default, the control plane machine set propagates changes to your control plane configuration automatically. If the cluster is configured to use the OnDelete update strategy, you must replace your control planes manually. For more information, see "Additional resources". Additional resources Checking the control plane machine set custom resource state Replacing a control plane machine 2.2.4. Distributing compute machines across failure domains You can distribute compute machines across Nutanix failure domains one of the following ways: Editing existing compute machine sets allows you to distribute compute machines across Nutanix failure domains as a minimal configuration update. Replacing existing compute machine sets ensures that the specification is immutable and all your machines are the same. 2.2.4.1. Editing compute machine sets to implement failure domains To distribute compute machines across Nutanix failure domains by using an existing compute machine set, you update the compute machine set with your configuration and then use scaling to replace the existing compute machines. 
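Before following the procedure in this section, you can check whether a compute machine set already references a failure domain. The jsonpath in this sketch assumes the Nutanix providerSpec layout shown in the examples in this section; empty output means that no failure domain is set yet.

# Show the failure domain currently referenced by a compute machine set, if any
oc get machineset <machine_set_name_1> -n openshift-machine-api \
  -o jsonpath='{.spec.template.spec.providerSpec.value.failureDomain}{"\n"}'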
Prerequisites You have configured the failure domains in the cluster's Infrastructure custom resource (CR). Procedure Run the following command to view the cluster's Infrastructure CR. USD oc describe infrastructures.config.openshift.io cluster For each failure domain ( platformSpec.nutanix.failureDomains ), note the cluster's UUID, name, and subnet object UUID. These values are required to add a failure domain to a compute machine set. List the compute machine sets in your cluster by running the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <machine_set_name_1> 1 1 1 1 55m <machine_set_name_2> 1 1 1 1 55m Edit the first compute machine set by running the following command: USD oc edit machineset <machine_set_name_1> -n openshift-machine-api Configure the compute machine set to use the first failure domain by updating the following to the spec.template.spec.providerSpec.value stanza. Note Be sure that the values you specify for the cluster and subnets fields match the values that were configured in the failureDomains stanza in the cluster's Infrastructure CR. Example compute machine set with Nutanix failure domains apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 # ... template: spec: # ... providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1> # ... Note the value of spec.replicas , because you need it when scaling the compute machine set to apply the changes. Save your changes. List the machines that are managed by the updated compute machine set by running the following command: USD oc get -n openshift-machine-api machines \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1> Example output NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running AHV Unnamed Development-STS 4h <machine_name_original_2> Running AHV Unnamed Development-STS 4h For each machine that is managed by the updated compute machine set, set the delete annotation by running the following command: USD oc annotate machine/<machine_name_original_1> \ -n openshift-machine-api \ machine.openshift.io/delete-machine="true" To create replacement machines with the new configuration, scale the compute machine set to twice the number of replicas by running the following command: USD oc scale --replicas=<twice_the_number_of_replicas> \ 1 machineset <machine_set_name_1> \ -n openshift-machine-api 1 For example, if the original number of replicas in the compute machine set is 2 , scale the replicas to 4 . List the machines that are managed by the updated compute machine set by running the following command: USD oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1> When the new machines are in the Running phase, you can scale the compute machine set to the original number of replicas. 
To remove the machines that were created with the old configuration, scale the compute machine set to the original number of replicas by running the following command: USD oc scale --replicas=<original_number_of_replicas> \ 1 machineset <machine_set_name_1> \ -n openshift-machine-api 1 For example, if the original number of replicas in the compute machine set was 2 , scale the replicas to 2 . As required, continue to modify machine sets to reference the additional failure domains that are available to the deployment. Additional resources Modifying a compute machine set 2.2.4.2. Replacing compute machine sets to implement failure domains To distribute compute machines across Nutanix failure domains by replacing a compute machine set, you create a new compute machine set with your configuration, wait for the machines that it creates to start, and then delete the old compute machine set. Prerequisites You have configured the failure domains in the cluster's Infrastructure custom resource (CR). Procedure Run the following command to view the cluster's Infrastructure CR. USD oc describe infrastructures.config.openshift.io cluster For each failure domain ( platformSpec.nutanix.failureDomains ), note the cluster's UUID, name, and subnet object UUID. These values are required to add a failure domain to a compute machine set. List the compute machine sets in your cluster by running the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m Note the names of the existing compute machine sets. Create a YAML file that contains the values for your new compute machine set custom resource (CR) by using one of the following methods: Copy an existing compute machine set configuration into a new file by running the following command: USD oc get machineset <original_machine_set_name_1> \ -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml You can edit this YAML file with your preferred text editor. Create a blank YAML file named <new_machine_set_name_1>.yaml with your preferred text editor and include the required values for your new compute machine set. If you are not sure which value to set for a specific field, you can view values of an existing compute machine set CR by running the following command: USD oc get machineset <original_machine_set_name_1> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create machines with a worker or infra role. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. 
For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Configure the new compute machine set to use the first failure domain by updating or adding the following to the spec.template.spec.providerSpec.value stanza in the <new_machine_set_name_1>.yaml file. Note Be sure that the values you specify for the cluster and subnets fields match the values that were configured in the failureDomains stanza in the cluster's Infrastructure CR. Example compute machine set with Nutanix failure domains apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <new_machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 # ... template: spec: # ... providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1> # ... Save your changes. Create a compute machine set CR by running the following command: USD oc create -f <new_machine_set_name_1>.yaml As required, continue to create compute machine sets to reference the additional failure domains that are available to the deployment. List the machines that are managed by the new compute machine sets by running the following command for each new compute machine set: USD oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1> Example output NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Provisioned AHV Unnamed Development-STS 25s <machine_from_new_2> Provisioning AHV Unnamed Development-STS 25s When the new machines are in the Running phase, you can delete the old compute machine sets that do not include the failure domain configuration. When you have verified that the new machines are in the Running phase, delete the old compute machine sets by running the following command for each: USD oc delete machineset <original_machine_set_name_1> -n openshift-machine-api Verification To verify that the compute machine sets without the updated configuration are deleted, list the compute machine sets in your cluster by running the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <new_machine_set_name_1> 1 1 1 1 4m12s <new_machine_set_name_2> 1 1 1 1 4m12s To verify that the compute machines without the updated configuration are deleted, list the machines in your cluster by running the following command: USD oc get -n openshift-machine-api machines Example output while deletion is in progress NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 5m41s <machine_from_new_2> Running AHV Unnamed Development-STS 5m41s <machine_from_original_1> Deleting AHV Unnamed Development-STS 4h <machine_from_original_2> Deleting AHV Unnamed Development-STS 4h Example output when deletion is complete NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 6m30s <machine_from_new_2> Running AHV Unnamed Development-STS 6m30s To verify that a machine created by the new compute machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: USD oc describe machine <machine_from_new_1> -n openshift-machine-api Additional resources Creating a compute machine set on Nutanix
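As an optional check before or after deleting the old compute machine sets, you can list which failure domain each new machine received. The following one-line sketch assumes the providerSpec layout shown in the example machine set above; the exact field path can differ, so treat it as a starting point:
# Print each machine name together with the name of its failure domain:
oc get machines -n openshift-machine-api \
  -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1> \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerSpec.value.failureDomain.name}{"\n"}{end}'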
[ "oc edit infrastructures.config.openshift.io cluster", "spec: cloudConfig: key: config name: cloud-provider-config # platformSpec: nutanix: failureDomains: - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid>", "oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: cluster namespace: openshift-machine-api spec: template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: Nutanix nutanix: - name: <failure_domain_name_1> - name: <failure_domain_name_2> - name: <failure_domain_name_3>", "oc describe infrastructures.config.openshift.io cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <machine_set_name_1> 1 1 1 1 55m <machine_set_name_2> 1 1 1 1 55m", "oc edit machineset <machine_set_name_1> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1>", "oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running AHV Unnamed Development-STS 4h <machine_name_original_2> Running AHV Unnamed Development-STS 4h", "oc annotate machine/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=<twice_the_number_of_replicas> \\ 1 machineset <machine_set_name_1> -n openshift-machine-api", "oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1>", "oc scale --replicas=<original_number_of_replicas> \\ 1 machineset <machine_set_name_1> -n openshift-machine-api", "oc describe infrastructures.config.openshift.io cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m", "oc get machineset <original_machine_set_name_1> -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml", "oc get machineset <original_machine_set_name_1> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> 
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <new_machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1>", "oc create -f <new_machine_set_name_1>.yaml", "oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1>", "NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Provisioned AHV Unnamed Development-STS 25s <machine_from_new_2> Provisioning AHV Unnamed Development-STS 25s", "oc delete machineset <original_machine_set_name_1> -n openshift-machine-api", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <new_machine_set_name_1> 1 1 1 1 4m12s <new_machine_set_name_2> 1 1 1 1 4m12s", "oc get -n openshift-machine-api machines", "NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 5m41s <machine_from_new_2> Running AHV Unnamed Development-STS 5m41s <machine_from_original_1> Deleting AHV Unnamed Development-STS 4h <machine_from_original_2> Deleting AHV Unnamed Development-STS 4h", "NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 6m30s <machine_from_new_2> Running AHV Unnamed Development-STS 6m30s", "oc describe machine <machine_from_new_1> -n openshift-machine-api" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_nutanix/nutanix-failure-domains
Chapter 12. Next steps
Chapter 12. Next steps To start deploying your OpenShift Data Foundation, you can use the internal mode within OpenShift Container Platform, or use external mode to make services available from a cluster running outside of OpenShift Container Platform. Depending on your requirements, go to the respective deployment guides. Internal mode Deploying OpenShift Data Foundation using Amazon Web Services Deploying OpenShift Data Foundation using Bare Metal Deploying OpenShift Data Foundation using VMware vSphere Deploying OpenShift Data Foundation using Microsoft Azure Deploying OpenShift Data Foundation using Google Cloud Deploying OpenShift Data Foundation using Red Hat OpenStack Platform [Technology Preview] Deploying OpenShift Data Foundation on IBM Power Deploying OpenShift Data Foundation on IBM Z External mode Deploying OpenShift Data Foundation in external mode
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/planning_your_deployment/next_steps
Chapter 4. Application tuning and deployment
Chapter 4. Application tuning and deployment Tuning a real-time kernel with a combination of optimal configurations and settings can help in enhancing and developing RHEL for Real Time applications. Note In general, try to use POSIX defined APIs (application programming interfaces). RHEL for Real Time is compliant with POSIX standards. Latency reduction in the RHEL for Real Time kernel is also based on POSIX. 4.1. Signal processing in real-time applications Traditional UNIX and POSIX signals have their uses, especially for error handling, but they are not suitable as an event delivery mechanism in real-time applications. This is because the current Linux kernel signal handling code is quite complex, mainly due to legacy behavior and the many APIs that need to be supported. This complexity means that the code paths that are taken when delivering a signal are not always optimal, and long latencies can be experienced by applications. The original motivation behind UNIX signals was to multiplex one thread of control (the process) between different "threads" of execution. Signals behave somewhat like operating system interrupts. That is, when a signal is delivered to an application, the application's context is saved and it starts executing a previously registered signal handler. Once the signal handler completes, the application returns to executing where it was when the signal was delivered. This can get complicated in practice. Signals are too non-deterministic to trust in a real-time application. A better option is to use POSIX Threads (pthreads) to distribute your workload and communicate between various components. You can coordinate groups of threads using the pthreads mechanisms of mutexes, condition variables, and barriers. The code paths through these relatively new constructs are much cleaner than the legacy handling code for signals. Additional resources Requirements of the POSIX Signal Model 4.2. Synchronizing threads The sched_yield function is a synchronization mechanism that can allow lower-priority threads a chance to run. This type of request is prone to failure when issued from within a poorly written application. A higher-priority thread can call sched_yield() to allow other threads a chance to run. The calling process gets moved to the tail of the queue of processes running at that priority. When this occurs in a situation where there are no other processes running at the same priority, the calling process continues running. If the priority of that process is high, it can potentially create a busy loop, rendering the machine unusable. When a SCHED_DEADLINE task calls sched_yield() , it gives up the configured CPU, and the remaining runtime is immediately throttled until the next period. The sched_yield() behavior allows the task to wake up at the start of the next period. The scheduler is then better able to determine when, and if, there actually are other threads waiting to run. Avoid using sched_yield() on any real-time task. Procedure To call the sched_yield() function, run the following code: The SCHED_DEADLINE task gets throttled by the constant bandwidth server (CBS) algorithm until the next period (the start of the next iteration of the loop). Additional resources pthread.h(P) , sched_yield(2) , and sched_yield(3p) man pages on your system 4.3. Real-time scheduler priorities The systemd command can be used to set real-time priority for services launched during the boot process. Some kernel threads can be given a very high priority. 
This allows the default priorities to integrate well with the requirements of the Real Time Specification for Java (RTSJ). RTSJ requires a range of priorities from 10 to 89. For deployments where RTSJ is not in use, there is a wide range of scheduling priorities below 90 that can be used by applications. Use extreme caution when scheduling any application thread above priority 49, because doing so can prevent essential system services from running. This can result in unpredictable behavior, including blocked network traffic, blocked virtual memory paging, and data corruption due to blocked filesystem journaling. If any application threads are scheduled above priority 89, ensure that the threads run only a very short code path. Failure to do so would undermine the low-latency capabilities of the RHEL for Real Time kernel. Setting real-time priority for users without mandatory privileges By default, only users with root permissions can change the priority and scheduling information of an application. To grant this ability to users without root permissions, the preferred method is to add the user to the realtime group. Important You can also change user privileges by editing the /etc/security/limits.conf file. However, this can result in duplication and render the system unusable for regular users. If you decide to edit this file, exercise caution and always create a copy before making changes. 4.4. Loading dynamic libraries When developing real-time applications, consider resolving symbols at application startup to avoid non-deterministic latencies during program execution. Note that resolving symbols at startup can slow down program initialization. You can instruct dynamic libraries to be loaded at application startup by setting the LD_BIND_NOW variable with ld.so , the dynamic linker/loader. For example, the following shell script exports the LD_BIND_NOW variable with a value of 1 , then runs a program with a scheduler policy of FIFO and a priority of 1 . Additional resources ld.so(8) man page on your system
[ "for(;;) { do_the_computation(); /* * Notify the scheduler at the end of the computation * This syscall will block until the next replenishment */ sched_yield(); }", "#!/bin/sh LD_BIND_NOW=1 export LD_BIND_NOW chrt --fifo 1 /opt/myapp/myapp-server &" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_application-tuning-and-deployment_optimizing-rhel9-for-real-time-for-low-latency-operation
4.322. texlive-texmf
4.322. texlive-texmf 4.322.1. RHBA-2011:1677 - texlive-texmf bug fix update Updated texlive-texmf packages that fix one bug are now available for Red Hat Enterprise Linux 6. The texlive-texmf packages contain a texmf distribution based upon TeXLive. TeXLive is an implementation of TeX for Linux or UNIX systems. TeX takes a text file and a set of formatting commands as input and creates a printable file as output. Usually, TeX is used in conjunction with a higher-level formatting package like LaTeX or PlainTeX. Bug Fix BZ# 711344 Prior to this update, LaTeX did not build documents if the source file was more than five years old. An error message appeared and requested manual confirmation, which stopped the build process. With this update, the error message has been changed to a message that states the source file is more than five years old. Now, the build process completes successfully. All users of texlive-texmf are advised to upgrade to these updated packages, which fix this bug.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/texlive-texmf
Chapter 4. New features
Chapter 4. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 8.10. 4.1. Installer and image creation Ability to use partitioning mode on the blueprint filesystem customization With this update, while using RHEL image builder, you can customize your blueprint with the chosen filesystem customization. You can choose one of the following partition modes while you create an image: Default: auto-lvm LVM: the image uses Logical Volume Manager (LVM) even without extra partitions Raw: the image uses raw partitioning even with extra partitions Jira:RHELDOCS-16337 [1] Filesystem customization policy changes in image builder The following policy changes are in place when using the RHEL image builder filesystem customization in blueprints: Currently, mountpoint and minimum partition minsize can be set. The following image types do not support filesystem customizations: image-installer edge-installer edge-simplified-installer The following image types do not create partitioned operating systems images. Customizing their filesystem is meaningless: edge-commit edge-container tar container The blueprint now supports the mountpoint customization for tpm and its sub-directories. Jira:RHELDOCS-17261 [1] 4.2. Security SCAP Security Guide rebased to 0.1.72 The SCAP Security Guide (SSG) packages have been rebased to upstream version 0.1.72. This version provides bug fixes and various enhancements, most notably: CIS profiles are updated to align with the latest benchmarks. The PCI DSS profile is aligned with the PCI DSS policy version 4.0. STIG profiles are aligned with the latest DISA STIG policies. For additional information, see the SCAP Security Guide release notes . Jira:RHEL-25250 [1] OpenSSL now contains protections against Bleichenbacher-like attacks This release of the OpenSSL TLS toolkit introduces API-level protections against Bleichenbacher-like attacks on the RSA PKCS #1 v1.5 decryption process. The RSA decryption now returns a randomly generated deterministic message instead of an error if it detects an error when checking padding during a PKCS #1 v1.5 decryption. The change provides general protection against vulnerabilities such as CVE-2020-25659 and CVE-2020-25657 . You can disable this protection by calling the EVP_PKEY_CTX_ctrl_str(ctx, "rsa_pkcs1_implicit_rejection". "0") function on the RSA decryption context, but this makes your system more vulnerable. Jira:RHEL-17689 [1] librdkafka rebased to 1.6.1 The librdkafka implementation of the Apache Kafka protocol has been rebased to upstream version 1.6.1. This is the first major feature release for RHEL 8. The rebase provides many important enhancements and bug fixes. For all relevant changes, see the CHANGELOG.md document provided in the librdkafka package. Note This update changes configuration defaults and deprecates some configuration properties. Read the Upgrade considerations section in CHANGELOG.md for more details. The API (C & C++) and ABI (c) in this version are compatible with older versions of librdkafka , but some changes to the configuration properties might require changes to existing applications. Jira:RHEL-12892 [1] libkcapi rebased to 1.4.0 The libkcapi library, which provides access to the Linux kernel cryptographic API, has been rebased to upstream version 1.4.0. The update includes various enhancements and bug fixes, most notably: Added the sm3sum and sm3hmac tools. Added the kcapi_md_sm3 and kcapi_md_hmac_sm3 APIs. Added SM4 convenience functions. 
Fixed support for link-time optimization (LTO). Fixed LTO regression testing. Fixed support for AEAD encryption of an arbitrary size with kcapi-enc . Jira:RHEL-5366 [1] stunnel rebased to 5.71 The stunnel TLS/SSL tunneling service has been rebased to upstream version 5.71. This update changes the behavior of OpenSSL 1.1 and later versions in FIPS mode. If OpenSSL is in FIPS mode and stunnel default FIPS configuration is set to no , stunnel adapts to OpenSSL and FIPS mode is enabled. Additional new features include: Added support for modern PostgreSQL clients. You can use the protocolHeader service-level option to insert custom connect protocol negotiation headers. You can use the protocolHost option to control the client SMTP protocol negotiation HELO/EHLO value. Added client-side support for Client-side protocol = ldap . You can now configure session resumption by using the service-level sessionResume option. Added support to request client certificates in server mode with CApath (previously, only CAfile was supported). Improved file reading and logging performance. Added support for configurable delay for the retry option. In client mode, OCSP stapling is requested and verified when verifyChain is set. In server mode, OCSP stapling is always available. Inconclusive OCSP verification breaks TLS negotiation. You can disable this by setting OCSPrequire = no . Jira:RHEL-2340 [1] OpenSSH limits artificial delays in authentication OpenSSH's response after login failure is artificially delayed to prevent user enumeration attacks. This update introduces an upper limit so that such artificial delays do not become excessively long when remote authentication takes too long, for example in privilege access management (PAM) processing. Jira:RHEL-1684 libkcapi now provides an option for specifying target file names in hash-sum calculations This update of the libkcapi (Linux kernel cryptographic API) packages introduces the new option -T for specifying target file names in hash-sum calculations. The value of this option overrides file names specified in processed HMAC files. You can use this option only with the -c option, for example: Jira:RHEL-15300 [1] audit rebased to 3.1.2 The Linux Audit system has been updated to version 3.1.2, which provides bug fixes, enhancements, and performance improvements over the previously released version 3.0.7. Notable enhancements include: The auparse library now interprets unnamed and anonymous sockets. You can use the new keyword this-hour in the start and end options of the ausearch and aureport tools. User-friendly keywords for signals have been added to the auditctl program. Handling of corrupt logs in auparse has been improved. The ProtectControlGroups option is now disabled by default in the auditd service. Rule checking for the exclude filter has been fixed. The interpretation of OPENAT2 fields has been enhanced. The audispd af_unix plugin has been moved to a standalone program. The Python binding has been changed to prevent setting Audit rules from the Python API. This change was made due to a bug in the Simplified Wrapper and Interface Generator (SWIG). Jira:RHEL-15001 [1] 4.3. Shells and command-line tools openCryptoki rebased to version 3.22.0 The opencryptoki package has been updated to version 3.22.0. Notable changes include: Added support for the AES-XTS key type by using the CPACF protected keys. Added support for managing certificate objects. Added support for public sessions with the no-login option. 
Added support for logging in as the Security Officer (SO). Added support for importing and exporting the Edwards and Montgomery keys. Added support for importing the RSA-PSS keys and certificates. For security reasons, the 2 key parts of an AES-XTS key should not be the same. This update adds checks to the key generation and import process to ensure this. Various bug fixes have been implemented. Jira:RHEL-11413 [1] 4.4. Infrastructure services chrony rebased to version 4.5 The chrony suite has been updated to version 4.5. Notable changes include: Added periodic refresh of IP addresses of Network Time Protocol (NTP) sources specified by hostname. The default interval is two weeks and it can be disabled by adding refresh 0 to the chrony.conf file. Improved automatic replacement of unreachable NTP sources. Improved logging of important changes made by the chronyc utility. Improved logging of source selection failures and falsetickers. Added the hwtstimeout directive to configure timeout for late hardware transmit timestamps. Added experimental support for corrections provided by Precision Time Protocol (PTP) transparent clocks to reach accuracy of PTP with hardware timestamping. Fixed the presend option in interleaved mode. Fixed reloading of modified sources specified by IP address from the sourcedir directories. Jira:RHEL-21069 linuxptp rebased to version 4.2 The linuxptp protocol has been updated to version 4.2. Notable changes include: Added support for multiple domains in the phc2sys utility. Added support for notifications on clock updates and changes in the Precision Time Protocol (PTP) parent dataset, for example, clock class. Added support for PTP Power Profile, namely IEEE C37.238-2011 and IEEE C37.238-2017. Jira:RHEL-21326 [1] 4.5. Networking firewalld now avoids unnecessary firewall rule flushes The firewalld service does not remove all existing rules from the iptables configuration if both following conditions are met: firewalld is using the nftables backend. There are no firewall rules created with the --direct option. This change aims at reducing unnecessary operations (firewall rules flushes) and improves integration with other software. Jira:RHEL-47595 The ss utility adds visibility improvement to TCP bound-inactive sockets The iproute2 suite provides a collection of utilities to control TCP/IP networking traffic. TCP bound-inactive sockets are attached to an IP address and a port number but neither connected nor listening on TCP ports. The socket services ( ss ) utility adds support for the kernel to dump TCP bound-inactive sockets. You can view those sockets with the following command options: ss --all : to dump all sockets including TCP bound-inactive ones ss --bound-inactive : to dump only bound-inactive sockets Jira:RHEL-6113 [1] nispor rebased to version 1.2.10 The nispor packages have been upgraded to upstream version 1.2.10, which provides several enhancements and bug fixes over the version: Added support for NetStateFilter to use the kernel filter on network routes and interfaces. Single Root Input and Output Virtualization (SR-IOV) interfaces can query SR-IOV Virtual Function (SR-IOV VF) information per (VF). Newly supported bonding options: lacp_active , arp_missed_max , and ns_ip6_target . Bugzilla:2153166 4.6. Kernel Kernel version in RHEL 8.10 Red Hat Enterprise Linux 8.10 is distributed with the kernel version 4.18.0-553. 
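For the chrony source-refresh behavior noted above, disabling the new periodic DNS refresh is a one-line configuration change. A minimal sketch using the standard RHEL configuration path, assuming no other changes to the file are needed:
# Append the refresh 0 directive described above and restart the service:
echo 'refresh 0' >> /etc/chrony.conf
systemctl restart chronyd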
rtla rebased to version 6.6 of the upstream kernel source code The rtla utility has been upgraded to the latest upstream version, which provides multiple bug fixes and enhancements. Notable changes include: Added the -C option to specify additional control groups for rtla threads to run in, apart from the main rtla thread. Added the --house-keeping option to place rtla threads on a housekeeping CPU and to put measurement threads on different CPUs. Added support to the timerlat tracer so that you can run timerlat hist and timerlat top threads in user space. Jira:RHEL-10081 [1] rteval was upgraded to the upstream version 3.7 With this update, the rteval utility has been upgraded to the upstream version 3.7. The most significant feature in this update concerns the isolcpus kernel parameter. This includes the ability to detect and use the isolcpus mechanism for measurement modules in rteval . As a result, it is easier for isolcpus users to use rteval to get accurate latency numbers and to achieve the best latency results measured on a realtime kernel. Jira:RHEL-8967 [1] SGX is now fully supported Software Guard Extensions (SGX) is an Intel(R) technology for protecting software code and data from disclosure and modification. The RHEL kernel provides the SGX version 1 and 2 functionality. Version 1 enables platforms using the Flexible Launch Control mechanism to use the SGX technology. Version 2 adds Enclave Dynamic Memory Management (EDMM). Notable features include: Modifying EPCM permissions of regular enclave pages that belong to an initialized enclave. Dynamic addition of regular enclave pages to an initialized enclave. Expanding an initialized enclave to accommodate more threads. Removing regular and TCS pages from an initialized enclave. In this release, SGX moves from Technology Preview to a fully supported feature. Bugzilla:2041881 [1] The Intel data streaming accelerator driver is now fully supported The Intel data streaming accelerator driver (IDXD) is a kernel driver that provides an Intel CPU integrated accelerator. It includes a shared work queue with process address space ID ( pasid ) submission and shared virtual memory (SVM). In this release, IDXD moves from a Technology Preview to a fully supported feature. Jira:RHEL-10097 [1] rteval now supports adding and removing arbitrary CPUs from the default measurement CPU list With the rteval utility, you can add CPUs to (using the + sign) or subtract CPUs from (using the - sign) the default measurement CPU list when using the --measurement-cpulist parameter, instead of having to specify an entire new list. Additionally, --measurement-run-on-isolcpus is introduced for adding the set of all isolated CPUs to the default measurement CPU list. This option covers the most common use case of a real-time application running on isolated CPUs. Other use cases require a more generic feature. For example, some real-time applications use one isolated CPU for housekeeping, requiring it to be excluded from the default measurement CPU list. As a result, you can now not only add, but also remove arbitrary CPUs from the default measurement CPU list in a flexible way. Removing takes precedence over adding. This rule applies both to CPUs specified with the +/- signs and to those defined with --measurement-run-on-isolcpus . Jira:RHEL-21926 [1] 4.7. Boot loader DEP/NX support in the pre-boot stage The memory protection feature known as Data Execution Prevention (DEP), No Execute (NX), or Execute Disable (XD), blocks the execution of code that is marked as non-executable. 
DEP/NX has been available in RHEL at the operating system level. This release adds DEP/NX support in the GRUB and shim boot loaders. This can prevent certain attacks during the pre-boot stage, for example, by a malicious EFI driver that could otherwise execute code without the DEP/NX protection. Jira:RHEL-15856 [1] Support for TD RTMR measurement in GRUB and shim Intel(R) Trust Domain Extension (Intel(R) TDX) is a confidential computing technology that deploys hardware-isolated virtual machines (VMs) called Trust Domains (TDs). TDX extends the Virtual Machine Extensions (VMX) instructions and the Multi-key Total Memory Encryption (MKTME) feature with the TD VM guest. In a TD guest VM, all components in the boot chain, such as grub2 and shim , must log the event and measurement hash to runtime measurement registers (RTMR). TD guest runtime measurement in RTMR is the base for attestation applications. Applications on the TD guest rely on TD measurement to provide trust evidence to get confidential information, such as a key from the relying party through the attestation service. With this release, the GRUB and shim boot loaders now support the TD measurement protocol. For more information about Intel(R) TDX, see Documentation for Intel(R) Trust Domain Extensions . Jira:RHEL-15583 [1] 4.8. File systems and storage The Storage RHEL System Roles now support shared LVM device management The RHEL System Roles now support the creation and management of shared logical volumes and volume groups. Jira:RHEL-14022 multipathd now supports detecting FPIN-Li events for NVMe devices Previously, the multipathd command would only monitor Fabric Performance Impact Notification - Link Integrity (FPIN-Li) events on SCSI devices. multipathd could listen for Link Integrity events sent by a Fibre Channel fabric and use them to mark paths as marginal. This feature was only supported for multipath devices on top of SCSI devices, and multipathd was unable to mark Non-volatile Memory Express (NVMe) device paths as marginal, limiting the use of this feature. With this update, multipathd supports detecting FPIN-Li events for both SCSI and NVMe devices. As a result, multipath does not use paths without a good fabric connection while other paths are available. This helps to avoid IO delays in such situations. Jira:RHEL-6677 4.9. Dynamic programming languages, web and database servers Python 3.12 available in RHEL 8 RHEL 8.10 introduces Python 3.12, provided by the new package python3.12 and a suite of packages built for it, and the ubi8/python-312 container image. Notable enhancements compared to the previously released Python 3.11 include: Python introduces a new type statement and new type parameter syntax for generic classes and functions. Formatted string literals (f-strings) have been formalized in the grammar and can now be integrated into the parser directly. Python now provides a unique per-interpreter global interpreter lock (GIL). You can now use the buffer protocol from Python code. Dictionary, list, and set comprehensions in CPython are now inlined. This significantly increases the speed of comprehension execution. CPython now supports the Linux perf profiler. CPython now provides stack overflow protection on supported platforms. To install packages from the python3.12 stack, use, for example: To run the interpreter, use, for example: See Installing and using Python for more information. For information about the length of support of Python 3.12, see Red Hat Enterprise Linux Application Streams Life Cycle . 
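As a convenience, the usual installation and invocation pattern for the python3.12 stack is sketched below. The package names are assumptions based on the stack name above, so verify them against your repositories:
# Install the interpreter and pip from the python3.12 stack (package names assumed):
yum install python3.12 python3.12-pip
# Run the interpreter:
python3.12
# Optionally, create an isolated environment for application dependencies:
python3.12 -m venv ~/venv312
~/venv312/bin/pip install <package>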
Jira:RHEL-14942 A new environment variable in Python to control parsing of email addresses To mitigate CVE-2023-27043 , a backward incompatible change to ensure stricter parsing of email addresses was introduced in Python 3. This update introduces a new PYTHON_EMAIL_DISABLE_STRICT_ADDR_PARSING environment variable. When you set this variable to true , the previous, less strict parsing behavior is the default for the entire system: However, individual calls to the affected functions can still enable stricter behavior. You can achieve the same result by creating the /etc/python/email.cfg configuration file with the following content: For more information, see the Knowledgebase article Mitigation of CVE-2023-27043 introducing stricter parsing of email addresses in Python . Jira:RHELDOCS-17369 [1] A new module stream: ruby:3.3 RHEL 8.10 introduces Ruby 3.3.0 in a new ruby:3.3 module stream. This version provides several performance improvements, bug and security fixes, and new features over Ruby 3.1 distributed with RHEL 8.7. Notable enhancements include: You can use the new Prism parser instead of Ripper . Prism is a portable, error tolerant, and maintainable recursive descent parser for the Ruby language. YJIT, the Ruby just-in-time (JIT) compiler implementation, is no longer experimental and it provides major performance improvements. The Regexp matching algorithm has been improved to reduce the impact of potential Regular Expression Denial of Service (ReDoS) vulnerabilities. The new experimental RJIT (a pure-Ruby JIT) compiler replaces MJIT. Use YJIT in production. A new M:N thread scheduler is now available. Other notable changes: You must now use the Lrama LALR parser generator instead of Bison . Several deprecated methods and constants have been removed. The Racc gem has been promoted from a default gem to a bundled gem. To install the ruby:3.3 module stream, use: If you want to upgrade from an earlier ruby module stream, see Switching to a later stream . For information about the length of support of Ruby 3.3, see Red Hat Enterprise Linux Application Streams Life Cycle . Jira:RHEL-17090 [1] A new module stream: php:8.2 RHEL 8.10 adds PHP 8.2, which provides several bug fixes and enhancements over version 8.0. With PHP 8.2 , you can: Define a custom type that is limited to one of a discrete number of possible values using the Enumerations (Enums) feature. Declare a property with the readonly modifier to prevent modification of the property after initialization. Use fibers, full-stack, and interruptible functions. Use readonly classes. Declare several new standalone types. Use a new Random extension. Define constraints in traits. To install the php:8.2 module stream, use the following command: If you want to upgrade from an earlier php stream, see Switching to a later stream . For details regarding PHP usage on RHEL 8, see Using the PHP scripting language . For information about the length of support for the php module streams, see the Red Hat Enterprise Linux Application Streams Life Cycle . Jira:RHEL-14705 [1] The name() method of the perl-DateTime-TimeZone module now returns the time zone name The perl-DateTime-TimeZone module has been updated to version 2.62, which changed the value that is returned by the name() method from the time zone alias to the main time zone name. For more information and an example, see the Knowledgebase article Change in the perl-DateTime-TimeZone API related to time zone name and alias . 
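The module streams described above follow the same installation pattern; the following sketch assumes the standard yum module syntax with the stream names given in the notes:
# List the available streams to confirm the names, then install a stream and its default profile:
yum module list ruby php
yum module install ruby:3.3
yum module install php:8.2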
Jira:RHEL-35685 A new module stream: nginx:1.24 The nginx 1.24 web and proxy server is now available as the nginx:1.24 module stream. This update provides several bug fixes, security fixes, new features, and enhancements over the previously released version 1.22. New features and changes related to Transport Layer Security (TLS): Encryption keys are now automatically rotated for TLS session tickets when using shared memory in the ssl_session_cache directive. Memory usage has been optimized in configurations with Secure Sockets Layer (SSL) proxy. You can now disable looking up IPv4 addresses while resolving by using the ipv4=off parameter of the resolver directive. nginx now supports the USDproxy_protocol_tlv_* variables, which store the values of the Type-Length-Value (TLV) fields that appear in the PROXY v2 TLV protocol. The ngx_http_gzip_static_module module now supports byte ranges. Other changes: Header lines are now represented as linked lists in the internal API. nginx now concatenates identically named header strings passed to the FastCGI, SCGI, and uwsgi back ends in the USDr->header_in() method of the ngx_http_perl_module , and during lookups of the USDhttp_... , USDsent_http_... , USDsent_trailer_... , USDupstream_http_... , and USDupstream_trailer_... variables. nginx now displays a warning if protocol parameters of a listening socket are redefined. nginx now closes connections with lingering if pipelining was used by the client. The logging level of various SSL errors has been lowered, for example, from Critical to Informational . To install the nginx:1.24 stream, use: To upgrade from an earlier nginx stream, switch to a later stream . For more information, see Setting up and configuring NGINX . For information about the length of support for the nginx module streams, see the Red Hat Enterprise Linux Application Streams Life Cycle article. Jira:RHEL-14714 [1] A new module stream: mariadb:10.11 MariaDB 10.11 is now available as a new module stream, mariadb:10.11 . Notable enhancements over the previously available version 10.5 include: A new sys_schema feature. Atomic Data Definition Language (DDL) statements. A new GRANT ... TO PUBLIC privilege. Separate SUPER and READ ONLY ADMIN privileges. A new UUID database data type. Support for the Secure Socket Layer (SSL) protocol version 3; the MariaDB server now requires correctly configured SSL to start. Support for the natural sort order through the natural_sort_key() function. A new SFORMAT function for arbitrary text formatting. Changes to the UTF-8 charset and the UCA-14 collation. systemd socket activation files available in the /usr/share/ directory. Note that they are not a part of the default configuration in RHEL as opposed to upstream. Error messages containing the MariaDB string instead of MySQL . Error messages available in the Chinese language. Changes to the default logrotate file. For MariaDB and MySQL clients, the connection property specified on the command line (for example, --port=3306 ), now forces the protocol type of communication between the client and the server, such as tcp , socket , pipe , or memory . For more information about changes in MariaDB 10.11, see Notable differences between MariaDB 10.5 and MariaDB 10.11 . For more information about MariaDB, see Using MariaDB . To install the mariadb:10.11 stream, use: If you want to upgrade from the mariadb:10.5 module stream, see Upgrading from MariaDB 10.5 to MariaDB 10.11 . 
For information about the length of support for the mariadb module streams, see Red Hat Enterprise Linux Application Streams Life Cycle . Jira:RHEL-3637 A new module stream: postgresql:16 RHEL 8.10 introduces PostgreSQL 16, which provides several new features and enhancements over version 15. Notable enhancements include: Enhanced bulk loading improves performance. The libpq library now supports connection-level load balancing. You can use the new load_balance_hosts option for more efficient load balancing. You can now create custom configuration files and include them in the pg_hba.conf and pg_ident.conf files. PostgreSQL now supports regular expression matching on database and role entries in the pg_hba.conf file. Other changes include: PostgreSQL is no longer distributed with the postmaster binary. Users who start the postgresql server by using the provided systemd unit file (the systemctl start postgres command) are not affected by this change. If you previously started the postgresql server directly through the postmaster binary, you must now use the postgres binary instead. PostgreSQL no longer provides documentation in PDF format within the package. Use the online documentation instead. See also Using PostgreSQL . To install the postgresql:16 stream, use the following command: If you want to upgrade from an earlier postgresql stream within RHEL 8, follow the procedure described in Switching to a later stream and then migrate your PostgreSQL data as described in Migrating to a RHEL 8 version of PostgreSQL . For information about the length of support for the postgresql module streams, see the Red Hat Enterprise Linux Application Streams Life Cycle . Jira:RHEL-3636 Git rebased to version 2.43.0 The Git version control system has been updated to version 2.43.0, which provides bug fixes, enhancements, and performance improvements over the previously released version 2.39. Notable enhancements include: You can now use the new --source option with the git check-attr command to read the .gitattributes file from the provided tree-ish object instead of the current working directory. Git can now pass information from the WWW-Authenticate response-type header to credential helpers. In case of an empty commit, the git format-patch command now writes an output file containing a header of the commit instead of creating an empty file. You can now use the git blame --contents= <file> <revision> -- <path> command to find the origins of lines starting at <file> contents through the history that leads to <revision> . The git log --format command now accepts the %(decorate) placeholder for further customization to extend the capabilities provided by the --decorate option. Jira:RHEL-17103 [1] Git LFS rebased to version 3.4.1 The Git Large File Storage (LFS) extension has been updated to version 3.4.1, which provides bug fixes, enhancements, and performance improvements over the previously released version 3.2.0. Notable changes include: The git lfs push command can now read references and object IDs from standard input. Git LFS now handles alternative remotes without relying on Git. Git LFS now supports the WWW-Authenticate response-type header as a credential helper. Jira:RHEL-17102 [1] Increased performance of the Python interpreter All supported versions of Python in RHEL 8 are now compiled with the -O3 optimization flag, which is the default in upstream. As a result, you can observe increased performance of your Python applications and the interpreter itself. 
The change is available with the release of the following advisories: python3.12 - RHSA-2024:6961 python3.11 - RHSA-2024:6962 python3 - RHSA-2024:6975 the python39 module - RHSA-2024:5962 Jira:RHEL-49614 [1] , Jira:RHEL-49636, Jira:RHEL-49644, Jira:RHEL-49638 A new nodejs:22 module stream is now available A new module stream, nodejs:22 , is now available with the release of the RHEA-2025:0734 advisory. Node.js 22 included in RHEL 8.10 provides numerous new features, bug fixes, security fixes, and performance improvements over Node.js 20 available since RHEL 8.9. Notable changes include: The V8 JavaScript engine has been upgraded to version 12.4. The V8 Maglev compiler is now enabled by default on architectures where it is available (AMD and Intel 64-bit architectures and the 64-bit ARM architecture). Maglev improves performance for short-lived CLI programs. The npm package manager has been upgraded to version 10.9.0. The node --watch mode is now considered stable. In watch mode, changes in watched files cause the Node.js process to restart. The browser-compatible implementation of WebSocket is now considered stable and enabled by default. As a result, a WebSocket client to Node.js is available without external dependencies. Node.js now includes an experimental feature for execution of scripts from package.json . To use this feature, execute the node --run <script-in-package.json> command. To install the nodejs:22 module stream, enter: If you want to upgrade from the nodejs20 stream, see Switching to a later stream . For information about the length of support for the nodejs Application Streams, see Red Hat Enterprise Linux Application Streams Life Cycle . Jira:RHEL-35991 4.10. Compilers and development tools New GCC Toolset 14 GCC Toolset 14 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. The following tools and versions are provided by GCC Toolset 14 available with the release of the RHEA-2024:8851 advisory: GCC 14.2 GDB 14.2 binutils 2.41 annobin 12.70 dwz 0.14 To install GCC Toolset 14, run the following command as root: To run a tool from GCC Toolset 14: To run a shell session where tool versions from GCC Toolset 14 override system versions of these tools: GCC Toolset 14 components are also available in the gcc-toolset-14-toolchain container image. For more information, see GCC Toolset 14 and Using GCC Toolset . Jira:RHEL-34596 [1] , Jira:RHEL-30411 GCC Toolset 14: GCC rebased to version 14.2 In GCC Toolset 14, the GNU Compiler Collection (GCC) has been updated to version 14.2 with the release of the RHEA-2024:8864 advisory. 
Notable changes include: Optimization and diagnostic improvements A new -fhardened umbrella option, which enables a set of hardening flags A new -fharden-control-flow-redundancy option to detect attacks that transfer control into the middle of functions A new strub type attribute to control stack scrubbing properties of functions and variables A new -finline-stringops option to force inline expansion of certain mem* functions Support for new OpenMP 5.1, 5.2, and 6.0 features Several new C23 features Multiple new C++23 and C++26 features Several resolved C++ defect reports New and improved experimental support for C++20, C++23, and C++26 in the C++ library Support for new CPUs in the 64-bit ARM architecture Multiple new instruction set architecture (ISA) extensions in the 64-bit Intel architecture, for example: AVX10.1, AVX-VNNI-INT16, SHA512, and SM4 New warnings in the GCC's static analyzer Certain warnings changed to errors; for details, see Porting to GCC 14 Various bug fixes For more information about changes in GCC 14, see the upstream GCC release notes . Jira:RHEL-30412 [1] GCC Toolset 14: GDB rebased to version 14.2 In GCC Toolset 14, GDB has been updated to version 14.2 with the release of the RHBA-2024:8862 advisory. The following paragraphs list notable changes since GDB 12.1. General: The info breakpoints command now displays enabled breakpoint locations of disabled breakpoints as in the y- state. Added support for debug sections compressed with Zstandard ( ELFCOMPRESS_ZSTD ) for ELF. The Text User Interface (TUI) no longer styles the source and assembly code highlighted by the current position indicator by default. To re-enable styling, use the new command set style tui-current-position . A new USD_inferior_thread_count convenience variable contains the number of live threads in the current inferior. For breakpoints with multiple code locations, GDB now prints the code location using the <breakpoint_number>.<location_number> syntax. When a breakpoint is hit, GDB now sets the USD_hit_bpnum and USD_hit_locno convenience variables to the hit breakpoint number and code location number. You can now disable the last hit breakpoint by using the disable USD_hit_bpnum command, or disable only the specific breakpoint code location by using the disable USD_hit_bpnum.USD_hit_locno command. Added support for the NO_COLOR environment variable. Added support for integer types larger than 64 bits. You can use new commands for multi-target feature configuration to configure remote target feature sets (see the set remote <name>-packet and show remote <name>-packet in Commands). Added support for the Debugger Adapter Protocol. You can now use the new inferior keyword to make breakpoints inferior-specific (see break or watch in Commands). You can now use the new USD_shell() convenience function to execute a shell command during expression evaluation. Changes to existing commands: break , watch Using the thread or task keywords multiple times with the break and watch commands now results in an error instead of using the thread or task ID of the last instance of the keyword. Using more than one of the thread , task , and inferior keywords in the same break or watch command is now invalid. printf , dprintf The printf and dprintf commands now accept the %V output format, which formats an expression the same way as the print command. You can also modify the output format by using additional print options in brackets [... ] following the command, for example: printf "%V[-array-indexes on]", <array> . 
list You can now use the . argument to print the location around the point of execution in the current frame, or around the beginning of the main() function if the inferior has not started yet. Attempting to list more source lines in a file than are available now issues a warning, referring the user to the . argument. document user-defined It is now possible to document user-defined aliases. New commands: set print nibbles [on|off] (default: off ), show print nibbles - controls whether the print/t command displays binary values in groups of four bits (nibbles). set debug infcall [on|off] (default: off ), show debug infcall - prints additional debug messages about inferior function calls. set debug solib [on|off] (default: off ), show debug solib - prints additional debug messages about shared library handling. set print characters <LIMIT> , show print characters , print -characters <LIMIT> - controls how many characters of a string are printed. set debug breakpoint [on|off] (default: off ), show debug breakpoint - prints additional debug messages about breakpoint insertion and removal. maintenance print record-instruction [ N ] - prints the recorded information for a given instruction. maintenance info frame-unwinders - lists the frame unwinders currently in effect in the order of priority (highest first). maintenance wait-for-index-cache - waits until all pending writes to the index cache are completed. info main - prints information on the main symbol to identify an entry point into the program. set tui mouse-events [on|off] (default: on ), show tui mouse-events - controls whether mouse click events are sent to the TUI and Python extensions (when on ), or the terminal (when off ). Machine Interface (MI) changes: MI version 1 has been removed. MI now reports no-history when reverse execution history is exhausted. The thread and task breakpoint fields are no longer reported twice in the output of the -break-insert command. Thread-specific breakpoints can no longer be created on non-existent thread IDs. The --simple-values argument to the -stack-list-arguments , -stack-list-locals , -stack-list-variables , and -var-list-children commands now considers reference types as simple if the target is simple. The -break-insert command now accepts a new -g thread-group-id option to create inferior-specific breakpoints. Breakpoint-created notifications and the output of the -break-insert command can now include an optional inferior field for the main breakpoint and each breakpoint location. The async record stating the breakpoint-hit stopped reason now contains an optional field locno giving the code location number in case of a multi-location breakpoint. Changes in the GDB Python API: Events A new gdb.ThreadExitedEvent event. A new gdb.executable_changed event registry, which emits the ExecutableChangedEvent objects that have progspace and reload attributes. New gdb.events.new_progspace and gdb.events.free_progspace event registries, which emit the NewProgpspaceEvent and FreeProgspaceEvent event types. Both of these event types have a single attribute progspace to specify the gdb.Progspace program space that is being added to or removed from GDB. The gdb.unwinder.Unwinder class The name attribute is now read-only. The name argument of the __init__ function must be of the str type, otherwise a TypeError is raised. The enabled attribute now accepts only the bool type. 
The gdb.PendingFrame class New methods: name , is_valid , pc , language , find_sal , block , and function , which mirror similar methods of the gdb.Frame class. The frame-id argument of the create_unwind_info function can now be either an integer or a gdb.Value object for the pc , sp , and special attributes. A new gdb.unwinder.FrameId class, which can be passed to the gdb.PendingFrame.create_unwind_info function. The gdb.disassembler.DisassemblerResult class can no longer be sub-classed. The gdb.disassembler module now includes styling support. A new gdb.execute_mi(COMMAND, [ARG]... ) function, which invokes a GDB/MI command and returns result as a Python dictionary. A new gdb.block_signals() function, which returns a context manager that blocks any signals that GDB needs to handle. A new gdb.Thread subclass of the threading.Thread class, which calls the gdb.block_signals function in its start method. The gdb.parse_and_eval function has a new global_context parameter to restrict parsing on global symbols. The gdb.Inferior class A new arguments attribute, which holds the command-line arguments to the inferior, if known. A new main_name attribute, which holds the name of the inferior's main function, if known. New clear_env , set_env , and unset_env methods, which can modify the inferior's environment before it is started. The gdb.Value class A new assign method to assign a value of an object. A new to_array method to convert an array-like value to an array. The gdb.Progspace class A new objfile_for_address method, which returns the gdb.Objfile object that covers a given address (if exists). A new symbol_file attribute holding the gdb.Objfile object that corresponds to the Progspace.filename variable (or None if the filename is None ). A new executable_filename attribute, which holds the string with a filename that is set by the exec-file or file commands, or None if no executable file is set. The gdb.Breakpoint class A new inferior attribute, which contains the inferior ID (an integer) for breakpoints that are inferior-specific, or None if no such breakpoints are set. The gdb.Type class New is_array_like and is_string_like methods, which reflect whether a type might be array- or string-like regardless of the type's actual type code. A new gdb.ValuePrinter class, which can be used as the base class for the result of applying a pretty-printer. A newly implemented gdb.LazyString.__str__ method. The gdb.Frame class A new static_link method, which returns the outer frame of a nested function frame. A new gdb.Frame.language method that returns the name of the frame's language. The gdb.Command class GDB now reformats the doc string for the gdb.Command class and the gdb.Parameter sub-classes to remove unnecessary leading whitespace from each line before using the string as the help output. The gdb.Objfile class A new is_file attribute. A new gdb.format_address(ADDRESS, PROGSPACE, ARCHITECTURE) function, which uses the same format as when printing address, symbol, and offset information from the disassembler. A new gdb.current_language function, which returns the name of the current language. A new Python API for wrapping GDB's disassembler, including gdb.disassembler.register_disassembler(DISASSEMBLER, ARCH) , gdb.disassembler.Disassembler , gdb.disassembler.DisassembleInfo , gdb.disassembler.builtin_disassemble(INFO, MEMORY_SOURCE) , and gdb.disassembler.DisassemblerResult . 
A new gdb.print_options function, which returns a dictionary of the prevailing print options, in the form accepted by the gdb.Value.format_string function. The gdb.Value.format_string function gdb.Value.format_string now uses the format provided by the print command if it is called during a print or other similar operation. gdb.Value.format_string now accepts the summary keyword. A new gdb.BreakpointLocation Python type. The gdb.register_window_type method now restricts the set of acceptable window names. Architecture-specific changes: AMD and Intel 64-bit architectures Added support for disassembler styling using the libopcodes library, which is now used by default. You can modify how the disassembler output is styled by using the set style disassembler * commands. To use the Python Pygments styling instead, use the new maintenance set libopcodes-styling off command. The 64-bit ARM architecture Added support for dumping memory tag data for the Memory Tagging Extension (MTE). Added support for the Scalable Matrix Extension 1 and 2 (SME/SME2). Some features are still considered experimental or alpha, for example, manual function calls with ZA state or tracking Scalable Vector Graphics (SVG) changes based on DWARF. Added support for Thread Local Storage (TLS) variables. Added support for hardware watchpoints. The 64-bit IBM Z architecture Record and replay support for the new arch14 instructions on IBM Z targets, except for the specialized-function-assist instruction NNPA . IBM Power Systems, Little Endian Added base enablement support for POWER11. Jira:RHELDOCS-18598 [1] , Jira:RHEL-36225, Jira:RHEL-36518 GCC Toolset 14: annobin rebased to version 12.70 In GCC Toolset 14, annobin has been updated to version 12.70 with the release of the RHBA-2024:8863 advisory. The updated set of the annobin tools for testing binaries provides various bug fixes, introduces new tests, and updates the tools to build and work with newer versions of the GCC, Clang, LLVM, and Go compilers. With the enhanced tools, you can detect new issues in programs that are built in a non-standard way. Jira:RHEL-30409 [1] GCC Toolset 13: GCC supports AMD Zen 5 With the release of the RHBA-2024:8829 advisory, the GCC Toolset 13 version of GCC adds support for the AMD Zen 5 processor microarchitecture. To enable the support, use the -march=znver5 command-line option. Jira:RHEL-36524 [1] LLVM Toolset updated to 18.1.8 LLVM Toolset has been updated to version 18.1.8 with the release of the RHBA-2024:8828 advisory. Notable LLVM updates: The constant expression variants of the following instructions have been removed: and , or , lshr , ashr , zext , sext , fptrunc , fpext , fptoui , fptosi , uitofp , sitofp . The llvm.exp10 intrinsic has been added. The code_model attribute for global variables has been added. The backend for the AArch64, AMDGPU, PowerPC, RISC-V, SystemZ and x86 architectures has been improved. LLVM tools have been improved. Notable Clang enhancements: C++20 feature support: Clang no longer performs One Definition Rule (ODR) checks for declarations in the global module fragment. To enable more strict behavior, use the -Xclang -fno-skip-odr-check-in-gmf option. C++23 feature support: A new diagnostic flag -Wc++23-lambda-attributes has been added to warn about the use of attributes on lambdas. C++2c feature support: Clang now allows using the _ character as a placeholder variable name multiple times in the same scope. Attributes now expect unevaluated strings in attribute parameters that are string literals. 
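The C++2c placeholder change mentioned above is easy to verify with a short translation unit. This is a sketch that assumes the Clang 18 binary from this LLVM Toolset is on the PATH; the file name is arbitrary.

cat > placeholder.cpp <<'EOF'
#include <mutex>

std::mutex m;

void f() {
    std::lock_guard _(m);   // first use of the _ placeholder in this scope
    int _ = 42;             // reusing _ in the same scope is now accepted
}
EOF
clang++ -std=c++2c -fsyntax-only placeholder.cpp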
The deprecated arithmetic conversion on enumerations from C++26 has been removed. The specification of template parameter initialization has been improved. For a complete list of changes, see the upstream release notes for Clang . ABI changes in Clang: Following the SystemV ABI for x86_64, the __int128 arguments are no longer split between a register and a stack slot. For more information, see the list of ABI changes in Clang . Notable backwards incompatible changes: A bug fix in the reversed argument order for templated operators breaks code in C++20 that was previously accepted in C++17. The GCC_INSTALL_PREFIX CMake variable (which sets the default --gcc-toolchain= ) is deprecated and will be removed. Specify the --gcc-install-dir= or --gcc-triple= option in a configuration file instead. The default extension name for precompiled headers (PCH) generation ( -c -xc-header and -c -xc++-header ) is now .pch instead of .gch . When -include a.h probes the a.h.gch file, the include now ignores a.h.gch if it is not a Clang PCH file or a directory containing any Clang PCH file. A bug that caused __has_cpp_attribute and __has_c_attribute to return incorrect values for certain C++-11-style attributes has been fixed. A bug in finding a matching operator!= while adding a reversed operator== has been fixed. The name mangling rules for function templates have been changed to accept that functions can be overloaded on their template parameter lists or requires-clauses. The -Wenum-constexpr-conversion warning is now enabled by default on system headers and macros. It will be turned into a hard (non-downgradable) error in the Clang release. A path to the imported modules for C++20 named modules can no longer be hardcoded. You must specify all the dependent modules from the command line. It is no longer possible to import modules by using import <module> ; Clang uses explicitly-built modules. For more details, see the list of potentially breaking changes . For more information, see the LLVM release notes and Clang release notes . LVM Toolset is a rolling Application Stream, and only the latest version is supported. For more information, see the Red Hat Enterprise Linux Application Streams Life Cycle document. Jira:RHEL-30907 [1] Rust Toolset rebased to version 1.79.0 Rust Toolset has been updated to version 1.79.0 with the release of the RHBA-2024:8827 advisory. Notable enhancements since the previously available version 1.75.0 include: A new offset_of! macro Support for C-string literals Support for inline const expressions Support for bounds in associated type position Improved automatic temporary lifetime extension Debug assertions for unsafe preconditions Rust Toolset is a rolling Application Stream, and only the latest version is supported. For more information, see the Red Hat Enterprise Linux Application Streams Life Cycle document. Jira:RHEL-30073 [1] Go Toolset rebased to version 1.22 Go Toolset has been updated to version 1.22 with the release of the RHSA-2024:8876 advisory. Notable enhancements include: Variables in for loops are now created per iteration, preventing accidental sharing bugs. Additionally, for loops can now range over integers. Commands in workspaces can now use a vendor directory for the dependencies of the workspace. The go get command no longer supports the legacy GOPATH mode. This change does not affect the go build and go test commands. The vet tool has been updated to match the new behavior of the for loops. 
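The new for loop behavior described above can be demonstrated with a few lines of Go. This is a sketch; the file name is arbitrary.

cat > loopvar.go <<'EOF'
package main

import "fmt"

func main() {
    var printers []func()
    // Go 1.22: i is a new variable on every iteration, and for can range over an integer.
    for i := range 3 {
        printers = append(printers, func() { fmt.Println(i) })
    }
    for _, p := range printers {
        p() // prints 0, 1, 2; with the old per-loop variable, every closure would see the same i
    }
}
EOF
go run loopvar.go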
CPU performance has been improved by keeping type-based garbage collection metadata nearer to each heap object. Go now provides improved inlining optimizations and better profile-guided optimization support for higher performance. A new math/rand/v2 package is available. Go now provides enhanced HTTP routing patterns with support for methods and wildcards. For more information, see the Go upstream release notes. Go Toolset is a rolling Application Stream, and only the latest version is supported. For more information, see the Red Hat Enterprise Linux Application Streams Life Cycle document. Jira:RHEL-46972 [1] elfutils rebased to version 0.190 The elfutils package has been updated to version 0.190. Notable improvements include: The libelf library now supports relative relocation (RELR). The libdw library now recognizes .debug_[ct]u_index sections. The eu-readelf utility now supports a new -Ds , --use-dynamic --symbol option to show symbols through the dynamic segment without using ELF sections. The eu-readelf utility can now show .gdb_index version 9. A new eu-scrlines utility compiles a list of source files associated with a specified DWARF or ELF file. A debuginfod server schema has changed for a 60% compression in file name representation (this requires reindexing). Jira:RHEL-15924 valgrind updated to 3.22 The valgrind package has been updated to version 3.22. Notable improvements include: valgrind memcheck now checks that the values given to the C functions memalign , posix_memalign , and aligned_alloc , and the C++17 aligned new operator are valid alignment values. valgrind memcheck now supports mismatch detection for C++14 sized and C++17 aligned new and delete operators. Added support for lazy reading of DWARF debugging information, resulting in faster startup when debuginfo packages are installed. Jira:RHEL-15926 Clang resource directory moved The Clang resource directory, where Clang stores its internal headers and libraries, has been moved from /usr/lib64/clang/17 to /usr/lib/clang/17 . Jira:RHEL-9299 A new grafana-selinux package Previously, the default installation of grafana-server ran as an unconfined_service_t SELinux type. This update adds the new grafana-selinux package, which contains an SELinux policy for grafana-server and which is installed by default with grafana-server . As a result, grafana-server now runs as grafana_t SELinux type. Jira:RHEL-7503 Updated GCC Toolset 13 GCC Toolset 13 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. Notable changes introduced in RHEL 8.10 include: The GCC compiler has been updated to version 13.2.1, which provides many bug fixes and enhancements that are available in upstream GCC. binutils now support AMD CPUs based on the znver5 core through the -march=znver5 compiler switch. annobin has been updated to version 12.32. The annobin plugin for GCC now defaults to using a more compressed format for the notes that it stores in object files, resulting in smaller object files and faster link times, especially in large, complex programs. 
The following tools and versions are provided by GCC Toolset 13: Tool Version GCC 13.2.1 GDB 12.1 binutils 2.40 dwz 0.14 annobin 12.32 To install GCC Toolset 13, run the following command as root: To run a tool from GCC Toolset 13: To run a shell session where tool versions from GCC Toolset 13 override system versions of these tools: For more information, see GCC Toolset 13 and Using GCC Toolset . Jira:RHEL-25405 [1] LLVM Toolset rebased to version 17.0.6 LLVM Toolset has been updated to version 17.0.6. Notable enhancements include: The opaque pointers migration is now completed. Removed support for the legacy pass manager in middle-end optimization. Clang changes: C++20 coroutines are no longer considered experimental. Improved code generation for the std::move function and similar in unoptimized builds. For more information, see the LLVM and Clang upstream release notes. Jira:RHEL-9028 Rust Toolset rebased to version 1.75.0 Rust Toolset has been updated to version 1.75.0. Notable enhancements include: Constant evaluation time is now unlimited Cleaner panic messages Cargo registry authentication async fn and opaque return types in traits Jira:RHEL-12964 Go Toolset rebased to version 1.21.0 Go Toolset has been updated to version 1.21.0. Notable enhancements include: min , max , and clear built-ins have been added. Official support for profile guided optimization has been added. Package initialization order is now more precisely defined. Type inferencing is improved. Backwards compatibility support is improved. For more information, see the Go upstream release notes. Jira:RHEL-11872 [1] papi supports new processor microarchitectures With this enhancement, you can access performance monitoring hardware using papi events presets on the following processor microarchitectures: AMD Zen 4 4th Generation Intel(R) Xeon(R) Scalable Processors Jira:RHEL-9336 [1] , Jira:RHEL-9320, Jira:RHEL-9337 Ant rebased to version 1.10.9 The ant:1.10 module stream has been updated to version 1.10.9. This version provides support for code signing, using a provider class and provider argument. Note The updated ant:1.10 module stream provides only the ant and ant-lib packages. Remaining packages related to Ant are distributed in the javapackages-tools module in the unsupported CodeReady Linux Builder (CRB) repository and have not been updated. Packages from the updated ant:1.10 module stream cannot be used in parallel with packages from the javapackages-tools module. If you want to use the complete set of Ant-related packages, you must uninstall the ant:1.10 module and disable it, enable the CRB repository , and install the javapackages-tools module. Jira:RHEL-5365 New package: maven-openjdk21 The maven:3.8 module stream now includes the maven-openjdk21 subpackage, which provides the Maven JDK binding for OpenJDK 21 and configures Maven to use the system OpenJDK 21. Jira:RHEL-17126 [1] cmake rebased to version 3.26 The cmake package has been updated to version 3.26. Notable improvements include: Added support for the C17 and C18 language standards. cmake can now query the /etc/os-release file for operating system identification information. Added support for the CUDA 20 and nvtx3 libraries. Added support for the Python stable application binary interface. Added support for Perl 5 in the Simplified Wrapper and Interface Generator (SWIG) tool. Jira:RHEL-7396 4.11. 
Identity Management Identity Management users can now use external identity providers to authenticate to IdM With this enhancement, you can now associate Identity Management (IdM) users with external identity providers (IdPs) that support the OAuth 2 device authorization flow. Examples of such IdPs include Red Hat build of Keycloak, Microsoft Entra ID (formerly Azure Active Directory), GitHub, and Google. If an IdP reference and an associated IdP user ID exist in IdM, you can use them to enable an IdM user to authenticate at the external IdP. After performing authentication and authorization at the external IdP, the IdM user receives a Kerberos ticket with single sign-on capabilities. The user must authenticate with the SSSD version available in RHEL 8.7 or later. Jira:RHELPLAN-123140 [1] ipa rebased to version 4.9.13 The ipa package has been updated from version 4.9.12 to 4.9.13. Notable changes include: The installation of an IdM replica now occurs against a chosen server, not only for Kerberos authentication but also for all IPA API and CA requests. The performance of the cert-find command has been improved dramatically for situations with a large number of certificates. The ansible-freeipa package has been rebased from version 1.11 to 1.12.1. For more information, see the upstream release notes . Jira:RHEL-16936 Deleting expired KCM Kerberos tickets Previously, if you attempted to add a new credential to the Kerberos Credential Manager (KCM) and you had already reached the storage space limit, the new credential was rejected. The user storage space is limited by the max_uid_ccaches configuration option that has a default value of 64. With this update, if you have already reached the storage space limit, your oldest expired credential is removed and the new credential is added to the KCM. If there are no expired credentials, the operation fails and an error is returned. To prevent this issue, you can free some space by removing credentials using the kdestroy command. Jira:SSSD-6216 Support for bcrypt password hashing algorithm for local users With this update, you can enable the bcrypt password hashing algorithm for local users. To switch to the bcrypt hashing algorithm: Edit the /etc/authselect/system-auth and /etc/authselect/password-auth files by changing the pam_unix.so sha512 setting to pam_unix.so blowfish . Apply the changes: Change the password for a user by using the passwd command. In the /etc/shadow file, verify that the hashing algorithm is set to USD2bUSD , indicating that the bcrypt password hashing algorithm is now used. Jira:SSSD-6790 The idp Ansible module allows associating IdM users with external IdPs With this update, you can use the idp ansible-freeipa module to associate Identity Management (IdM) users with external identity providers (IdP) that support the OAuth 2 device authorization flow. If an IdP reference and an associated IdP user ID exist in IdM, you can use them to enable IdP authentication for an IdM user. After performing authentication and authorization at the external IdP, the IdM user receives a Kerberos ticket with single sign-on capabilities. The user must authenticate with the SSSD version available in RHEL 8.7 or later. 
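The switch to bcrypt described above can be scripted along the following lines. This is a sketch only: back up the authselect files before editing them, and replace jdoe with a real user name.

sed -i.bak 's/pam_unix.so sha512/pam_unix.so blowfish/' \
    /etc/authselect/system-auth /etc/authselect/password-auth
authselect apply-changes
passwd jdoe
grep '^jdoe:' /etc/shadow   # the hash field should now start with $2b$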
Jira:RHEL-16938 IdM now supports the idoverrideuser , idoverridegroup and idview Ansible modules With this update, the ansible-freeipa package now contains the following modules: idoverrideuser Allows you to override user attributes for users stored in the Identity Management (IdM) LDAP server, for example, the user login name, home directory, certificate, or SSH keys. idoverridegroup Allows you to override attributes for groups stored in the IdM LDAP server, for example, the name of the group, its GID, or description. idview Allows you to organize user and group ID overrides and apply them to specific IdM hosts. In the future, you will be able to use these modules to enable AD users to use smart cards to log in to IdM. Jira:RHEL-16933 The delegation of DNS zone management enabled in ansible-freeipa You can now use the dnszone ansible-freeipa module to delegate DNS zone management. Use the permission or managedby variable of the dnszone module to set a per-zone access delegation permission. Jira:RHEL-19133 The ansible-freeipa ipauser and ipagroup modules now support a new renamed state With this update, you can use the renamed state in ansible-freeipa ipauser module to change the user name of an existing IdM user. You can also use this state in ansible-freeipa ipagroup module to change the group name of an existing IdM group. Jira:RHEL-4963 The runasuser_group parameter is now available in ansible-freeipa ipasudorule With this update, you can set Groups of RunAs Users for a sudo rule by using the ansible-freeipa ipasudorule module. The option is already available in the Identity Management (IdM) command-line interface and the IdM Web UI. Jira:RHEL-19129 389-ds-base rebased to version 1.4.3.39 The 389-ds-base package has been updated to version 1.4.3.39. Jira:RHEL-19028 The HAProxy protocol is now supported for the 389-ds-base package Previously, Directory Server did not differentiate incoming connections between proxy and non-proxy clients. With this update, you can use the new nsslapd-haproxy-trusted-ip multi-valued configuration attribute to configure the list of trusted proxy servers. When nsslapd-haproxy-trusted-ip is configured under the cn=config entry, Directory Server uses the HAProxy protocol to receive client IP addresses via an additional TCP header so that access control instructions (ACIs) can be correctly evaluated and client traffic can be logged. If an untrusted proxy server initiates a bind request, Directory Server rejects the request and records the following message to the error log file: Jira:RHEL-19240 samba rebased to version 4.19.4 The samba packages have been upgraded to upstream version 4.19.4, which provides bug fixes and enhancements over the version. The most notable changes are: Command-line options in the smbget utility have been renamed and removed for a consistent user experience. However, this can break existing scripts or jobs that use the utility. See the smbget --help command and smbget(1) man page for further details about the new options. If the winbind debug traceid option is enabled, the winbind service now logs, additionally, the following fields: traceid : Tracks the records belonging to the same request. depth : Tracks the request nesting level. Samba no longer uses its own cryptography implementations and, instead, now fully uses cryptographic functionality provided by the GnuTLS library. The directory name cache size option was removed. 
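The runasuser_group option for ipasudorule described above can be set from a playbook such as the following sketch. The rule name, group name, and inventory host are placeholders, the ipaadmin_password variable is assumed to be supplied elsewhere (for example, from a vault), and the task layout follows the usual ansible-freeipa pattern rather than anything stated in the release note itself.

cat > sudorule-runas-group.yml <<'EOF'
---
- name: Set Groups of RunAs Users for a sudo rule
  hosts: ipaserver
  tasks:
    - name: Allow the dba_rule sudo rule to run commands as members of dbgroup
      ipasudorule:
        ipaadmin_password: "{{ ipaadmin_password }}"
        name: dba_rule
        runasuser_group:
          - dbgroup
        state: present
EOF
ansible-playbook -i inventory sudorule-runas-group.yml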
Note that the server message block version 1 (SMB1) protocol has been deprecated since Samba 4.11 and will be removed in a future release. Back up the database files before starting Samba. When the smbd , nmbd , or winbind services start, Samba automatically updates its tdb database files. Red Hat does not support downgrading tdb database files. After updating Samba, use the testparm utility to verify the /etc/samba/smb.conf file. Jira:RHEL-16483 [1] 4.12. The web console RHEL web console can now generate Ansible and shell scripts In the web console, you can now easily access and copy automation scripts on the kdump configuration page. You can then use the generated script to implement a specific kdump configuration on multiple systems. Jira:RHELDOCS-17060 [1] Simplified managing storage and resizing partitions on Storage The Storage section of the web console is now redesigned. The new design improved visibility across all views. The overview page now presents all storage objects in a comprehensive table, which makes it easier to perform operations directly. You can click any row to view detailed information and any supplementary actions. Additionally, you can now resize partitions from the Storage section. Jira:RHELDOCS-17056 [1] 4.13. Red Hat Enterprise Linux System Roles The ad_integration RHEL system role now supports configuring dynamic DNS update options With this update, the ad_integration RHEL system role supports configuring options for dynamic DNS updates using SSSD when integrated with Active Directory (AD). By default, SSSD will attempt to automatically refresh the DNS record: When the identity provider comes online (always). At a specified interval (optional configuration); by default, the AD provider updates the DNS record every 24 hours. You can change these and other settings using the new variables in ad_integration . For example, you can set ad_dyndns_refresh_interval to 172800 to change the DNS record refresh interval to 48 hours. For more details regarding the role variables, see the resources in the /usr/share/doc/rhel-system-roles/ad_integration/ directory. Jira:RHELDOCS-17372 [1] The metrics RHEL System Role now supports configuring PMIE webhooks With this update, you can automatically configure the global webhook_endpoint PMIE variable using the metrics_webhook_endpoint variable for the metrics RHEL System Role. This enables you to provide a custom URL for your environment that receives messages about important performance events, and is typically used with external tools such as Event-Driven Ansible. Jira:RHEL-18170 The bootloader RHEL system role This update introduces the bootloader RHEL system role. You can use this feature for stable and consistent configuration of boot loaders and kernels on your RHEL systems. For more details regarding requirements, role variables, and example playbooks, see the README resources in the /usr/share/doc/rhel-system-roles/bootloader/ directory. Jira:RHEL-3241 The logging role supports general queue and general action parameters in output modules Previously, it was not possible to configure general queue parameters and general action parameters with the logging role. With this update, the logging RHEL System Role supports configuration of general queue parameters and general action parameters in output modules. Jira:RHEL-15440 Support for new ha_cluster System Role features The ha_cluster System Role now supports the following features: Enablement of the repositories containing resilient storage packages, such as dlm or gfs2 . 
A Resilient Storage subscription is needed to access the repository. Configuration of fencing levels, allowing a cluster to use multiple devices to fence nodes. Configuration of node attributes. For information about the parameters you configure to implement these features, see Configuring a high-availability cluster by using the ha_cluster RHEL System Role . Jira:RHEL-4624 [1] , Jira:RHEL-22108 , Jira:RHEL-14090 New RHEL System Role for configuring fapolicyd With the new fapolicyd RHEL System Role, you can use Ansible playbooks to manage and configure the fapolicyd framework. The fapolicyd software framework controls the execution of applications based on a user-defined policy. Jira:RHEL-16542 The network RHEL System role now supports new route types With this enhancement, you can now use the following route types with the network RHEL System Role: blackhole prohibit unreachable Jira:RHEL-21491 [1] New rhc_insights.display_name option in the rhc role to set display names You can now configure or update the display name of the system registered to Red Hat Insights by using the new rhc_insights.display_name parameter. The parameter allows you to name the system based on your preference to easily manage systems in the Insights Inventory. If your system is already connected with Red Hat Insights, use the parameter to update the existing display name. If the display name is not set explicitly on registration, it is set to the hostname by default. It is not possible to automatically revert the display name to the hostname, but it can be set so manually. Jira:RHEL-16965 The RHEL system roles now support LVM snapshot management With this enhancement, you can use the new snapshot RHEL system roles to create, configure, and manage LVM snapshots. Jira:RHEL-16553 The postgresql RHEL System Role now supports PostgreSQL 16 The postgresql RHEL System Role, which installs, configures, manages, and starts the PostgreSQL server, now supports PostgreSQL 16. For more information about this system role, see Installing and configuring PostgreSQL by using the postgresql RHEL System Role . Jira:RHEL-18963 New rhc_insights.ansible_host option in the rhc role to set Ansible hostnames You can now configure or update the Ansible hostname for the systems registered to Red Hat Insights by using the new rhc_insights.ansible_host parameter. When set, the parameter changes the ansible_host configuration in the /etc/insights-client/insights-client.conf file to your selected Ansible hostname. If your system is already connected with Red Hat Insights, this parameter will update the existing Ansible hostname. Jira:RHEL-16975 ForwardToSyslog flag is now supported in the journald system role In the journald RHEL System Role, the journald_forward_to_syslog variable controls whether the received messages should be forwarded to the traditional syslog daemon or not. The default value of this variable is false . With this enhancement, you can now configure the ForwardToSyslog flag by setting journald_forward_to_syslog to true in the inventory. As a result, when using remote logging systems such as Splunk, the logs are available in the /var/log files. Jira:RHEL-21123 ratelimit_burst variable is only used if ratelimit_interval is set in logging system role Previously, in the logging RHEL System Role, when the ratelimit_interval variable was not set, the role would use the ratelimit_burst variable to set the rsyslog ratelimit.burst setting. But it had no effect because it is also required to set ratelimit_interval . 
With this enhancement, if ratelimit_interval is not set, the role does not set ratelimit.burst . If you want to set ratelimit.burst , you must set both ratelimit_interval and ratelimit_burst variables. Jira:RHEL-19047 Use the logging_max_message_size parameter instead of rsyslog_max_message_size in the logging system role Previously, even though the rsyslog_max_message_size parameter was not supported, the logging RHEL System Role was using rsyslog_max_message_size instead of using the logging_max_message_size parameter. This enhancement ensures that logging_max_message_size is used and not rsyslog_max_message_size to set the maximum size for the log messages. Jira:RHEL-15038 The ad_integration RHEL System Role now supports custom SSSD settings Previously, when using the ad_integration RHEL System Role, it was not possible to add custom settings to the [sssd] section in the sssd.conf file using the role. With this enhancement, the ad_integration role can now modify the sssd.conf file and, as a result, you can use custom SSSD settings. Jira:RHEL-21134 The ad_integration RHEL System Role now supports custom SSSD domain configuration settings Previously, when using the ad_integration RHEL System Role, it was not possible to add custom settings to the domain configuration section in the sssd.conf file using the role. With this enhancement, the ad_integration role can now modify the sssd.conf file and, as a result, you can use custom SSSD settings. Jira:RHEL-17667 New logging_preserve_fqdn variable for the logging RHEL System Role Previously, it was not possible to configure a fully qualified domain name (FQDN) using the logging system role. This update adds the optional logging_preserve_fqdn variable, which you can use to set the preserveFQDN configuration option in rsyslog to use the full FQDN instead of a short name in syslog entries. Jira:RHEL-15933 Support for creation of volumes without creating a file system With this enhancement, you can now create a new volume without creating a file system by specifying the fs_type=unformatted option. Similarly, existing file systems can be removed using the same approach by ensuring that the safe mode is disabled. Jira:RHEL-16213 The rhc system role now supports RHEL 7 systems You can now manage RHEL 7 systems by using the rhc system role. Register the RHEL 7 system to Red Hat Subscription Management (RHSM) and Insights and start managing your system using the rhc system role. Using the rhc_insights.remediation parameter has no impact on RHEL 7 systems as the Insights Remediation feature is currently not available on RHEL 7. Jira:RHEL-16977 New mssql_ha_prep_for_pacemaker variable Previously, the microsoft.sql.server RHEL System Role did not have a variable to control whether to configure SQL Server for Pacemaker. This update adds the mssql_ha_prep_for_pacemaker . Set the variable to false if you do not want to configure your system for Pacemaker and you want to use another HA solution. Jira:RHEL-19204 The sshd role now configures certificate-based SSH authentications With the sshd RHEL System Role, you can now configure and manage multiple SSH servers to authenticate by using SSH certificates. This makes SSH authentications more secure because certificates are signed by a trusted CA and provide fine-grained access control, expiration dates, and centralized management. 
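The journald_forward_to_syslog and logging_preserve_fqdn variables described above can be applied together in one playbook, as in the following sketch. The host group is a placeholder, and the role names assume the rhel-system-roles package naming.

cat > logging-tuning.yml <<'EOF'
---
- name: Forward journald messages to the traditional syslog daemon
  hosts: managed_nodes
  vars:
    journald_forward_to_syslog: true
  roles:
    - rhel-system-roles.journald

- name: Keep the full FQDN in syslog entries
  hosts: managed_nodes
  vars:
    logging_preserve_fqdn: true
    logging_inputs:
      - name: system_input
        type: basics
    logging_outputs:
      - name: files_output
        type: files
    logging_flows:
      - name: flow0
        inputs: [system_input]
        outputs: [files_output]
  roles:
    - rhel-system-roles.logging
EOF
ansible-playbook -i inventory logging-tuning.yml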
Jira:RHEL-5985 selinux role now supports configuring SELinux in disabled mode With this update, the selinux RHEL System Role supports configuring SELinux ports, file contexts, and boolean mappings on nodes that have SELinux set to disabled. This is useful for configuration scenarios before you enable SELinux to permissive or enforcing mode on a system. Jira:RHEL-15871 selinux role now prints a message when specifying a non-existent module With this release, the selinux RHEL System Role prints an error message when you specify a non-existent module in the selinux_modules.path variable. Jira:RHEL-19044 4.14. Virtualization RHEL now supports Multi-FD migration of virtual machines With this update, multiple file descriptors (multi-FD) migration of virtual machines is now supported. Multi-FD migration uses multiple parallel connections to migrate a virtual machine, which can speed up the process by utilizing all the available network bandwidth. It is recommended to use this feature on high-speed networks (20 Gbps and higher). Jira:RHELDOCS-16970 [1] Secure Execution VMs on IBM Z now support cryptographic coprocessors With this update, you can now assign cryptographic coprocessors as mediated devices to a virtual machine (VM) with IBM Secure Execution on IBM Z. By assigning a cryptographic coprocessor as a mediated device to a Secure Execution VM, you can now use hardware encryption without compromising the security of the VM. Jira:RHEL-11597 [1] You can now replace SPICE with VNC in the web console With this update, you can use the web console to replace the SPICE remote display protocol with the VNC protocol in an existing virtual machine (VM). Because the support for the SPICE protocol is deprecated in RHEL 8 and will be removed in RHEL 9, VMs that use the SPICE protocol fail to migrate to RHEL 9. However, RHEL 8 VMs use SPICE by default, so you must switch from SPICE to VNC for a successful migration. Jira:RHELDOCS-18289 [1] New virtualization features in the RHEL web console With this update, the RHEL web console includes new features in the Virtual Machines page. You can now: Add an SSH public key during virtual machine (VM) creation. This public key will be stored in the ~/.ssh/authorized_keys file of the designated non-root user on the newly created VM, which provides you with an immediate SSH access to the specified user account. Select a pre-formatted block device type when creating a new storage pool. This is a more robust alternative to a physical disk device type, as it prevents unintentional reformatting of a raw disk device. This update also changes some default behavior in the Virtual Machines page: In the Add disk dialog, the Always attach option is now set by default. Jira:RHELDOCS-18323 [1] 4.15. RHEL in cloud environments New cloud-init clean option for deleting generated configuration files The cloud-init clean --configs option has been added for the cloud-init utility. You can use this option to delete unnecessary configuration files generated by cloud-init on your instance. For example, to delete cloud-init configuration files that define network setup, use the following command: Jira:RHEL-7312 [1] RHEL instances on EC2 now support IPv6 IMDS connections With this update, RHEL 8 and 9 instances on Amazon Elastic Cloud Compute (EC2) can use the IPv6 protocol to connect to Instance Metadata Service (IMDS). As a result, you can configure RHEL instances with cloud-init on EC2 with a dual-stack IPv4 and IPv6 connection. 
In addition, you can launch EC2 instances of RHEL with cloud-init in IPv6-only subnet. Jira:RHEL-7278 4.16. Containers The Container Tools packages have been updated The updated Container Tools packages, which contain the Podman, Buildah, Skopeo, crun, and runc tools, are now available. Notable bug fixes and enhancements over the version include: Notable changes in Podman v4.9: You can now use Podman to load the modules on-demand by using the podman --module <your_module_name> command and to override the system and user configuration files. A new podman farm command with a set of the create , set , remove , and update subcommands has been added. With these commands, you can farm out builds to machines running podman for different architectures. A new podman-compose command has been added, which runs Compose workloads by using an external compose provider such as Docker compose. The podman build command now supports the --layer-label and --cw options. The podman generate systemd command is deprecated. Use Quadlet to run containers and pods under systemd . The podman build command now supports Containerfiles with the HereDoc syntax. The podman machine init and podman machine set commands now support a new --usb option. Use this option to allow USB passthrough for the QEMU provider. The podman kube play command now supports a new --publish-all option. Use this option to expose all containerPorts on the host. For more information about notable changes, see upstream release notes . Jira:RHELPLAN-167794 [1] Podman now supports containers.conf modules You can use Podman modules to load a predetermined set of configurations. Podman modules are containers.conf files in the Tom's Obvious Minimal Language (TOML) format. These modules are located in the following directories, or their subdirectories: For rootless users: USDHOME/.config/containers/containers.conf.modules For root users: /etc/containers/containers.conf.modules , or /usr/share/containers/containers.conf.modules You can load the modules on-demand with the podman --module <your_module_name> command to override the system and user configuration files. Working with modules involve the following facts: You can specify modules multiple times by using the --module option. If <your_module_name> is the absolute path, the configuration file will be loaded directly. The relative paths are resolved relative to the three module directories mentioned previously. Modules in USDHOME override those in the /etc/ and /usr/share/ directories. For more information, see the upstream documentation . Jira:RHELPLAN-167830 [1] The Podman v4.9 RESTful API now displays data of progress With this enhancement, the Podman v4.9 RESTful API now displays data of progress when you pull or push an image to the registry. Jira:RHELPLAN-167822 [1] SQLite is now fully supported as a default database backend for Podman With Podman v4.9, the SQLite database backend for Podman, previously available as Technology Preview, is now fully supported. The SQLite database provides better stability, performance, and consistency when working with container metadata. The SQLite database backend is the default backend for new installations of RHEL 8.10. If you upgrade from a RHEL version, the default backend is BoltDB. If you have explicitly configured the database backend by using the database_backend option in the containers.conf file, then Podman will continue to use the specified backend. 
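A containers.conf module such as the following sketch can pin the database backend discussed above for a rootless user; the module file name is arbitrary.

mkdir -p ~/.config/containers/containers.conf.modules
cat > ~/.config/containers/containers.conf.modules/sqlite.conf <<'EOF'
# TOML module, loaded on demand with: podman --module sqlite.conf <command>
[engine]
database_backend = "sqlite"
EOF
podman --module sqlite.conf info   # check the databaseBackend value in the output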
Jira:RHELPLAN-168179 [1] Administrators can set up isolation for firewall rules by using nftables You can use Netavark, a Podman container networking stack, on systems without iptables installed. Previously, when using the container networking interface (CNI) networking, the predecessor to Netavark, there was no way to set up container networking on systems without iptables installed. With this enhancement, the Netavark network stack works on systems with only nftables installed and improves isolation of automatically generated firewall rules. Jira:RHELDOCS-16955 [1] Containerfile now supports multi-line instructions You can use the multi-line HereDoc instructions (Here Document notation) in the Containerfile file to simplify this file and reduce the number of image layers caused by performing multiple RUN directives. For example, the original Containerfile can contain the following RUN directives: Instead of multiple RUN directives, you can use the HereDoc notation: Jira:RHELPLAN-168184 [1] Toolbx is now available With Toolbx, you can install the development and debugging tools, editors, and Software Development Kits (SDKs) into the Toolbx fully mutable container without affecting the base operating system. The Toolbx container is based on the registry.access.redhat.com/ubi8.10/toolbox:latest image. Jira:RHELDOCS-16241 [1]
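A typical Toolbx session looks like the following sketch; the container name is arbitrary and the packages are only examples.

toolbox create devbox            # builds a container from the ubi8.10/toolbox image
toolbox enter devbox
sudo dnf install -y gdb strace   # installed inside the Toolbx container, not on the host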
[ "sha256hmac -c <hmac_file> -T <target_file>", "yum install python3.12 yum install python3.12-pip", "python3.12 python3.12 -m pip --help", "export PYTHON_EMAIL_DISABLE_STRICT_ADDR_PARSING=true", "[email_addr_parsing] PYTHON_EMAIL_DISABLE_STRICT_ADDR_PARSING = true", "yum module install ruby:3.3", "yum module install php:8.2", "yum module install nginx:1.24", "yum module install mariadb:10.11", "yum module install postgresql:16", "dnf module install nodejs:22", "yum install gcc-toolset-14", "scl enable gcc-toolset-14 <tool>", "scl enable gcc-toolset-14 bash", "yum install gcc-toolset-13", "scl enable gcc-toolset-13 tool", "scl enable gcc-toolset-13 bash", "authselect apply-changes", "[time_stamp] conn=5 op=-1 fd=64 Disconnect - Protocol error - Unknown Proxy - P4", "cloud-init clean --configs network", "RUN dnf update RUN dnf -y install golang RUN dnf -y install java", "RUN <<EOF dnf update dnf -y install golang dnf -y install java EOF" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.10_release_notes/new-features
4.4. Creating a Fencing Device
4.4. Creating a Fencing Device The following command creates a stonith device. If you use a single fence device for several nodes, with a different port for each node, you do not need to create a device separately for each node. Instead, you can use the pcmk_host_map option to define which port goes to which node. For example, the following command creates a single fencing device called myapc-west-13 that uses an APC powerswitch called west-apc and uses port 15 for node west-13 . The following example, however, uses the APC powerswitch named west-apc to fence nodes west-13 using port 15, west-14 using port 17, west-15 using port 18, and west-16 using port 19.
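To see which options a particular fence agent accepts, and to confirm that the device was created, you can run commands like the following sketch, which assumes the fence_apc agent used in the examples above.

pcs stonith describe fence_apc
pcs stonith show --full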
[ "pcs stonith create stonith_id stonith_device_type [ stonith_device_options ]", "pcs stonith create MyStonith fence_virt pcmk_host_list=f1 op monitor interval=30s", "pcs stonith create myapc-west-13 fence_apc pcmk_host_list=\"west-13\" ipaddr=\"west-apc\" login=\"apc\" passwd=\"apc\" port=\"15\"", "pcs stonith create myapc fence_apc pcmk_host_list=\"west-13,west-14,west-15,west-16\" pcmk_host_map=\"west-13:15;west-14:17;west-15:18;west-16:19\" ipaddr=\"west-apc\" login=\"apc\" passwd=\"apc\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-fencedevicecreate-haar
Chapter 8. Managing Cluster Resources
Chapter 8. Managing Cluster Resources This chapter describes various commands you can use to manage cluster resources. It provides information on the following procedures. Section 8.1, "Manually Moving Resources Around the Cluster" Section 8.2, "Moving Resources Due to Failure" Section 8.4, "Enabling, Disabling, and Banning Cluster Resources" Section 8.5, "Disabling a Monitor Operation" 8.1. Manually Moving Resources Around the Cluster You can override the cluster and force resources to move from their current location. There are two occasions when you would want to do this: When a node is under maintenance, and you need to move all resources running on that node to a different node When individually specified resources need to be moved To move all resources running on a node to a different node, you put the node in standby mode. For information on putting a cluster node in standby mode, see Section 4.4.5, "Standby Mode" . You can move individually specified resources in either of the following ways. You can use the pcs resource move command to move a resource off a node on which it is currently running, as described in Section 8.1.1, "Moving a Resource from its Current Node" . You can use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources, and other settings. For information on this command, see Section 8.1.2, "Moving a Resource to its Preferred Node" . 8.1.1. Moving a Resource from its Current Node To move a resource off the node on which it is currently running, use the following command, specifying the resource_id of the resource you want to move. Specify the destination_node if you want to indicate on which node to run the resource that you are moving. Note When you execute the pcs resource move command, this adds a constraint to the resource to prevent it from running on the node on which it is currently running. You can execute the pcs resource clear or the pcs constraint delete command to remove the constraint. This does not necessarily move the resources back to the original node; where the resources can run at that point depends on how you have configured your resources initially. If you specify the --master parameter of the pcs resource move command, the scope of the constraint is limited to the master role and you must specify master_id rather than resource_id . You can optionally configure a lifetime parameter for the pcs resource move command to indicate a period of time that the constraint should remain. You specify the units of a lifetime parameter according to the format defined in ISO 8601, which requires that you specify the unit as a capital letter such as Y (for years), M (for months), W (for weeks), D (for days), H (for hours), M (for minutes), and S (for seconds). To distinguish a unit of minutes (M) from a unit of months (M), you must specify PT before indicating the value in minutes. For example, a lifetime parameter of 5M indicates an interval of five months, while a lifetime parameter of PT5M indicates an interval of five minutes. The lifetime parameter is checked at intervals defined by the cluster-recheck-interval cluster property. By default, this value is 15 minutes. If your configuration requires that you check this parameter more frequently, you can reset this value with the following command.
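For example, to have the cluster re-evaluate lifetime constraints every five minutes instead of the default fifteen, you might run the following; the 5min value is only an illustration.

pcs property set cluster-recheck-interval=5min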
You can optionally configure a --wait[= n ] parameter for the pcs resource move command to indicate the number of seconds to wait for the resource to start on the destination node before returning 0 if the resource is started or 1 if the resource has not yet started. If you do not specify n, the default resource timeout will be used. The following command moves the resource resource1 to node example-node2 and prevents it from moving back to the node on which it was originally running for one hour and thirty minutes. The following command moves the resource resource1 to node example-node2 and prevents it from moving back to the node on which it was originally running for thirty minutes. For information on resource constraints, see Chapter 7, Resource Constraints . 8.1.2. Moving a Resource to its Preferred Node After a resource has moved, either due to a failover or to an administrator manually moving the node, it will not necessarily move back to its original node even after the circumstances that caused the failover have been corrected. To relocate resources to their preferred node, use the following command. A preferred node is determined by the current cluster status, constraints, resource location, and other settings, and may change over time. If you do not specify any resources, all resources are relocated to their preferred nodes. This command calculates the preferred node for each resource while ignoring resource stickiness. After calculating the preferred node, it creates location constraints which will cause the resources to move to their preferred nodes. Once the resources have been moved, the constraints are deleted automatically. To remove all constraints created by the pcs resource relocate run command, you can enter the pcs resource relocate clear command. To display the current status of resources and their optimal node ignoring resource stickiness, enter the pcs resource relocate show command.
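Putting the relocate commands above together, a typical sequence might look like the following sketch.

pcs resource relocate show    # preview the optimal node for each resource, ignoring stickiness
pcs resource relocate run     # create temporary constraints and move the resources
pcs resource relocate clear   # remove any constraints created by a relocate run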
[ "pcs resource move resource_id [ destination_node ] [--master] [lifetime= lifetime ]", "pcs property set cluster-recheck-interval= value", "pcs resource move resource1 example-node2 lifetime=PT1H30M", "pcs resource move resource1 example-node2 lifetime=PT30M", "pcs resource relocate run [ resource1 ] [ resource2 ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-manageresource-haar
Virtualization
Virtualization OpenShift Container Platform 4.11 OpenShift Virtualization installation, usage, and release notes Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/virtualization/index
Deployment Guide
Deployment Guide Red Hat Trusted Artifact Signer 1 Installing and configuring the Trusted Artifact Signer service for Red Hat platforms Red Hat Trusted Documentation Team
[ "OIDCIssuers: - Issuer: ' OIDC_ISSUER_URL ': ClientID: CLIENT_ID IssuerURL: ' OIDC_ISSUER_URL ' Type: email", "echo https://USD(oc get route keycloak -n keycloak-system | tail -n 1 | awk '{print USD2}')/auth/realms/trusted-artifact-signer", "trillian: database: create: false databaseSecretRef: name: trillian-mysql", "gunzip cosign-amd64.gz chmod +x cosign-amd64", "sudo mv cosign-amd64 /usr/local/bin/cosign", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "project PROJECT_NAME", "oc project trusted-artifact-signer", "export TUF_URL=USD(oc get tuf -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export OIDC_ISSUER_URL=https://USD(oc get route keycloak -n keycloak-system | tail -n 1 | awk '{print USD2}')/auth/realms/trusted-artifact-signer export COSIGN_FULCIO_URL=USD(oc get fulcio -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export COSIGN_REKOR_URL=USD(oc get rekor -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export COSIGN_MIRROR=USDTUF_URL export COSIGN_ROOT=USDTUF_URL/root.json export COSIGN_OIDC_CLIENT_ID=\"trusted-artifact-signer\" export COSIGN_OIDC_ISSUER=USDOIDC_ISSUER_URL export COSIGN_CERTIFICATE_OIDC_ISSUER=USDOIDC_ISSUER_URL export COSIGN_YES=\"true\" export SIGSTORE_FULCIO_URL=USDCOSIGN_FULCIO_URL export SIGSTORE_OIDC_ISSUER=USDCOSIGN_OIDC_ISSUER export SIGSTORE_REKOR_URL=USDCOSIGN_REKOR_URL export REKOR_REKOR_SERVER=USDCOSIGN_REKOR_URL", "cosign initialize", "echo \"FROM scratch\" > ./tmp.Dockerfile podman build . -f ./tmp.Dockerfile -t ttl.sh/rhtas/test-image:1h", "podman push ttl.sh/rhtas/test-image:1h", "cosign sign -y IMAGE_NAME:TAG", "cosign sign -y ttl.sh/rhtas/test-image:1h", "rm ./tmp.Dockerfile", "cosign verify --certificate-identity= SIGNING_EMAIL_ADDR IMAGE_NAME:TAG", "cosign verify [email protected] ttl.sh/rhtas/test-image:1h", "gunzip rekor-cli-amd64.gz chmod +x rekor-cli-amd64", "sudo mv rekor-cli-amd64 /usr/local/bin/rekor-cli", "rekor-cli get --log-index 0 --rekor_server USDCOSIGN_REKOR_URL --format json | jq", "rekor-cli search --email SIGNING_EMAIL_ADDR --rekor_server USDCOSIGN_REKOR_URL --format json | jq", "rekor-cli search --email [email protected] --rekor_server USDCOSIGN_REKOR_URL --format json | jq", "rekor-cli get --uuid UUID --rekor_server USDCOSIGN_REKOR_URL --format json | jq", "rekor-cli get --uuid 24296fb24b8ad77a71b9c1374e207537bafdd75b4f591dcee10f3f697f150d7cc5d0b725eea641e7 --rekor_server USDCOSIGN_REKOR_URL --format json | jq", "gunzip gitsign-amd64.gz chmod +x gitsign-amd64", "sudo mv gitsign-amd64 /usr/local/bin/gitsign", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "project PROJECT_NAME", "oc project trusted-artifact-signer", "export TUF_URL=USD(oc get tuf -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export OIDC_ISSUER_URL=https://USD(oc get route keycloak -n keycloak-system | tail -n 1 | awk '{print USD2}')/auth/realms/trusted-artifact-signer export COSIGN_FULCIO_URL=USD(oc get fulcio -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export COSIGN_REKOR_URL=USD(oc get rekor -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export COSIGN_MIRROR=USDTUF_URL export COSIGN_ROOT=USDTUF_URL/root.json export COSIGN_OIDC_CLIENT_ID=\"trusted-artifact-signer\" export COSIGN_OIDC_ISSUER=USDOIDC_ISSUER_URL export 
COSIGN_CERTIFICATE_OIDC_ISSUER=USDOIDC_ISSUER_URL export COSIGN_YES=\"true\" export SIGSTORE_FULCIO_URL=USDCOSIGN_FULCIO_URL export SIGSTORE_OIDC_ISSUER=USDCOSIGN_OIDC_ISSUER export SIGSTORE_REKOR_URL=USDCOSIGN_REKOR_URL export REKOR_REKOR_SERVER=USDCOSIGN_REKOR_URL", "git config --local commit.gpgsign true git config --local tag.gpgsign true git config --local gpg.x509.program gitsign git config --local gpg.format x509 git config --local gitsign.fulcio USDSIGSTORE_FULCIO_URL git config --local gitsign.rekor USDSIGSTORE_REKOR_URL git config --local gitsign.issuer USDSIGSTORE_OIDC_ISSUER git config --local gitsign.clientID trusted-artifact-signer", "git commit --allow-empty -S -m \"Test of a signed commit\"", "cosign initialize", "gitsign verify --certificate-identity= SIGNING_EMAIL --certificate-oidc-issuer=USDSIGSTORE_OIDC_ISSUER HEAD", "gitsign verify [email protected] --certificate-oidc-issuer=USDSIGSTORE_OIDC_ISSUER HEAD", "gunzip ec-amd64.gz chmod +x ec-amd64", "sudo mv ec-amd64 /usr/local/bin/ec", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "project PROJECT_NAME", "oc project trusted-artifact-signer", "export TUF_URL=USD(oc get tuf -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export OIDC_ISSUER_URL=https://USD(oc get route keycloak -n keycloak-system | tail -n 1 | awk '{print USD2}')/auth/realms/trusted-artifact-signer export COSIGN_FULCIO_URL=USD(oc get fulcio -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export COSIGN_REKOR_URL=USD(oc get rekor -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export COSIGN_MIRROR=USDTUF_URL export COSIGN_ROOT=USDTUF_URL/root.json export COSIGN_OIDC_CLIENT_ID=\"trusted-artifact-signer\" export COSIGN_OIDC_ISSUER=USDOIDC_ISSUER_URL export COSIGN_CERTIFICATE_OIDC_ISSUER=USDOIDC_ISSUER_URL export COSIGN_YES=\"true\" export SIGSTORE_FULCIO_URL=USDCOSIGN_FULCIO_URL export SIGSTORE_OIDC_ISSUER=USDCOSIGN_OIDC_ISSUER export SIGSTORE_REKOR_URL=USDCOSIGN_REKOR_URL export REKOR_REKOR_SERVER=USDCOSIGN_REKOR_URL", "cosign initialize", "echo \"FROM scratch\" > ./tmp.Dockerfile podman build . 
-f ./tmp.Dockerfile -t ttl.sh/rhtas/test-image:1h", "podman push ttl.sh/rhtas/test-image:1h", "cosign sign -y IMAGE_NAME:TAG", "cosign sign -y ttl.sh/rhtas/test-image:1h", "rm ./tmp.Dockerfile", "{ \"builder\": { \"id\": \"https://localhost/dummy-id\" }, \"buildType\": \"https://example.com/tekton-pipeline\", \"invocation\": {}, \"buildConfig\": {}, \"metadata\": { \"completeness\": { \"parameters\": false, \"environment\": false, \"materials\": false }, \"reproducible\": false }, \"materials\": [] }", "cosign attest -y --predicate ./predicate.json --type slsaprovenance IMAGE_NAME:TAG", "cosign attest -y --predicate ./predicate.json --type slsaprovenance ttl.sh/rhtas/test-image:1h", "cosign tree IMAGE_NAME:TAG", "cosign tree ttl.sh/rhtas/test-image:1h πŸ“¦ Supply Chain Security Related artifacts for an image: ttl.sh/rhtas/test-image@sha256:7de5fa822a9d1e507c36565ee0cf50c08faa64505461c844a3ce3944d23efa35 └── πŸ’Ύ Attestations for an image tag: ttl.sh/rhtas/test-image:sha256-7de5fa822a9d1e507c36565ee0cf50c08faa64505461c844a3ce3944d23efa35.att └── πŸ’ sha256:40d94d96a6d3ab3d94b429881e1b470ae9a3cac55a3ec874051bdecd9da06c2e └── πŸ” Signatures for an image tag: ttl.sh/rhtas/test-image:sha256-7de5fa822a9d1e507c36565ee0cf50c08faa64505461c844a3ce3944d23efa35.sig └── πŸ’ sha256:f32171250715d4538aec33adc40fac2343f5092631d4fc2457e2116a489387b7", "ec validate image --image IMAGE_NAME:TAG --certificate-identity-regexp ' SIGNER_EMAIL_ADDR ' --certificate-oidc-issuer-regexp 'keycloak-keycloak-system' --output yaml --show-successes", "ec validate image --image ttl.sh/rhtas/test-image:1h --certificate-identity-regexp '[email protected]' --certificate-oidc-issuer-regexp 'keycloak-keycloak-system' --output yaml --show-successes success: true successes: - metadata: code: builtin.attestation.signature_check msg: Pass - metadata: code: builtin.attestation.syntax_check msg: Pass - metadata: code: builtin.image.signature_check msg: Pass ec-version: v0.1.2427-499ef12 effective-time: \"2024-01-21T19:57:51.338191Z\" key: \"\" policy: {} success: true", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "edit Securesign NAME -n NAMESPACE", "oc edit Securesign securesign-sample -n trusted-artifact-signer", "OIDCIssuers: - Issuer: \"https://accounts.google.com\" IssuerURL: \"https://accounts.google.com\" ClientID: \" CLIENT_ID \" Type: email", "export OIDC_ISSUER_URL=https://accounts.google.com export COSIGN_OIDC_CLIENT_ID=\"314919563931-35zke44ouf2oiztjg7v8o8c2ge9usnd1.apps.googleexample.com\"", "echo SECRET > my-google-client-secret", "echo \"FROM scratch\" > ./tmp.Dockerfile podman build . 
-f ./tmp.Dockerfile -t ttl.sh/rhtas/test-image:1h", "podman push ttl.sh/rhtas/test-image:1h", "rm ./tmp.Dockerfile", "cosign sign -y --oidc-client-secret-file= SECRET_FILE IMAGE_NAME:TAG", "cosign sign -y --oidc-client-secret-file=my-google-client-secret ttl.sh/rhtas/test-image:1h", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "oc project keycloak-system", "cat <<EOF | oc apply -f - apiVersion: keycloak.org/v1alpha1 kind: Keycloak metadata: labels: app: sso name: keycloak spec: externalAccess: enabled: true instances: 1 keycloakDeploymentSpec: imagePullPolicy: Always postgresDeploymentSpec: imagePullPolicy: Always EOF", "cat <<EOF | oc apply -f - apiVersion: keycloak.org/v1alpha1 kind: KeycloakRealm metadata: labels: app: sso name: trusted-artifact-signer spec: instanceSelector: matchLabels: app: sso realm: displayName: Red-Hat-Trusted-Artifact-Signer enabled: true id: trusted-artifact-signer realm: trusted-artifact-signer sslRequired: none EOF", "cat <<EOF | oc apply -f - apiVersion: keycloak.org/v1alpha1 kind: KeycloakClient metadata: labels: app: sso name: trusted-artifact-signer spec: client: attributes: request.object.signature.alg: RS256 user.info.response.signature.alg: RS256 clientAuthenticatorType: client-secret clientId: trusted-artifact-signer defaultClientScopes: - profile - email description: Client for Red Hat Trusted Artifact Signer authentication directAccessGrantsEnabled: true implicitFlowEnabled: false name: trusted-artifact-signer protocol: openid-connect protocolMappers: - config: claim.name: email id.token.claim: \"true\" jsonType.label: String user.attribute: email userinfo.token.claim: \"true\" name: email protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper - config: claim.name: email-verified id.token.claim: \"true\" user.attribute: emailVerified userinfo.token.claim: \"true\" name: email-verified protocol: openid-connect protocolMapper: oidc-usermodel-property-mapper - config: claim.name: aud claim.value: trusted-artifact-signer id.token.claim: \"true\" access.token.claim: \"true\" userinfo.token.claim: \"true\" name: audience protocol: openid-connect protocolMapper: oidc-hardcoded-claim-mapper publicClient: true standardFlowEnabled: true redirectUris: - \"*\" realmSelector: matchLabels: app: sso EOF", "cat <<EOF | oc apply -f - apiVersion: keycloak.org/v1alpha1 kind: KeycloakUser metadata: labels: app: sso name: jdoe spec: realmSelector: matchLabels: app: sso user: email: [email protected] enabled: true emailVerified: true credentials: - type: \"password\" value: \"secure\" firstName: Jane lastName: Doe username: jdoe EOF", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "create secret tls SECRET_NAME -n NAMESPACE --cert CERTIFICATE_FILE_NAME --key PRIVATE_KEY_FILE_NAME", "oc create secret tls keycloak-tls -n keycloak-system --cert certificate.pem --key key.pem", "spec: db: vendor: postgres host: postgresql-db usernameSecret: name: postgresql-db key: username passwordSecret: name: postgresql-db key: password", "spec: http: tlsSecret: keycloak-tls", "spec: fulcio: config: OIDCIssuers: - ClientID: CLIENT_ID Issuer: ' RHBK_REALM_ISSUER_URL ' IssuerURL: ' RHBK_REALM_ISSUER_URL ' Type: email", "spec: fulcio: config: OIDCIssuers: - ClientID: trusted-artifact-signer Issuer: 
'https://keycloak-ingress-keycloak-system.apps.openshift.example.com/realms/trusted-artifact-signer' IssuerURL: 'https://keycloak-ingress-keycloak-system.apps.openshift.example.com/realms/trusted-artifact-signer' Type: email", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}'", "edit Securesign NAME -n NAMESPACE", "oc edit Securesign securesign-sample -n trusted-artifact-signer", "OIDCIssuers: - Issuer: \"https://example.s3.us-east-1.aws.com/47bd6cg0vs5nn01mue83fbof94dj4m9c\" IssuerURL: \"https://example.s3.us-east-1.aws.com/47bd6cg0vs5nn01mue83fbof94dj4m9c\" ClientID: \"trusted-artifact-signer\" Type: kubernetes", "aws configure", "export account_id=USD(aws sts get-caller-identity --query \"Account\" --output text) export oidc_provider=\"USD(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}' | cut -d '/' -f3-)\" export role_name=rhtas-sts export namespace=rhtas-sts export service_account=cosign-sts", "cat >trust-relationship.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{account_id}:oidc-provider/USD{oidc_provider}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{oidc_provider}:aud\": \"trusted-artifact-signer\" } } } ] } EOF", "aws iam create-role --role-name rhtas-sts --assume-role-policy-document file://trust-relationship.json --description \"Red Hat Trusted Artifact Signer STS Role\"", "new-project NAMESPACE", "oc new-project rhtas-sts", "cat >service_account.yaml <<EOF apiVersion: v1 kind: ServiceAccount metadata: name: USDservice_account namespace: USDnamespace annotations: eks.amazonaws.com/role-arn: \"arn:aws:iam::USD{account_id}:role/USD{role_name}\" # optional: Defaults to \"sts.amazonaws.com\" if not set eks.amazonaws.com/audience: \"trusted-artifact-signer\" # optional: When \"true\", adds AWS_STS_REGIONAL_ENDPOINTS env var to containers eks.amazonaws.com/sts-regional-endpoints: \"true\" # optional: Defaults to 86400 for expirationSeconds if not set eks.amazonaws.com/token-expiration: \"86400\" EOF", "oc apply -f service_account.yaml", "cat >deployment.yaml <<EOF apiVersion: apps/v1 kind: Deployment metadata: name: cosign-sts namespace: USD{namespace} spec: selector: matchLabels: app: cosign-sts template: metadata: labels: app: cosign-sts spec: securityContext: runAsNonRoot: true serviceAccountName: cosign-sts containers: - args: - -c - env; cosign initialize --mirror=\\USDCOSIGN_MIRROR --root=\\USDCOSIGN_ROOT; while true; do sleep 86400; done command: - /bin/sh name: cosign image: registry.redhat.io/rhtas-tech-preview/cosign-rhel9@sha256:f4c2cec3fc1e24bbe094b511f6fe2fe3c6fa972da0edacaf6ac5672f06253a3e pullPolicy: IfNotPresent env: - name: AWS_ROLE_SESSION_NAME value: signer-identity-session - name: AWS_REGION value: us-east-1 - name: OPENSHIFT_APPS_SUBDOMAIN value: USD(oc get cm -n openshift-config-managed console-public -o go-template=\"{{ .data.consoleURL }}\" | sed 's@https://@@; s/^[^.]*\\.//') - name: OIDC_AUTHENTICATION_REALM value: \"trusted-artifact-signer\" - name: COSIGN_FULCIO_URL value: USD(oc get fulcio -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) - name: COSIGN_OIDC_ISSUER value: USD(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}') - name: COSIGN_CERTIFICATE_OIDC_ISSUER value: USD(oc 
get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}') - name: COSIGN_REKOR_URL value: USD(oc get rekor -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) - name: COSIGN_MIRROR value: USD(oc get tuf -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) - name: COSIGN_ROOT value: \"USD(oc get tuf -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer)/root.json\" - name: COSIGN_YES value: \"true\" securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true serviceAccount: USD{service_account} serviceAccountName: USD{service_account} terminationGracePeriodSeconds: 30 EOF", "oc apply -f deployment.yaml", "echo \"FROM scratch\" > ./tmp.Dockerfile podman build . -f ./tmp.Dockerfile -t ttl.sh/rhtas/test-image:1h", "podman push ttl.sh/rhtas/test-image:1h", "rm ./tmp.Dockerfile", "rsh -n NAMESPACE deployment/cosign-sts env IMAGE= IMAGE_NAME:TAG /bin/sh", "oc rsh -n rhtas-sts deployment/cosign-sts env IMAGE=ttl.sh/rhtas/test-image:1h /bin/sh", "cosign sign -y --identity-token=USD(cat USDAWS_WEB_IDENTITY_TOKEN_FILE) ttl.sh/rhtas/test-image:1h", "cosign verify --certificate-identity=https://kubernetes.io/namespaces/USD(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)/serviceaccounts/cosign-sts --certificate-oidc-issuer=USDCOSIGN_CERTIFICATE_OIDC_ISSUER ttl.sh/rhtas/test-image:1h", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "oc get routes -n keycloak-system keycloak -o jsonpath='https://{.spec.host}'", "oc get secret/credential-keycloak -n keycloak-system -o jsonpath='{ .data.ADMIN_PASSWORD }' | base64 -d", "export RHTAS_APP_REGISTRATION=USD(az ad app create --display-name=rhtas --web-redirect-uris=http://localhost:0/auth/callback --enable-id-token-issuance --query appId -o tsv)", "export RHTAS_APP_REGISTRATION_CLIENT_SECRET=USD(az ad app credential reset --id=USDRHTAS_APP_REGISTRATION --display-name=\"RHTAS Client Secret\" -o tsv --query 'password')", "az rest -m post --headers Content-Type=application/json --uri https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies --body '{\"definition\": [\"{\\\"ClaimsMappingPolicy\\\":{\\\"Version\\\":1,\\\"IncludeBasicClaimSet\\\":\\\"true\\\", \\\"ClaimsSchema\\\":[{\\\"value\\\":\\\"true\\\",\\\"JwtClaimType\\\":\\\"email_verified\\\"}]}}\"],\"displayName\": \"EmailVerified\"}'", "export RHTAS_APP_REGISTRATION_OBJ_ID=USD(az ad app show --id USDRHTAS_APP_REGISTRATION --output tsv --query id)", "az rest --method PATCH --uri https://graph.microsoft.com/v1.0/applications/USD{RHTAS_APP_REGISTRATION_OBJ_ID} --headers 'Content-Type=application/json' --body \"{\\\"api\\\":{\\\"acceptMappedClaims\\\":true}}\"", "export SERVICE_PRINCIPAL_ID=USD(az ad sp create --id=USD{RHTAS_APP_REGISTRATION} -o tsv --query 'id')", "export CLAIM_MAPPING_POLICY_ID=USD(az rest --uri https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies -o tsv --query \"value[?displayName=='EmailVerified'] | [0].id\")", "az rest -m post --headers Content-Type=application/json --uri \"https://graph.microsoft.com/v1.0/servicePrincipals/USD{SERVICE_PRINCIPAL_ID}/claimsMappingPolicies/\\USDref\" --body \"{\\\"@odata.id\\\": \\\"https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies/USD{CLAIM_MAPPING_POLICY_ID}\\\"}\"", "export TENANT_ID=USD(az account show -o tsv --query 
tenantId)", "export ENTRA_ID_OIDC_ENDPOINT=USD(echo https://login.microsoftonline.com/USD{TENANT_ID}/v2.0)", "edit Securesign NAME -n NAMESPACE", "oc edit Securesign securesign-sample -n trusted-artifact-signer", "OIDCIssuers: - Issuer: \"USD{ENTRA_ID_OIDC_ENDPOINT}\" IssuerURL: \"USD{ENTRA_ID_OIDC_ENDPOINT}\" ClientID: \"USD{RHTAS_APP_REGISTRATION}\" Type: email", "echo USDRHTAS_APP_REGISTRATION_CLIENT_SECRET > rhtas-entra-id-client-secret", "export TUF_URL=USD(oc get tuf -n trusted-artifact-signer -o jsonpath='{.items[0].status.url}') export OIDC_ISSUER_URL=USD(oc get securesign -n trusted-artifact-signer rhtas -o jsonpath='{ .spec.fulcio.config.OIDCIssuers[0].Issuer }') export COSIGN_REKOR_URL=USD(oc get rekor -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export COSIGN_MIRROR=USDTUF_URL export COSIGN_ROOT=USDTUF_URL/root.json export COSIGN_OIDC_CLIENT_ID=USDRHTAS_APP_REGISTRATION export SIGSTORE_OIDC_CLIENT_ID=USDCOSIGN_OIDC_CLIENT_ID export COSIGN_OIDC_ISSUER=USDOIDC_ISSUER_URL export COSIGN_CERTIFICATE_OIDC_ISSUER=USDOIDC_ISSUER_URL export COSIGN_YES=\"true\" export COSIGN_FULCIO_URL=USD(oc get fulcio -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export SIGSTORE_FULCIO_URL=USDCOSIGN_FULCIO_URL export SIGSTORE_OIDC_ISSUER=USDCOSIGN_OIDC_ISSUER export SIGSTORE_REKOR_URL=USDCOSIGN_REKOR_URL export COSIGN_OIDC_CLIENT_SECRET_FILE=USD(pwd)/rhtas-entra-id-client-secret", "cosign initialize", "echo \"FROM scratch\" > ./tmp.Dockerfile podman build . -f ./tmp.Dockerfile -t ttl.sh/rhtas/test-image:1h", "podman push ttl.sh/rhtas/test-image:1h", "rm ./tmp.Dockerfile", "cosign sign -y --oidc-client-secret-file= SECRET_FILE IMAGE_NAME:TAG", "cosign sign -y --oidc-client-secret-file=rhtas-entra-id-client-secret ttl.sh/rhtas/test-image:1h", "mysql -h REGIONAL_ENDPOINT -P 3306 -u USER_NAME -p", "mysql -h exampledb.1234.us-east-1.rds.amazonaws.com -P 3306 -u admin -p", "create database trillian;", "use trillian;", "CREATE USER trillian@'%' IDENTIFIED BY ' PASSWORD '; GRANT ALL PRIVILEGES ON trillian.* TO 'trillian'@'%'; FLUSH PRIVILEGES;", "EXIT", "curl -o dbconfig.sql https://raw.githubusercontent.com/securesign/trillian/main/storage/mysql/schema/storage.sql", "mysql -h FQDN_or_SERVICE_ADDR -P 3306 -u USER_NAME -p PASSWORD -D DB_NAME < PATH_TO_CONFIG_FILE", "mysql -h rhtasdb.example.com -P 3306 -u trillian -p mypassword123 -D trillian < dbconfig.sql", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "create secret generic OBJECT_NAME --from-literal=mysql-database=trillian --from-literal=mysql-host= FQDN_or_SERVICE_ADDR --from-literal=mysql-password= PASSWORD --from-literal=mysql-port=3306 --from-literal=mysql-root-password= PASSWORD --from-literal=mysql-user= USER_NAME", "oc create secret generic trillian-mysql --from-literal=mysql-database=trillian --from-literal=mysql-host=mariadb.trusted-artifact-signer.svc.cluster.local --from-literal=mysql-password=mypassword123 --from-literal=mysql-port=3306 --from-literal=mysql-root-password=myrootpassword123 --from-literal=mysql-user=trillian", "mysql -u USDMYSQL_USER -pUSDMYSQL_PASSWORD -DUSDMYSQL_DATABASE", "EXIT", "curl -o dbconfig.sql https://raw.githubusercontent.com/securesign/trillian/main/storage/mysql/schema/storage.sql", "mysql -h FQDN_or_SERVICE_ADDR -P 3306 -u USER_NAME -p PASSWORD -D DB_NAME < PATH_TO_CONFIG_FILE", "mysql -h rhtasdb.example.com -P 3306 -u trillian -p mypassword123 -D trillian < 
dbconfig.sql", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "create secret generic OBJECT_NAME --from-literal=mysql-database=trillian --from-literal=mysql-host= FQDN_or_SERVICE_ADDR --from-literal=mysql-password= PASSWORD --from-literal=mysql-port=3306 --from-literal=mysql-root-password= PASSWORD --from-literal=mysql-user= USER_NAME", "oc create secret generic trillian-mysql --from-literal=mysql-database=trillian --from-literal=mysql-host=mariadb.trusted-artifact-signer.svc.cluster.local --from-literal=mysql-password=mypassword123 --from-literal=mysql-port=3306 --from-literal=mysql-root-password=myrootpassword123 --from-literal=mysql-user=trillian", "apiVersion: v1 kind: Service metadata: annotations: service.beta.openshift.io/serving-cert-secret-name: keycloak-tls labels: app: keycloak app.kubernetes.io/instance: keycloak name: keycloak-service-trusted namespace: keycloak-system spec: internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - name: https port: 8443 selector: app: keycloak app.kubernetes.io/instance: keycloak", "spec: ingress: annotations: route.openshift.io/destination-ca-certificate-secret: keycloak-tls route.openshift.io/termination: reencrypt", "spec: http: tlsSecret: keycloak-tls", "spec: hostname: hostname: example.com", "spec: ingress: className: openshift-default", "oc get ingressclass", "spec: hostname: hostname: example-keycloak-ingress-keycloak-system.apps.rhtas.example.com", "--- apiVersion: v1 kind: Service metadata: name: postgresql-db namespace: keycloak-system spec: internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - port: 5432 selector: app: postgresql-db --- apiVersion: apps/v1 kind: StatefulSet metadata: name: postgresql-db namespace: keycloak-system spec: persistentVolumeClaimRetentionPolicy: whenDeleted: Retain whenScaled: Retain podManagementPolicy: OrderedReady replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app: postgresql-db serviceName: postgresql-db template: metadata: labels: app: postgresql-db spec: containers: - env: - name: POSTGRESQL_USER valueFrom: secretKeyRef: key: username name: postgresql-db - name: POSTGRESQL_PASSWORD valueFrom: secretKeyRef: key: password name: postgresql-db - name: POSTGRESQL_DATABASE valueFrom: secretKeyRef: key: database name: postgresql-db image: registry.redhat.io/rhel9/postgresql-15:latest imagePullPolicy: IfNotPresent livenessProbe: exec: command: - /usr/libexec/check-container - --live failureThreshold: 3 initialDelaySeconds: 120 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 10 name: postgresql-db readinessProbe: exec: command: - /usr/libexec/check-container failureThreshold: 3 initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/lib/pgsql/data name: data dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault terminationGracePeriodSeconds: 30 updateStrategy: rollingUpdate: partition: 0 type: RollingUpdate volumeClaimTemplates: - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi volumeMode: Filesystem" ]
https://docs.redhat.com/en/documentation/red_hat_trusted_artifact_signer/1/html-single/deployment_guide/index
Part I. SELinux
Part I. SELinux This part of the documentation describes the basics and principles upon which Security-Enhanced Linux (SELinux) functions.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/part_i-selinux
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using the local storage devices on any platform, you can create internal cluster resources. This approach internally provisions base services so that all the applications can access additional storage classes. You can also deploy OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster and IBM FlashSystem. For instructions, see Deploying OpenShift Data Foundation in external mode . External mode deployment works on clusters that are detected as non-cloud. If your cluster is not detected correctly, open a bug in Bugzilla . Before you begin the deployment of Red Hat OpenShift Data Foundation using local storage, ensure that you meet the resource requirements. See Requirements for installing OpenShift Data Foundation using local storage devices . After completing the preparatory steps, perform the following procedures: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation cluster on any platform . 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached storage devices on each of them. Each of the three selected nodes must have at least one raw block device available, and OpenShift Data Foundation uses one or more of the available raw block devices. Note Make sure that each available raw block device has a unique by-id device name. The devices you use must be empty; that is, the disks must not include any Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide .
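Before you select or label nodes, it can help to confirm on each candidate node that the intended disks are raw, empty, and have stable by-id names. The following is a minimal sketch, assuming you have a shell on the node (for example, through oc debug node/<node-name>); the device names shown by these commands are specific to your hardware:

# List candidate block devices and confirm that they carry no file system
lsblk -o NAME,SIZE,TYPE,FSTYPE

# Confirm that each disk has a unique, stable by-id name
ls -l /dev/disk/by-id/

# Confirm that no leftover LVM metadata (PVs, VGs, or LVs) remains on the disks
pvs; vgs; lvs

If any of the LVM commands report objects on a disk you intend to use, wipe that disk before proceeding.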
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_on_any_platform/preparing_to_deploy_openshift_data_foundation
Chapter 17. Setting up Content Synchronization Using the SyncRepl Protocol
Chapter 17. Setting up Content Synchronization Using the SyncRepl Protocol Using the Content Synchronization plug-in, Directory Server supports the SyncRepl protocol according to RFC 4533 . This protocol enables LDAP servers and clients to use Red Hat Directory Server as a source to synchronize their local database with the changing content of Directory Server. To use the SyncRepl protocol: Enable the Content Synchronization plug-in in Directory Server and optionally create a new user which the client will use to bind to Directory Server. The account must have permissions to read the content in the directory. Configure the client. For example, set the search base for a subtree to synchronize. For further details, see your client's documentation. 17.1. Configuring the Content Synchronization Plug-in Using the Command Line To configure the Content Synchronization plug-in using the command line: The Content Synchronization plug-in requires the Retro Changelog plug-in to log the nsuniqueid attribute: To verify if the retro changelog is already enabled, enter: If the nsslapd-pluginEnabled parameter is set to off , the retro changelog is disabled. To enable, see Section 15.21.1, "Enabling the Retro Changelog Plug-in" . Add the nsuniqueid attribute to retro changelog plug-in configuration: Optionally, apply the following recommendations for improved performance: Set maximum validity for entries in the retro change log. For example, to set 2 days ( 2d ): If you know which back end or subtree clients access to synchronize data, limit the scope of the Retro Changelog plug-in. For example, to exclude the cn=demo,dc=example,dc=com subtree, enter: Enable the Content Synchronization plug-in: Using the defaults, Directory Server creates an access control instruction (ACI) in the oid=1.3.6.1.4.1.4203.1.9.1.1,cn=features,cn=config entry that enables all users to use the SyncRepl protocol: Optionally, update the ACI to limit using the SyncRepl control. For further details about ACIs, see Section 18.11, "Defining Bind Rules" . Restart Directory Server: Clients are now able to synchronize data with Directory Server using the SyncRepl protocol.
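As an illustration of the client side, the OpenLDAP client tools can request RFC 4533 content synchronization through the ldapsearch sync control. The following sketch assumes an OpenLDAP-based client, a hypothetical bind DN of uid=syncuser,dc=example,dc=com with read access, and dc=example,dc=com as the synchronized subtree:

# refreshOnly mode: fetch the current content and a sync cookie, then exit
ldapsearch -H ldap://server.example.com -D "uid=syncuser,dc=example,dc=com" -W \
    -b "dc=example,dc=com" -E sync=ro "(objectClass=*)"

# refreshAndPersist mode: keep the connection open and receive changes as they happen
ldapsearch -H ldap://server.example.com -D "uid=syncuser,dc=example,dc=com" -W \
    -b "dc=example,dc=com" -E sync=rp "(objectClass=*)"

Other clients that implement SyncRepl follow the same pattern: an initial refresh of the subtree, followed by incremental updates driven by the sync cookie.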
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin retro-changelog show nsslapd-pluginEnabled: off", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin retro-changelog set --attribute nsuniqueid:targetUniqueId", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=changelog5,cn=config changetype: modify replace: nsslapd-changelogmaxage nsslapd-changelogmaxage: 2d", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin retro-changelog set --exclude-suffix \"cn=demo,dc=example,dc=com\"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin set --enabled on \"Content Synchronization\"", "aci: (targetattr != \"aci\")(version 3.0; acl \"Sync Request Control\"; allow( read, search ) userdn = \"ldap:///all\";)", "dsctl instance_name restart" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/setting_up_content_synchronization_using_the_syncrepl_protocol
Chapter 3. Configuring proxies
Chapter 3. Configuring proxies Fine-tune your deployment by configuring proxies to include additional features according to your specific requirements. 3.1. Configuring virtual clusters A Kafka cluster is represented by the proxy as a virtual cluster. Clients connect to the virtual cluster rather than the actual cluster. When Streams for Apache Kafka Proxy is deployed, it includes configuration to create virtual clusters. A virtual cluster has exactly one target cluster, but many virtual clusters can target the same cluster. Each virtual cluster targets a single listener on the target cluster, so multiple listeners on the Kafka side are represented as multiple virtual clusters by the proxy. Clients connect to a virtual cluster using a bootstrap_servers address. The virtual cluster has a bootstrap address that maps to each broker in the target cluster. When a client connects to the proxy, communication is proxied to the target broker by rewriting the address. Responses back to clients are rewritten to reflect the appropriate network addresses of the virtual clusters. You can secure virtual cluster connections from clients and to target clusters. Streams for Apache Kafka Proxy accepts keys and certificates in PEM (Privacy Enhanced Mail), PKCS #12 (Public-Key Cryptography Standards), or JKS (Java KeyStore) keystore format. 3.2. Example Streams for Apache Kafka Proxy configuration Streams for Apache Kafka Proxy configuration is defined in a ConfigMap resource. Use the data properties of the ConfigMap resource to configure the following: Virtual clusters that represent the Kafka clusters Network addresses for broker communication in a Kafka cluster Filters to introduce additional functionality to the Kafka deployment In this example, configuration for the Record Encryption filter is shown. Example Streams for Apache Kafka Proxy configuration apiVersion: v1 kind: ConfigMap metadata: name: proxy-config data: config.yaml: | adminHttp: 1 endpoints: prometheus: {} virtualClusters: 2 my-cluster-proxy: 3 targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 4 tls: 5 trust: storeFile: /opt/proxy/trust/ca.p12 storePassword: passwordFile: /opt/proxy/trust/ca.password clusterNetworkAddressConfigProvider: 6 type: SniRoutingClusterNetworkAddressConfigProvider 7 Config: bootstrapAddress: my-cluster-proxy.kafka:9092 8 brokerAddressPattern: brokerUSD(nodeId).my-cluster-proxy.kafka logNetwork: false 9 logFrames: false tls: 10 key: storeFile: /opt/proxy/server/key-material/keystore.p12 storePassword: passwordFile: /opt/proxy/server/keystore-password/storePassword filters: 11 - type: RecordEncryption 12 config: 13 kms: VaultKmsService kmsConfig: vaultTransitEngineUrl: https://vault.vault.svc.cluster.local:8200/v1/transit vaultToken: passwordFile: /opt/proxy/server/token.txt tls: 14 key: storeFile: /opt/cert/server.p12 storePassword: passwordFile: /opt/cert/store.password keyPassword: passwordFile: /opt/cert/key.password storeType: PKCS12 selector: TemplateKekSelector selectorConfig: template: "USD{topicName}" 1 Enables metrics for the proxy. 2 Virtual cluster configuration. 3 The name of the virtual cluster. 4 The bootstrap address of the target physical Kafka Cluster being proxied. 5 TLS configuration for the connection to the target cluster. 6 The configuration for the cluster network address configuration provider that controls how the virtual cluster is presented to the network. 
7 The built-in types are PortPerBrokerClusterNetworkAddressConfigProvider and SniRoutingClusterNetworkAddressConfigProvider . 8 The hostname and port of the bootstrap used by the Kafka clients. The hostname must be resolved by the clients. 9 Logging is disabled by default. Enable logging related to network activity ( logNetwork ) and messages ( logFrames ) by setting the logging properties to true . 10 TLS encryption for securing connections with the clients. 11 Filter configuration. 12 The type of filter, which is the Record Encryption filter using Vault as the KMS in this example. 13 The configuration specific to the type of filter. 14 If required, you can also specify the credentials for TLS authentication with the KMS, with key names under which TLS certificates are stored. 3.3. Securing connections from clients To secure client connections to virtual clusters, configure TLS on the virtual cluster by doing the following: Obtain a server certificate for the virtual cluster from a Certificate Authority (CA). Ensure the certificate matches the names of the virtual cluster's bootstrap and broker addresses. This may require wildcard certificates and Subject Alternative Names (SANs). Provide the TLS configuration using the tls properties in the virtual cluster's configuration to enable it to present the certificate to clients. Depending on your certificate format, apply one of the following examples. For mutual TLS, you may also use the trust properties to configure the virtual cluster to use TLS client authentication. Note TLS is recommended on Kafka clients and virtual clusters for production configurations. Example PKCS #12 configuration virtualClusters: my-cluster-proxy: tls: key: storeFile: <path>/server.p12 1 storePassword: passwordFile: <path>/store.password 2 keyPassword: passwordFile: <path>/key.password 3 storeType: PKCS12 4 # ... 1 PKCS #12 store containing the private-key and certificate/intermediates of the virtual cluster. 2 Password to protect the PKCS #12 store. 3 (Optional) Password for the key. If a password is not specified, the keystore's password is used to decrypt the key too. 4 (Optional) Keystore type. If a keystore type is not specified, the default JKS (Java Keystore) type is used. Example PEM configuration virtualClusters: my-cluster-proxy: tls: key: privateKeyFile: <path>/server.key 1 certificateFile: <path>/server.crt 2 keyPassword: passwordFile: <path>/key.password 3 # ... 1 Private key of the virtual cluster. 2 Public certificate of the virtual cluster. 3 (Optional) Password for the key. You can configure the virtual cluster to require that clients present a certificate for authentication. The virtual cluster verifies that the client's certificate is signed by one of the CA certificates contained in a trust store. If verification fails, the client's connection is refused. Example to configure TLS client authentication using PKCS12 trust store virtualClusters: demo: tls: key: # ... trust: storeFile: <path>/trust.p12 #1 1 storePassword: passwordFile: <path>/trust.password 2 storeType: PKCS12 3 trustOptions: clientAuth: REQUIRED 4 # ... 1 PKCS #12 store containing CA certificate(s) used to verify that the client's certificate is trusted. 2 (Optional) Password to protect the PKCS #12 store. 3 (Optional) Keystore type. If a keystore type is not specified, the default JKS (Java Keystore) type is used. 4 Client authentication mode. If set to REQUIRED , the client must present a valid certificate. If set to REQUESTED , the client is requested to present a certificate. 
If presented, the certificate is validated. If the client chooses not to present a certificate, the connection is still allowed. If set to NONE , client authentication is disabled. Note The client's identity, as established through TLS client authentication, is currently not relayed to the target cluster. For more information, see the related issue . 3.4. Securing connections to target clusters To secure a virtual cluster connection to a target cluster, configure TLS on the virtual cluster. The target cluster must already be configured to use TLS. Specify TLS for the virtual cluster configuration using the targetCluster.tls properties. Use an empty object ( {} ) to inherit trust from the underlying platform on which the cluster is running. This option is suitable if the target cluster is using a TLS certificate signed by a public CA. Example target cluster configuration for TLS virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 tls: {} #... If the target cluster is using a TLS certificate signed by a private CA, you must add truststore configuration for the target cluster. Example truststore configuration for a target cluster virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 tls: trust: storeFile: <path>/trust.p12 1 storePassword: passwordFile: <path>/store.password 2 storeType: PKCS12 3 #... 1 PKCS #12 store for the public CA certificate of the Kafka cluster. 2 Password to access the public Kafka cluster CA certificate. 3 (Optional) Keystore type. If a keystore type is not specified, the default JKS (Java Keystore) type is used. For mTLS, you can add keystore configuration for the virtual cluster too. Example keystore and truststore configuration for mTLS virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 tls: key: privateKeyFile: <path>/client.key 1 certificateFile: <path>/client.crt 2 trust: storeFile: <path>/server.crt storeType: PEM # ... 1 Private key of the virtual cluster. 2 Public certificate of the virtual cluster. For the purposes of testing outside of a production environment, you can set the insecure property to true to turn off TLS so that the Streams for Apache Kafka Proxy can connect to any Kafka cluster. Example configuration to turn off TLS virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: myprivatecluster:9092 tls: trust: insecure: true #... 3.5. Configuring network addresses Virtual cluster configuration requires a network address configuration provider that manages network communication and provides broker address information to clients. Streams for Apache Kafka Proxy has the following built-in providers: Broker address provider ( PortPerBrokerClusterNetworkAddressConfigProvider ) Node ID ranges provider ( RangeAwarePortPerNodeClusterNetworkAddressConfigProvider ) SNI routing address provider ( SniRoutingClusterNetworkAddressConfigProvider ) Important Make sure that the virtual cluster bootstrap address and generated broker addresses are resolvable and routable by the Kafka client. 3.5.1. Broker address provider The per-broker network address configuration provider opens one port for a virtual cluster's bootstrap address and one port for each broker in the target Kafka cluster. The number of open ports is maintained dynamically. For example, if a broker is removed from the cluster, the port assigned to it is closed.
If you have two virtual clusters, each targeting a Kafka cluster with three brokers, eight ports are bound in total. This provider works best with straightforward configurations. Ideally, the target cluster should have sequential, stable broker IDs and a known minimum broker ID, such as 0, 1, 2 for a cluster with three brokers. While it can handle non-sequential broker IDs, this would require exposing ports equal to maxBrokerId - minBrokerId , which could be excessive if your cluster contains broker IDs like 0 and 20000 . The provider supports both cleartext and TLS downstream connections. Example broker address configuration clusterNetworkAddressConfigProvider: type: PortPerBrokerClusterNetworkAddressConfigProvider config: bootstrapAddress: mycluster.kafka.com:9192 1 brokerAddressPattern: mybroker-USD(nodeId).mycluster.kafka.com 2 brokerStartPort: 9193 3 numberOfBrokerPorts: 3 4 lowestTargetBrokerId: 1000 5 bindAddress: 192.168.0.1 6 1 The hostname and port of the bootstrap address used by Kafka clients. 2 (Optional) The broker address pattern used to form broker addresses. If not defined, it defaults to the hostname part of the bootstrap address and the port number allocated to the broker. 3 (Optional) The starting number for the broker port range. Defaults to the port of the bootstrap address plus 1. 4 (Optional) The maximum number of broker ports that are permitted. Set this value according to the maximum number of brokers allowed by your operational rules. Defaults to 3. 5 (Optional) The lowest broker ID in the target cluster. Defaults to 0. This should match the lowest node.id (or broker.id ) in the target cluster. 6 (Optional) The bind address used when binding the ports. If undefined, all network interfaces are bound. Each broker's ID must be greater than or equal to lowestTargetBrokerId and less than lowestTargetBrokerId + numberOfBrokerPorts . The current strategy for mapping node IDs to ports is as follows: a node with ID nodeId is mapped to port brokerStartPort + nodeId - lowestTargetBrokerId . The example configuration maps broker IDs 1000, 1001, and 1002 to ports 9193, 9194, and 9195, respectively. Reconfigure numberOfBrokerPorts to accommodate the number of brokers in the cluster. The example broker address configuration creates the following broker addresses: mybroker-0.mycluster.kafka.com:9193 mybroker-1.mycluster.kafka.com:9194 mybroker-2.mycluster.kafka.com:9195 The brokerAddressPattern configuration parameter accepts the USD(nodeId) replacement token, which is optional. If included, USD(nodeId) is replaced by the broker's node.id (or broker.id ) in the target cluster. For example, with the configuration shown above, if your cluster has three brokers, your Kafka client receives broker addresses like this: 3.5.2. Node ID ranges provider As an alternative to the broker address provider, the node ID ranges provider allows you to model specific ranges of node IDs in the target cluster, enabling efficient port allocation even when broker IDs are non-sequential or widely spaced. This ensures a deterministic mapping of node IDs to ports while minimizing the number of ports needed. Example node ID ranges configuration clusterNetworkAddressConfigProvider: type: RangeAwarePortPerNodeClusterNetworkAddressConfigProvider config: bootstrapAddress: mycluster.kafka.com:9192 brokerAddressPattern: mybroker-USD(nodeId).mycluster.kafka.com brokerStartPort: 9193 nodeIdRanges: 1 - name: brokers 2 range: startInclusive: 0 3 endExclusive: 3 4 1 The list of Node ID ranges, which must be non-empty.
2 The name of the range, which must be unique within the nodeIdRanges list. 3 The start of the range (inclusive). 4 The end of the range (exclusive). It must be greater than startInclusive ; empty ranges are not allowed. Node ID ranges must be distinct, meaning a node ID cannot belong to more than one range. KRaft roles given to cluster nodes can be accommodated in the configuration. For example, consider a target cluster using KRaft with the following node IDs and roles: nodeId: 0, roles: controller nodeId: 1, roles: controller nodeId: 2, roles: controller nodeId: 1000, roles: broker nodeId: 1001, roles: broker nodeId: 1002, roles: broker nodeId: 99999, roles: broker This can be modeled as three node ID ranges, as shown in the following example. Example node ID ranges configuration with KRaft roles clusterNetworkAddressConfigProvider: type: RangeAwarePortPerNodeClusterNetworkAddressConfigProvider config: bootstrapAddress: mycluster.kafka.com:9192 nodeIdRanges: - name: controller range: startInclusive: 0 endExclusive: 3 - name: brokers range: startInclusive: 1000 endExclusive: 1003 - name: broker-outlier range: startInclusive: 99999 endExclusive: 100000 This configuration results in the following mapping from node ID to port: nodeId: 0 port 9193 nodeId: 1 port 9194 nodeId: 2 port 9195 nodeId: 1000 port 9196 nodeId: 1001 port 9197 nodeId: 1002 port 9198 nodeId: 99999 port 9199 3.5.3. SNI routing address provider The SNI (Server Name Indication) routing provider opens a single port for all virtual clusters or a port for each. You can open a port for the whole cluster or each broker. The SNI routing provider uses SNI information to determine where to route the traffic, so requires downstream TLS. Example SNI routing address provider configuration clusterNetworkAddressConfigProvider: type: SniRoutingClusterNetworkAddressConfigProvider config: bootstrapAddress: mycluster.kafka.com:9192 1 brokerAddressPattern: mybroker-USD(nodeId).mycluster.kafka.com bindAddress: 192.168.0.1 1 A single address for all traffic, including bootstrap address and brokers. In the SNI routing address configuration, the brokerAddressPattern specification is mandatory, as it is required to generate routes for each broker. Note Single port operation may have cost advantages when using load balancers of public clouds, as it allows a single cloud provider load balancer to be shared across all virtual clusters.
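Once a virtual cluster is configured with TLS and a network address provider, a standard Kafka client only needs the virtual cluster bootstrap address and trust for the certificate that the virtual cluster presents. The following client properties are a minimal sketch, assuming the bootstrap address from the earlier examples and a hypothetical client-truststore.p12 file containing the CA that signed the virtual cluster certificate:

# Connect to the virtual cluster, not to the target Kafka cluster directly
bootstrap.servers=mycluster.kafka.com:9192
security.protocol=SSL
# Trust store holding the CA for the virtual cluster certificate (placeholder path and password)
ssl.truststore.location=/opt/client/client-truststore.p12
ssl.truststore.password=<truststore-password>
ssl.truststore.type=PKCS12

The client never needs the target cluster addresses; the proxy rewrites broker addresses in its responses so that all traffic continues to flow through the virtual cluster.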
[ "apiVersion: v1 kind: ConfigMap metadata: name: proxy-config data: config.yaml: | adminHttp: 1 endpoints: prometheus: {} virtualClusters: 2 my-cluster-proxy: 3 targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 4 tls: 5 trust: storeFile: /opt/proxy/trust/ca.p12 storePassword: passwordFile: /opt/proxy/trust/ca.password clusterNetworkAddressConfigProvider: 6 type: SniRoutingClusterNetworkAddressConfigProvider 7 Config: bootstrapAddress: my-cluster-proxy.kafka:9092 8 brokerAddressPattern: brokerUSD(nodeId).my-cluster-proxy.kafka logNetwork: false 9 logFrames: false tls: 10 key: storeFile: /opt/proxy/server/key-material/keystore.p12 storePassword: passwordFile: /opt/proxy/server/keystore-password/storePassword filters: 11 - type: RecordEncryption 12 config: 13 kms: VaultKmsService kmsConfig: vaultTransitEngineUrl: https://vault.vault.svc.cluster.local:8200/v1/transit vaultToken: passwordFile: /opt/proxy/server/token.txt tls: 14 key: storeFile: /opt/cert/server.p12 storePassword: passwordFile: /opt/cert/store.password keyPassword: passwordFile: /opt/cert/key.password storeType: PKCS12 selector: TemplateKekSelector selectorConfig: template: \"USD{topicName}\"", "virtualClusters: my-cluster-proxy: tls: key: storeFile: <path>/server.p12 1 storePassword: passwordFile: <path>/store.password 2 keyPassword: passwordFile: <path>/key.password 3 storeType: PKCS12 4 #", "virtualClusters: my-cluster-proxy: tls: key: privateKeyFile: <path>/server.key 1 certificateFile: <path>/server.crt 2 keyPassword: passwordFile: <path>/key.password 3 ...", "virtualClusters: demo: tls: key: # trust: storeFile: <path>/trust.p12 #1 1 storePassword: passwordFile: <path>/trust.password 2 storeType: PKCS12 3 trustOptions: clientAuth: REQUIRED 4 ...", "virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 tls: {} #", "virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 tls: trust: storeFile: <path>/trust.p12 1 storePassword: passwordFile: <path>/store.password 2 storeType: PKCS12 3 #", "virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093:9092 tls: key: privateKeyFile: <path>/client.key 1 certificateFile: <path>/client.crt 2 trust: storeFile: <path>/server.crt storeType: PEM", "virtualClusters: my-cluster-proxy: targetCluster: bootstrap_servers: myprivatecluster:9092 tls: trust: insecure: true #", "clusterNetworkAddressConfigProvider: type: PortPerBrokerClusterNetworkAddressConfigProvider config: bootstrapAddress: mycluster.kafka.com:9192 1 brokerAddressPattern: mybroker-USD(nodeId).mycluster.kafka.com 2 brokerStartPort: 9193 3 numberOfBrokerPorts: 3 4 lowestTargetBrokerId: 1000 5 bindAddress: 192.168.0.1 6", "mybroker-0.mycluster.kafka.com:9193 mybroker-1.mycluster.kafka.com:9194 mybroker-2.mycluster.kafka.com:9194", "0. mybroker-0.mycluster.kafka.com:9193 1. mybroker-1.mycluster.kafka.com:9194 2. 
mybroker-2.mycluster.kafka.com:9195", "clusterNetworkAddressConfigProvider: type: RangeAwarePortPerNodeClusterNetworkAddressConfigProvider config: bootstrapAddress: mycluster.kafka.com:9192 brokerAddressPattern: mybroker-USD(nodeId).mycluster.kafka.com brokerStartPort: 9193 nodeIdRanges: 1 - name: brokers 2 range: startInclusive: 0 3 endExclusive: 3 4", "clusterNetworkAddressConfigProvider: type: RangeAwarePortPerNodeClusterNetworkAddressConfigProvider config: bootstrapAddress: mycluster.kafka.com:9192 nodeIdRanges: - name: controller range: startInclusive: 0 endExclusive: 3 - name: brokers range: startInclusive: 1000 endExclusive: 1003 - name: broker-outlier range: startInclusive: 99999 endExclusive: 100000", "clusterNetworkAddressConfigProvider: type: SniRoutingClusterNetworkAddressConfigProvider config: bootstrapAddress: mycluster.kafka.com:9192 1 brokerAddressPattern: mybroker-USD(nodeId).mycluster.kafka.com bindAddress: 192.168.0.1" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_proxy/assembly-configuring-proxy-str
Chapter 1. Get Started Orchestrating Containers with Kubernetes
Chapter 1. Get Started Orchestrating Containers with Kubernetes 1.1. Overview Important Procedures and software described in this chapter for manually configuring and using Kubernetes are deprecated and, therefore, no longer supported. For information on which software and documentation are impacted, see the Red Hat Enterprise Linux Atomic Host Release Notes . For information on Red Hat's officially supported Kubernetes-based products, refer to Red Hat OpenShift Container Platform , OpenShift Online , OpenShift Dedicated , OpenShift.io , Container Development Kit or Development Suite . Kubernetes is a tool for orchestrating and managing Docker containers. Red Hat provides several ways you can use Kubernetes including: OpenShift Container Platform : Kubernetes is built into OpenShift, allowing you to configure Kubernetes, assign host computers as Kubernetes nodes, deploy containers to those nodes in pods, and manage containers across multiple systems. The OpenShift Container Platform web console provides a browser-based interface to using Kubernetes. Container Development Kit (CDK) : The CDK provides Vagrantfiles to launch the CDK with either OpenShift (which includes Kubernetes) or a bare-bones Kubernetes configuration. This gives you the choice of using the OpenShift tools or Kubernetes commands (such as kubectl ) to manage Kubernetes. Kubernetes in Red Hat Enterprise Linux : To try out Kubernetes on a standard Red Hat Enterprise Linux server system, you can install a combination of RPM packages and container images to manually set up your own Kubernetes configuration. The procedures in this section describe how to set up Kubernetes using the last listed option - Kubernetes on Red Hat Enterprise Linux or Red Hat Enterprise Linux Atomic Host. Specifically, in this chapter you set up a single-system Kubernetes sandbox so you can: Deploy and run two containers with Kubernetes on a single system. Manage those containers in pods with Kubernetes. This procedure results in a setup that provides an all-in-one Kubernetes configuration in which you can begin trying out Kubernetes and exploring how it works. In this procedure, services that are typically on a separate Kubernetes master system and two or more Kubernetes node systems are all running on a single system. Note The Kubernetes software described in this chapter is packaged and configured differently than the Kubernetes included in OpenShift. We recommend you use the OpenShift version of Kubernetes for permanent setups and production use. The procedure described in this chapter should only be used as a convenient way to try out Kubernetes on an all-in-one RHEL or RHEL Atomic Host system. As of RHEL 7.3, support for the procedure for configuring a Kubernetes cluster (separate master and multiple nodes) directly on RHEL and RHEL Atomic Host has ended. For further details on Red Hat support for Kubernetes, see How are container orchestration tools supported with Red Hat Enterprise Linux? 1.2. Understanding Kubernetes While the Docker project defines a container format and builds and manages individual containers, an orchestration tool is needed to deploy and manage sets of containers. Kubernetes is a tool designed to orchestrate Docker containers. After building the container images you want, you can use a Kubernetes Master to deploy one or more containers in what is referred to as a pod. The Master tells each Kubernetes Node to pull the needed containers to that Node, where the containers run. 
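As a quick illustration of what the Master deploys (the full procedure later in this chapter builds more complete definitions), a pod is described to Kubernetes in a small manifest. A minimal single-container sketch, using a placeholder pod name and the web server image built later in this chapter, might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: mywebpod
spec:
  containers:
  - name: apache-frontend
    image: localhost:5000/webwithdb
    ports:
    - containerPort: 80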
Kubernetes can manage the interconnections between a set of containers by defining Kubernetes Services. As demand for individual container pods increases or decreases, Kubernetes can run or stop container pods as needed using its replication controller feature. For this example, both the Kubernetes Master and Node are on the same computer, which can be either a RHEL 7 Server or RHEL 7 Atomic Host. Kubernetes relies on a set of service daemons to implement features of the Kubernetes Master and Node. Some of those run as systemd services while others run from containers. You need to understand the following about Kubernetes Masters and Nodes: Master : A Kubernetes Master is where you direct API calls to services that control the activities of the pods, replication controllers, services, nodes and other components of a Kubernetes cluster. Typically, those calls are made by running kubectl commands. From the Master, containers are deployed to run on Nodes. Node : A Node is a system providing the run-time environments for the containers. A set of container pods can span multiple nodes. Pods are defined in configuration files (in YAML or JSON formats). Using the following procedure, you will set up a single RHEL 7 or RHEL Atomic system, configure it as a Kubernetes Master and Node, use YAML files to define each container in a pod, and deploy those containers using Kubernetes ( kubectl command). Note Three of the Kubernetes services that were defined to run as systemd services ( kube-apiserver , kube-controller-manager , and kube-scheduler ) in earlier versions of this procedure have been containerized. As of RHEL 7.3, only containerized versions of those services are available. So this procedure describes how to use those containerized Kubernetes services. 1.3. Running Containers from Kubernetes Pods You need a RHEL 7 or RHEL Atomic system to build the Docker containers and orchestrate them with Kubernetes. There are different sets of service daemons needed on Kubernetes Master and Node systems. In this procedure, all service daemons run on the same system. Once the containers, system, and services are in place, you use the kubectl command to deploy those containers so they run on the Kubernetes Node (in this case, that will be the local system). Here's how to do those steps: 1.3.1. Setting up to Deploy Docker Containers with Kubernetes To prepare for Kubernetes, you need to install RHEL 7 or RHEL Atomic Host, disable firewalld, get two containers, and add them to a Docker Registry. Note RHEL Atomic Host does not support the yum command for installing packages. To get around this issue, you could use the yumdownloader docker-distribution command to download the package to a RHEL system, copy it to the Atomic system, install it on the Atomic system using rpm-ostree install ./docker-distribution*rpm and reboot. You could then set up the docker-distribution service as described below. Install a RHEL 7 or RHEL Atomic system : For this Kubernetes sandbox system, install a RHEL 7 or RHEL Atomic system, subscribe the system, then install and start the docker service. Refer here for information on setting up a basic RHEL or RHEL Atomic system to use with Kubernetes: Get Started with Docker Formatted Container Images on Red Hat Systems Install Kubernetes : If you are on a RHEL 7 system, install the docker, etcd, kubernetes-client, and kubernetes-node packages.
These packages are already installed on RHEL Atomic: Disable firewalld : If you are using a RHEL 7 host, be sure that the firewalld service is disabled (the firewalld service is not installed on an Atomic host). On RHEL 7, type the following to disable and stop the firewalld service: Get Docker Containers : Build the following two containers using the following instructions: Simple Apache Web Server in a Docker Container Simple Database Server in a Docker Container After you build, test and stop the containers ( docker stop mydbforweb and docker stop mywebwithdb ), add them to a registry. Install registry : To get the Docker Registry service (v2) on your local system, you must install the docker-distribution package. For example: Start the local Docker Registry : To start the local Docker Registry, type the following: Tag images : Using the image ID of each image, tag the two images so they can be pushed to your local Docker Registry. Assuming the registry is running on the local system, tag the two images as follows: The two images are now available from your local Docker Registry. 1.3.2. Starting Kubernetes Because both Kubernetes Master and Node services are running on the local system, you don't need to change the Kubernetes configuration files. Master and Node services will point to each other on localhost and services are made available only on localhost. Pull Kubernetes containers : To pull the Kubernetes container images, type the following: Create manifest files : Create the following apiserver-pod.json, controller-mgr-pod.json, and scheduler-pod.json files and put them in the /etc/kubernetes/manifests directory. These files identify the images representing the three Kubernetes services that are started later by the kubelet service: apiserver-pod.json NOTE : The --service-cluster-ip-range allocates the IP address range (CIDR notation) used by the kube-apiserver to assign to services in the cluster. Make sure that any addresses assigned in the range here are not assigned to any pods in the cluster. Also, keep in mind that a 255-address range (/24) is allocated to each node. So you should at least assign a /20 range for a small cluster and up to a /14 range to allow up to 1000 nodes. controller-mgr-pod.json scheduler-pod.json Configure the kubelet service : Because the manifests define Kubernetes services as pods, the kubelet service is needed to start these containerized Kubernetes services. To configure the kubelet service, edit the /etc/kubernetes/kubelet and modify the KUBELET_ARGS line to read as follows (all other content can stay the same): Start kubelet and other Kubernetes services : Start and enable the docker, etcd, kube-proxy and kubelet services as follows: Start the Kubernetes Node service daemons : You need to start several services associated with a Kubernetes Node: Check the services : Run the ss command to check which ports the services are running on: Test the etcd service : Use the curl command as follows to check the etcd service: 1.3.3. Launching container pods with Kubernetes With Master and Node services running on the local system and the two container images in place, you can now launch the containers using Kubernetes pods. Here are a few things you should know about that: Separate pods : Although you can launch multiple containers in a single pod, by having them in separate pods each container can replicate multiple instances as demands require, without having to launch the other container. 
Kubernetes service : This procedure defines Kubernetes services for the database and web server pods so containers can go through Kubernetes to find those services. In this way, the database and web server can find each other without knowing the IP address, port number, or even the node the pod providing the service is running on. The following steps show how to launch and test the two pods: IMPORTANT : It is critical that the indents in the YAML file be maintained. Spacing in YAML files are part of what keep the format cleaner (not requiring curly braces or other characters to maintain the structure). Create a Database Kubernetes service : Create a db-service.yaml file to identify the pod providing the database service to Kubernetes. Create a Database server replication controller file : Create a db-rc.yaml file that you will use to deploy the Database server pod. Here is what it could contain: Create a Web server Kubernetes Service file : Create a webserver-service.yaml file that you will use to deploy the Web server pod. Here is what it could contain: Create a Web server replication controller file : Create a webserver-rc.yaml file that you will use to deploy the Web server pod. Here is what it could contain: Orchestrate the containers with kubectl : With the two YAML files in the current directory, run the following commands to start the pods to begin running the containers: Check rc, pods, and services : Run the following commands to make sure that Kubernetes master services, the replication controllers, pods, and services are all running: Check containers : If both containers are running and the Web server container can see the Database server, you should be able to run the curl command to see that everything is working, as follows (note that the IP address matches webserver-service address): If you have a Web browser installed on the localhost, you can open that Web browser to see a better representation of the few lines of output. Just open the browser to this URL: http://10.254.159.86/cgi-bin/action . 1.4. Exploring Kubernetes pods If something goes wrong along the way, there are several ways to determine what happened. One thing you can do is to examine services inside of the containers. To do that, you can look at the logs inside the container to see what happened. Run the following command (replacing the last argument with the pod name you want to examine). Another problem that people have had comes from forgetting to disable firewalld. If firewalld is active, it could block access to ports when a service tries to access them between your containers. Make sure you have run systemctl stop firewalld ; systemctl disable firewalld on your host. If you made a mistake creating your two-pod application, you can delete the replication controllers and the services. (The pods will just go away when the replication controllers are removed.) After that, you can fix the YAML files and create them again. Here's how you would delete the replication controllers and services: Remember to not just delete the pods. If you do, without removing the replication controllers, the replication controllers will just start new pods to replace the ones you deleted. The example you have just seen is a simple approach to getting started with Kubernetes. Because it involves only one master and one node on the same system, it is not scalable. To set up a more formal and permanent Kubernetes configuration, Red Hat recommends using OpenShift Container Platform .
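Because the web server and database run under separate replication controllers, you can also scale one tier without touching the other. A minimal sketch, assuming the replication controller names used in this chapter:

# Ask Kubernetes to run three web server pods instead of one
kubectl scale rc webserver-controller --replicas=3

# Confirm that the additional pods were started
kubectl get pods

Scaling back down works the same way; the replication controller stops the surplus pods until the desired replica count is reached.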
[ "yum install docker kubernetes-client kubernetes-node etcd", "systemctl disable firewalld systemctl stop firewalld", "yum install docker-distribution", "systemctl start docker-distribution systemctl enable docker-distribution systemctl is-active docker-distribution active", "docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE dbforweb latest c29665465a6c 4 minutes ago 556.2 MB webwithdb latest 80e7af79c507 14 minutes ago 405.6 MB docker tag c29665465a6c localhost:5000/dbforweb docker push localhost:5000/dbforweb docker tag 80e7af79c507 localhost:5000/webwithdb docker push localhost:5000/webwithdb", "docker pull registry.access.redhat.com/rhel7/kubernetes-apiserver docker pull registry.access.redhat.com/rhel7/kubernetes-controller-mgr docker pull registry.access.redhat.com/rhel7/kubernetes-scheduler", "{ \"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"kube-apiserver\" }, \"spec\": { \"hostNetwork\": true, \"containers\": [ { \"name\": \"kube-apiserver\", \"image\": \"rhel7/kubernetes-apiserver\", \"command\": [ \"/usr/bin/kube-apiserver\", \"--v=0\", \"--address=0.0.0.0\", \"--etcd_servers=http://127.0.0.1:2379\", \"--service-cluster-ip-range=10.254.0.0/16\", \"--admission_control=AlwaysAdmit\" ], \"ports\": [ { \"name\": \"https\", \"hostPort\": 443, \"containerPort\": 443 }, { \"name\": \"local\", \"hostPort\": 8080, \"containerPort\": 8080 } ], \"volumeMounts\": [ { \"name\": \"etcssl\", \"mountPath\": \"/etc/ssl\", \"readOnly\": true }, { \"name\": \"config\", \"mountPath\": \"/etc/kubernetes\", \"readOnly\": true } ], \"livenessProbe\": { \"httpGet\": { \"path\": \"/healthz\", \"port\": 8080 }, \"initialDelaySeconds\": 15, \"timeoutSeconds\": 15 } } ], \"volumes\": [ { \"name\": \"etcssl\", \"hostPath\": { \"path\": \"/etc/ssl\" } }, { \"name\": \"config\", \"hostPath\": { \"path\": \"/etc/kubernetes\" } } ] } }", "{ \"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"kube-controller-manager\" }, \"spec\": { \"hostNetwork\": true, \"containers\": [ { \"name\": \"kube-controller-manager\", \"image\": \"rhel7/kubernetes-controller-mgr\", \"volumeMounts\": [ { \"name\": \"etcssl\", \"mountPath\": \"/etc/ssl\", \"readOnly\": true }, { \"name\": \"config\", \"mountPath\": \"/etc/kubernetes\", \"readOnly\": true } ], \"livenessProbe\": { \"httpGet\": { \"path\": \"/healthz\", \"port\": 10252 }, \"initialDelaySeconds\": 15, \"timeoutSeconds\": 15 } } ], \"volumes\": [ { \"name\": \"etcssl\", \"hostPath\": { \"path\": \"/etc/ssl\" } }, { \"name\": \"config\", \"hostPath\": { \"path\": \"/etc/kubernetes\" } } ] } }", "{ \"kind\": \"Pod\", \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"kube-scheduler\" }, \"spec\": { \"hostNetwork\": true, \"containers\": [ { \"name\": \"kube-scheduler\", \"image\": \"rhel7/kubernetes-scheduler\", \"volumeMounts\": [ { \"name\": \"config\", \"mountPath\": \"/etc/kubernetes\", \"readOnly\": true } ], \"livenessProbe\": { \"httpGet\": { \"path\": \"/healthz\", \"port\": 10251 }, \"initialDelaySeconds\": 15, \"timeoutSeconds\": 15 } } ], \"volumes\": [ { \"name\": \"config\", \"hostPath\": { \"path\": \"/etc/kubernetes\" } } ] } }", "KUBELET_ADDRESS=\"--address=127.0.0.1\" KUBELET_HOSTNAME=\"--hostname-override=127.0.0.1\" KUBELET_ARGS=\"--register-node=true --config=/etc/kubernetes/manifests --register-schedulable=true\" KUBELET_API_SERVER=\"--api-servers=http://127.0.0.1:8080\" KUBELET_POD_INFRA_CONTAINER=\"--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest\"", "for 
SERVICES in docker etcd kube-proxy kubelet; do systemctl restart USDSERVICES systemctl enable USDSERVICES systemctl is-active USDSERVICES done", "for SERVICES in docker kube-proxy.service kubelet.service; do systemctl restart USDSERVICES systemctl enable USDSERVICES systemctl status USDSERVICES done", "ss -tulnp | grep -E \"(kube)|(etcd)\"", "curl -s -L http://localhost:2379/version {\"etcdserver\":\"3.0.15\",\"etcdcluster\":\"3.0.0\"}", "apiVersion: v1 kind: Service metadata: labels: name: db name: db-service namespace: default spec: ports: - port: 3306 selector: app: db", "apiVersion: v1 kind: ReplicationController metadata: name: db-controller spec: replicas: 1 selector: app: \"db\" template: metadata: name: \"db\" labels: app: \"db\" spec: containers: - name: \"db\" image: \"localhost:5000/dbforweb\" ports: - containerPort: 3306", "apiVersion: v1 kind: Service metadata: labels: app: webserver name: webserver-service namespace: default spec: ports: - port: 80 selector: app: webserver", "kind: \"ReplicationController\" apiVersion: \"v1\" metadata: name: \"webserver-controller\" spec: replicas: 1 selector: app: \"webserver\" template: spec: containers: - name: \"apache-frontend\" image: \"localhost:5000/webwithdb\" ports: - containerPort: 80 metadata: labels: app: \"webserver\" uses: db", "kubectl create -f db-service.yaml services/db-service kubectl create -f db-rc.yaml replicationcontrollers/db-controller kubectl create -f webserver-service.yaml services/webserver-service kubectl create -f webserver-rc.yaml replicationcontrollers/webserver-controller", "kubectl cluster-info Kubernetes master is running at http://localhost:8080 kubectl get rc NAME DESIRED CURRENT READY AGE db-controller 1 1 1 7d webserver-controller 1 1 1 7d kubectl get pods --all-namespaces=true NAMESPACE NAME READY STATUS RESTARTS AGE default db-controller-kf126 1/1 Running 9 7d default kube-apiserver-127.0.0.1 1/1 Running 0 29m default kube-controller-manager-127.0.0.1 1/1 Running 4 7d default kube-scheduler-127.0.0.1 1/1 Running 4 7d default webserver-controller-l4r2j 1/1 Running 9 7d kubectl get service --all-namespaces=true NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE default db-service 10.254.109.7 <none> 3306/TCP 7d default kubernetes 10.254.0.1 <none> 443/TCP 8d default webserver-service 10.254.159.86 <none> 80/TCP 7d", "http://10.254.159.86:80/cgi-bin/action <html> <head> <title>My Application</title> </head> <body> <h2>RedHat rocks</h2> <h2>Success</h2> </body> </html>", "kubectl logs kube-controller-manager-127.0.0.1", "kubectl delete rc webserver-controller replicationcontrollers/webserver-controller kubectl delete rc db-controller replicationcontrollers/db-controller kubectl delete service webserver-service services/webserver-service kubectl delete service db-service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_kubernetes/get_started_orchestrating_containers_with_kubernetes
Chapter 5. Fixed issues
Chapter 5. Fixed issues The Cryostat release might include fixes for issues that were identified in earlier releases of Cryostat. Review each fixed issue note for a description of the issue and the subsequent fix. Issues fixed in Cryostat 2.3.1 The following issues have been fixed in the Cryostat 2.3.1 release: Stored credentials incorrectly match against target applications that require JMX authentication and integrate the Cryostat Agent Typically, the Cryostat Agent is configured to expose a readonly HTTP API that Cryostat interacts with. The Cryostat Agent provides this HTTP API URL to Cryostat as a discovery plug-in implementation. If a target application has an embedded Cryostat Agent and Cryostat attempts to connect to the target over Java Management Extensions (JMX) rather than HTTP, a conflict might arise. In this situation, the stored credentials of the Agent might overlap and conflict with any stored credentials that are necessary for JMX authentication of the target application. Before Cryostat 2.3.1, this conflict resulted in the wrong credentials being presented for JMX authentication, and Cryostat operations such as listing recordings or activating Automated Rules might fail. This issue could occur when the integrated Agent was configured with the property cryostat.agent.registration.prefer-jmx and the target application had JMX enabled. This issue could also occur when the integrated Agent was configured to register itself with an HTTP URL for discovery, which is the default behavior, but the target application instance was also discoverable by some other mechanism such as Kubernetes API discovery. From Cryostat 2.3.1 onward, the Cryostat Agent uses a more specific and unique selector for identifying its credentials. This fix enables Cryostat to distinguish between the Agent's credentials and any credentials necessary for JMX authentication. CRYOSTAT_DISABLE_BUILTIN_DISCOVERY environment variable disables Custom Targets Before Cryostat 2.3.1, when you set the CRYOSTAT_DISABLE_BUILTIN_DISCOVERY environment variable to True , this action also disabled Custom Targets functionality in addition to other built-in discovery mechanisms. The expected behavior is that the CRYOSTAT_DISABLE_BUILTIN_DISCOVERY environment variable disables all built-in discovery mechanisms except Custom Targets. This issue is resolved in the Cryostat 2.3.1 release, which ensures that the Custom Targets functionality is always available, even if you set CRYOSTAT_DISABLE_BUILTIN_DISCOVERY environment variable to True . Unable to log out of the Cryostat web application on OpenShift Container Platform 4.12 and later Before Cryostat 2.3.1, when you clicked Logout to log out of the Cryostat web application, the logout operation failed for Cryostat instances that were deployed on OpenShift Container Platform 4.12 and later. The expected behavior is that the logout operation redirects you to the cluster OAuth login. Instead, the logout attempt failed and the following error message appeared: Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://oauth-openshift.apps-crc.testing/logout . (Reason: CORS header 'Access-Control-Allow-Origin' missing). Status code: 200. Creating automated rules through HTTP API fails for multipart/form-data submissions Before Cryostat 2.3.1, when you attempted to create automated rules by using data submitted as a multipart/form-data media type through an HTTP API, an "HTTP 415" error occurred. 
This error occurred because Cryostat did not support the multipart/form-data media type. From Cryostat 2.3.1 onward, Cryostat can create automated rules for data submitted through any of the following media types: multipart/form-data application/x-www-form-urlencoded application/json Deleting a namespace containing a Cryostat installation might freeze Before Cryostat 2.3.1, when you attempted to delete a namespace on which a Cryostat instance was still installed, the deletion operation might freeze. This could occur if the Lock ConfigMap object was deleted before final cleanup actions were completed for the Cryostat or Cluster Cryostat custom resource (CR). The expected behavior is that the deletion operation succeeds and cleanup actions on any resources that were created for the Cryostat installation are complete. This issue is resolved in the Cryostat 2.3.1 release in all cases except when the Cryostat Operator is part of the deleted namespace. In this situation, consider reinstalling the Cryostat Operator by using the default installation mode All namespaces on the cluster (default) . The reinstalled Operator can then clean up any leftover state and allow your namespace to be deleted. JMC probe template validation error Before Cryostat 2.3.1, when you attempted to upload a probe template through the Events view in the Cryostat web console, the upload could fail with a validation error. This validation error resulted from issues when parsing method parameter content types that you can define in the probe template. Unable to upload JMC probe templates after a failure Before Cryostat 2.3.1, if a failure occurred when uploading a probe template, any further attempts to upload this template also failed with an HTTP 500 error. This issue occurred if you uploaded an invalid template that failed validation checks and you subsequently attempted to upload a valid version of the same template. In this situation, Cryostat did not alert you that a template with the same name already existed. From Cryostat 2.3.1 onward, Cryostat displays an error message if you attempt to upload probe templates with duplicate file names. Wrong port number in Agent configuration when publishing a JMX URL Before Cryostat 2.3.1, if you configured the Cryostat Agent to register itself as reachable through JMX rather than HTTP, the publication URL in the Agent configuration did not contain the correct JMX port number. Wrong text in warning modal for disabling a rule Before Cryostat 2.3.1, when you disabled an automated rule in the Cryostat web console, the warning modal displayed the following incorrect text: If you click Delete, the rule will be disabled. From Cryostat 2.3.1 onward, the warning modal displays the following text: If you click Disable, the rule will be disabled. Topology view shows toggle icons in the wrong order Before Cryostat 2.3.1, the Topology view of the Cryostat web console did not show toggle icons in the correct order when you toggled between graph mode and list mode. From Cryostat 2.3.1 onward, the graph mode correctly shows the list mode icon, and the list mode correctly shows the graph mode icon.
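As an illustration of the JSON submission path mentioned above, a request of the following shape could be used to create an automated rule. This is a sketch only: the host, endpoint path, rule field values, and authentication handling are assumptions based on typical Cryostat 2.3 HTTP API usage, and should be checked against the Cryostat API documentation for your release.

# Sketch only: create an automated rule by posting JSON to the Cryostat HTTP API.
# The URL and rule fields below are placeholders; supply the authentication headers
# required by your deployment. A matchExpression of "true" matches all targets.
curl -k -X POST "https://cryostat.example.com/api/v2/rules" \
    -H "Content-Type: application/json" \
    -d '{"name": "exampleRule", "matchExpression": "true", "eventSpecifier": "template=Continuous,type=TARGET"}'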
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.3/cryostat-2-3-fixed-issues_cryostat
Deploying OpenShift Data Foundation on any platform
Deploying OpenShift Data Foundation on any platform Red Hat OpenShift Data Foundation 4.17 Instructions on deploying OpenShift Data Foundation on any platform including virtualized and cloud environments. Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation to use local storage on any platform.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_on_any_platform/index
5.4.3. Creating Mirrored Volumes
5.4.3. Creating Mirrored Volumes Note As of the Red Hat Enterprise Linux 6.3 release, LVM supports RAID4/5/6 and a new implementation of mirroring. For information on this new implementation, see Section 5.4.16, "RAID Logical Volumes" . Note Creating a mirrored LVM logical volume in a cluster requires the same commands and procedures as creating a mirrored LVM logical volume on a single node. However, in order to create a mirrored LVM volume in a cluster the cluster and cluster mirror infrastructure must be running, the cluster must be quorate, and the locking type in the lvm.conf file must be set correctly to enable cluster locking. For an example of creating a mirrored volume in a cluster, see Section 6.5, "Creating a Mirrored LVM Logical Volume in a Cluster" . Attempting to run multiple LVM mirror creation and conversion commands in quick succession from multiple nodes in a cluster might cause a backlog of these commands. This might cause some of the requested operations to time-out and, subsequently, fail. To avoid this issue, it is recommended that cluster mirror creation commands be executed from one node of the cluster. When you create a mirrored volume, you specify the number of copies of the data to make with the -m argument of the lvcreate command. Specifying -m1 creates one mirror, which yields two copies of the file system: a linear logical volume plus one copy. Similarly, specifying -m2 creates two mirrors, yielding three copies of the file system. The following command creates a mirrored logical volume with a single mirror. The volume is 50 gigabytes in size, is named mirrorlv , and is carved out of volume group vg0 : An LVM mirror divides the device being copied into regions that, by default, are 512KB in size. You can use the -R argument of the lvcreate command to specify the region size in megabytes. You can also change the default region size by editing the mirror_region_size setting in the lvm.conf file. Note Due to limitations in the cluster infrastructure, cluster mirrors greater than 1.5TB cannot be created with the default region size of 512KB. Users that require larger mirrors should increase the region size from its default to something larger. Failure to increase the region size will cause LVM creation to hang and may hang other LVM commands as well. As a general guideline for specifying the region size for mirrors that are larger than 1.5TB, you could take your mirror size in terabytes and round up that number to the power of 2, using that number as the -R argument to the lvcreate command. For example, if your mirror size is 1.5TB, you could specify -R 2 . If your mirror size is 3TB, you could specify -R 4 . For a mirror size of 5TB, you could specify -R 8 . The following command creates a mirrored logical volume with a region size of 2MB: When a mirror is created, the mirror regions are synchronized. For large mirror components, the sync process may take a long time. As of the Red Hat Enterprise Linux 6.3 release, When you are creating a new mirror that does not need to be revived, you can specify the --nosync argument to indicate that an initial synchronization from the first device is not required. LVM maintains a small log which it uses to keep track of which regions are in sync with the mirror or mirrors. By default, this log is kept on disk, which keeps it persistent across reboots and ensures that the mirror does not need to be resynced every time a machine reboots or crashes. 
You can specify instead that this log be kept in memory with the --mirrorlog core argument; this eliminates the need for an extra log device, but it requires that the entire mirror be resynchronized at every reboot. The following command creates a mirrored logical volume from the volume group bigvg . The logical volume is named ondiskmirvol and has a single mirror. The volume is 12MB in size and keeps the mirror log in memory. The mirror log is created on a separate device from the devices on which any of the mirror legs are created. It is possible, however, to create the mirror log on the same device as one of the mirror legs by using the --alloc anywhere argument of the lvcreate command. This may degrade performance, but it allows you to create a mirror even if you have only two underlying devices. The following command creates a mirrored logical volume with a single mirror for which the mirror log is on the same device as one of the mirror legs. In this example, the volume group vg0 consists of only two devices. This command creates a 500 MB volume named mirrorlv in the vg0 volume group. Note With clustered mirrors, the mirror log management is completely the responsibility of the cluster node with the currently lowest cluster ID. Therefore, when the device holding the cluster mirror log becomes unavailable on a subset of the cluster, the clustered mirror can continue operating without any impact, as long as the cluster node with the lowest ID retains access to the mirror log. Since the mirror is undisturbed, no automatic corrective action (repair) is issued, either. When the lowest-ID cluster node loses access to the mirror log, however, automatic action will kick in (regardless of the accessibility of the log from other nodes). To create a mirror log that is itself mirrored, you can specify the --mirrorlog mirrored argument. The following command creates a mirrored logical volume from the volume group bigvg . The logical volume is named twologvol and has a single mirror. The volume is 12MB in size and the mirror log is mirrored, with each log kept on a separate device. Just as with a standard mirror log, it is possible to create the redundant mirror logs on the same device as the mirror legs by using the --alloc anywhere argument of the lvcreate command. This may degrade performance, but it allows you to create a redundant mirror log even if you do not have sufficient underlying devices for each log to be kept on a separate device from the mirror legs. When a mirror is created, the mirror regions are synchronized. For large mirror components, the sync process may take a long time. When you are creating a new mirror that does not need to be revived, you can specify the --nosync argument to indicate that an initial synchronization from the first device is not required. You can specify which devices to use for the mirror legs and log, and which extents of the devices to use. To force the log onto a particular disk, specify exactly one extent on the disk on which it will be placed. LVM does not necessarily respect the order in which devices are listed on the command line. If any physical volumes are listed, they are the only space on which allocation will take place. Any physical extents included in the list that are already allocated will be ignored. The following command creates a mirrored logical volume with a single mirror and a single log that is not mirrored. The volume is 500 MB in size, it is named mirrorlv , and it is carved out of volume group vg0 .
The first leg of the mirror is on device /dev/sda1 , the second leg of the mirror is on device /dev/sdb1 , and the mirror log is on /dev/sdc1 . The following command creates a mirrored logical volume with a single mirror. The volume is 500 MB in size, it is named mirrorlv , and it is carved out of volume group vg0 . The first leg of the mirror is on extents 0 through 499 of device /dev/sda1 , the second leg of the mirror is on extents 0 through 499 of device /dev/sdb1 , and the mirror log starts on extent 0 of device /dev/sdc1 . These are 1MB extents. If any of the specified extents have already been allocated, they will be ignored. Note As of the Red Hat Enterprise Linux 6.1 release, you can combine striping and mirroring in a single logical volume. Creating a logical volume while simultaneously specifying the number of mirrors ( --mirrors X ) and the number of stripes ( --stripes Y ) results in a mirror device whose constituent devices are striped. 5.4.3.1. Mirrored Logical Volume Failure Policy You can define how a mirrored logical volume behaves in the event of a device failure with the mirror_image_fault_policy and mirror_log_fault_policy parameters in the activation section of the lvm.conf file. When these parameters are set to remove , the system attempts to remove the faulty device and run without it. When this parameter is set to allocate , the system attempts to remove the faulty device and tries to allocate space on a new device to be a replacement for the failed device; this policy acts like the remove policy if no suitable device and space can be allocated for the replacement. By default, the mirror_log_fault_policy parameter is set to allocate . Using this policy for the log is fast and maintains the ability to remember the sync state through crashes and reboots. If you set this policy to remove , when a log device fails the mirror converts to using an in-memory log and the mirror will not remember its sync status across crashes and reboots and the entire mirror will be resynced. By default, the mirror_image_fault_policy parameter is set to remove . With this policy, if a mirror image fails the mirror will convert to a non-mirrored device if there is only one remaining good copy. Setting this policy to allocate for a mirror device requires the mirror to resynchronize the devices; this is a slow process, but it preserves the mirror characteristic of the device. Note When an LVM mirror suffers a device failure, a two-stage recovery takes place. The first stage involves removing the failed devices. This can result in the mirror being reduced to a linear device. The second stage, if the mirror_log_fault_policy parameter is set to allocate , is to attempt to replace any of the failed devices. Note, however, that there is no guarantee that the second stage will choose devices previously in-use by the mirror that had not been part of the failure if others are available. For information on manually recovering from an LVM mirror failure, see Section 7.3, "Recovering from LVM Mirror Failure" .
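As a concrete illustration of the region-size guideline given earlier in this section, the following sketch creates a large mirror with an increased region size; the volume group and logical volume names are examples only:

# 3TB mirror: 3 rounded up to the next power of 2 gives a region-size argument of -R 4
lvcreate -L 3T -m1 -R 4 -n bigmirrorlv vg0

# The same volume, skipping the initial synchronization when it is not needed
lvcreate -L 3T -m1 -R 4 --nosync -n bigmirrorlv vg0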
[ "lvcreate -L 50G -m1 -n mirrorlv vg0", "lvcreate -m1 -L 2T -R 2 -n mirror vol_group", "lvcreate -L 12MB -m1 --mirrorlog core -n ondiskmirvol bigvg Logical volume \"ondiskmirvol\" created", "lvcreate -L 500M -m1 -n mirrorlv -alloc anywhere vg0", "lvcreate -L 12MB -m1 --mirrorlog mirrored -n twologvol bigvg Logical volume \"twologvol\" created", "lvcreate -L 500M -m1 -n mirrorlv vg0 /dev/sda1 /dev/sdb1 /dev/sdc1", "lvcreate -L 500M -m1 -n mirrorlv vg0 /dev/sda1:0-499 /dev/sdb1:0-499 /dev/sdc1:0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/mirror_create
Chapter 1. Administrator metrics
Chapter 1. Administrator metrics 1.1. Serverless administrator metrics Metrics enable cluster administrators to monitor how OpenShift Serverless cluster components and workloads are performing. You can view different metrics for OpenShift Serverless by navigating to Dashboards in the web console Administrator perspective. 1.1.1. Prerequisites See the OpenShift Container Platform documentation on Managing metrics for information about enabling metrics for your cluster. You have access to an account with cluster administrator access (or dedicated administrator access for OpenShift Dedicated or Red Hat OpenShift Service on AWS). You have access to the Administrator perspective in the web console. Warning If Service Mesh is enabled with mTLS, metrics for Knative Serving are disabled by default because Service Mesh prevents Prometheus from scraping metrics. For information about resolving this issue, see Enabling Knative Serving metrics when using Service Mesh with mTLS . Scraping the metrics does not affect autoscaling of a Knative service, because scraping requests do not go through the activator. Consequently, no scraping takes place if no pods are running. 1.2. Serverless controller metrics The following metrics are emitted by any component that implements a controller logic. These metrics show details about reconciliation operations and the work queue behavior upon which reconciliation requests are added to the work queue. Metric name Description Type Tags Unit work_queue_depth The depth of the work queue. Gauge reconciler Integer (no units) reconcile_count The number of reconcile operations. Counter reconciler , success Integer (no units) reconcile_latency The latency of reconcile operations. Histogram reconciler , success Milliseconds workqueue_adds_total The total number of add actions handled by the work queue. Counter name Integer (no units) workqueue_queue_latency_seconds The length of time an item stays in the work queue before being requested. Histogram name Seconds workqueue_retries_total The total number of retries that have been handled by the work queue. Counter name Integer (no units) workqueue_work_duration_seconds The length of time it takes to process and item from the work queue. Histogram name Seconds workqueue_unfinished_work_seconds The length of time that outstanding work queue items have been in progress. Histogram name Seconds workqueue_longest_running_processor_seconds The length of time that the longest outstanding work queue items has been in progress. Histogram name Seconds 1.3. Webhook metrics Webhook metrics report useful information about operations. For example, if a large number of operations fail, this might indicate an issue with a user-created resource. Metric name Description Type Tags Unit request_count The number of requests that are routed to the webhook. Counter admission_allowed , kind_group , kind_kind , kind_version , request_operation , resource_group , resource_namespace , resource_resource , resource_version Integer (no units) request_latencies The response time for a webhook request. Histogram admission_allowed , kind_group , kind_kind , kind_version , request_operation , resource_group , resource_namespace , resource_resource , resource_version Milliseconds 1.4. Knative Eventing metrics Cluster administrators can view the following metrics for Knative Eventing components. By aggregating the metrics from HTTP code, events can be separated into two categories; successful events (2xx) and failed events (5xx). 1.4.1. 
Broker ingress metrics You can use the following metrics to debug the broker ingress, see how it is performing, and see which events are being dispatched by the ingress component. Metric name Description Type Tags Unit event_count Number of events received by a broker. Counter broker_name , event_type , namespace_name , response_code , response_code_class , unique_name Integer (no units) event_dispatch_latencies The time taken to dispatch an event to a channel. Histogram broker_name , event_type , namespace_name , response_code , response_code_class , unique_name Milliseconds 1.4.2. Broker filter metrics You can use the following metrics to debug broker filters, see how they are performing, and see which events are being dispatched by the filters. You can also measure the latency of the filtering action on an event. Metric name Description Type Tags Unit event_count Number of events received by a broker. Counter broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Integer (no units) event_dispatch_latencies The time taken to dispatch an event to a channel. Histogram broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Milliseconds event_processing_latencies The time it takes to process an event before it is dispatched to a trigger subscriber. Histogram broker_name , container_name , filter_type , namespace_name , trigger_name , unique_name Milliseconds 1.4.3. InMemoryChannel dispatcher metrics You can use the following metrics to debug InMemoryChannel channels, see how they are performing, and see which events are being dispatched by the channels. Metric name Description Type Tags Unit event_count Number of events dispatched by InMemoryChannel channels. Counter broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Integer (no units) event_dispatch_latencies The time taken to dispatch an event from an InMemoryChannel channel. Histogram broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Milliseconds 1.4.4. Event source metrics You can use the following metrics to verify that events have been delivered from the event source to the connected event sink. Metric name Description Type Tags Unit event_count Number of events sent by the event source. Counter broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Integer (no units) retry_event_count Number of retried events sent by the event source after initially failing to be delivered. Counter event_source , event_type , name , namespace_name , resource_group , response_code , response_code_class , response_error , response_timeout Integer (no units) 1.5. Knative Serving metrics Cluster administrators can view the following metrics for Knative Serving components. 1.5.1. Activator metrics You can use the following metrics to understand how applications respond when traffic passes through the activator. Metric name Description Type Tags Unit request_concurrency The number of concurrent requests that are routed to the activator, or average concurrency over a reporting period. Gauge configuration_name , container_name , namespace_name , pod_name , revision_name , service_name Integer (no units) request_count The number of requests that are routed to activator. 
These are requests that have been fulfilled from the activator handler. Counter configuration_name , container_name , namespace_name , pod_name , response_code , response_code_class , revision_name , service_name , Integer (no units) request_latencies The response time in milliseconds for a fulfilled, routed request. Histogram configuration_name , container_name , namespace_name , pod_name , response_code , response_code_class , revision_name , service_name Milliseconds 1.5.2. Autoscaler metrics The autoscaler component exposes a number of metrics related to autoscaler behavior for each revision. For example, at any given time, you can monitor the targeted number of pods the autoscaler tries to allocate for a service, the average number of requests per second during the stable window, or whether the autoscaler is in panic mode if you are using the Knative pod autoscaler (KPA). Metric name Description Type Tags Unit desired_pods The number of pods the autoscaler tries to allocate for a service. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) excess_burst_capacity The excess burst capacity served over the stable window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) stable_request_concurrency The average number of requests for each observed pod over the stable window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) panic_request_concurrency The average number of requests for each observed pod over the panic window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) target_concurrency_per_pod The number of concurrent requests that the autoscaler tries to send to each pod. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) stable_requests_per_second The average number of requests-per-second for each observed pod over the stable window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) panic_requests_per_second The average number of requests-per-second for each observed pod over the panic window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) target_requests_per_second The number of requests-per-second that the autoscaler targets for each pod. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) panic_mode This value is 1 if the autoscaler is in panic mode, or 0 if the autoscaler is not in panic mode. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) requested_pods The number of pods that the autoscaler has requested from the Kubernetes cluster. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) actual_pods The number of pods that are allocated and currently have a ready state. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) not_ready_pods The number of pods that have a not ready state. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) pending_pods The number of pods that are currently pending. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) terminating_pods The number of pods that are currently terminating. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) 1.5.3. 
Go runtime metrics Each Knative Serving control plane process emits a number of Go runtime memory statistics ( MemStats ). Note The name tag for each metric is an empty tag. Metric name Description Type Tags Unit go_alloc The number of bytes of allocated heap objects. This metric is the same as heap_alloc . Gauge name Integer (no units) go_total_alloc The cumulative bytes allocated for heap objects. Gauge name Integer (no units) go_sys The total bytes of memory obtained from the operating system. Gauge name Integer (no units) go_lookups The number of pointer lookups performed by the runtime. Gauge name Integer (no units) go_mallocs The cumulative count of heap objects allocated. Gauge name Integer (no units) go_frees The cumulative count of heap objects that have been freed. Gauge name Integer (no units) go_heap_alloc The number of bytes of allocated heap objects. Gauge name Integer (no units) go_heap_sys The number of bytes of heap memory obtained from the operating system. Gauge name Integer (no units) go_heap_idle The number of bytes in idle, unused spans. Gauge name Integer (no units) go_heap_in_use The number of bytes in spans that are currently in use. Gauge name Integer (no units) go_heap_released The number of bytes of physical memory returned to the operating system. Gauge name Integer (no units) go_heap_objects The number of allocated heap objects. Gauge name Integer (no units) go_stack_in_use The number of bytes in stack spans that are currently in use. Gauge name Integer (no units) go_stack_sys The number of bytes of stack memory obtained from the operating system. Gauge name Integer (no units) go_mspan_in_use The number of bytes of allocated mspan structures. Gauge name Integer (no units) go_mspan_sys The number of bytes of memory obtained from the operating system for mspan structures. Gauge name Integer (no units) go_mcache_in_use The number of bytes of allocated mcache structures. Gauge name Integer (no units) go_mcache_sys The number of bytes of memory obtained from the operating system for mcache structures. Gauge name Integer (no units) go_bucket_hash_sys The number of bytes of memory in profiling bucket hash tables. Gauge name Integer (no units) go_gc_sys The number of bytes of memory in garbage collection metadata. Gauge name Integer (no units) go_other_sys The number of bytes of memory in miscellaneous, off-heap runtime allocations. Gauge name Integer (no units) go_next_gc The target heap size of the garbage collection cycle. Gauge name Integer (no units) go_last_gc The time that the last garbage collection was completed in Epoch or Unix time . Gauge name Nanoseconds go_total_gc_pause_ns The cumulative time in garbage collection stop-the-world pauses since the program started. Gauge name Nanoseconds go_num_gc The number of completed garbage collection cycles. Gauge name Integer (no units) go_num_forced_gc The number of garbage collection cycles that were forced due to an application calling the garbage collection function. Gauge name Integer (no units) go_gc_cpu_fraction The fraction of the available CPU time of the program that has been used by the garbage collector since the program started. Gauge name Integer (no units)
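If you prefer to query one of these metrics directly rather than through the dashboards, the cluster monitoring query API can be used from the command line. The following is a sketch only; the route lookup and the exported metric name are assumptions (scraped Knative metrics typically carry the emitting component as a prefix, for example autoscaler_desired_pods), so adjust both for your cluster:

# Query the monitoring stack for the autoscaler's desired pod count
TOKEN=$(oc whoami -t)
HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
curl -sk -H "Authorization: Bearer ${TOKEN}" \
    "https://${HOST}/api/v1/query?query=autoscaler_desired_pods"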
null
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/observability/administrator-metrics
Chapter 14. High Availability
Chapter 14. High Availability XFS and High Availability Add On Usage of XFS in conjunction with Red Hat Enterprise Linux 6.2 High Availability Add On as a file system resource is now fully supported. HA support for VMWare Applications running inside VMWare based guests can now be configured for high availability. This also includes full support for the use of GFS2 shared storage file system in the environment. A new SOAP-based fence agent has been added that has the ability to fence guests when necessary. Administrative UI enhancements Luci, the web-based administrative UI for configuring clusters has been updated to include the following: Role-based access control (RBAC): enables fine-grained access levels by defining user classes to access specific cluster operations. Improved response times for destructive operations in a cluster. Support for UDP-Unicast IP multicasting has been the only supported option for a cluster transport. IP multicasting is inherently complex to configure and often requires re-configuration of network switches. UDP-unicast in contrast offers a simpler approach to cluster configuration and is an established protocol for cluster communication. UDP-unicast, initially introduced as a Technology Preview, is now fully supported. Watchdog integration with fence_scsi Watchdog is a general timer service available in Linux that can be used to periodically monitor system resources. Fence agents have now been integrated with watchdog such that the watchdog service can reboot a node after it has been fenced using fence_scsi . This eliminates the need for manual intervention to reboot the node after it has been fenced using fence_scsi .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_release_notes/highavailability
About
About OpenShift Container Platform 4.11 Introduction to OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/about/index
4.4.5. Renaming Logical Volumes
4.4.5. Renaming Logical Volumes To rename an existing logical volume, use the lvrename command. Either of the following commands renames logical volume lvold in volume group vg02 to lvnew . For more information on activating logical volumes on individual nodes in a cluster, see Section 4.8, "Activating Logical Volumes on Individual Nodes in a Cluster" .
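For example, the rename can be confirmed immediately with the lvs command; the names below match the example above:

# Rename the volume, then list the volume group to confirm the new name
lvrename vg02 lvold lvnew
lvs vg02    # the volume is now listed as lvnew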
[ "lvrename /dev/vg02/lvold /dev/vg02/lvnew", "lvrename vg02 lvold lvnew" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/lv_rename
Chapter 16. System and Subscription Management
Chapter 16. System and Subscription Management New search-disabled-repos plug-in for yum The search-disabled-repos plug-in for yum has been added to the subscription-manager packages. This plug-in allows users to successfully complete yum operations that fail due to the source repository being dependent on a disabled repository. When search-disabled-repos is installed in the described scenario, yum displays instructions to temporarily enable repositories that are currently disabled and to search for missing dependencies. If you choose to follow the instructions and turn off the default notify_only behavior in the /etc/yum/pluginconf.d/search-disabled-repos.conf file, future yum operations will prompt you to temporarily or permanently enable all the disabled repositories needed to fulfill the yum transaction. (BZ#1268376) Easier troubleshooting with yum The yum utility is now able to identify certain frequently occurring errors and provides a link to a relevant Red Hat Knowledgebase article. This helps users identify typical problems and address their cause. (BZ#1248686) New package: rear Relax-and-Recover (rear) is a recovery and system migration utility. Written in bash , it allows you to use tools already present on your system to continuously create recovery images which can be saved locally or on a remote server, and to use these images to easily restore the system in case of software or hardware failure. The tool also supports integration with various external tools such as backup solutions ( Symantec NetBackup , duplicity , IBM TSM , etc.) and monitoring systems ( Nagios , Opsview ). The rear utility is available in base channels for all variants of Red Hat Enterprise Linux 6.8 on all architectures. The utility produces a bootable image and restores from backup using this image. It also allows to restore to different hardware and can therefore be used as a migration utility as well. (BZ#981637) iostat now supports separate statistics for r_await and w_await The iostat tool now supports separate statistics for r_await (average time for read requests issued to the device to be served) and w_await (average time for write requests issued to the device to be served) in the Device Utilization Report. Use the -x option to obtain a report which includes this information. (BZ# 1185057 ) TLS 1.1 and 1.2 are now enabled by default in libcurl Previously, versions 1.1 and 1.2 of the TLS protocol were disabled by default in libcurl . Users were required to explicitly enable these TLS versions in utilities based on libcurl in order to allow these utilities to securely communicate with servers that do not accept SSL 3.0 and TLS 1.0 connections. With this update, TLS 1.1 and TLS 1.2 are no longer disabled by default in libcurl . You can, however, explicitly disable them using the libcurl API. (BZ# 1289205 ) libcurl can now connect to SCP and SFTP servers through a HTTP proxy Implementations of the SCP and SFTP protocols in libcurl have been enhanced and now support tunneling through HTTP proxies. (BZ#1258566) abrt can now exclude specific programs from being dumped Previously, ignoring crashes of blacklisted programs in abrt did not prevent it from creating their core dumps - the dumps were still written to disk and then deleted. This approach allowed abrt to notify system administrators of a crash while not using disk space to store unneeded crash dumps. However, creating these dumps only to delete them later was unnecessarily wasting system resources. 
This update introduces a new configuration option IgnoredPaths in the /etc/abrt/plugins/CCpp.conf configuration file, which allows you to specify a comma-separated list of file system path globs which will not be dumped at all. (BZ#1208713) User and group whitelisting added to abrt Previously, abrt allowed all users to generate and collect core dumps, which could potentially enable any user to maliciously generate a large number of core dumps and waste system resources. This update adds a whitelisting functionality to abrt , and you can now only allow specific users or groups to generate core dumps. Use the new AllowedUsers = user1, user2, ... and AllowedGroups = group1, group2, ... options in the /etc/abrt/plugins/CCpp.conf configuration file to restrict core dump generation and collection to these users or groups, or leave these options empty to configure abrt to process core dumps for all users and groups. (BZ# 1256705 ) libvpd rebased to version 2.2.5 The libvpd packages have been upgraded to upstream version 2.2.5, which provides a number of bug fixes and enhancements over the version. Notably, this version includes: Improved error handling Security improvements such as fixing a potential buffer overflow and memory allocation validation (BZ#1148140) libservicelog rebased to version 1.1.15 The libservicelog packages have been upgraded to upstream version 1.1.15, which provides a number of bug fixes and enhancements over the version. (BZ#1148141) sysctl configuration files can now contain longer lines Previously, sysctl configuration files could only contain lines up to 255 characters long. With this update, the maximum acceptable line length has been increased to 4095 characters. (BZ# 1201024 ) ps can now display thread cgroups This update introduces a new format specifier thcgr , which can be used to display the cgroup of each listed thread. (BZ# 1284076 ) reporter-upload now allows configuring optional SSH keys The reporter-upload tool, which is used by abrt to submit collected problem data, now allows you to use optional SSH key files. You can specify a key file using one of the following ways: The SSHPublicKey and SSHPrivateKey options in the /etc/libreport/plugins/upload.conf configuration file. Using -b and -r command line options for the public and private key, respectively. Setting the Upload_SSHPublicKey and Upload_SSHPrivateKey environment variables, respectively. If none of these options or variables are used, reporter-upload will attempt to use the default SSH key from the user's ~/.ssh/ directory. (BZ# 1261120 )
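As a quick illustration of two of the reporting changes described above, the following commands could be used; this is a sketch of standard iostat and ps usage, and the output columns are not reproduced here:

# Extended device statistics, including the separate r_await and w_await columns,
# refreshed every 5 seconds for 3 reports
iostat -x 5 3

# List threads along with the cgroup each one belongs to, using the new thcgr specifier
ps -eLo pid,lwp,comm,thcgr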
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_release_notes/new_features_system_and_subscription_management
14.6. Converting an Existing Image to Another Format
14.6. Converting an Existing Image to Another Format The convert option is used to convert one recognized image format to another image format. For a list of accepted formats, see Section 14.12, "Supported qemu-img Formats" . The -p parameter shows the progress of the command (optional and not available for every command) and the -S flag allows for the creation of a sparse file , which is included within the disk image. Sparse files, for all practical purposes, function like standard files, except that physical blocks that contain only zeros (that is, nothing) are not actually written to disk. When the operating system sees this file, it treats it as if it exists and takes up actual disk space, even though in reality it does not take any. This is particularly helpful when creating a disk for a guest virtual machine, as it gives the appearance that the disk has taken much more disk space than it has. For example, if you set -S to 50GB on a disk image that is 10GB, then your 10GB of disk space will appear to be 60GB in size even though only 10GB is actually being used. Convert the disk image filename to disk image output_filename using format output_format . The disk image can be optionally compressed with the -c option, or encrypted with the -o option by setting -o encryption . Note that the options available with the -o parameter differ with the selected format. Only the qcow and qcow2 formats support encryption or compression. qcow2 encryption uses the AES format with secure 128-bit keys. qcow2 compression is read-only, so if a compressed sector is converted from qcow2 format, it is written to the new format as uncompressed data. Image conversion is also useful to get a smaller image when using a format which can grow, such as qcow or cow . The empty sectors are detected and suppressed from the destination image.
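For example, a typical invocation of the synopsis shown below converts a raw image to a compressed qcow2 image while reporting progress; the file names are placeholders:

# Convert a raw disk image to a compressed qcow2 image, showing progress
qemu-img convert -p -c -f raw -O qcow2 rhel-guest.img rhel-guest.qcow2

# Inspect the result
qemu-img info rhel-guest.qcow2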
[ "qemu-img convert [-c] [-p] [-f fmt ] [-t cache ] [-O output_fmt ] [-o options ] [-S sparse_size ] filename output_filename" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-using_qemu_img-converting_an_existing_image_to_another_format
Chapter 3. Evaluating the model
Chapter 3. Evaluating the model If you want to measure the improvements of your new model, you can compare its performance to the base model with the evaluation process. You can also chat with the model directly to qualitatively identify whether the new model has learned the knowledge you created. If you want more quantitative results of the model improvements, you can run the evaluation process in the RHEL AI CLI. 3.1. Evaluating your new model You can run the evaluation process in the RHEL AI CLI with the following procedure. Prerequisites You installed RHEL AI with the bootable container image. You created a custom qna.yaml file with skills or knowledge. You ran the synthetic data generation process. You trained the model using the RHEL AI training process. You downloaded the prometheus-8x7b-v2-0 judge model. You have root user access on your machine. Procedure Navigate to your working Git branch where you created your qna.yaml file. You can now run the evaluation process on different benchmarks. Each command needs the path to the trained samples model to evaluate, you can access these checkpoints in your ~/.local/share/instructlab/checkpoints folder. MMLU_BRANCH benchmark - If you want to measure how your knowledge contributions have impacted your model, run the mmlu_branch benchmark by executing the following command: USD ilab model evaluate --benchmark mmlu_branch --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> \ --tasks-dir ~/.local/share/instructlab/datasets/<generation-date>/<node-dataset> \ --base-model ~/.cache/instructlab/models/granite-7b-starter where <checkpoint> Specify the best scored checkpoint file generated during multi-phase training <node-dataset> Specify the node_datasets directory that was generated during SDG, in the ~/.local/share/instructlab/datasets/<generation-date> directory, with the same timestamps as the.jsonl files used for training the model. Example output # KNOWLEDGE EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab/ (0.74/1.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(0.78/1.0) ### IMPROVEMENTS (0.0 to 1.0): 1. tonsils: 0.74 -> 0.78 (+0.04) MT_BENCH_BRANCH benchmark - If you want to measure how your skills contributions have impacted your model, run the mt_bench_branch benchmark by executing the following command: USD ilab model evaluate \ --benchmark mt_bench_branch \ --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> \ --judge-model ~/.cache/instructlab/models/prometheus-8x7b-v2-0 \ --branch <worker-branch> \ --base-branch <worker-branch> where <checkpoint> Specify the best scored checkpoint file generated during multi-phase training. <worker-branch> Specify the branch you used when adding data to your taxonomy tree. <num-gpus> Specify the number of GPUs you want to use for evaluation. Example output # SKILL EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab (5.78/10.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(6.00/10.0) ### IMPROVEMENTS (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/object_identification/qna.yaml: 4.0 -> 6.67 (+2.67) 2. foundational_skills/reasoning/theory_of_mind/qna.yaml: 3.12 -> 4.0 (+0.88) 3. foundational_skills/reasoning/linguistics_reasoning/logical_sequence_of_words/qna.yaml: 9.33 -> 10.0 (+0.67) 4. 
foundational_skills/reasoning/logical_reasoning/tabular/qna.yaml: 5.67 -> 6.33 (+0.67) 5. foundational_skills/reasoning/common_sense_reasoning/qna.yaml: 1.67 -> 2.33 (+0.67) 6. foundational_skills/reasoning/logical_reasoning/causal/qna.yaml: 5.67 -> 6.0 (+0.33) 7. foundational_skills/reasoning/logical_reasoning/general/qna.yaml: 6.6 -> 6.8 (+0.2) 8. compositional_skills/writing/grounded/editing/content/qna.yaml: 6.8 -> 7.0 (+0.2) 9. compositional_skills/general/synonyms/qna.yaml: 4.5 -> 4.67 (+0.17) ### REGRESSIONS (0.0 to 10.0): 1. foundational_skills/reasoning/unconventional_reasoning/lower_score_wins/qna.yaml: 5.67 -> 4.0 (-1.67) 2. foundational_skills/reasoning/mathematical_reasoning/qna.yaml: 7.33 -> 6.0 (-1.33) 3. foundational_skills/reasoning/temporal_reasoning/qna.yaml: 5.67 -> 4.67 (-1.0) ### NO CHANGE (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/odd_one_out/qna.yaml (9.33) 2. compositional_skills/grounded/linguistics/inclusion/qna.yaml (6.5) Optional: You can manually evaluate each checkpoint using the MMLU and MT_BENCH benchmarks. You can evaluate any model against the standardized set of knowledge or skills, allowing you to compare the scores of your own model against other LLMs. MMLU - If you want to see the evaluation score of your new model against a standardized set of knowledge data, set the mmlu benchmark by running the following command: USD ilab model evaluate --benchmark mmlu --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665 where <checkpoint> Specify one of the checkpoint files generated during multi-phase training. Example output # KNOWLEDGE EVALUATION REPORT ## MODEL (SCORE) /home/user/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665 ### SCORES (0.0 to 1.0): mmlu_abstract_algebra - 0.31 mmlu_anatomy - 0.46 mmlu_astronomy - 0.52 mmlu_business_ethics - 0.55 mmlu_clinical_knowledge - 0.57 mmlu_college_biology - 0.56 mmlu_college_chemistry - 0.38 mmlu_college_computer_science - 0.46 ... MT_BENCH - If you want to see the evaluation score of your new model against a standardized set of skills, set the mt_bench benchmark by running the following command: USD ilab model evaluate --benchmark mt_bench --model ~/.local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665 where <checkpoint> Specify one of the checkpoint files generated during multi-phase training. Example output # SKILL EVALUATION REPORT ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(7.27/10.0) ### TURN ONE (0.0 to 10.0): 7.48 ### TURN TWO (0.0 to 10.0): 7.05 3.1.1. Domain-Knowledge benchmark evaluation Important Domain-Knowledge benchmark evaluation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The current knowledge evaluation benchmark in RHEL AI, MMLU and MMLU_branch, evaluates models on their ability to answer multiple choice questions. There was no way to give the model credit on moderately correct or incorrect answers. 
The Domain-Knowledge benchmark (DK-bench) evaluation provides the ability to bring custom evaluation questions and score the models answers on a scale. Each response given is compared to the reference answer and graded on the following scale by the judge model: Table 3.1. Domain-Knowledge benchmark rubric Score Criteria 1 The response is entirely incorrect, irrelevant, or does not align with the reference in any meaningful way. 2 The response partially matches the reference but contains major errors, significant omissions, or irrelevant information. 3 The response aligns with the reference overall but lacks sufficient detail, clarity, or contains minor inaccuracies. 4 The response is mostly accurate, aligns closely with the reference, and contains only minor issues or omissions. 5 The response is fully accurate, completely aligns with the reference, and is clear, thorough, and detailed. Prerequisites You installed RHEL AI with the bootable container image. You trained the model using the RHEL AI training process. You downloaded the prometheus-8x7b-v2-0 judge model. You have root user access on your machine. Procedure To utilize custom evaluation, you must create a jsonl file that includes every question you want to ask a model to answer and evaluate. Example DK-bench jsonl file {"user_input":"What is the capital of Canada?","reference":"The capital of Canada is Ottawa."} where user_input Contains the question for the model. reference Contains the answer to the question. To run the DK-bench benchmark with your custom evaluation questions, run the following command: USD ilab model evaluate --benchmark dk_bench --input-questions <path-to-jsonl-file> --model <path-to-model> where <path-to-jsonl-file> Specify the path to your jsonl file that contains your questions and answers. <path-to-model> Specify the path to the model you want to evaluate. Example command USD ilab model evaluate --benchmark dk_bench --input-questions /home/use/path/to/questions.jsonl --model ~/.cache/instructlab/models/instructlab/granite-7b-lab Example output of domain-Knowledge benchmark evaluation # DK-BENCH REPORT ## MODEL: granite-7b-lab Question #1: 5/5 Question #2: 5/5 Question #3: 5/5 Question #4: 5/5 Question #5: 2/5 Question #6: 3/5 Question #7: 2/5 Question #8: 3/5 Question #9: 5/5 Question #10: 5/5 ---------------------------- Average Score: 4.00/5 Total Score: 40/50
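Putting the pieces above together, a small DK-bench run could look like the following sketch; the question file content, its location, and the model path are examples only:

# Build a two-question DK-bench input file (one JSON object per line)
cat > ~/dk-bench-questions.jsonl <<'EOF'
{"user_input":"What is the capital of Canada?","reference":"The capital of Canada is Ottawa."}
{"user_input":"Which province is Ottawa located in?","reference":"Ottawa is located in the province of Ontario."}
EOF

# Run the benchmark against a downloaded model
ilab model evaluate --benchmark dk_bench \
    --input-questions ~/dk-bench-questions.jsonl \
    --model ~/.cache/instructlab/models/instructlab/granite-7b-lab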
[ "ilab model evaluate --benchmark mmlu_branch --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> --tasks-dir ~/.local/share/instructlab/datasets/<generation-date>/<node-dataset> --base-model ~/.cache/instructlab/models/granite-7b-starter", "KNOWLEDGE EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab/ (0.74/1.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(0.78/1.0) ### IMPROVEMENTS (0.0 to 1.0): 1. tonsils: 0.74 -> 0.78 (+0.04)", "ilab model evaluate --benchmark mt_bench_branch --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> --judge-model ~/.cache/instructlab/models/prometheus-8x7b-v2-0 --branch <worker-branch> --base-branch <worker-branch>", "SKILL EVALUATION REPORT ## BASE MODEL (SCORE) /home/user/.cache/instructlab/models/instructlab/granite-7b-lab (5.78/10.0) ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(6.00/10.0) ### IMPROVEMENTS (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/object_identification/qna.yaml: 4.0 -> 6.67 (+2.67) 2. foundational_skills/reasoning/theory_of_mind/qna.yaml: 3.12 -> 4.0 (+0.88) 3. foundational_skills/reasoning/linguistics_reasoning/logical_sequence_of_words/qna.yaml: 9.33 -> 10.0 (+0.67) 4. foundational_skills/reasoning/logical_reasoning/tabular/qna.yaml: 5.67 -> 6.33 (+0.67) 5. foundational_skills/reasoning/common_sense_reasoning/qna.yaml: 1.67 -> 2.33 (+0.67) 6. foundational_skills/reasoning/logical_reasoning/causal/qna.yaml: 5.67 -> 6.0 (+0.33) 7. foundational_skills/reasoning/logical_reasoning/general/qna.yaml: 6.6 -> 6.8 (+0.2) 8. compositional_skills/writing/grounded/editing/content/qna.yaml: 6.8 -> 7.0 (+0.2) 9. compositional_skills/general/synonyms/qna.yaml: 4.5 -> 4.67 (+0.17) ### REGRESSIONS (0.0 to 10.0): 1. foundational_skills/reasoning/unconventional_reasoning/lower_score_wins/qna.yaml: 5.67 -> 4.0 (-1.67) 2. foundational_skills/reasoning/mathematical_reasoning/qna.yaml: 7.33 -> 6.0 (-1.33) 3. foundational_skills/reasoning/temporal_reasoning/qna.yaml: 5.67 -> 4.67 (-1.0) ### NO CHANGE (0.0 to 10.0): 1. foundational_skills/reasoning/linguistics_reasoning/odd_one_out/qna.yaml (9.33) 2. 
compositional_skills/grounded/linguistics/inclusion/qna.yaml (6.5)", "ilab model evaluate --benchmark mmlu --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665", "KNOWLEDGE EVALUATION REPORT ## MODEL (SCORE) /home/user/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_665 ### SCORES (0.0 to 1.0): mmlu_abstract_algebra - 0.31 mmlu_anatomy - 0.46 mmlu_astronomy - 0.52 mmlu_business_ethics - 0.55 mmlu_clinical_knowledge - 0.57 mmlu_college_biology - 0.56 mmlu_college_chemistry - 0.38 mmlu_college_computer_science - 0.46", "ilab model evaluate --benchmark mt_bench --model ~/.local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665", "SKILL EVALUATION REPORT ## MODEL (SCORE) /home/user/local/share/instructlab/phased/phases2/checkpoints/hf_format/samples_665(7.27/10.0) ### TURN ONE (0.0 to 10.0): 7.48 ### TURN TWO (0.0 to 10.0): 7.05", "{\"user_input\":\"What is the capital of Canada?\",\"reference\":\"The capital of Canada is Ottawa.\"}", "ilab model evaluate --benchmark dk_bench --input-questions <path-to-jsonl-file> --model <path-to-model>", "ilab model evaluate --benchmark dk_bench --input-questions /home/use/path/to/questions.jsonl --model ~/.cache/instructlab/models/instructlab/granite-7b-lab", "DK-BENCH REPORT ## MODEL: granite-7b-lab Question #1: 5/5 Question #2: 5/5 Question #3: 5/5 Question #4: 5/5 Question #5: 2/5 Question #6: 3/5 Question #7: 2/5 Question #8: 3/5 Question #9: 5/5 Question #10: 5/5 ---------------------------- Average Score: 4.00/5 Total Score: 40/50" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/generating_a_custom_llm_using_rhel_ai/evaluating_model
Chapter 2. Ceph Dashboard installation and access
Chapter 2. Ceph Dashboard installation and access As a system administrator, you can access the dashboard with the credentials provided on bootstrapping the cluster. Cephadm installs the dashboard by default. Following is an example of the dashboard URL: Note Update the browser and clear the cookies prior to accessing the dashboard URL. The following are the Cephadm bootstrap options that are available for the Ceph dashboard configuration: [--initial-dashboard-user INITIAL_DASHBOARD_USER ] - Use this option while bootstrapping to set the initial dashboard user. [--initial-dashboard-password INITIAL_DASHBOARD_PASSWORD ] - Use this option while bootstrapping to set the initial dashboard password. [--ssl-dashboard-port SSL_DASHBOARD_PORT ] - Use this option while bootstrapping to set a custom dashboard port other than the default 8443. [--dashboard-key DASHBOARD_KEY ] - Use this option while bootstrapping to set a custom key for SSL. [--dashboard-crt DASHBOARD_CRT ] - Use this option while bootstrapping to set a custom certificate for SSL. [--skip-dashboard] - Use this option while bootstrapping to deploy Ceph without the dashboard. [--dashboard-password-noupdate] - Use this option while bootstrapping if you used the two options above and do not want to reset the password at the first login. [--allow-fqdn-hostname] - Use this option while bootstrapping to allow hostnames that are fully qualified. [--skip-prepare-host] - Use this option while bootstrapping to skip preparing the host. Note To avoid connectivity issues with the dashboard-related external URL, use fully qualified domain names (FQDN) for hostnames, for example, host01.ceph.redhat.com . Note Open the Grafana URL directly in the client internet browser and accept the security exception to see the graphs on the Ceph dashboard. Reload the browser to view the changes. Example Note While bootstrapping the storage cluster using cephadm , you can use the --image option for either custom container images or local container images. Note You have to change the password the first time you log in to the dashboard with the credentials provided on bootstrapping only if the --dashboard-password-noupdate option is not used while bootstrapping. You can find the Ceph dashboard credentials in the /var/log/ceph/cephadm.log file. Search with the "Ceph Dashboard is now available at" string. This section covers the following tasks: Network port requirements for Ceph dashboard . Accessing the Ceph dashboard . Expanding the cluster on the Ceph dashboard . Upgrading a cluster . Toggling Ceph dashboard features . Understanding the landing page of the Ceph dashboard . Enabling Red Hat Ceph Storage Dashboard manually . Changing the dashboard password using the Ceph dashboard . Changing the Ceph dashboard password using the command line interface . Setting admin user password for Grafana . Creating an admin account for syncing users to the Ceph dashboard . Syncing users to the Ceph dashboard using Red Hat Single Sign-On . Enabling single sign-on for the Ceph dashboard . Disabling single sign-on for the Ceph dashboard . 2.1. Network port requirements for Ceph Dashboard The Ceph dashboard components use certain TCP network ports which must be accessible. By default, the network ports are automatically opened in firewalld during installation of Red Hat Ceph Storage. Table 2.1.
TCP Port Requirements Port Use Originating Host Destination Host 8443 The dashboard web interface IP addresses that need access to Ceph Dashboard UI and the host under Grafana server, since the AlertManager service can also initiate connections to the Dashboard for reporting alerts. The Ceph Manager hosts. 3000 Grafana IP addresses that need access to Grafana Dashboard UI and all Ceph Manager hosts and Grafana server. The host or hosts running Grafana server. 2049 NFS-Ganesha IP addresses that need access to NFS. The IP addresses that provide NFS services. 9095 Default Prometheus server for basic Prometheus graphs IP addresses that need access to Prometheus UI and all Ceph Manager hosts and Grafana server or Hosts running Prometheus. The host or hosts running Prometheus. 9093 Prometheus Alertmanager IP addresses that need access to Alertmanager Web UI and all Ceph Manager hosts and Grafana server or Hosts running Prometheus. All Ceph Manager hosts and the host under Grafana server. 9094 Prometheus Alertmanager for configuring a highly available cluster made from multiple instances All Ceph Manager hosts and the host under Grafana server. Prometheus Alertmanager High Availability (peer daemon sync), so both src and dst should be hosts running Prometheus Alertmanager. 9100 The Prometheus node-exporter daemon Hosts running Prometheus that need to view Node Exporter metrics Web UI and All Ceph Manager hosts and Grafana server or Hosts running Prometheus. All storage cluster hosts, including MONs, OSDS, Grafana server host. 9283 Ceph Manager Prometheus exporter module Hosts running Prometheus that need access to Ceph Exporter metrics Web UI and Grafana server. All Ceph Manager hosts. Additional Resources For more information, see the Red Hat Ceph Storage Installation Guide . For more information, see Using and configuring firewalls in Configuring and managing networking . 2.2. Accessing the Ceph dashboard You can access the Ceph dashboard to administer and monitor your Red Hat Ceph Storage cluster. Prerequisites Successful installation of Red Hat Ceph Storage Dashboard. NTP is synchronizing clocks properly. Procedure Enter the following URL in a web browser: Syntax Replace: HOST_NAME with the fully qualified domain name (FQDN) of the active manager host. PORT with port 8443 Example You can also get the URL of the dashboard by running the following command in the Cephadm shell: Example This command will show you all endpoints that are currently configured. Look for the dashboard key to obtain the URL for accessing the dashboard. On the login page, enter the username admin and the default password provided during bootstrapping. You have to change the password the first time you log in to the Red Hat Ceph Storage dashboard. After logging in, the dashboard default landing page is displayed, which provides details, a high-level overview of status, performance, inventory, and capacity metrics of the Red Hat Ceph Storage cluster. Figure 2.1. Ceph dashboard landing page Click the menu icon ( ) on the dashboard landing page to collapse or display the options in the vertical menu. Additional Resources For more information, see Changing the dashboard password using the Ceph dashboard in the Red Hat Ceph Storage Dashboard guide . 2.3. 
Expanding the cluster on the Ceph dashboard You can use the dashboard to expand the Red Hat Ceph Storage cluster for adding hosts, adding OSDs, and creating services such as Alertmanager, Cephadm-exporter, CephFS-mirror, Grafana, ingress, MDS, NFS, node-exporter, Prometheus, RBD-mirror, and Ceph Object Gateway. Once you bootstrap a new storage cluster, the Ceph Monitor and Ceph Manager daemons are created and the cluster is in HEALTH_WARN state. After creating all the services for the cluster on the dashboard, the health of the cluster changes from HEALTH_WARN to HEALTH_OK status. Prerequisites Bootstrapped storage cluster. See Bootstrapping a new storage cluster section in the Red Hat Ceph Storage Installation Guide for more details. At least cluster-manager role for the user on the Red Hat Ceph Storage Dashboard. See the User roles and permissions on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. Procedure Copy the admin key from the bootstrapped host to other hosts: Syntax Example Log in to the dashboard with the default credentials provided during bootstrap. Change the password and log in to the dashboard with the new password . On the landing page, click Expand Cluster . Note Clicking Expand Cluster opens a wizard taking you through the expansion steps. To skip and add hosts and services separately, click Skip . Figure 2.2. Expand cluster Add hosts. This needs to be done for each host in the storage cluster. In the Add Hosts step, click Add . Provide the hostname. This is same as the hostname that was provided while copying the key from the bootstrapped host. Note Add multiple hosts by using a comma-separated list of host names, a range expression, or a comma separated range expression. Optional: Provide the respective IP address of the host. Optional: Select the labels for the hosts on which the services are going to be created. Click the pencil icon to select or add new labels. Click Add Host . The new host is displayed in the Add Hosts pane. Click . Create OSDs: In the Create OSDs step, for Primary devices, Click Add . In the Primary Devices window, filter for the device and select the device. Click Add . Optional: In the Create OSDs window, if you have any shared devices such as WAL or DB devices, then add the devices. Optional: In the Features section, select Encryption to encrypt the features. Click . Create services: In the Create Services step, click Create . In the Create Service form: Select a service type. Provide the service ID. The ID is a unique name for the service. This ID is used in the service name, which is service_type.service_id . ... Optional: Select if the service is Unmanaged . + When Unmanaged services is selected, the orchestrator will not start or stop any daemon associated with this service. Placement and all other properties are ignored. Select if the placement is by hosts or label. Select the hosts. In the Count field, provide the number of daemons or services that need to be deployed. Click Create Service . The new service is displayed in the Create Services pane. In the Create Service window, Click . Review the cluster expansion details. Review the Cluster Resources , Hosts by Services , Host Details . To edit any parameters, click Back and follow the steps. Figure 2.3. Review cluster Click Expand Cluster . The Cluster expansion displayed notification is displayed and the cluster status changes to HEALTH_OK on the dashboard. Verification Log in to the cephadm shell: Example Run the ceph -s command. 
Example The health of the cluster is HEALTH_OK . Additional Resources See the User roles and permissions on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. See the Red Hat Ceph Storage Installation Guide for more details. 2.4. Upgrading a cluster Upgrade Ceph clusters using the dashboard. Cluster images are pulled automatically from registry.redhat.io . Optionally, use custom images for upgrade. Procedure View if cluster upgrades are available and upgrade as needed from Administration > Upgrade on the dashboard. Note If the dashboard displays the Not retrieving upgrades message, check if the registries were added to the container configuration files with the appropriate log in credentials to Podman or docker. Click Pause or Stop during the upgrade process, if needed. The upgrade progress is shown in the progress bar along with information messages during the upgrade. Note When stopping the upgrade, the upgrade is first paused and then prompts you to stop the upgrade. Optional. View cluster logs during the upgrade process from the Cluster logs section of the Upgrade page. Verify that the upgrade is completed successfully by confirming that the cluster status displays OK state. 2.5. Toggling Ceph dashboard features You can customize the Red Hat Ceph Storage dashboard components by enabling or disabling features on demand. All features are enabled by default. When disabling a feature, the web-interface elements become hidden and the associated REST API end-points reject any further requests for that feature. Enabling and disabling dashboard features can be done from the command-line interface or the web interface. Available features: Ceph Block Devices: Image management, rbd Mirroring, mirroring Ceph File System, cephfs Ceph Object Gateway, rgw NFS Ganesha gateway, nfs Note By default, the Ceph Manager is collocated with the Ceph Monitor. Note You can disable multiple features at once. Important Once a feature is disabled, it can take up to 20 seconds to reflect the change in the web interface. Prerequisites Installation and configuration of the Red Hat Ceph Storage dashboard software. User access to the Ceph Manager host or the dashboard web interface. Root level access to the Ceph Manager host. Procedure To toggle the dashboard features from the dashboard web interface: On the dashboard landing page, go to Administration->Manager Modules and select the dashboard module. Click Edit . In the Edit Manager module form, you can enable or disable the dashboard features by selecting or clearing the check boxes to the different feature names. After the selections are made, click Update . To toggle the dashboard features from the command-line interface: Log in to the Cephadm shell: Example List the feature status: Example Disable a feature: This example disables the Ceph Object Gateway feature. Enable a feature: This example enables the Ceph Filesystem feature. 2.6. Understanding the landing page of the Ceph dashboard The landing page displays an overview of the entire Ceph cluster using navigation bars and individual panels. The menu bar provides the following options: Tasks and Notifications Provides task and notification messages. Help Provides links to the product and REST API documentation, details about the Red Hat Ceph Storage Dashboard, and a form to report an issue. Dashboard Settings Gives access to user management and telemetry configuration. User Use this menu to see log in status, to change a password, and to sign out of the dashboard. 
Figure 2.4. Menu bar The navigation menu can be opened or hidden by clicking the navigation menu icon . Dashboard The main dashboard displays specific information about the state of the cluster. The main dashboard can be accessed at any time by clicking Dashboard from the navigation menu. The dashboard landing page organizes the panes into different categories. Figure 2.5. Ceph dashboard landing page Details Displays specific cluster information and if telemetry is active or inactive. Status Displays the health of the cluster and host and daemon states. The current health status of the Ceph storage cluster is displayed. Danger and warning alerts are displayed directly on the landing page. Click View alerts for a full list of alerts. Capacity Displays storage usage metrics. This is displayed as a graph of used, warning, and danger. The numbers are in percentages and in GiB. Inventory Displays the different parts of the cluster, how many are available, and their status. Link directly from Inventory to specific inventory items, where available. Hosts Displays the total number of hosts in the Ceph storage cluster. Monitors Displays the number of Ceph Monitors and the quorum status. Managers Displays the number and status of the Manager Daemons. OSDs Displays the total number of OSDs in the Ceph Storage cluster and the number that are up , and in . Pools Displays the number of storage pools in the Ceph cluster. PGs Displays the total number of placement groups (PGs). The PG states are divided into Working and Warning to simplify the display. Each one encompasses multiple states. + The Working state includes PGs with any of the following states: activating backfill_wait backfilling creating deep degraded forced_backfill forced_recovery peering peered recovering recovery_wait repair scrubbing snaptrim snaptrim_wait + The Warning state includes PGs with any of the following states: backfill_toofull backfill_unfound down incomplete inconsistent recovery_toofull recovery_unfound remapped snaptrim_error stale undersized Object Gateways Displays the number of Object Gateways in the Ceph storage cluster. Metadata Servers Displays the number and status of metadata servers for Ceph File Systems (CephFS). Cluster Utilization The Cluster Utilization pane displays information related to data transfer speeds. Select the time range for the data output from the list. Select a range between the last 5 minutes to the last 24 hours. Used Capacity (RAW) Displays usage in GiB. IOPS Displays total I/O read and write operations per second. OSD Latencies Displays total applies and commits per millisecond. Client Throughput Displays total client read and write throughput in KiB per second. Recovery Throughput Displays the rate of cluster healing and balancing operations. For example, the status of any background data that may be moving due to a loss of disk is displayed. The information is displayed in bytes per second. Additional Resources For more information, see Monitoring the cluster on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard guide for more information. 2.7. Changing the dashboard password using the Ceph dashboard By default, the password for accessing dashboard is randomly generated by the system while bootstrapping the cluster. You have to change the password the first time you log in to the Red Hat Ceph Storage dashboard. You can change the password for the admin user using the dashboard. Prerequisites A running Red Hat Ceph Storage cluster. 
Procedure Log in to the dashboard: Syntax Go to User->Change password on the menu bar. Enter the old password, for verification. In the New password field enter a new password. Passwords must contain a minimum of 8 characters and cannot be the same as the last one. In the Confirm password field, enter the new password again to confirm. Click Change Password . You will be logged out and redirected to the login screen. A notification appears confirming the password is changed. 2.8. Changing the Ceph dashboard password using the command line interface If you have forgotten your Ceph dashboard password, you can change the password using the command line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the host on which the dashboard is installed. Procedure Log into the Cephadm shell: Example Create the dashboard_password.yml file: Example Edit the file and add the new dashboard password: Example Reset the dashboard password: Syntax Example Verification Log in to the dashboard with your new password. 2.9. Setting admin user password for Grafana By default, cephadm does not create an admin user for Grafana. With the Ceph Orchestrator, you can create an admin user and set the password. With these credentials, you can log in to the storage cluster's Grafana URL with the given password for the admin user. Prerequisites A running Red Hat Ceph Storage cluster with the monitoring stack installed. Root-level access to the cephadm host. The dashboard module enabled. Procedure As a root user, create a grafana.yml file and provide the following details: Syntax Example Mount the grafana.yml file under a directory in the container: Example Note Every time you exit the shell, you have to mount the file in the container before deploying the daemon. Optional: Check if the dashboard Ceph Manager module is enabled: Example Optional: Enable the dashboard Ceph Manager module: Example Apply the specification using the orch command: Syntax Example Redeploy grafana service: Example This creates an admin user called admin with the given password and the user can log in to the Grafana URL with these credentials. Verification: Log in to Grafana with the credentials: Syntax Example 2.10. Enabling Red Hat Ceph Storage Dashboard manually If you have installed a Red Hat Ceph Storage cluster by using --skip-dashboard option during bootstrap, you can see that the dashboard URL and credentials are not available in the bootstrap output. You can enable the dashboard manually using the command-line interface. Although the monitoring stack components such as Prometheus, Grafana, Alertmanager, and node-exporter are deployed, they are disabled and you have to enable them manually. Prerequisite A running Red Hat Ceph Storage cluster installed with --skip-dashboard option during bootstrap. Root-level access to the host on which the dashboard needs to be enabled. Procedure Log into the Cephadm shell: Example Check the Ceph Manager services: Example You can see that the Dashboard URL is not configured. Enable the dashboard module: Example Create the self-signed certificate for the dashboard access: Example Note You can disable the certificate verification to avoid certification errors. Check the Ceph Manager services: Example Create the admin user and password to access the Red Hat Ceph Storage dashboard: Syntax Example Enable the monitoring stack. See the Enabling monitoring stack section in the Red Hat Ceph Storage Dashboard Guide for details. 
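As a compact recap of the manual enablement procedure above, the following shell sketch strings together the commands from this section; the password value and file name are examples only, and the self-signed certificate step can be skipped if you provide your own certificate.
cephadm shell
# Enable the dashboard Ceph Manager module and create a self-signed certificate
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
# Create an administrator account (example password written to a temporary file)
echo -n "p@ssw0rd" > password.txt
ceph dashboard ac-user-create admin -i password.txt administrator
# Confirm that the dashboard endpoint is now listed
ceph mgr services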
Additional Resources See the Deploying the monitoring stack using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide . 2.11. Creating an admin account for syncing users to the Ceph dashboard You have to create an admin account to synchronize users to the Ceph dashboard. After creating the account, use Red Hat Single Sign-on (SSO) to synchronize users to the Ceph dashboard. See the Syncing users to the Ceph dashboard using Red Hat Single Sign-On section in the Red Hat Ceph Storage Dashboard Guide . Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level access to the dashboard. Users are added to the dashboard. Root-level access on all the hosts. Java OpenJDK installed. For more information, see the Installing a JRE on RHEL by using yum section of the Installing and using OpenJDK 8 for RHEL guide for OpenJDK on the Red Hat Customer Portal. Red hat Single Sign-On installed from a ZIP file. See the Installing RH-SSO from a ZIP File section of the Server Installation and Configuration Guide for Red Hat Single Sign-On on the Red Hat Customer Portal. Procedure Download the Red Hat Single Sign-On 7.4.0 Server on the system where Red Hat Ceph Storage is installed. Unzip the folder: Navigate to the standalone/configuration directory and open the standalone.xml for editing: From the bin directory of the newly created rhsso-7.4.0 folder, run the add-user-keycloak script to add the initial administrator user: Replace all instances of localhost and two instances of 127.0.0.1 with the IP address of the machine where Red Hat SSO is installed. Start the server. From the bin directory of rh-sso-7.4 folder, run the standalone boot script: Create the admin account in https: IP_ADDRESS :8080/auth with a username and password: Note You have to create an admin account only the first time that you log into the console. Log into the admin console with the credentials created. Additional Resources For adding roles for users on the dashboard, see the Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. For creating users on the dashboard, see the Creating users on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . 2.12. Syncing users to the Ceph dashboard using Red Hat Single Sign-On You can use Red Hat Single Sign-on (SSO) with Lightweight Directory Access Protocol (LDAP) integration to synchronize users to the Red Hat Ceph Storage Dashboard. The users are added to specific realms in which they can access the dashboard through SSO without any additional requirements of a password. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level access to the dashboard. Users are added to the dashboard. See the Creating users on Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . Root-level access on all the hosts. Admin account created for syncing users. See the Creating an admin account for syncing users to the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . Procedure To create a realm, click the Master drop-down menu. In this realm, you can provide access to users and applications. In the Add Realm window, enter a case-sensitive realm name and set the parameter Enabled to ON and click Create : In the Realm Settings tab, set the following parameters and click Save : Enabled - ON User-Managed Access - ON Make a note of the link address of SAML 2.0 Identity Provider Metadata to paste in Client Settings . 
In the Clients tab, click Create : In the Add Client window, set the following parameters and click Save : Client ID - BASE_URL:8443/auth/saml2/metadata Example https://example.ceph.redhat.com:8443/auth/saml2/metadata Client Protocol - saml In the Client window, under Settings tab, set the following parameters: Table 2.2. Client Settings tab Name of the parameter Syntax Example Client ID BASE_URL:8443/auth/saml2/metadata https://example.ceph.redhat.com:8443/auth/saml2/metadata Enabled ON ON Client Protocol saml saml Include AuthnStatement ON ON Sign Documents ON ON Signature Algorithm RSA_SHA1 RSA_SHA1 SAML Signature Key Name KEY_ID KEY_ID Valid Redirect URLs BASE_URL:8443/* https://example.ceph.redhat.com:8443/* Base URL BASE_URL:8443 https://example.ceph.redhat.com:8443/ Master SAML Processing URL https://localhost:8080/auth/realms/ REALM_NAME /protocol/saml/descriptor https://localhost:8080/auth/realms/Ceph_LDAP/protocol/saml/descriptor Note Paste the link of SAML 2.0 Identity Provider Metadata from Realm Settings tab. Under Fine Grain SAML Endpoint Configuration, set the following parameters and click Save : Table 2.3. Fine Grain SAML configuration Name of the parameter Syntax Example Assertion Consumer Service POST Binding URL BASE_URL:8443/#/dashboard https://example.ceph.redhat.com:8443/#/dashboard Assertion Consumer Service Redirect Binding URL BASE_URL:8443/#/dashboard https://example.ceph.redhat.com:8443/#/dashboard Logout Service Redirect Binding URL BASE_URL:8443/ https://example.ceph.redhat.com:8443/ In the Clients window, Mappers tab, set the following parameters and click Save : Table 2.4. Client Mappers tab Name of the parameter Value Protocol saml Name username Mapper Property User Property Property username SAML Attribute name username In the Clients Scope tab, select role_list : In Mappers tab, select role list , set the Single Role Attribute to ON. Select User_Federation tab: In User Federation window, select ldap from the drop-down menu: In User_Federation window, Settings tab, set the following parameters and click Save : Table 2.5. User Federation Settings tab Name of the parameter Value Console Display Name rh-ldap Import Users ON Edit_Mode READ_ONLY Username LDAP attribute username RDN LDAP attribute username UUID LDAP attribute nsuniqueid User Object Classes inetOrgPerson, organizationalPerson, rhatPerson Connection URL Example: ldap://ldap.corp.redhat.com Click Test Connection . You will get a notification that the LDAP connection is successful. Users DN ou=users, dc=example, dc=com Bind Type simple Click Test authentication . You will get a notification that the LDAP authentication is successful. In Mappers tab, select first name row and edit the following parameter and Click Save : LDAP Attribute - givenName In User_Federation tab, Settings tab, Click Synchronize all users : You will get a notification that the sync of users is finished successfully. In the Users tab, search for the user added to the dashboard and click the Search icon: To view the user , click the specific row. You should see the federation link as the name provided for the User Federation . Important Do not add users manually as the users will not be synchronized by LDAP. If added manually, delete the user by clicking Delete . Note If Red Hat SSO is currently being used within your work environment, be sure to first enable SSO. For more information, see the Enabling Single Sign-On for the Ceph Dashboard section in the Red Hat Ceph Storage Dashboard Guide . 
Verification Users added to the realm and the dashboard can access the Ceph dashboard with their mail address and password. Example https://example.ceph.redhat.com:8443 Additional Resources For adding roles for users on the dashboard, see the Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. 2.13. Enabling Single Sign-On for the Ceph Dashboard The Ceph Dashboard supports external authentication of users with the Security Assertion Markup Language (SAML) 2.0 protocol. Before using single sign-On (SSO) with the Ceph dashboard, create the dashboard user accounts and assign the desired roles. The Ceph Dashboard performs authorization of the users and the authentication process is performed by an existing Identity Provider (IdP). You can enable single sign-on using the SAML protocol. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Dashboard. Root-level access to The Ceph Manager hosts. Procedure To configure SSO on Ceph Dashboard, run the following command: Syntax Example Replace CEPH_MGR_HOST with Ceph mgr host. For example, host01 CEPH_DASHBOARD_BASE_URL with the base URL where Ceph Dashboard is accessible. IDP_METADATA with the URL to remote or local path or content of the IdP metadata XML. The supported URL types are http, https, and file. Optional : IDP_USERNAME_ATTRIBUTE with the attribute used to get the username from the authentication response. Defaults to uid . Optional : IDP_ENTITY_ID with the IdP entity ID when more than one entity ID exists on the IdP metadata. Optional : SP_X_509_CERT with the file path of the certificate used by Ceph Dashboard for signing and encryption. Optional : SP_PRIVATE_KEY with the file path of the private key used by Ceph Dashboard for signing and encryption. Verify the current SAML 2.0 configuration: Syntax Example To enable SSO, run the following command: Syntax Example Open your dashboard URL. Example On the SSO page, enter the login credentials. SSO redirects to the dashboard web interface. Additional Resources To disable single sign-on, see Disabling Single Sign-on for the Ceph Dashboard in the Red Hat Ceph StorageDashboard Guide . 2.14. Disabling Single Sign-On for the Ceph Dashboard You can disable single sign-on for Ceph Dashboard using the SAML 2.0 protocol. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Dashboard. Root-level access to The Ceph Manager hosts. Single sign-on enabled for Ceph Dashboard Procedure To view status of SSO, run the following command: Syntax Example To disable SSO, run the following command: Syntax Example Additional Resources To enable single sign-on, see Enabling Single Sign-on for the Ceph Dashboard in the Red Hat Ceph StorageDashboard Guide .
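As a quick reference for the two preceding sections, the following shell sketch shows how the SSO state can be inspected and toggled from the Cephadm shell; host01 is an example host taken from this chapter, and the saml2 setup arguments must match your own Identity Provider.
# Check whether SSO is currently enabled
cephadm shell host01 ceph dashboard sso status
# Enable SSO after completing the saml2 setup described above
cephadm shell host01 ceph dashboard sso enable saml2
# Disable SSO again if required
cephadm shell host01 ceph dashboard sso disable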
[ "URL: https://host01:8443/ User: admin Password: zbiql951ar", "cephadm bootstrap --mon-ip 127.0.0.1 --registry-json cephadm.txt --initial-dashboard-user admin --initial-dashboard-password zbiql951ar --dashboard-password-noupdate --allow-fqdn-hostname", "https:// HOST_NAME : PORT", "https://host01:8443", "ceph mgr services", "ssh-copy-id -f -i /etc/ceph/ceph.pub root@ HOST_NAME", "ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02 ssh-copy-id -f -i /etc/ceph/ceph.pub root@host03", "cephadm shell", "ceph -s", "cephadm shell", "ceph dashboard feature status", "ceph dashboard feature disable rgw", "ceph dashboard feature enable cephfs", "https:// HOST_NAME :8443", "cephadm shell", "touch dashboard_password.yml", "vi dashboard_password.yml", "ceph dashboard ac-user-set-password DASHBOARD_USERNAME -i PASSWORD_FILE", "ceph dashboard ac-user-set-password admin -i dashboard_password.yml {\"username\": \"admin\", \"password\": \"USD2bUSD12USDi5RmvN1PolR61Fay0mPgt.GDpcga1QpYsaHUbJfoqaHd1rfFFx7XS\", \"roles\": [\"administrator\"], \"name\": null, \"email\": null, \"lastUpdate\": , \"enabled\": true, \"pwdExpirationDate\": null, \"pwdUpdateRequired\": false}", "service_type: grafana spec: initial_admin_password: PASSWORD", "service_type: grafana spec: initial_admin_password: mypassword", "cephadm shell --mount grafana.yml:/var/lib/ceph/grafana.yml", "ceph mgr module ls", "ceph mgr module enable dashboard", "ceph orch apply -i FILE_NAME .yml", "ceph orch apply -i /var/lib/ceph/grafana.yml", "ceph orch redeploy grafana", "https:// HOST_NAME : PORT", "https://host01:3000/", "cephadm shell", "ceph mgr services { \"prometheus\": \"http://10.8.0.101:9283/\" }", "ceph mgr module enable dashboard", "ceph dashboard create-self-signed-cert", "ceph mgr services { \"dashboard\": \"https://10.8.0.101:8443/\", \"prometheus\": \"http://10.8.0.101:9283/\" }", "echo -n \" PASSWORD \" > PASSWORD_FILE ceph dashboard ac-user-create admin -i PASSWORD_FILE administrator", "echo -n \"p@ssw0rd\" > password.txt ceph dashboard ac-user-create admin -i password.txt administrator", "unzip rhsso-7.4.0.zip", "cd standalone/configuration vi standalone.xml", "./add-user-keycloak.sh -u admin", "./standalone.sh", "cephadm shell CEPH_MGR_HOST ceph dashboard sso setup saml2 CEPH_DASHBOARD_BASE_URL IDP_METADATA IDP_USERNAME_ATTRIBUTE IDP_ENTITY_ID SP_X_509_CERT SP_PRIVATE_KEY", "cephadm shell host01 ceph dashboard sso setup saml2 https://dashboard_hostname.ceph.redhat.com:8443 idp-metadata.xml username https://10.70.59.125:8080/auth/realms/realm_name /home/certificate.txt /home/private-key.txt", "cephadm shell CEPH_MGR_HOST ceph dashboard sso show saml2", "cephadm shell host01 ceph dashboard sso show saml2", "cephadm shell CEPH_MGR_HOST ceph dashboard sso enable saml2 SSO is \"enabled\" with \"SAML2\" protocol.", "cephadm shell host01 ceph dashboard sso enable saml2", "https://dashboard_hostname.ceph.redhat.com:8443", "cephadm shell CEPH_MGR_HOST ceph dashboard sso status", "cephadm shell host01 ceph dashboard sso status SSO is \"enabled\" with \"SAML2\" protocol.", "cephadm shell CEPH_MGR_HOST ceph dashboard sso disable SSO is \"disabled\".", "cephadm shell host01 ceph dashboard sso disable" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/dashboard_guide/ceph-dashboard-installation-and-access
probe::signal.pending
probe::signal.pending Name probe::signal.pending - Examining pending signal Synopsis signal.pending Values name Name of the probe point sigset_size The size of the user-space signal set sigset_add The address of the user-space signal set (sigset_t) Description This probe is used to examine a set of signals pending for delivery to a specific thread. This normally occurs when the do_sigpending kernel function is executed.
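As a hedged illustration that is not part of the reference entry itself, a one-line SystemTap invocation such as the following could print these values whenever the probe fires; the output format and the use of %x for the signal set address are assumptions.
stap -e 'probe signal.pending { printf("%s: sigset_size=%d sigset_add=0x%x\n", name, sigset_size, sigset_add) }'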
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-signal-pending
Red Hat Ansible Lightspeed with IBM watsonx Code Assistant User Guide
Red Hat Ansible Lightspeed with IBM watsonx Code Assistant User Guide Red Hat Ansible Lightspeed with IBM watsonx Code Assistant 2.x_latest Learn how to use Red Hat Ansible Lightspeed with IBM watsonx Code Assistant. Red Hat Customer Content Services
[ "fatal: unable to access 'https://private.repo./mine/ansible-rulebook.git': SSL certificate problem: unable to get local issuer certificate", "kubectl create secret generic <resourcename>-custom-certs --from-file=bundle-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE> 1", "spec: bundle_cacert_secret: <resourcename>-custom-certs", "secretGenerator: - name: <resourcename>-custom-certs files: - bundle-ca.crt=<path+filename> options: disableNameSuffixHash: true", "```yaml spec: extra_settings: - setting: LOGOUT_ALLOWED_HOSTS value: \"'<lightspeed_route-HostName>'\" ```", "curl -H \"Authorization: Bearer <token>\" https://<lightspeed_route>/api/v1/me/", "Install postgresql-server & run postgresql-setup command", "Create a keypair called lightspeed-keypair & create a vpc & create vpc_id var & create a security group that allows SSH & create subnet with 10.0.1.0/24 cidr & create an internet gateway & create a route table", "Install postgresql-server & run postgresql-setup command", "Create a keypair called lightspeed-keypair & create a vpc & create vpc_id var & create a security group that allows SSH & create subnet with 10.0.1.0/24 cidr & create an internet gateway & create a route table", "ansible-content-parser --version ansible-content-parser 0.0.1 using ansible-lint:6.20.0 ansible-core:2.15.4 ansible-lint --version ansible-lint 6.13.1 using ansible 2.15.4 A new release of ansible-lint is available: 6.13.1 -> 6.20.0", "ansible-content-parser --version ansible-content-parser 0.0.1 using ansible-lint:6.20.0 ansible-core:2.15.4 ansible-lint --version ansible-lint 6.20.0 using ansible-core:2.15.4 ansible-compat:4.1.10 ruamel-yaml:0.17.32 ruamel-yaml-clib:0.2.7", "ansible-content-parser --profile min --source-license undefined --source-description Samples --repo-name ansible-tower-samples --repo-url 'https://github.com/ansible/ansible-tower-samples' [email protected]:ansible/ansible-tower-samples.git /var/tmp/out_dir", "cat out_dir/ftdata.jsonl| jq { \"data_source_description\": \"Samples\", \"input\": \"---\\n- name: Hello World Sample\\n hosts: all\\n tasks:\\n - name: Hello Message\", \"license\": \"undefined\", \"module\": \"debug\", \"output\": \" debug:\\n msg: Hello World!\", \"path\": \"hello_world.yml\", \"repo_name\": \"ansible-tower-samples\", \"repo_url\": \"https://github.com/ansible/ansible-tower-samples\" }", "output/ |-- ftdata.jsonl # Training dataset 1 |-- report.txt # A human-readable report 2 | |-- repository/ 3 | |-- (files copied from the source repository) | |-- metadata/ 4 |-- (metadata files generated during the execution)", "schedule: interval: daily", "Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1006)'))", "extra_settings: - setting: ANSIBLE_AI_MODEL_MESH_API_VERIFY_SSL value: false", "extra_settings: - setting: ANSIBLE_AI_MODEL_MESH_API_VERIFY_SSL value: false" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html-single/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_user_guide/index
5.9. Creating Distributed Dispersed Volumes
5.9. Creating Distributed Dispersed Volumes Distributed dispersed volumes support the same configurations of erasure coding as dispersed volumes. The number of bricks in a distributed dispersed volume must be a multiple of (K+M). With this release, the following configurations are supported: Multiple disperse sets containing 6 bricks with redundancy level 2 Multiple disperse sets containing 10 bricks with redundancy level 2 Multiple disperse sets containing 11 bricks with redundancy level 3 Multiple disperse sets containing 12 bricks with redundancy level 4 Multiple disperse sets containing 20 bricks with redundancy level 4 Important Distributed dispersed volume configuration is supported only on JBOD storage. For more information, see Section 19.1.2, "JBOD" . Use gluster volume create to create different types of volumes, and gluster volume info to verify successful volume creation. Prerequisites A trusted storage pool has been created, as described in Section 4.1, "Adding Servers to the Trusted Storage Pool" . Understand how to start and stop volumes, as described in Section 5.10, "Starting Volumes" . Figure 5.5. Illustration of a Distributed Dispersed Volume Creating distributed dispersed volumes Important Red Hat recommends you to review the Distributed Dispersed Volume configuration recommendations explained in Section 11.16, "Recommended Configurations - Dispersed Volume" before creating the Distributed Dispersed volume. Run the gluster volume create command to create the dispersed volume. The syntax is # gluster volume create NEW-VOLNAME disperse-data COUNT [redundancy COUNT ] [transport tcp | rdma (Deprecated) | tcp,rdma] NEW-BRICK... The default value for transport is tcp . Other options can be passed such as auth.allow or auth.reject . See Section 11.1, "Configuring Volume Options" for a full list of parameters. Example 5.9. Distributed Dispersed Volume with Six Storage Servers The above example is illustrated in Figure 5.4, "Illustration of a Dispersed Volume" . In the illustration and example, you are creating 12 bricks from 6 servers. Run # gluster volume start VOLNAME to start the volume. Important The open-behind volume option is enabled by default. If you are accessing the distributed dispersed volume using the SMB protocol, you must disable the open-behind volume option to avoid performance bottleneck on large file workload. Run the following command to disable open-behind volume option: For information on open-behind volume option, see Section 11.1, "Configuring Volume Options" Run gluster volume info command to optionally display the volume information.
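For convenience, the example above can be summarized as the following shell sketch, using the same six servers and twelve bricks (two disperse sets of 4 data plus 2 redundancy bricks each); the volume name, brick paths, and the optional open-behind step (only needed for SMB access on large file workloads) follow the examples in this section.
# Create a distributed dispersed volume across six servers
gluster volume create glustervol disperse-data 4 redundancy 2 transport tcp server1:/rhgs1/brick1 server2:/rhgs2/brick2 server3:/rhgs3/brick3 server4:/rhgs4/brick4 server5:/rhgs5/brick5 server6:/rhgs6/brick6 server1:/rhgs7/brick7 server2:/rhgs8/brick8 server3:/rhgs9/brick9 server4:/rhgs10/brick10 server5:/rhgs11/brick11 server6:/rhgs12/brick12
# Start the volume and review its layout
gluster volume start glustervol
gluster volume info glustervol
# Only for SMB access: disable the open-behind volume option
gluster volume set glustervol open-behind off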
[ "gluster v create glustervol disperse-data 4 redundancy 2 transport tcp server1:/rhgs1/brick1 server2:/rhgs2/brick2 server3:/rhgs3/brick3 server4:/rhgs4/brick4 server5:/rhgs5/brick5 server6:/rhgs6/brick6 server1:/rhgs7/brick7 server2:/rhgs8/brick8 server3:/rhgs9/brick9 server4:/rhgs10/brick10 server5:/rhgs11/brick11 server6:/rhgs12/brick12 volume create: glutervol: success: please start the volume to access data.", "gluster v start glustervol volume start: glustervol: success", "gluster volume set VOLNAME open-behind off" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-Creating_Distributed_Dispered_Volumes_1
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open-source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in four LTS versions: OpenJDK 8u, OpenJDK 11u, OpenJDK 17u, and OpenJDK 21u. Binary files for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 Operating Systems including Red Hat Enterprise Linux and Ubuntu.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.432_release_notes/pr01
Chapter 1. Deploying Developer Hub on AKS with the Operator
Chapter 1. Deploying Developer Hub on AKS with the Operator You can deploy your Developer Hub on AKS using the Red Hat Developer Hub Operator. Procedure Obtain the Red Hat Developer Hub Operator manifest file, named rhdh-operator-<VERSION>.yaml , and modify the default configuration of db-statefulset.yaml and deployment.yaml by adding the following fragment: securityContext: fsGroup: 300 Following is the specified locations in the manifests: db-statefulset.yaml: | spec.template.spec deployment.yaml: | spec.template.spec Apply the modified Operator manifest to your Kubernetes cluster: kubectl apply -f rhdh-operator-<VERSION>.yaml Note Execution of the command is cluster-scoped and requires appropriate cluster privileges. Create an ImagePull Secret named rhdh-pull-secret using your Red Hat credentials to access images from the protected registry.redhat.io as shown in the following example: kubectl -n <your_namespace> create secret docker-registry rhdh-pull-secret \ --docker-server=registry.redhat.io \ --docker-username=<redhat_user_name> \ --docker-password=<redhat_password> \ --docker-email=<email> Create an Ingress manifest file, named rhdh-ingress.yaml , specifying your Developer Hub service name as follows: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: rhdh-ingress namespace: <your_namespace> spec: ingressClassName: webapprouting.kubernetes.azure.com rules: - http: paths: - path: / pathType: Prefix backend: service: name: backstage-<your-CR-name> port: name: http-backend To deploy the created Ingress, run the following command: kubectl -n <your_namespace> apply -f rhdh-ingress.yaml Create a ConfigMap named app-config-rhdh containing the Developer Hub configuration using the following example: apiVersion: v1 kind: ConfigMap metadata: name: app-config-rhdh data: "app-config-rhdh.yaml": | app: title: Red Hat Developer Hub baseUrl: https://<app_address> backend: auth: externalAccess: - type: legacy options: subject: legacy-default-config secret: "USD{BACKEND_SECRET}" baseUrl: https://<app_address> cors: origin: https://<app_address> Create a Secret named secrets-rhdh and add a key named BACKEND_SECRET with a Base64-encoded string value as shown in the following example: apiVersion: v1 kind: Secret metadata: name: secrets-rhdh stringData: BACKEND_SECRET: "xxx" Create a Custom Resource (CR) manifest file named rhdh.yaml and include the previously created rhdh-pull-secret as follows: apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: <your-rhdh-cr> spec: application: imagePullSecrets: - rhdh-pull-secret appConfig: configMaps: - name: "app-config-rhdh" extraEnvs: secrets: - name: "secrets-rhdh" Apply the CR manifest to your namespace: kubectl -n <your_namespace> apply -f rhdh.yaml Access the deployed Developer Hub using the URL: https://<app_address> , where <app_address> is the Ingress address obtained earlier (for example, https://108.141.70.228 ). Optional: To delete the CR, run the following command: kubectl -n <your_namespace> delete -f rhdh.yaml
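After the CR has been applied, a few standard kubectl queries (not part of the procedure above, shown here only as a hedged verification sketch) can help confirm that the deployment is progressing; replace <your_namespace> with the namespace used in the previous steps.
# Watch the Developer Hub and database pods come up
kubectl -n <your_namespace> get pods
# Confirm the Ingress created earlier has an address assigned
kubectl -n <your_namespace> get ingress rhdh-ingress
# Check the backing services for the deployment
kubectl -n <your_namespace> get services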
[ "securityContext: fsGroup: 300", "db-statefulset.yaml: | spec.template.spec deployment.yaml: | spec.template.spec", "apply -f rhdh-operator-<VERSION>.yaml", "-n <your_namespace> create secret docker-registry rhdh-pull-secret --docker-server=registry.redhat.io --docker-username=<redhat_user_name> --docker-password=<redhat_password> --docker-email=<email>", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: rhdh-ingress namespace: <your_namespace> spec: ingressClassName: webapprouting.kubernetes.azure.com rules: - http: paths: - path: / pathType: Prefix backend: service: name: backstage-<your-CR-name> port: name: http-backend", "-n <your_namespace> apply -f rhdh-ingress.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: app-config-rhdh data: \"app-config-rhdh.yaml\": | app: title: Red Hat Developer Hub baseUrl: https://<app_address> backend: auth: externalAccess: - type: legacy options: subject: legacy-default-config secret: \"USD{BACKEND_SECRET}\" baseUrl: https://<app_address> cors: origin: https://<app_address>", "apiVersion: v1 kind: Secret metadata: name: secrets-rhdh stringData: BACKEND_SECRET: \"xxx\"", "apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: <your-rhdh-cr> spec: application: imagePullSecrets: - rhdh-pull-secret appConfig: configMaps: - name: \"app-config-rhdh\" extraEnvs: secrets: - name: \"secrets-rhdh\"", "-n <your_namespace> apply -f rhdh.yaml", "-n <your_namespace> delete -f rhdh.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_red_hat_developer_hub_on_microsoft_azure_kubernetes_service/proc-rhdh-deploy-aks-operator_title-install-rhdh-aks
12.5. Suppressing Repetitive Log Messages
12.5. Suppressing Repetitive Log Messages Repetitive log messages in the Red Hat Gluster Storage Server can be configured by setting a log-flush-timeout period and by defining a log-buf-size buffer size options with the gluster volume set command. Suppressing Repetitive Log Messages with a Timeout Period To set the timeout period on the bricks: Example 12.13. Set a timeout period on the bricks To set the timeout period on the clients: Example 12.14. Set a timeout period on the clients To set the timeout period on glusterd : Example 12.15. Set a timeout period on the glusterd Suppressing Repetitive Log Messages by defining a Buffer Size The maximum number of unique log messages that can be suppressed until the timeout or buffer overflow, whichever occurs first on the bricks. To set the buffer size on the bricks: Example 12.16. Set a buffer size on the bricks To set the buffer size on the clients: Example 12.17. Set a buffer size on the clients To set the log buffer size on glusterd : Example 12.18. Set a log buffer size on the glusterd Note To disable suppression of repetitive log messages, set the log-buf-size to zero. See Also: Section 11.1, "Configuring Volume Options"
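Putting the options together, the following shell sketch applies both a timeout and a buffer size to the same volume; the volume name and values are taken from the examples in this section, and suppression is switched off again at the end by setting the buffer size to zero.
# Suppress repeated brick and client log messages on the volume testvol
gluster volume set testvol diagnostics.brick-log-flush-timeout 200sec
gluster volume set testvol diagnostics.client-log-flush-timeout 180sec
gluster volume set testvol diagnostics.brick-log-buf-size 10
gluster volume set testvol diagnostics.client-log-buf-size 15
# Disable suppression again by setting the log buffer size to zero
gluster volume set testvol diagnostics.brick-log-buf-size 0
gluster volume set testvol diagnostics.client-log-buf-size 0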
[ "gluster volume set VOLNAME diagnostics.brick-log-flush-timeout <value in seconds>", "gluster volume set testvol diagnostics.brick-log-flush-timeout 200sec volume set: success", "gluster volume set VOLNAME diagnostics.client-log-flush-timeout <value in seconds>", "gluster volume set testvol diagnostics.client-log-flush-timeout 180sec volume set: success", "glusterd --log-flush-timeout= <value in seconds>", "glusterd --log-flush-timeout=60sec", "gluster volume set VOLNAME diagnostics.brick-log-buf-size <value>", "gluster volume set testvol diagnostics.brick-log-buf-size 10 volume set: success", "gluster volume set VOLNAME diagnostics.client-log-buf-size <value>", "gluster volume set testvol diagnostics.client-log-buf-size 15 volume set: success", "glusterd --log-buf-size= <value>", "glusterd --log-buf-size=10" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/suppressing_repetitive_log_messages
Chapter 2. Migrating to Apache Camel 4
Chapter 2. Migrating to Apache Camel 4 This section provides information that can help you migrate your Apache Camel applications from version 3.20 or higher to 4.0. If you are upgrading from an older Camel 3.x release, such as 3.14, see the individual Upgrade guide to upgrade to the 3.20 release, before upgrading to Apache Camel 4. 2.1. Java versions Apache Camel 4 supports Java 17. Support for Java 11 is dropped. 2.2. Removed Components The following components have been removed: Component Alternative component(s) camel-any23 none camel-atlasmap none camel-atmos none camel-caffeine-lrucache camel-cache, camel-ignite, camel-infinispan camel-cdi camel-spring-boot, camel-quarkus camel-corda none camel-directvm camel-direct camel-dozer camel-mapstruct camel-elasticsearch-rest camel-elasticsearch camel-gora none camel-hbase none camel-hyperledger-aries none camel-iota none camel-ipfs none camel-jbpm none camel-jclouds none camel-johnzon camel-jackson, camel-fastjson, camel-gson camel-microprofile-metrics camel-micrometer, camel-opentelemetry camel-milo none camel-opentracing camel-micrometer, camel-opentelemetry camel-rabbitmq spring-rabbitmq-component camel-rest-swagger camel-openapi-rest camel-restdsl-swagger-plugin camel-restdsl-openapi-plugin camel-resteasy camel-cxf, camel-rest camel-solr none camel-spark none camel-spring-integration none camel-swagger-java camel-openapi-java camel-websocket camel-vertx-websocket camel-websocket-jsr356 camel-vertx-websocket camel-vertx-kafka camel-kafka camel-vm camel-seda camel-weka none camel-xstream camel-jacksonxml camel-zipkin camel-micrometer, camel-opentelemetry 2.3. Logging Camel 4 has upgraded the logging facade API slf4j-api from 1.7 to 2.0. 2.4. JUnit 4 All the camel-test modules that were JUnit 4.x based have been removed. All test modules now use JUnit 5. 2.5. API Changes The following APIs are deprecated and removed in version 4: The org.apache.camel.ExchangePattern has removed InOptionalOut . Removed the getEndpointMap() method from CamelContext . Removed @FallbackConverter as you should use @Converter(fallback = true) instead. Removed the uri attribute on @EndpointInject , @Produce , and @Consume as you should use value (default) instead. For example, @Produce(uri = "kafka:cheese") should be changed to @Produce("kafka:cheese") . Removed label on @UriEndpoint as you should use category instead. Removed all asyncCallback methods on ProducerTemplate . Use asyncSend or asyncRequest instead. Removed org.apache.camel.spi.OnCamelContextStart . Use org.apache.camel.spi.OnCamelContextStarting instead. Removed org.apache.camel.spi.OnCamelContextStop . Use org.apache.camel.spi.OnCamelContextStopping instead. Decoupled the org.apache.camel.ExtendedCamelContext from the org.apache.camel.CamelContext . Replaced adapt() from org.apache.camel.CamelContext with getCamelContextExtension . Decoupled the org.apache.camel.ExtendedExchange from the org.apache.camel.Exchange . Replaced adapt() from org.apache.camel.ExtendedExchange with getExchangeExtension . Exchange failure handling status has moved from being a property defined as ExchangePropertyKey.FAILURE_HANDLED to a member of the ExtendedExchange, accessible via the isFailureHandled() method. Removed Discard and DiscardOldest from org.apache.camel.util.concurrent.ThreadPoolRejectedPolicy . Removed org.apache.camel.builder.SimpleBuilder . It was mostly used internally in Camel with the Java DSL in some situations. Moved org.apache.camel.support.IntrospectionSupport to camel-core-engine for internal use only.
End users should use org.apache.camel.spi.BeanInspection instead. Removed the archetypeCatalogAsXml method from org.apache.camel.catalog.CamelCatalog . The org.apache.camel.health.HealthCheck method isLiveness is now default false instead of true . Added a position method to org.apache.camel.StreamCache . The method configure from the interface org.apache.camel.main.Listener was removed. The org.apache.camel.support.EventNotifierSupport abstract class now implements CamelContextAware . The type for dumpRoutes on CamelContext has changed from boolean to String to allow specifying either xml or yaml. Note The org.apache.camel.support.PluginHelper gives easy access to various extensions and context plugins that were previously available in Camel v3 directly from CamelContext . 2.6. EIP Changes Removed the lang attribute for the <description> on every EIP. The InOnly and InOut EIPs have been removed. Instead, use SetExchangePattern or To where you can specify the exchange pattern to use. 2.6.1. Poll Enrich EIP The polled endpoint URI is now stored as a property on the Exchange (with key CamelToEndpoint ) like all other EIPs. Previously, the URI was stored as a message header. 2.6.2. CircuitBreaker EIP The following options in camel-resilience4j were mistakenly not defined as attributes: Option bulkheadEnabled bulkheadMaxConcurrentCalls bulkheadMaxWaitDuration timeoutEnabled timeoutExecutorService timeoutDuration timeoutCancelRunningFuture These options were not exposed in YAML DSL, and in XML DSL you need to migrate from: <circuitBreaker> <resilience4jConfiguration> <timeoutEnabled>true</timeoutEnabled> <timeoutDuration>2000</timeoutDuration> </resilience4jConfiguration> ... </circuitBreaker> to use the following attributes instead: <circuitBreaker> <resilience4jConfiguration timeoutEnabled="true" timeoutDuration="2000"/> ... </circuitBreaker> 2.7. XML DSL The <description> used to set a description on a route or node has been changed from an element to an attribute. Example Changed from <route id="myRoute"> <description>Something that this route do</description> <from uri="kafka:cheese"/> ... </route> to <route id="myRoute" description="Something that this route do"> <from uri="kafka:cheese"/> ... </route> 2.8. Type Converter The String to java.io.File converter has been removed. 2.9. Tracing The Tracer and Backlog Tracer no longer include internal tracing events from routes that were created by Rest DSL, route templates, or Kamelets. You can turn this on by setting traceTemplates=true in the tracer. The Backlog Tracer has been enhanced and fixed to trace message headers (also streaming types). This means that headers of type InputStream , which were not traced before, are now included. This could mean that the header stream is positioned at the end, and logging the header afterward may appear as if the header value is empty. 2.10. UseOriginalMessage / UseOriginalBody When useOriginalMessage or useOriginalBody is enabled in OnException , OnCompletion or error handlers, then the original message body is defensively copied and, if possible, converted to StreamCache to ensure the body can be re-read when accessed. Previously, the original body was not converted to StreamCache , which could lead to the body not being readable or the stream being closed. 2.11. Camel Health Health checks are now by default only readiness checks out of the box. Camel provides the CamelContextCheck as both readiness and liveness checks, so there is at least one of each out of the box. Only consumer-based health-checks are enabled by default. 2.11.1.
2.11.1. Producer Health Checks The option camel.health.components-enabled has been renamed to camel.health.producers-enabled . Some components (in particular AWS) also provide health checks for producers; in Camel 3.x these health checks did not work properly and have been disabled in the source. To continue this behaviour in Camel 4, the producer-based health checks are disabled. Notice that camel-kafka comes with a producer-based health check that worked in Camel 3, and therefore this change in Camel 4 means that this health check is disabled. You MUST enable producer health checks globally, such as in application.properties : camel.health.producers-enabled = true 2.12. JMX Camel now also includes MBeans for doCatch and doFinally in the tree of processor MBeans. The ManagedChoiceMBean has renamed choiceStatistics to extendedInformation . The ManagedFailoverLoadBalancerMBean has renamed exceptionStatistics to extendedInformation . The CamelContextMBean and CamelRouteMBean have removed the method dumpRouteAsXml(boolean resolvePlaceholders, boolean resolveDelegateEndpoints) . 2.13. YAML DSL The backwards-compatible mode for Camel 3.14 or older, which allowed steps to be defined as a child of route , has been removed. The old syntax: - route: from: uri: "direct:info" steps: - log: "message" should be changed to: - route: from: uri: "direct:info" steps: - log: "message" 2.14. Backlog Tracing The option backlogTracing=true now automatically starts the tracer on startup. In previous versions the tracer was only made available, and had to be manually enabled afterwards. The old behavior can be achieved by setting backlogTracingStandby=true . The following class was moved from org.apache.camel.api.management.mbean.BacklogTracerEventMessage in the camel-management-api JAR to org.apache.camel.spi.BacklogTracerEventMessage in the camel-api JAR. The org.apache.camel.impl.debugger.DefaultBacklogTracerEventMessage has been refactored into an interface org.apache.camel.spi.BacklogTracerEventMessage with some additional details about traced messages. For example, Camel now captures a first and last trace that contains the input and outgoing (if InOut ) messages. 2.15. XML serialization The default XML serialization using ModelToXMLDumper has been improved and now uses a generated XML serializer located in the camel-xml-io module instead of the JAXB based one from camel-jaxb . 2.16. OpenAPI Maven Plugin The camel-restdsl-openapi-plugin Maven plugin now uses platform-http as the default rest component in the generated Rest DSL code. Previously the default was servlet. However, platform-http is a better default that works out of the box with Spring Boot and Quarkus. 2.17. Component changes 2.17.1. Category The number of enums for org.apache.camel.Category has been reduced from 83 to 37, which means custom components that are using removed values need to choose one of the remaining values. We have done this to consolidate the number of categories of all components in the Camel community. 2.17.2. camel-openapi-rest-dsl-generator This dsl-generator has updated the underlying model classes ( apicurio-data-models ) from 1.1.27 to 2.0.3. 2.17.3. camel-atom The camel-atom component has changed the 3rd party atom client from Apache Abdera to RSSReader. This means the feed object is changed from org.apache.abdera.model.Feed to com.apptasticsoftware.rssreader.Item . 2.17.4. camel-azure-cosmosdb The itemPartitionKey has been updated. It is now a String and not a PartitionKey anymore. More details in CAMEL-19222. 2.17.5.
camel-bean When using the method option to refer to a specific method, and using parameter types and values, such as: "bean:myBean?method=foo(com.foo.MyOrder, true)" then any class types must now be using .class syntax, i.e. com.foo.MyOrder should now be com.foo.MyOrder.class . Example This also applies to Java types such as String, int. 2.17.6. camel-box Upgraded from Box Java SDK v2 to v4, which have some method signature changes. The method to get a file thumbnail is no longer available. 2.17.7. camel-caffeine The keyType parameter has been removed. The Key for the cache will now be only String type. More information in CAMEL-18877. 2.17.8. camel-fhir The underlying hapi-fhir library has been upgraded from 4.2.0 to 6.2.4. Only the Delete API method has changed and now returns ca.uhn.fhir.rest.api.MethodOutcome instead of org.hl7.fhir.instance.model.api.IBaseOperationOutcome . See hapi-fhir for a more detailed list of underlying changes (only the hapi-fhir client is used in Camel). 2.17.9. camel-google The API based components camel-google-drive , camel-google-calendar , camel-google-sheets and camel-google-mail has been upgraded from Google Java SDK v1 to v2 and to latest API revisions. The camel-google-drive and camel-google-sheets have some API methods changes, but the others are identical as before. 2.17.10. camel-http The component has been upgraded to use Apache HttpComponents v5 which has an impact on how the underlying client is configured. There are 4 different timeouts ( connectionRequestTimeout , connectTimeout , soTimeout and responseTimeout ) instead of initially 3 ( connectionRequestTimeout , connectTimeout and socketTimeout ) and the default value of some of them has changed so please refer to the documentation for more details. Please note that the socketTimeout has been removed from the possible configuration parameters of HttpClient , use responseTimeout instead. Finally, the option soTimeout along with any parameters included into SocketConfig , need to be prefixed by httpConnection. , the rest of the parameters including those defined into HttpClientBuilder and RequestConfig still need to be prefixed by httpClient. like before. 2.17.11. camel-http-common The API in org.apache.camel.http.common.HttpBinding has changed slightly to be more reusable. The parseBody method now takes in HttpServletRequest as input parameter. And all HttpMessage has been changed to generic Message types. 2.17.12. camel-kubernetes The io.fabric8:kubernetes-client library has been upgraded and some deprecated API usage has been removed. Operations previously prefixed with replace are now prefixed with update . For example replaceConfigMap is now updateConfigMap , replacePod is now updatePod etc. The corresponding constants in class KubernetesOperations are also renamed. REPLACE_CONFIGMAP_OPERATION is now UPDATE_CONFIGMAP_OPERATION , REPLACE_POD_OPERATION is now UPDATE_POD_OPERATION etc. 2.17.13. camel-main The following constants has been moved from BaseMainSupport / Main to MainConstants : Old Name New Name Main.DEFAULT_PROPERTY_PLACEHOLDER_LOCATION MainConstants.DEFAULT_PROPERTY_PLACEHOLDER_LOCATION Main.INITIAL_PROPERTIES_LOCATION MainConstants.INITIAL_PROPERTIES_LOCATION Main.OVERRIDE_PROPERTIES_LOCATION MainConstants.OVERRIDE_PROPERTIES_LOCATION Main.PROPERTY_PLACEHOLDER_LOCATION MainConstants.PROPERTY_PLACEHOLDER_LOCATION 2.17.14. camel-micrometer The metrics has been renamed to follow Micrometer naming convention . 
Old Name New Name CamelExchangeEventNotifier camel.exchange.event.notifier CamelExchangesFailed camel.exchanges.failed CamelExchangesFailuresHandled camel.exchanges.failures.handled CamelExchangesInflight camel.exchanges.external.redeliveries CamelExchangesSucceeded camel.exchanges.succeeded CamelExchangesTotal camel.exchanges.total CamelMessageHistory camel.message.history CamelRoutePolicy camel.route.policy CamelRoutePolicyLongTask camel.route.policy.long.task CamelRoutesAdded camel.routes.added CamelRoutesRunning camel.routes.running 2.17.15. camel-jbang The command camel dependencies has been renamed to camel dependency . In Camel CLI the -dir parameter for init and run goal has been renamed to require 2 dashes --dir like all the other options. The camel stop command will now by default stop all running integrations (the option --all has been removed). The Placeholders substitutes is changed to use #name instead of USDname syntax. 2.17.16. camel-openapi-java The camel-openapi-java component has been changed to use io.swagger.v3 libraries instead of io.apicurio.datamodels . As a result, the return type of the public method org.apache.camel.openapi.RestOpenApiReader.read() is now io.swagger.v3.oas.models.OpenAPI instead of io.apicurio.datamodels.openapi.models.OasDocument . When an OpenAPI 2.0 (swagger) specification is parsed, it is automatically upgraded to OpenAPI 3.0.x by the swagger parser. This version also supports OpenAPI 3.1.x specifications. The related spring-boot starter components have been modified to use the new return type. 2.17.17. camel-salesforce Property names of blob fields on generated DTOs no longer have 'Url' affixed. For example, the ContentVersionUrl property is now ContentVersion . 2.17.18. camel-slack The default delay (on slack consumer) is changed from 0.5s to 10s to avoid being rate limited to often by Slack. 2.17.19. camel-spring-rabbitmq The option replyTimeout in camel-spring-rabbitmq has been fixed and the default value from 5 to 30 seconds (this is the default used by Spring). 2.18. Camel Spring Boot The camel-spring-boot dependency no longer includes camel-spring-xml . To use legacy Spring XML files <beans> with Camel on Spring Boot, then include the camel-spring-boot-xml-starter dependency. 2.18.1. Graceful Shutdown Camel now shutdowns a bit later during Spring Boot shutdown. This allows Spring Boot graceful shutdown to complete first (stopping Spring Boot HTTP server gracefully), and then afterward Camel is doing its own Graceful Shutdown . Technically camel-spring has changed getPhase() from returning Integer.MAX_VALUE to Integer.MAX_VALUE - 2049 . This gives room for Spring Boot services to shut down first. 2.18.2. camel-micrometer-starter The uri tags are now static instead of dynamic (by default), as potential too many tags generated due to URI with dynamic values. This can be enabled again by setting camel.metrics.uriTagDynamic=true . 2.18.3. camel-platform-http-starter The platform-http-starter has been changed from using camel-servlet to use Spring HTTP server directly. Therefore, all the HTTP endpoints are no longer prefixed with the servlet context-path (default is camel ). For example: from("platform-http:myservice") .to("...") Then calling myservice would before require to include the context-path, such as http://localhost:8080/camel/myservice . Now the context-path is not in use, and the endpoint can be called with http://localhost:8080/myservice . Note The platform-http-starter can also be used with Rest DSL. 
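To illustrate the Note above, the following is a minimal Rest DSL sketch served by the Spring Boot HTTP server through camel-platform-http-starter; the /hello path and the direct:hello endpoint are hypothetical names chosen for this example. // Rest DSL served by platform-http on Spring Boot; no /camel context-path is applied rest("/hello") .get() .to("direct:hello"); from("direct:hello") .transform().constant("Hello from platform-http"); With the starter described above, this service would be reachable at http://localhost:8080/hello rather than http://localhost:8080/camel/hello .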
If the route or consumer is suspended, then HTTP status 503 is now returned instead of 404. 2.18.4. camel-twitter The component was updated to use Twitter4j version 4.1.2, which has moved the packages used by a few of its classes. If accessing certain Twitter-related data, such as the Twitter status, you need to update the packages used from twitter4j.Status to twitter4j.v1.Status .
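As a sketch of that package change, a processor reading the status from the message body in Camel 4 would import the v1 package; the class name StatusLogger and the use of a processor are illustrative assumptions, not taken from the component documentation. import org.apache.camel.Exchange; import org.apache.camel.Processor; // was twitter4j.Status in Camel 3 import twitter4j.v1.Status; public class StatusLogger implements Processor { public void process(Exchange exchange) throws Exception { // the camel-twitter consumer delivers Status objects as the message body Status status = exchange.getIn().getBody(Status.class); System.out.println(status.getText()); } }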
[ "<circuitBreaker> <resilience4jConfiguration> <timeoutEnabled>true</timeoutEnabled> <timeoutDuration>2000</timeoutDuration> </resilience4jConfiguration> </circuitBreaker>", "<circuitBreaker> <resilience4jConfiguration timeoutEnabled=\"true\" timeoutDuration=\"2000\"/> </circuitBreaker>", "<route id=\"myRoute\"> <description>Something that this route do</description> <from uri=\"kafka:cheese\"/> </route>", "<route id=\"myRoute\" description=\"Something that this route do\"> <from uri=\"kafka:cheese\"/> </route>", "camel.health.producers-enabled = true", "- route: from: uri: \"direct:info\" steps: - log: \"message\"", "- route: from: uri: \"direct:info\" steps: - log: \"message\"", "\"bean:myBean?method=foo(com.foo.MyOrder.class, true)\"", "\"bean:myBean?method=bar(String.class, int.class)\"", "from(\"platform-http:myservice\") .to(\"...\")" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/migrating_to_red_hat_build_of_apache_camel_for_spring_boot/migrating-to-camel-spring-boot-4
Basic authentication
Basic authentication Red Hat build of Quarkus 3.15 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/basic_authentication/index
Configuring Messaging
Configuring Messaging Red Hat JBoss Enterprise Application Platform 7.4 Instructions and information for developers and administrators who want to develop and deploy messaging applications for Red Hat JBoss Enterprise Application Platform. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/index
Chapter 5. Management of managers using the Ceph Orchestrator
Chapter 5. Management of managers using the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrator to deploy additional manager daemons. Cephadm automatically installs a manager daemon on the bootstrap node during the bootstrapping process. In general, you should set up a Ceph Manager on each of the hosts running the Ceph Monitor daemon to achieve same level of availability. By default, whichever ceph-mgr instance comes up first is made active by the Ceph Monitors, and others are standby managers. There is no requirement that there should be a quorum among the ceph-mgr daemons. If the active daemon fails to send a beacon to the monitors for more than the mon mgr beacon grace , then it is replaced by a standby. If you want to pre-empt failover, you can explicitly mark a ceph-mgr daemon as failed with ceph mgr fail MANAGER_NAME command. 5.1. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. 5.2. Deploying the manager daemons using the Ceph Orchestrator The Ceph Orchestrator deploys two Manager daemons by default. You can deploy additional manager daemons using the placement specification in the command line interface. To deploy a different number of Manager daemons, specify a different number. If you do not specify the hosts where the Manager daemons should be deployed, the Ceph Orchestrator randomly selects the hosts and deploys the Manager daemons to them. Note Ensure your deployment has at least three Ceph Managers in each deployment. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Procedure Log into the Cephadm shell: Example You can deploy manager daemons in two different ways: Method 1 Deploy manager daemons using placement specification on specific set of hosts: Note Red Hat recommends that you use the --placement option to deploy on specific hosts. Syntax Example Method 2 Deploy manager daemons randomly on the hosts in the storage cluster: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 5.3. Removing the manager daemons using the Ceph Orchestrator To remove the manager daemons from the host, you can just redeploy the daemons on other hosts. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. At least one manager daemon deployed on the hosts. Procedure Log into the Cephadm shell: Example Run the ceph orch apply command to redeploy the required manager daemons: Syntax If you want to remove manager daemons from host02 , then you can redeploy the manager daemons on other hosts. Example Verification List the hosts,daemons, and processes: Syntax Example Additional Resources See Deploying the manager daemons using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. 5.4. Using the Ceph Manager modules Use the ceph mgr module ls command to see the available modules and the modules that are presently enabled. Enable or disable modules with ceph mgr module enable MODULE command or ceph mgr module disable MODULE command respectively. If a module is enabled, then the active ceph-mgr daemon loads and executes it. In the case of modules that provide a service, such as an HTTP server, the module might publish its address when it is loaded. To see the addresses of such modules, run the ceph mgr services command. 
Some modules might also implement a special standby mode which runs on the standby ceph-mgr daemons as well as the active daemon. This enables modules that provide services to redirect their clients to the active daemon, if the client tries to connect to a standby. Following is an example to enable the dashboard module: The first time the cluster starts, it uses the mgr_initial_modules setting to override which modules to enable. However, this setting is ignored through the rest of the lifetime of the cluster: only use it for bootstrapping. For example, before starting your monitor daemons for the first time, you might add a section like this to your ceph.conf file: Where a module implements command line hooks, the commands are accessible as ordinary Ceph commands, and Ceph automatically incorporates module commands into the standard CLI interface and routes them appropriately to the module: You can use the following configuration parameters with the above command: Table 5.1. Configuration parameters Configuration Description Type Default mgr module path Path to load modules from. String "<library dir>/mgr" mgr data Path to load daemon data (such as keyring). String "/var/lib/ceph/mgr/$cluster-$id" mgr tick period How many seconds between manager beacons to monitors, and other periodic checks. Integer 5 mon mgr beacon grace How long after the last beacon a manager should be considered failed. Integer 30 5.5. Using the Ceph Manager balancer module The balancer is a module for Ceph Manager ( ceph-mgr ) that optimizes the placement of placement groups (PGs) across OSDs in order to achieve a balanced distribution, either automatically or in a supervised fashion. Currently the balancer module cannot be disabled. It can only be turned off to customize the configuration. Modes There are currently two supported balancer modes: crush-compat : The CRUSH compat mode uses the compat weight-set feature, introduced in Ceph Luminous, to manage an alternative set of weights for devices in the CRUSH hierarchy. The normal weights should remain set to the size of the device to reflect the target amount of data that you want to store on the device. The balancer then optimizes the weight-set values, adjusting them up or down in small increments in order to achieve a distribution that matches the target distribution as closely as possible. Because PG placement is a pseudorandom process, there is a natural amount of variation in the placement; by optimizing the weights, the balancer counteracts that natural variation. This mode is fully backwards compatible with older clients. When an OSDMap and CRUSH map are shared with older clients, the balancer presents the optimized weights as the real weights. The primary restriction of this mode is that the balancer cannot handle multiple CRUSH hierarchies with different placement rules if the subtrees of the hierarchy share any OSDs. Because this configuration makes managing space utilization on the shared OSDs difficult, it is generally not recommended. As such, this restriction is normally not an issue. upmap : Starting with Luminous, the OSDMap can store explicit mappings for individual OSDs as exceptions to the normal CRUSH placement calculation. These upmap entries provide fine-grained control over the PG mapping. This CRUSH mode will optimize the placement of individual PGs in order to achieve a balanced distribution. In most cases, this distribution is "perfect", with an equal number of PGs on each OSD +/-1 PG, as they might not divide evenly.
Important To allow use of this feature, you must tell the cluster that it only needs to support luminous or later clients with the following command: This command fails if any pre-luminous clients or daemons are connected to the monitors. Due to a known issue, kernel CephFS clients report themselves as jewel clients. To work around this issue, use the --yes-i-really-mean-it flag: You can check what client versions are in use with: Prerequisites A running Red Hat Ceph Storage cluster. Procedure Ensure the balancer module is enabled: Example Turn on the balancer module: Example The default mode is upmap . The mode can be changed with: Example or Example Status The current status of the balancer can be checked at any time with: Example Automatic balancing By default, when turning on the balancer module, automatic balancing is used: Example The balancer can be turned back off again with: Example This will use the crush-compat mode, which is backward compatible with older clients and will make small changes to the data distribution over time to ensure that OSDs are equally utilized. Throttling No adjustments will be made to the PG distribution if the cluster is degraded, for example, if an OSD has failed and the system has not yet healed itself. When the cluster is healthy, the balancer throttles its changes such that the percentage of PGs that are misplaced, or need to be moved, is below a threshold of 5% by default. This percentage can be adjusted using the target_max_misplaced_ratio setting. For example, to increase the threshold to 7%: Example For automatic balancing: Set the number of seconds to sleep in between runs of the automatic balancer: Example Set the time of day to begin automatic balancing in HHMM format: Example Set the time of day to finish automatic balancing in HHMM format: Example Restrict automatic balancing to this day of the week or later. Uses the same conventions as crontab, 0 is Sunday, 1 is Monday, and so on: Example Restrict automatic balancing to this day of the week or earlier. This uses the same conventions as crontab, 0 is Sunday, 1 is Monday, and so on: Example Define the pool IDs to which the automatic balancing is limited. The default for this is an empty string, meaning all pools are balanced. The numeric pool IDs can be gotten with the ceph osd pool ls detail command: Example Supervised optimization The balancer operation is broken into a few distinct phases: Building a plan . Evaluating the quality of the data distribution, either for the current PG distribution, or the PG distribution that would result after executing a plan . Executing the plan . To evaluate and score the current distribution: Example To evaluate the distribution for a single pool: Syntax Example To see greater detail for the evaluation: Example To generate a plan using the currently configured mode: Syntax Replace PLAN_NAME with a custom plan name. Example To see the contents of a plan: Syntax Example To discard old plans: Syntax Example To see currently recorded plans use the status command: To calculate the quality of the distribution that would result after executing a plan: Syntax Example To execute the plan: Syntax Example Note Only execute the plan if it is expected to improve the distribution. After execution, the plan will be discarded. 5.6. Using the Ceph Manager alerts module You can use the Ceph Manager alerts module to send simple alert messages about the Red Hat Ceph Storage cluster's health by email. Note This module is not intended to be a robust monitoring solution. 
The fact that it is run as part of the Ceph cluster itself is fundamentally limiting in that a failure of the ceph-mgr daemon prevents alerts from being sent. This module can, however, be useful for standalone clusters that exist in environments where monitoring infrastructure does not exist. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure Log into the Cephadm shell: Example Enable the alerts module: Example Ensure the alerts module is enabled: Example Configure the Simple Mail Transfer Protocol (SMTP): Syntax Example Optional: Change the port to 465. Syntax Example Important SSL is not supported in Red Hat Ceph Storage 5 clusters. Do not set the smtp_ssl parameter while configuring alerts. Authenticate to the SMTP server: Syntax Example Optional: By default, the SMTP From name is Ceph . To change that, set the smtp_from_name parameter: Syntax Example Optional: By default, the alerts module checks the storage cluster's health every minute, and sends a message when there is a change in the cluster health status. To change the frequency, set the interval parameter: Syntax Example In this example, the interval is set to 5 minutes. Optional: Send an alert immediately: Example Additional Resources See the Health messages of a Ceph cluster section in the Red Hat Ceph Storage Troubleshooting Guide for more information on Ceph health messages. 5.7. Using the Ceph manager crash module Using the Ceph manager crash module, you can collect information about daemon crashdumps and store it in the Red Hat Ceph Storage cluster for further analysis. By default, daemon crashdumps are dumped in /var/lib/ceph/crash . You can configure this location with the crash dir option. Crash directories are named by time, date, and a randomly-generated UUID, and contain a metadata file meta and a recent log file, with a crash_id that is the same. You can use ceph-crash.service to submit these crashes automatically and persist them in the Ceph Monitors. The ceph-crash.service watches the crashdump directory and uploads the crashes with ceph crash post . The RECENT_CRASH health message is one of the most common health messages in a Ceph cluster. This health message means that one or more Ceph daemons have crashed recently, and the crash has not yet been archived or acknowledged by the administrator. This might indicate a software bug, a hardware problem like a failing disk, or some other problem. The option mgr/crash/warn_recent_interval controls the time period of what recent means, which is two weeks by default. You can disable the warnings by running the following command: Example The option mgr/crash/retain_interval controls the period for which you want to retain the crash reports before they are automatically purged. The default for this option is one year. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Ensure the crash module is enabled: Example Save a crash dump: The metadata file is a JSON blob stored in the crash dir as meta . You can invoke the ceph command with the -i - option, which reads from stdin. Example List the timestamp or the UUID crash IDs for all the new and archived crash info: Example List the timestamp or the UUID crash IDs for all the new crash information: Example List the timestamp or the UUID crash IDs for all the new crash information: Example List the summary of saved crash information grouped by age: Example View the details of the saved crash: Syntax Example Remove saved crashes older than KEEP days: Here, KEEP must be an integer.
Syntax Example Archive a crash report so that it is no longer considered for the RECENT_CRASH health check and does not appear in the crash ls-new output. It appears in the crash ls . Syntax Example Archive all crash reports: Example Remove the crash dump: Syntax Example Additional Resources See the Health messages of a Ceph cluster section in the Red Hat Ceph Storage Troubleshooting Guide for more information on Ceph health messages.
[ "cephadm shell", "ceph orch apply mgr --placement=\" HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mgr --placement=\"host01 host02 host03\"", "ceph orch apply mgr NUMBER_OF_DAEMONS", "ceph orch apply mgr 3", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mgr", "cephadm shell", "ceph orch apply mgr \" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_3 \"", "ceph orch apply mgr \"2 host01 host03\"", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mgr", "ceph mgr module enable dashboard ceph mgr module ls MODULE balancer on (always on) crash on (always on) devicehealth on (always on) orchestrator on (always on) pg_autoscaler on (always on) progress on (always on) rbd_support on (always on) status on (always on) telemetry on (always on) volumes on (always on) cephadm on dashboard on iostat on nfs on prometheus on restful on alerts - diskprediction_local - influx - insights - k8sevents - localpool - mds_autoscaler - mirroring - osd_perf_query - osd_support - rgw - rook - selftest - snap_schedule - stats - telegraf - test_orchestrator - zabbix - ceph mgr services { \"dashboard\": \"http://myserver.com:7789/\", \"restful\": \"https://myserver.com:8789/\" }", "[mon] mgr initial modules = dashboard balancer", "ceph <command | help>", "ceph osd set-require-min-compat-client luminous", "ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it", "ceph features", "ceph mgr module enable balancer", "ceph balancer on", "ceph balancer mode crush-compact", "ceph balancer mode upmap", "ceph balancer status", "ceph balancer on", "ceph balancer off", "ceph config-key set mgr target_max_misplaced_ratio .07", "ceph config set mgr mgr/balancer/sleep_interval 60", "ceph config set mgr mgr/balancer/begin_time 0000", "ceph config set mgr mgr/balancer/end_time 2359", "ceph config set mgr mgr/balancer/begin_weekday 0", "ceph config set mgr mgr/balancer/end_weekday 6", "ceph config set mgr mgr/balancer/pool_ids 1,2,3", "ceph balancer eval", "ceph balancer eval POOL_NAME", "ceph balancer eval rbd", "ceph balancer eval-verbose", "ceph balancer optimize PLAN_NAME", "ceph balancer optimize rbd_123", "ceph balancer show PLAN_NAME", "ceph balancer show rbd_123", "ceph balancer rm PLAN_NAME", "ceph balancer rm rbd_123", "ceph balancer status", "ceph balancer eval PLAN_NAME", "ceph balancer eval rbd_123", "ceph balancer execute PLAN_NAME", "ceph balancer execute rbd_123", "cephadm shell", "ceph mgr module enable alerts", "ceph mgr module ls | more { \"always_on_modules\": [ \"balancer\", \"crash\", \"devicehealth\", \"orchestrator\", \"pg_autoscaler\", \"progress\", \"rbd_support\", \"status\", \"telemetry\", \"volumes\" ], \"enabled_modules\": [ \"alerts\", \"cephadm\", \"dashboard\", \"iostat\", \"nfs\", \"prometheus\", \"restful\" ]", "ceph config set mgr mgr/alerts/smtp_host SMTP_SERVER ceph config set mgr mgr/alerts/smtp_destination RECEIVER_EMAIL_ADDRESS ceph config set mgr mgr/alerts/smtp_sender SENDER_EMAIL_ADDRESS", "ceph config set mgr mgr/alerts/smtp_host smtp.example.com ceph config set mgr mgr/alerts/smtp_destination [email protected] ceph config set mgr mgr/alerts/smtp_sender [email protected]", "ceph config set mgr mgr/alerts/smtp_port PORT_NUMBER", "ceph config set mgr mgr/alerts/smtp_port 587", "ceph config set mgr mgr/alerts/smtp_user USERNAME ceph config set mgr mgr/alerts/smtp_password PASSWORD", "ceph config set mgr mgr/alerts/smtp_user admin1234 ceph config set mgr mgr/alerts/smtp_password admin1234", "ceph config set mgr 
mgr/alerts/smtp_from_name CLUSTER_NAME", "ceph config set mgr mgr/alerts/smtp_from_name 'Ceph Cluster Test'", "ceph config set mgr mgr/alerts/interval INTERVAL", "ceph config set mgr mgr/alerts/interval \"5m\"", "ceph alerts send", "ceph config set mgr/crash/warn_recent_interval 0", "ceph mgr module ls | more { \"always_on_modules\": [ \"balancer\", \"crash\", \"devicehealth\", \"orchestrator_cli\", \"progress\", \"rbd_support\", \"status\", \"volumes\" ], \"enabled_modules\": [ \"dashboard\", \"pg_autoscaler\", \"prometheus\" ]", "ceph crash post -i meta", "ceph crash ls", "ceph crash ls-new", "ceph crash ls-new", "ceph crash stat 8 crashes recorded 8 older than 1 days old: 2022-05-20T08:30:14.533316Z_4ea88673-8db6-4959-a8c6-0eea22d305c2 2022-05-20T08:30:14.590789Z_30a8bb92-2147-4e0f-a58b-a12c2c73d4f5 2022-05-20T08:34:42.278648Z_6a91a778-bce6-4ef3-a3fb-84c4276c8297 2022-05-20T08:34:42.801268Z_e5f25c74-c381-46b1-bee3-63d891f9fc2d 2022-05-20T08:34:42.803141Z_96adfc59-be3a-4a38-9981-e71ad3d55e47 2022-05-20T08:34:42.830416Z_e45ed474-550c-44b3-b9bb-283e3f4cc1fe 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d 2022-05-24T19:58:44.315282Z_1847afbc-f8a9-45da-94e8-5aef0738954e", "ceph crash info CRASH_ID", "ceph crash info 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d { \"assert_condition\": \"session_map.sessions.empty()\", \"assert_file\": \"/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc\", \"assert_func\": \"virtual Monitor::~Monitor()\", \"assert_line\": 287, \"assert_msg\": \"/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc: In function 'virtual Monitor::~Monitor()' thread 7f67a1aeb700 time 2022-05-24T19:58:42.545485+0000\\n/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc: 287: FAILED ceph_assert(session_map.sessions.empty())\\n\", \"assert_thread_name\": \"ceph-mon\", \"backtrace\": [ \"/lib64/libpthread.so.0(+0x12b30) [0x7f679678bb30]\", \"gsignal()\", \"abort()\", \"(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x7f6798c8d37b]\", \"/usr/lib64/ceph/libceph-common.so.2(+0x276544) [0x7f6798c8d544]\", \"(Monitor::~Monitor()+0xe30) [0x561152ed3c80]\", \"(Monitor::~Monitor()+0xd) [0x561152ed3cdd]\", \"main()\", \"__libc_start_main()\", \"_start()\" ], \"ceph_version\": \"16.2.8-65.el8cp\", \"crash_id\": \"2022-07-06T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d\", \"entity_name\": \"mon.ceph-adm4\", \"os_id\": \"rhel\", \"os_name\": \"Red Hat Enterprise Linux\", \"os_version\": \"8.5 (Ootpa)\", \"os_version_id\": \"8.5\", \"process_name\": \"ceph-mon\", \"stack_sig\": \"957c21d558d0cba4cee9e8aaf9227b3b1b09738b8a4d2c9f4dc26d9233b0d511\", \"timestamp\": \"2022-07-06T19:58:42.549073Z\", \"utsname_hostname\": \"host02\", \"utsname_machine\": \"x86_64\", \"utsname_release\": \"4.18.0-240.15.1.el8_3.x86_64\", \"utsname_sysname\": \"Linux\", \"utsname_version\": \"#1 SMP Wed Jul 06 03:12:15 EDT 2022\" }", "ceph crash prune KEEP", "ceph crash prune 60", "ceph crash archive CRASH_ID", "ceph crash archive 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d", "ceph crash archive-all", "ceph crash rm CRASH_ID", "ceph crash rm 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/operations_guide/management-of-managers-using-the-ceph-orchestrator
Chapter 54. Salesforce
Chapter 54. Salesforce Both producer and consumer are supported This component supports producer and consumer endpoints to communicate with Salesforce using Java DTOs. There is a companion maven plugin Camel Salesforce Plugin that generates these DTOs (see further below). Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-salesforce</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency> Note Developers wishing to contribute to the component are instructed to look at the README.md file on instructions on how to get started and setup your environment for running integration tests. 54.1. Configuring Options Camel components are configured on two separate levels: component level endpoint level 54.1.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 54.1.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 54.2. Component Options The Salesforce component supports 90 options, which are listed below. Name Description Default Type apexMethod (common) APEX method name. String apexQueryParams (common) Query params for APEX method. Map apiVersion (common) Salesforce API version. 53.0 String backoffIncrement (common) Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect. 1000 long batchId (common) Bulk API Batch ID. String contentType (common) Bulk API content type, one of XML, CSV, ZIP_XML, ZIP_CSV. Enum values: XML CSV JSON ZIP_XML ZIP_CSV ZIP_JSON ContentType defaultReplayId (common) Default replayId setting if no value is found in initialReplayIdMap. -1 Long fallBackReplayId (common) ReplayId to fall back to after an Invalid Replay Id response. -1 Long format (common) Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON. As of Camel 3.12, this option only applies to the Raw operation. Enum values: JSON XML PayloadFormat httpClient (common) Custom Jetty Http Client to use to connect to Salesforce. 
SalesforceHttpClient httpClientConnectionTimeout (common) Connection timeout used by the HttpClient when connecting to the Salesforce server. 60000 long httpClientIdleTimeout (common) Timeout used by the HttpClient when waiting for response from the Salesforce server. 10000 long httpMaxContentLength (common) Max content length of an HTTP response. Integer httpRequestBufferSize (common) HTTP request buffer size. May need to be increased for large SOQL queries. 8192 Integer includeDetails (common) Include details in Salesforce1 Analytics report, defaults to false. Boolean initialReplayIdMap (common) Replay IDs to start from per channel name. Map instanceId (common) Salesforce1 Analytics report execution instance ID. String jobId (common) Bulk API Job ID. String limit (common) Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation. Integer locator (common) Locator provided by salesforce Bulk 2.0 API for use in getting results for a Query job. String maxBackoff (common) Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect. 30000 long maxRecords (common) The maximum number of records to retrieve per set of results for a Bulk 2.0 Query. The request is still subject to the size limits. If you are working with a very large number of query results, you may experience a timeout before receiving all the data from Salesforce. To prevent a timeout, specify the maximum number of records your client is expecting to receive in the maxRecords parameter. This splits the results into smaller sets with this value as the maximum size. Integer notFoundBehaviour (common) Sets the behaviour of 404 not found status received from Salesforce API. Should the body be set to NULL NotFoundBehaviour#NULL or should a exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default. Enum values: EXCEPTION NULL EXCEPTION NotFoundBehaviour notifyForFields (common) Notify for fields, options are ALL, REFERENCED, SELECT, WHERE. Enum values: ALL REFERENCED SELECT WHERE NotifyForFieldsEnum notifyForOperationCreate (common) Notify for create operation, defaults to false (API version = 29.0). Boolean notifyForOperationDelete (common) Notify for delete operation, defaults to false (API version = 29.0). Boolean notifyForOperations (common) Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version 29.0). Enum values: ALL CREATE EXTENDED UPDATE NotifyForOperationsEnum notifyForOperationUndelete (common) Notify for un-delete operation, defaults to false (API version = 29.0). Boolean notifyForOperationUpdate (common) Notify for update operation, defaults to false (API version = 29.0). Boolean objectMapper (common) Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects. ObjectMapper packages (common) In what packages are the generated DTO classes. Typically the classes would be generated using camel-salesforce-maven-plugin. Set it if using the generated DTOs to gain the benefit of using short SObject names in parameters/header values. Multiple packages can be separated by comma. String pkChunking (common) Use PK Chunking. Only for use in original Bulk API. Bulk 2.0 API performs PK chunking automatically, if necessary. Boolean pkChunkingChunkSize (common) Chunk size for use with PK Chunking. If unspecified, salesforce default is 100,000. Maximum size is 250,000. 
Integer pkChunkingParent (common) Specifies the parent object when you're enabling PK chunking for queries on sharing objects. The chunks are based on the parent object's records rather than the sharing object's records. For example, when querying on AccountShare, specify Account as the parent object. PK chunking is supported for sharing objects as long as the parent object is supported. String pkChunkingStartRow (common) Specifies the 15-character or 18-character record ID to be used as the lower boundary for the first chunk. Use this parameter to specify a starting ID when restarting a job that failed between batches. String queryLocator (common) Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records. String rawPayload (common) Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default. false boolean reportId (common) Salesforce1 Analytics report Id. String reportMetadata (common) Salesforce1 Analytics report metadata for filtering. ReportMetadata resultId (common) Bulk API Result ID. String sObjectBlobFieldName (common) SObject blob field name. String sObjectClass (common) Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin. String sObjectFields (common) SObject fields to retrieve. String sObjectId (common) SObject ID if required by API. String sObjectIdName (common) SObject external ID field name. String sObjectIdValue (common) SObject external ID field value. String sObjectName (common) SObject name if required or supported by API. String sObjectQuery (common) Salesforce SOQL query string. String sObjectSearch (common) Salesforce SOSL search string. String updateTopic (common) Whether to update an existing Push Topic when using the Streaming API, defaults to false. false boolean config (common (advanced)) Global endpoint configuration - use to set values that are common to all endpoints. SalesforceEndpointConfig httpClientProperties (common (advanced)) Used to set any properties that can be configured on the underlying HTTP client. Have a look at properties of SalesforceHttpClient and the Jetty HttpClient for all available options. Map longPollingTransportProperties (common (advanced)) Used to set any properties that can be configured on the LongPollingTransport used by the BayeuxClient (CometD) used by the streaming api. Map workerPoolMaxSize (common (advanced)) Maximum size of the thread pool used to handle HTTP responses. 20 int workerPoolSize (common (advanced)) Size of the thread pool used to handle HTTP responses. 10 int bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean allOrNone (producer) Composite API option to indicate to rollback all records if any are not successful. false boolean apexUrl (producer) APEX method URL. String compositeMethod (producer) Composite (raw) method. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean rawHttpHeaders (producer) Comma separated list of message headers to include as HTTP parameters for Raw operation. String rawMethod (producer) HTTP method to use for the Raw operation. String rawPath (producer) The portion of the endpoint URL after the domain name. E.g., '/services/data/v52.0/sobjects/Account/'. String rawQueryParameters (producer) Comma separated list of message headers to include as query parameters for Raw operation. Do not url-encode values as this will be done automatically. String autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean httpProxyExcludedAddresses (proxy) A list of addresses for which HTTP proxy server should not be used. Set httpProxyHost (proxy) Hostname of the HTTP proxy server to use. String httpProxyIncludedAddresses (proxy) A list of addresses for which HTTP proxy server should be used. Set httpProxyPort (proxy) Port number of the HTTP proxy server to use. Integer httpProxySocks4 (proxy) If set to true the configures the HTTP proxy to use as a SOCKS4 proxy. false boolean authenticationType (security) Explicit authentication method to be used, one of USERNAME_PASSWORD, REFRESH_TOKEN or JWT. Salesforce component can auto-determine the authentication method to use from the properties set, set this property to eliminate any ambiguity. Enum values: USERNAME_PASSWORD REFRESH_TOKEN JWT AuthenticationType clientId (security) Required OAuth Consumer Key of the connected app configured in the Salesforce instance setup. Typically a connected app needs to be configured but one can be provided by installing a package. String clientSecret (security) OAuth Consumer Secret of the connected app configured in the Salesforce instance setup. String httpProxyAuthUri (security) Used in authentication against the HTTP proxy server, needs to match the URI of the proxy server in order for the httpProxyUsername and httpProxyPassword to be used for authentication. String httpProxyPassword (security) Password to use to authenticate against the HTTP proxy server. String httpProxyRealm (security) Realm of the proxy server, used in preemptive Basic/Digest authentication methods against the HTTP proxy server. String httpProxySecure (security) If set to false disables the use of TLS when accessing the HTTP proxy. true boolean httpProxyUseDigestAuth (security) If set to true Digest authentication will be used when authenticating to the HTTP proxy, otherwise Basic authorization method will be used. false boolean httpProxyUsername (security) Username to use to authenticate against the HTTP proxy server. 
String instanceUrl (security) URL of the Salesforce instance used after authentication, by default received from Salesforce on successful authentication. String jwtAudience (security) Value to use for the Audience claim (aud) when using OAuth JWT flow. If not set, the login URL will be used, which is appropriate in most cases. String keystore (security) KeyStore parameters to use in OAuth JWT flow. The KeyStore should contain only one entry with private key and certificate. Salesforce does not verify the certificate chain, so this can easily be a selfsigned certificate. Make sure that you upload the certificate to the corresponding connected app. KeyStoreParameters lazyLogin (security) If set to true prevents the component from authenticating to Salesforce with the start of the component. You would generally set this to the (default) false and authenticate early and be immediately aware of any authentication issues. false boolean loginConfig (security) All authentication configuration in one nested bean, all properties set there can be set directly on the component as well. SalesforceLoginConfig loginUrl (security) Required URL of the Salesforce instance used for authentication, by default set to https://login.salesforce.com . https://login.salesforce.com String password (security) Password used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. Make sure that you append security token to the end of the password if using one. String refreshToken (security) Refresh token already obtained in the refresh token OAuth flow. One needs to setup a web application and configure a callback URL to receive the refresh token, or configure using the builtin callback at https://login.salesforce.com/services/oauth2/success or https://test.salesforce.com/services/oauth2/success and then retrive the refresh_token from the URL at the end of the flow. Note that in development organizations Salesforce allows hosting the callback web application at localhost. String sslContextParameters (security) SSL parameters to use, see SSLContextParameters class for all available options. SSLContextParameters useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean userName (security) Username used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. String 54.3. Endpoint Options The Salesforce endpoint is configured using URI syntax: with the following path and query parameters: 54.3.1. Path Parameters (2 parameters) Name Description Default Type operationName (producer) The operation to use. 
Enum values: getVersions getResources getGlobalObjects getBasicInfo getDescription getSObject createSObject updateSObject deleteSObject getSObjectWithId upsertSObject deleteSObjectWithId getBlobField query queryMore queryAll search apexCall recent createJob getJob closeJob abortJob createBatch getBatch getAllBatches getRequest getResults createBatchQuery getQueryResultIds getQueryResult getRecentReports getReportDescription executeSyncReport executeAsyncReport getReportInstances getReportResults limits approval approvals composite-tree composite-batch composite compositeRetrieveSObjectCollections compositeCreateSObjectCollections compositeUpdateSObjectCollections compositeUpsertSObjectCollections compositeDeleteSObjectCollections bulk2GetAllJobs bulk2CreateJob bulk2GetJob bulk2CreateBatch bulk2CloseJob bulk2AbortJob bulk2DeleteJob bulk2GetSuccessfulResults bulk2GetFailedResults bulk2GetUnprocessedRecords bulk2CreateQueryJob bulk2GetQueryJob bulk2GetAllQueryJobs bulk2GetQueryJobResults bulk2AbortQueryJob bulk2DeleteQueryJob raw OperationName topicName (consumer) The name of the topic/channel to use. String 54.3.2. Query Parameters (57 parameters) Name Description Default Type apexMethod (common) APEX method name. String apexQueryParams (common) Query params for APEX method. Map apiVersion (common) Salesforce API version. 53.0 String backoffIncrement (common) Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect. 1000 long batchId (common) Bulk API Batch ID. String contentType (common) Bulk API content type, one of XML, CSV, ZIP_XML, ZIP_CSV. Enum values: XML CSV JSON ZIP_XML ZIP_CSV ZIP_JSON ContentType defaultReplayId (common) Default replayId setting if no value is found in initialReplayIdMap. -1 Long fallBackReplayId (common) ReplayId to fall back to after an Invalid Replay Id response. -1 Long format (common) Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON. As of Camel 3.12, this option only applies to the Raw operation. Enum values: JSON XML PayloadFormat httpClient (common) Custom Jetty Http Client to use to connect to Salesforce. SalesforceHttpClient includeDetails (common) Include details in Salesforce1 Analytics report, defaults to false. Boolean initialReplayIdMap (common) Replay IDs to start from per channel name. Map instanceId (common) Salesforce1 Analytics report execution instance ID. String jobId (common) Bulk API Job ID. String limit (common) Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation. Integer locator (common) Locator provided by salesforce Bulk 2.0 API for use in getting results for a Query job. String maxBackoff (common) Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect. 30000 long maxRecords (common) The maximum number of records to retrieve per set of results for a Bulk 2.0 Query. The request is still subject to the size limits. If you are working with a very large number of query results, you may experience a timeout before receiving all the data from Salesforce. To prevent a timeout, specify the maximum number of records your client is expecting to receive in the maxRecords parameter. This splits the results into smaller sets with this value as the maximum size. Integer notFoundBehaviour (common) Sets the behaviour of 404 not found status received from Salesforce API. 
Should the body be set to NULL NotFoundBehaviour#NULL or should a exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default. Enum values: EXCEPTION NULL EXCEPTION NotFoundBehaviour notifyForFields (common) Notify for fields, options are ALL, REFERENCED, SELECT, WHERE. Enum values: ALL REFERENCED SELECT WHERE NotifyForFieldsEnum notifyForOperationCreate (common) Notify for create operation, defaults to false (API version = 29.0). Boolean notifyForOperationDelete (common) Notify for delete operation, defaults to false (API version = 29.0). Boolean notifyForOperations (common) Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version 29.0). Enum values: ALL CREATE EXTENDED UPDATE NotifyForOperationsEnum notifyForOperationUndelete (common) Notify for un-delete operation, defaults to false (API version = 29.0). Boolean notifyForOperationUpdate (common) Notify for update operation, defaults to false (API version = 29.0). Boolean objectMapper (common) Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects. ObjectMapper pkChunking (common) Use PK Chunking. Only for use in original Bulk API. Bulk 2.0 API performs PK chunking automatically, if necessary. Boolean pkChunkingChunkSize (common) Chunk size for use with PK Chunking. If unspecified, salesforce default is 100,000. Maximum size is 250,000. Integer pkChunkingParent (common) Specifies the parent object when you're enabling PK chunking for queries on sharing objects. The chunks are based on the parent object's records rather than the sharing object's records. For example, when querying on AccountShare, specify Account as the parent object. PK chunking is supported for sharing objects as long as the parent object is supported. String pkChunkingStartRow (common) Specifies the 15-character or 18-character record ID to be used as the lower boundary for the first chunk. Use this parameter to specify a starting ID when restarting a job that failed between batches. String queryLocator (common) Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records. String rawPayload (common) Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default. false boolean reportId (common) Salesforce1 Analytics report Id. String reportMetadata (common) Salesforce1 Analytics report metadata for filtering. ReportMetadata resultId (common) Bulk API Result ID. String sObjectBlobFieldName (common) SObject blob field name. String sObjectClass (common) Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin. String sObjectFields (common) SObject fields to retrieve. String sObjectId (common) SObject ID if required by API. String sObjectIdName (common) SObject external ID field name. String sObjectIdValue (common) SObject external ID field value. String sObjectName (common) SObject name if required or supported by API. String sObjectQuery (common) Salesforce SOQL query string. String sObjectSearch (common) Salesforce SOSL search string. String updateTopic (common) Whether to update an existing Push Topic when using the Streaming API, defaults to false. 
false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean replayId (consumer) The replayId value to use when subscribing. Long exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern allOrNone (producer) Composite API option to indicate to rollback all records if any are not successful. false boolean apexUrl (producer) APEX method URL. String compositeMethod (producer) Composite (raw) method. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean rawHttpHeaders (producer) Comma separated list of message headers to include as HTTP parameters for Raw operation. String rawMethod (producer) HTTP method to use for the Raw operation. String rawPath (producer) The portion of the endpoint URL after the domain name. E.g., '/services/data/v52.0/sobjects/Account/'. String rawQueryParameters (producer) Comma separated list of message headers to include as query parameters for Raw operation. Do not url-encode values as this will be done automatically. String 54.4. Authenticating to Salesforce The component supports three OAuth authentication flows: OAuth 2.0 Username-Password Flow OAuth 2.0 Refresh Token Flow OAuth 2.0 JWT Bearer Token Flow For each flow, a different set of properties needs to be set: Table 54.1. Properties to set for each authentication flow Property Where to find it on Salesforce Flow clientId Connected App, Consumer Key All flows clientSecret Connected App, Consumer Secret Username-Password, Refresh Token userName Salesforce user username Username-Password, JWT Bearer Token password Salesforce user password Username-Password refreshToken From OAuth flow callback Refresh Token keystore Connected App, Digital Certificate JWT Bearer Token The component auto-determines which flow you are trying to configure. To remove any ambiguity, set the authenticationType property. Note Using Username-Password Flow in production is not encouraged. Note The certificate used in JWT Bearer Token Flow can be a self-signed certificate. The KeyStore holding the certificate and the private key must contain only a single certificate-private key entry. 54.5.
URI format When used as a consumer, receiving streaming events, the URI scheme is: When used as a producer, invoking the Salesforce REST APIs, the URI scheme is: 54.6. Passing in Salesforce headers and fetching Salesforce response headers There is support to pass Salesforce headers via inbound message headers, header names that start with Sforce or x-sfdc on the Camel message will be passed on in the request, and response headers that start with Sforce will be present in the outbound message headers. For example to fetch API limits you can specify: // in your Camel route set the header before Salesforce endpoint //... .setHeader("Sforce-Limit-Info", constant("api-usage")) .to("salesforce:getGlobalObjects") .to(myProcessor); // myProcessor will receive `Sforce-Limit-Info` header on the outbound // message class MyProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); String apiLimits = in.getHeader("Sforce-Limit-Info", String.class); } } In addition, HTTP response status code and text are available as headers Exchange.HTTP_RESPONSE_CODE and Exchange.HTTP_RESPONSE_TEXT . 54.7. Supported Salesforce APIs The component supports the following Salesforce APIs Producer endpoints can use the following APIs. Most of the APIs process one record at a time, the Query API can retrieve multiple Records. 54.7.1. Rest API You can use the following for operationName : getVersions - Gets supported Salesforce REST API versions getResources - Gets available Salesforce REST Resource endpoints getGlobalObjects - Gets metadata for all available SObject types getBasicInfo - Gets basic metadata for a specific SObject type getDescription - Gets comprehensive metadata for a specific SObject type getSObject - Gets an SObject using its Salesforce Id createSObject - Creates an SObject updateSObject - Updates an SObject using Id deleteSObject - Deletes an SObject using Id getSObjectWithId - Gets an SObject using an external (user defined) id field upsertSObject - Updates or inserts an SObject using an external id deleteSObjectWithId - Deletes an SObject using an external id query - Runs a Salesforce SOQL query queryMore - Retrieves more results (in case of large number of results) using result link returned from the 'query' API search - Runs a Salesforce SOSL query limits - fetching organization API usage limits recent - fetching recent items approval - submit a record or records (batch) for approval process approvals - fetch a list of all approval processes composite - submit up to 25 possibly related REST requests and receive individual responses. It's also possible to use "raw" composite without limitation. composite-tree - create up to 200 records with parent-child relationships (up to 5 levels) in one go composite-batch - submit a composition of requests in batch compositeRetrieveSObjectCollections - Retrieve one or more records of the same object type. compositeCreateSObjectCollections - Add up to 200 records, returning a list of SaveSObjectResult objects. compositeUpdateSObjectCollections - Update up to 200 records, returning a list of SaveSObjectResult objects. compositeUpsertSObjectCollections - Create or update (upsert) up to 200 records based on an external ID field. Returns a list of UpsertSObjectResult objects. compositeDeleteSObjectCollections - Delete up to 200 records, returning a list of SaveSObjectResult objects. queryAll - Runs a SOQL query. 
It returns the results that are deleted because of a merge (merges up to three records into one of the records, deletes the others, and reparents any related records) or delete. Also returns the information about archived Task and Event records. getBlobField - Retrieves the specified blob field from an individual record. apexCall - Executes a user defined APEX REST API call. raw - Send requests to salesforce and have full, raw control over endpoint, parameters, body, etc. For example, the following producer endpoint uses the upsertSObject API, with the sObjectIdName parameter specifying 'Name' as the external id field. The request message body should be an SObject DTO generated using the maven plugin. The response message will either be null if an existing record was updated, or CreateSObjectResult with an id of the new record, or a list of errors while creating the new object. ...to("salesforce:upsertSObject?sObjectIdName=Name")... 54.7.2. Bulk 2.0 API The Bulk 2.0 API has a simplified model over the original Bulk API. Use it to quickly load a large amount of data into salesforce, or query a large amount of data out of salesforce. Data must be provided in CSV format. The minimum API version for Bulk 2.0 is v41.0. The minimum API version for Bulk Queries is v47.0. DTO classes mentioned below are from the org.apache.camel.component.salesforce.api.dto.bulkv2 package. The following operations are supported: bulk2CreateJob - Create a bulk job. Supply an instance of Job in the message body. bulk2GetJob - Get an existing Job. jobId parameter is required. bulk2CreateBatch - Add a Batch of CSV records to a job. Supply CSV data in the message body. The first row must contain headers. jobId parameter is required. bulk2CloseJob - Close a job. You must close the job in order for it to be processed or aborted/deleted. jobId parameter is required. bulk2AbortJob - Abort a job. jobId parameter is required. bulk2DeleteJob - Delete a job. jobId parameter is required. bulk2GetSuccessfulResults - Get successful results for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required. bulk2GetFailedResults - Get failed results for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required. bulk2GetUnprocessedRecords - Get unprocessed records for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required. bulk2GetAllJobs - Get all jobs. Response body is an instance of Jobs . If the done property is false, there are additional pages to fetch, and the nextRecordsUrl property contains the value to be set in the queryLocator parameter on subsequent calls. bulk2CreateQueryJob - Create a bulk query job. Supply an instance of QueryJob in the message body. bulk2GetQueryJob - Get a bulk query job. jobId parameter is required. bulk2GetQueryJobResults - Get bulk query job results. jobId parameter is required. Accepts maxRecords and locator parameters. Response message headers include Sforce-NumberOfRecords and Sforce-Locator headers. The value of Sforce-Locator can be passed into subsequent calls via the locator parameter. bulk2AbortQueryJob - Abort a bulk query job. jobId parameter is required. bulk2DeleteQueryJob - Delete a bulk query job. jobId parameter is required. bulk2GetAllQueryJobs - Get all jobs. Response body is an instance of QueryJobs . 
If the done property is false, there are additional pages to fetch, and the nextRecordsUrl property contains the value to be set in the queryLocator parameter on subsequent calls. 54.7.3. Rest Bulk (original) API Producer endpoints can use the following APIs. All Job data formats, i.e. xml, csv, zip/xml, and zip/csv are supported. The request and response have to be marshalled/unmarshalled by the route. Usually the request will be some stream source like a CSV file, and the response may also be saved to a file to be correlated with the request. You can use the following for operationName : createJob - Creates a Salesforce Bulk Job. Must supply a JobInfo instance in body. PK Chunking is supported via the pkChunking* options. See an explanation here . getJob - Gets a Job using its Salesforce Id closeJob - Closes a Job abortJob - Aborts a Job createBatch - Submits a Batch within a Bulk Job getBatch - Gets a Batch using Id getAllBatches - Gets all Batches for a Bulk Job Id getRequest - Gets Request data (XML/CSV) for a Batch getResults - Gets the results of the Batch when its complete createBatchQuery - Creates a Batch from an SOQL query getQueryResultIds - Gets a list of Result Ids for a Batch Query getQueryResult - Gets results for a Result Id getRecentReports - Gets up to 200 of the reports you most recently viewed by sending a GET request to the Report List resource. getReportDescription - Retrieves the report, report type, and related metadata for a report, either in a tabular or summary or matrix format. executeSyncReport - Runs a report synchronously with or without changing filters and returns the latest summary data. executeAsyncReport - Runs an instance of a report asynchronously with or without filters and returns the summary data with or without details. getReportInstances - Returns a list of instances for a report that you requested to be run asynchronously. Each item in the list is treated as a separate instance of the report. getReportResults : Contains the results of running a report. For example, the following producer endpoint uses the createBatch API to create a Job Batch. The in message must contain a body that can be converted into an InputStream (usually UTF-8 CSV or XML content from a file, etc.) and header fields 'jobId' for the Job and 'contentType' for the Job content type, which can be XML, CSV, ZIP_XML or ZIP_CSV. The put message body will contain BatchInfo on success, or throw a SalesforceException on error. ...to("salesforce:createBatch").. 54.7.4. Rest Streaming API Consumer endpoints can use the following syntax for streaming endpoints to receive Salesforce notifications on create/update. To create and subscribe to a topic from("salesforce:CamelTestTopic?notifyForFields=ALL&notifyForOperations=ALL&sObjectName=Merchandise__c&updateTopic=true&sObjectQuery=SELECT Id, Name FROM Merchandise__c")... To subscribe to an existing topic from("salesforce:CamelTestTopic&sObjectName=Merchandise__c")... 54.7.5. Platform events To emit a platform event use createSObject operation. And set the message body can be JSON string or InputStream with key-value data - in that case sObjectName needs to be set to the API name of the event, or a class that extends from AbstractDTOBase with the appropriate class name for the event. For example using a DTO: class Order_Event__e extends AbstractDTOBase { @JsonProperty("OrderNumber") private String orderNumber; // ... 
other properties and getters/setters } from("timer:tick") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = "ORD" + exchange.getProperty(Exchange.TIMER_COUNTER); Order_Event__e event = new Order_Event__e(); event.setOrderNumber(orderNumber); in.setBody(event); }) .to("salesforce:createSObject"); Or using JSON event data: from("timer:tick") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = "ORD" + exchange.getProperty(Exchange.TIMER_COUNTER); in.setBody("{\"OrderNumber\":\"" + orderNumber + "\"}"); }) .to("salesforce:createSObject?sObjectName=Order_Event__e"); To receive platform events use the consumer endpoint with the API name of the platform event prefixed with event/ (or /event/ ), e.g.: salesforce:events/Order_Event__e . Processor consuming from that endpoint will receive either org.apache.camel.component.salesforce.api.dto.PlatformEvent object or org.cometd.bayeux.Message in the body depending on the rawPayload being false or true respectively. For example, in the simplest form to consume one event: PlatformEvent event = consumer.receiveBody("salesforce:event/Order_Event__e", PlatformEvent.class); 54.7.6. Change data capture events On the one hand, Salesforce could be configured to emit notifications for record changes of select objects. On the other hand, the Camel Salesforce component could react to such notifications, allowing for instance to synchronize those changes into an external system . The notifications of interest could be specified in the from("salesforce:XXX") clause of a Camel route via the subscription channel, e.g: from("salesforce:data/ChangeEvents?replayId=-1").log("being notified of all change events") from("salesforce:data/AccountChangeEvent?replayId=-1").log("being notified of change events for Account records") from("salesforce:data/Employee__ChangeEvent?replayId=-1").log("being notified of change events for Employee__c custom object") The received message contains either java.util.Map<String,Object> or org.cometd.bayeux.Message in the body depending on the rawPayload being false or true respectively. The CamelSalesforceChangeType header could be valued to one of CREATE , UPDATE , DELETE or UNDELETE . More details about how to use the Camel Salesforce component change data capture capabilities could be found in the ChangeEventsConsumerIntegrationTest . The Salesforce developer guide is a good fit to better know the subtleties of implementing a change data capture integration application. The dynamic nature of change event body fields, high level replication steps as well as security considerations could be of interest. 54.8. Examples 54.8.1. 
Uploading a document to a ContentWorkspace Create the ContentVersion in Java, using a Processor instance: public class ContentProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message message = exchange.getIn(); ContentVersion cv = new ContentVersion(); ContentWorkspace cw = getWorkspace(exchange); cv.setFirstPublishLocationId(cw.getId()); cv.setTitle("test document"); cv.setPathOnClient("test_doc.html"); byte[] document = message.getBody(byte[].class); ObjectMapper mapper = new ObjectMapper(); String enc = mapper.convertValue(document, String.class); cv.setVersionDataUrl(enc); message.setBody(cv); } protected ContentWorkspace getWorkspace(Exchange exchange) { // Look up the content workspace somehow, maybe use enrich() to add it to a // header that can be extracted here ---- } } Give the output from the processor to the Salesforce component: from("file:///home/camel/library") .process(new ContentProcessor()) // convert bytes from the file into a ContentVersion SObject // for the salesforce component .to("salesforce:createSObject"); 54.9. Using Salesforce Limits API With the salesforce:limits operation you can fetch API limits from Salesforce and then act upon the data received. The result of the salesforce:limits operation is mapped to the org.apache.camel.component.salesforce.api.dto.Limits class and can be used in custom processors or expressions. For instance, consider that you need to limit the API usage of Salesforce so that 10% of daily API requests is left for other routes. The body of the output message contains an instance of the org.apache.camel.component.salesforce.api.dto.Limits object that can be used in conjunction with the Content Based Router and the Spring Expression Language (SpEL) to choose when to perform queries. Notice how multiplying 1.0 by the integer value held in body.dailyApiRequests.remaining makes the expression evaluate with floating point arithmetic; without it, it would end up performing integer division, which would result in either 0 (some API limits consumed) or 1 (no API limits consumed). from("direct:querySalesforce") .to("salesforce:limits") .choice() .when(spel("#{1.0 * body.dailyApiRequests.remaining / body.dailyApiRequests.max < 0.1}")) .to("salesforce:query?...") .otherwise() .setBody(constant("Used up Salesforce API limits, leaving 10% for critical routes")) .endChoice() 54.10. Working with approvals All the properties are named exactly the same as in the Salesforce REST API, prefixed with approval. . You can set approval properties by setting approval.PropertyName on the Endpoint; these will be used as a template, meaning that any property not present in either body or header will be taken from the Endpoint configuration. Or you can set the approval template on the Endpoint by assigning the approval property to a reference to a bean in the Registry. You can also provide header values using the same approval.PropertyName in the incoming message headers. And finally, the body can contain one ApprovalRequest or an Iterable of ApprovalRequest objects to process as a batch.
The important thing to remember is the priority of the values specified in these three mechanisms: a value in the body takes precedence over any other, a value in the message header takes precedence over the template value, and a value from the template is used only if no other value was given in the header or body. For example, to send one record for approval using values in headers, use the following. Given a route: from("direct:example1")// .setHeader("approval.ContextId", simple("${body['contextId']}")) .setHeader("approval.NextApproverIds", simple("${body['nextApproverIds']}")) .to("salesforce:approval?"// + "approval.actionType=Submit"// + "&approval.comments=this is a test"// + "&approval.processDefinitionNameOrId=Test_Account_Process"// + "&approval.skipEntryCriteria=true"); You could send a record for approval using: final Map<String, String> body = new HashMap<>(); body.put("contextId", accountIds.iterator().next()); body.put("nextApproverIds", userId); final ApprovalResult result = template.requestBody("direct:example1", body, ApprovalResult.class); 54.11. Using Salesforce Recent Items API To fetch the recent items, use the salesforce:recent operation. This operation returns a java.util.List of org.apache.camel.component.salesforce.api.dto.RecentItem objects ( List<RecentItem> ) that in turn contain the Id , Name and Attributes (with type and url properties). You can limit the number of returned items by setting the limit parameter to the maximum number of records to return. For example: from("direct:fetchRecentItems") .to("salesforce:recent") .split().body() .log("${body.name} at ${body.attributes.url}"); 54.12. Using Salesforce Composite API to submit SObject tree To create up to 200 records, including parent-child relationships, use the salesforce:composite-tree operation. This requires an instance of org.apache.camel.component.salesforce.api.dto.composite.SObjectTree in the input message and returns the same tree of objects in the output message. The org.apache.camel.component.salesforce.api.dto.AbstractSObjectBase instances within the tree get updated with the identifier values ( Id property), or their corresponding org.apache.camel.component.salesforce.api.dto.composite.SObjectNode is populated with errors on failure. Note that the operation can succeed for some records and fail for others, so you need to manually check for errors. The easiest way to use this functionality is to use the DTOs generated by the camel-salesforce-maven-plugin , but you also have the option of customizing the references that identify each object in the tree, for instance primary keys from your database. Let's look at an example: Account account = ... Contact president = ... Contact marketing = ... Account anotherAccount = ... Contact sales = ... Asset someAsset = ... // build the tree SObjectTree request = new SObjectTree(); request.addObject(account).addChildren(president, marketing); request.addObject(anotherAccount).addChild(sales).addChild(someAsset); final SObjectTree response = template.requestBody("salesforce:composite-tree", request, SObjectTree.class); final Map<Boolean, List<SObjectNode>> result = response.allNodes() .collect(Collectors.groupingBy(SObjectNode::hasErrors)); final List<SObjectNode> withErrors = result.get(true); final List<SObjectNode> succeeded = result.get(false); final String firstId = succeeded.get(0).getId(); 54.13.
Using Salesforce Composite API to submit multiple requests in a batch The Composite API batch operation ( composite-batch ) allows you to accumulate multiple requests in a batch and then submit them in one go, saving the round trip cost of multiple individual requests. Each response is then received in a list of responses with the order preserved, so that the n-th request's response is in the n-th place of the response. Note The results can vary from API to API, so the result of the request is given as a java.lang.Object . In most cases the result will be a java.util.Map with string keys and values, or another java.util.Map as a value. Requests are made in JSON format and hold some type information (i.e. it is known what values are strings and what values are numbers). Let's look at an example: final String accountId = ... final SObjectBatch batch = new SObjectBatch("38.0"); final Account updates = new Account(); updates.setName("NewName"); batch.addUpdate("Account", accountId, updates); final Account newAccount = new Account(); newAccount.setName("Account created from Composite batch API"); batch.addCreate(newAccount); batch.addGet("Account", accountId, "Name", "BillingPostalCode"); batch.addDelete("Account", accountId); final SObjectBatchResponse response = template.requestBody("salesforce:composite-batch", batch, SObjectBatchResponse.class); boolean hasErrors = response.hasErrors(); // if any of the requests has resulted in either 4xx or 5xx HTTP status final List<SObjectBatchResult> results = response.getResults(); // results of the four operations sent in the batch final SObjectBatchResult updateResult = results.get(0); // update result final int updateStatus = updateResult.getStatusCode(); // probably 204 final Object updateResultData = updateResult.getResult(); // probably null final SObjectBatchResult createResult = results.get(1); // create result @SuppressWarnings("unchecked") final Map<String, Object> createData = (Map<String, Object>) createResult.getResult(); final String newAccountId = (String) createData.get("id"); // id of the new account, this is for JSON, for XML it would be createData.get("Result").get("id") final SObjectBatchResult retrieveResult = results.get(2); // retrieve result @SuppressWarnings("unchecked") final Map<String, Object> retrieveData = (Map<String, Object>) retrieveResult.getResult(); final String accountName = (String) retrieveData.get("Name"); // Name of the retrieved account, this is for JSON, for XML it would be retrieveData.get("Account").get("Name") final String accountBillingPostalCode = (String) retrieveData.get("BillingPostalCode"); // BillingPostalCode of the retrieved account, this is for JSON, for XML it would be retrieveData.get("Account").get("BillingPostalCode") final SObjectBatchResult deleteResult = results.get(3); // delete result final int deleteStatus = deleteResult.getStatusCode(); // probably 204 final Object deleteResultData = deleteResult.getResult(); // probably null 54.14. Using Salesforce Composite API to submit multiple chained requests The composite operation allows submitting up to 25 requests that can be chained together, for instance an identifier generated in one request can be used in a subsequent request. Individual requests and responses are linked with the provided reference . Note Composite API supports only JSON payloads. Note As with the batch API, the results can vary from API to API, so the result of the request is given as a java.lang.Object . In most cases the result will be a java.util.Map with string keys and values, or another java.util.Map as a value.
Requests are made in JSON format and hold some type information (i.e. it is known what values are strings and what values are numbers). Let's look at an example: SObjectComposite composite = new SObjectComposite("38.0", true); // first an update operation on an existing Account final Account updateAccount = new TestAccount(); updateAccount.setName("Salesforce"); updateAccount.setBillingStreet("Landmark @ 1 Market Street"); updateAccount.setBillingCity("San Francisco"); updateAccount.setBillingState("California"); updateAccount.setIndustry(Account_IndustryEnum.TECHNOLOGY); composite.addUpdate("Account", "001xx000003DIpcAAG", updateAccount, "UpdatedAccount"); final Contact newContact = new TestContact(); newContact.setLastName("John Doe"); newContact.setPhone("1234567890"); composite.addCreate(newContact, "NewContact"); final AccountContactJunction__c junction = new AccountContactJunction__c(); junction.setAccount__c("001xx000003DIpcAAG"); junction.setContactId__c("@{NewContact.id}"); composite.addCreate(junction, "JunctionRecord"); final SObjectCompositeResponse response = template.requestBody("salesforce:composite", composite, SObjectCompositeResponse.class); final List<SObjectCompositeResult> results = response.getCompositeResponse(); final SObjectCompositeResult accountUpdateResult = results.stream().filter(r -> "UpdatedAccount".equals(r.getReferenceId())).findFirst().get(); final int statusCode = accountUpdateResult.getHttpStatusCode(); // should be 200 final Map<String, ?> accountUpdateBody = accountUpdateResult.getBody(); final SObjectCompositeResult contactCreationResult = results.stream().filter(r -> "JunctionRecord".equals(r.getReferenceId())).findFirst().get(); 54.15. Using "raw" Salesforce composite It's possible to directly call Salesforce composite by preparing the Salesforce JSON request in the route thanks to the rawPayload option. For instance, you can have the following route: The route creates the body directly as JSON and submits it to the salesforce endpoint using the rawPayload=true option. With this approach, you have complete control over the Salesforce request. POST is the default HTTP method used to send raw Composite requests to salesforce. Use the compositeMethod option to override it to the other supported value, GET , which returns a list of other available composite resources. 54.16. Using Raw Operation Send HTTP requests to salesforce with full, raw control of all aspects of the call. Any serialization or deserialization of request and response bodies must be performed in the route. The Content-Type HTTP header will be automatically set based on the format option, but this can be overridden with the rawHttpHeaders option. Parameter Type Description Default Required request body String or InputStream Body of the HTTP request rawPath String The portion of the endpoint URL after the domain name, e.g., '/services/data/v51.0/sobjects/Account/' x rawMethod String The HTTP method x rawQueryParameters String Comma separated list of message headers to include as query parameters. Do not url-encode values as this will be done automatically. rawHttpHeaders String Comma separated list of message headers to include as HTTP headers 54.16.1. Query example In this example, we'll send a query to the REST API. The query must be passed in a URL parameter called "q", so we'll create a message header called q and tell the raw operation to include that message header as a URL parameter: 54.16.2. SObject example In this example, we'll pass a Contact to the REST API in a create operation.
Since the raw operation does not perform any serialization, we make sure to pass XML in the message body. The response is: 54.17. Using Composite SObject Collections The SObject Collections API executes actions on multiple records in one request. Use sObject Collections to reduce the number of round-trips between the client and server. The entire request counts as a single call toward your API limits. This resource is available in API version 42.0 and later. SObject records (aka DTOs) supplied to these operations must be instances of subclasses of AbstractDescribedSObjectBase . See the Maven Plugin section for information on generating these DTO classes. These operations serialize supplied DTOs to JSON. 54.17.1. compositeRetrieveSObjectCollections Retrieve one or more records of the same object type. Parameter Type Description Default Required ids List of String or comma-separated string A list of one or more IDs of the objects to return. All IDs must belong to the same object type. x fields List of String or comma-separated string A list of fields to include in the response. The field names you specify must be valid, and you must have read-level permissions to each field. x sObjectName String Type of SObject, e.g. Account x sObjectClass String Fully-qualified class name of DTO class to use for deserializing the response. Required if sObjectName parameter does not resolve to a class that exists in the package specified by the package option. 54.17.2. compositeCreateSObjectCollections Add up to 200 records, returning a list of SaveSObjectResult objects. Mixed SObject types are supported. Parameter Type Description Default Required request body List of SObject A list of SObjects to create x allOrNone boolean Indicates whether to roll back the entire request when the creation of any object fails (true) or to continue with the independent creation of other objects in the request. false 54.17.3. compositeUpdateSObjectCollections Update up to 200 records, returning a list of SaveSObjectResult objects. Mixed SObject types are supported. Parameter Type Description Default Required request body List of SObject A list of SObjects to update x allOrNone boolean Indicates whether to roll back the entire request when the update of any object fails (true) or to continue with the independent update of other objects in the request. false 54.17.4. compositeUpsertSObjectCollections Create or update (upsert) up to 200 records based on an external ID field, returning a list of UpsertSObjectResult objects. Mixed SObject types are not supported. Parameter Type Description Default Required request body List of SObject A list of SObjects to upsert x allOrNone boolean Indicates whether to roll back the entire request when the upsert of any object fails (true) or to continue with the independent upsert of other objects in the request. false sObjectName String Type of SObject, e.g. Account x sObjectIdName String Name of External ID field x 54.17.5. compositeDeleteSObjectCollections Delete up to 200 records, returning a list of DeleteSObjectResult objects. Mixed SObject types are supported. Parameter Type Description Default Required sObjectIds or request body List of String or comma-separated string A list of up to 200 IDs of objects to be deleted. x allOrNone boolean Indicates whether to roll back the entire request when the deletion of any object fails (true) or to continue with the independent deletion of other objects in the request. false 54.18.
Sending null values to salesforce By default, SObject fields with null values are not sent to salesforce. In order to send null values to salesforce, use the fieldsToNull property, as follows: accountSObject.getFieldsToNull().add("Site"); 54.19. Generating SOQL query strings org.apache.camel.component.salesforce.api.utils.QueryHelper contains helper methods to generate SOQL queries. For instance to fetch all custom fields from Account SObject you can simply generate the SOQL SELECT by invoking: String allCustomFieldsQuery = QueryHelper.queryToFetchFilteredFieldsOf(new Account(), SObjectField::isCustom); 54.20. Camel Salesforce Maven Plugin This Maven plugin generates DTOs for the Camel. For obvious security reasons it is recommended that the clientId, clientSecret, userName and password fields be not set in the pom.xml. The plugin should be configured for the rest of the properties, and can be executed using the following command: The generated DTOs use Jackson annotations. All Salesforce field types are supported. Date and time fields are mapped to java.time.ZonedDateTime by default, and picklist fields are mapped to generated Java Enumerations. Please refer to README.md for details on how to generate the DTO. 54.21. Spring Boot Auto-Configuration When using salesforce with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-salesforce-starter</artifactId> </dependency> The component supports 91 options, which are listed below. Name Description Default Type camel.component.salesforce.all-or-none Composite API option to indicate to rollback all records if any are not successful. false Boolean camel.component.salesforce.apex-method APEX method name. String camel.component.salesforce.apex-query-params Query params for APEX method. Map camel.component.salesforce.apex-url APEX method URL. String camel.component.salesforce.api-version Salesforce API version. 53.0 String camel.component.salesforce.authentication-type Explicit authentication method to be used, one of USERNAME_PASSWORD, REFRESH_TOKEN or JWT. Salesforce component can auto-determine the authentication method to use from the properties set, set this property to eliminate any ambiguity. AuthenticationType camel.component.salesforce.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.salesforce.backoff-increment Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect. The option is a long type. 1000 Long camel.component.salesforce.batch-id Bulk API Batch ID. String camel.component.salesforce.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
false Boolean camel.component.salesforce.client-id OAuth Consumer Key of the connected app configured in the Salesforce instance setup. Typically a connected app needs to be configured but one can be provided by installing a package. String camel.component.salesforce.client-secret OAuth Consumer Secret of the connected app configured in the Salesforce instance setup. String camel.component.salesforce.composite-method Composite (raw) method. String camel.component.salesforce.config Global endpoint configuration - use to set values that are common to all endpoints. The option is a org.apache.camel.component.salesforce.SalesforceEndpointConfig type. SalesforceEndpointConfig camel.component.salesforce.content-type Bulk API content type, one of XML, CSV, ZIP_XML, ZIP_CSV. ContentType camel.component.salesforce.default-replay-id Default replayId setting if no value is found in initialReplayIdMap. -1 Long camel.component.salesforce.enabled Whether to enable auto configuration of the salesforce component. This is enabled by default. Boolean camel.component.salesforce.fall-back-replay-id ReplayId to fall back to after an Invalid Replay Id response. -1 Long camel.component.salesforce.format Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON. As of Camel 3.12, this option only applies to the Raw operation. PayloadFormat camel.component.salesforce.http-client Custom Jetty Http Client to use to connect to Salesforce. The option is a org.apache.camel.component.salesforce.SalesforceHttpClient type. SalesforceHttpClient camel.component.salesforce.http-client-connection-timeout Connection timeout used by the HttpClient when connecting to the Salesforce server. 60000 Long camel.component.salesforce.http-client-idle-timeout Timeout used by the HttpClient when waiting for response from the Salesforce server. 10000 Long camel.component.salesforce.http-client-properties Used to set any properties that can be configured on the underlying HTTP client. Have a look at properties of SalesforceHttpClient and the Jetty HttpClient for all available options. Map camel.component.salesforce.http-max-content-length Max content length of an HTTP response. Integer camel.component.salesforce.http-proxy-auth-uri Used in authentication against the HTTP proxy server, needs to match the URI of the proxy server in order for the httpProxyUsername and httpProxyPassword to be used for authentication. String camel.component.salesforce.http-proxy-excluded-addresses A list of addresses for which HTTP proxy server should not be used. Set camel.component.salesforce.http-proxy-host Hostname of the HTTP proxy server to use. String camel.component.salesforce.http-proxy-included-addresses A list of addresses for which HTTP proxy server should be used. Set camel.component.salesforce.http-proxy-password Password to use to authenticate against the HTTP proxy server. String camel.component.salesforce.http-proxy-port Port number of the HTTP proxy server to use. Integer camel.component.salesforce.http-proxy-realm Realm of the proxy server, used in preemptive Basic/Digest authentication methods against the HTTP proxy server. String camel.component.salesforce.http-proxy-secure If set to false disables the use of TLS when accessing the HTTP proxy. true Boolean camel.component.salesforce.http-proxy-socks4 If set to true the configures the HTTP proxy to use as a SOCKS4 proxy. 
false Boolean camel.component.salesforce.http-proxy-use-digest-auth If set to true Digest authentication will be used when authenticating to the HTTP proxy, otherwise Basic authorization method will be used. false Boolean camel.component.salesforce.http-proxy-username Username to use to authenticate against the HTTP proxy server. String camel.component.salesforce.http-request-buffer-size HTTP request buffer size. May need to be increased for large SOQL queries. 8192 Integer camel.component.salesforce.include-details Include details in Salesforce1 Analytics report, defaults to false. Boolean camel.component.salesforce.initial-replay-id-map Replay IDs to start from per channel name. Map camel.component.salesforce.instance-id Salesforce1 Analytics report execution instance ID. String camel.component.salesforce.instance-url URL of the Salesforce instance used after authentication, by default received from Salesforce on successful authentication. String camel.component.salesforce.job-id Bulk API Job ID. String camel.component.salesforce.jwt-audience Value to use for the Audience claim (aud) when using OAuth JWT flow. If not set, the login URL will be used, which is appropriate in most cases. String camel.component.salesforce.keystore KeyStore parameters to use in OAuth JWT flow. The KeyStore should contain only one entry with private key and certificate. Salesforce does not verify the certificate chain, so this can easily be a selfsigned certificate. Make sure that you upload the certificate to the corresponding connected app. The option is a org.apache.camel.support.jsse.KeyStoreParameters type. KeyStoreParameters camel.component.salesforce.lazy-login If set to true prevents the component from authenticating to Salesforce with the start of the component. You would generally set this to the (default) false and authenticate early and be immediately aware of any authentication issues. false Boolean camel.component.salesforce.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.salesforce.limit Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation. Integer camel.component.salesforce.locator Locator provided by salesforce Bulk 2.0 API for use in getting results for a Query job. String camel.component.salesforce.login-config All authentication configuration in one nested bean, all properties set there can be set directly on the component as well. The option is a org.apache.camel.component.salesforce.SalesforceLoginConfig type. SalesforceLoginConfig camel.component.salesforce.login-url URL of the Salesforce instance used for authentication, by default set to . String camel.component.salesforce.long-polling-transport-properties Used to set any properties that can be configured on the LongPollingTransport used by the BayeuxClient (CometD) used by the streaming api. 
Map camel.component.salesforce.max-backoff Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect. The option is a long type. 30000 Long camel.component.salesforce.max-records The maximum number of records to retrieve per set of results for a Bulk 2.0 Query. The request is still subject to the size limits. If you are working with a very large number of query results, you may experience a timeout before receiving all the data from Salesforce. To prevent a timeout, specify the maximum number of records your client is expecting to receive in the maxRecords parameter. This splits the results into smaller sets with this value as the maximum size. Integer camel.component.salesforce.not-found-behaviour Sets the behaviour of 404 not found status received from Salesforce API. Should the body be set to NULL NotFoundBehaviour#NULL or should a exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default. NotFoundBehaviour camel.component.salesforce.notify-for-fields Notify for fields, options are ALL, REFERENCED, SELECT, WHERE. NotifyForFieldsEnum camel.component.salesforce.notify-for-operation-create Notify for create operation, defaults to false (API version = 29.0). Boolean camel.component.salesforce.notify-for-operation-delete Notify for delete operation, defaults to false (API version = 29.0). Boolean camel.component.salesforce.notify-for-operation-undelete Notify for un-delete operation, defaults to false (API version = 29.0). Boolean camel.component.salesforce.notify-for-operation-update Notify for update operation, defaults to false (API version = 29.0). Boolean camel.component.salesforce.notify-for-operations Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version 29.0). NotifyForOperationsEnum camel.component.salesforce.object-mapper Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects. The option is a com.fasterxml.jackson.databind.ObjectMapper type. ObjectMapper camel.component.salesforce.packages In what packages are the generated DTO classes. Typically the classes would be generated using camel-salesforce-maven-plugin. Set it if using the generated DTOs to gain the benefit of using short SObject names in parameters/header values. Multiple packages can be separated by comma. String camel.component.salesforce.password Password used in OAuth flow to gain access to access token. It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. Make sure that you append security token to the end of the password if using one. String camel.component.salesforce.pk-chunking Use PK Chunking. Only for use in original Bulk API. Bulk 2.0 API performs PK chunking automatically, if necessary. Boolean camel.component.salesforce.pk-chunking-chunk-size Chunk size for use with PK Chunking. If unspecified, salesforce default is 100,000. Maximum size is 250,000. Integer camel.component.salesforce.pk-chunking-parent Specifies the parent object when you're enabling PK chunking for queries on sharing objects. The chunks are based on the parent object's records rather than the sharing object's records. For example, when querying on AccountShare, specify Account as the parent object. PK chunking is supported for sharing objects as long as the parent object is supported. 
String camel.component.salesforce.pk-chunking-start-row Specifies the 15-character or 18-character record ID to be used as the lower boundary for the first chunk. Use this parameter to specify a starting ID when restarting a job that failed between batches. String camel.component.salesforce.query-locator Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records. String camel.component.salesforce.raw-http-headers Comma separated list of message headers to include as HTTP parameters for Raw operation. String camel.component.salesforce.raw-method HTTP method to use for the Raw operation. String camel.component.salesforce.raw-path The portion of the endpoint URL after the domain name. E.g., '/services/data/v52.0/sobjects/Account/'. String camel.component.salesforce.raw-payload Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default. false Boolean camel.component.salesforce.raw-query-parameters Comma separated list of message headers to include as query parameters for Raw operation. Do not url-encode values as this will be done automatically. String camel.component.salesforce.refresh-token Refresh token already obtained in the refresh token OAuth flow. One needs to setup a web application and configure a callback URL to receive the refresh token, or configure using the builtin callback at and then retrive the refresh_token from the URL at the end of the flow. Note that in development organizations Salesforce allows hosting the callback web application at localhost. String camel.component.salesforce.report-id Salesforce1 Analytics report Id. String camel.component.salesforce.report-metadata Salesforce1 Analytics report metadata for filtering. The option is a org.apache.camel.component.salesforce.api.dto.analytics.reports.ReportMetadata type. ReportMetadata camel.component.salesforce.result-id Bulk API Result ID. String camel.component.salesforce.s-object-blob-field-name SObject blob field name. String camel.component.salesforce.s-object-class Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin. String camel.component.salesforce.s-object-fields SObject fields to retrieve. String camel.component.salesforce.s-object-id SObject ID if required by API. String camel.component.salesforce.s-object-id-name SObject external ID field name. String camel.component.salesforce.s-object-id-value SObject external ID field value. String camel.component.salesforce.s-object-name SObject name if required or supported by API. String camel.component.salesforce.s-object-query Salesforce SOQL query string. String camel.component.salesforce.s-object-search Salesforce SOSL search string. String camel.component.salesforce.ssl-context-parameters SSL parameters to use, see SSLContextParameters class for all available options. The option is a org.apache.camel.support.jsse.SSLContextParameters type. SSLContextParameters camel.component.salesforce.update-topic Whether to update an existing Push Topic when using the Streaming API, defaults to false. false Boolean camel.component.salesforce.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.salesforce.user-name Username used in OAuth flow to gain access to access token. 
It's easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. String camel.component.salesforce.worker-pool-max-size Maximum size of the thread pool used to handle HTTP responses. 20 Integer camel.component.salesforce.worker-pool-size Size of the thread pool used to handle HTTP responses. 10 Integer
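As an illustration of how the options above come together, the following is a minimal sketch of an application.properties configuration for the starter, assuming the Refresh Token flow. The property names are taken from the table above; the values (consumer key, secret, token and login URL) are placeholders you would replace with your own connected app details:

# hypothetical values - replace with your own connected app settings
camel.component.salesforce.authentication-type = REFRESH_TOKEN
camel.component.salesforce.client-id = <your-consumer-key>
camel.component.salesforce.client-secret = <your-consumer-secret>
camel.component.salesforce.refresh-token = <your-refresh-token>
camel.component.salesforce.login-url = https://login.salesforce.com
camel.component.salesforce.api-version = 53.0
camel.component.salesforce.lazy-login = false

With such a configuration in place, routes can refer to salesforce: endpoints without repeating any authentication options at the endpoint level.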
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-salesforce</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>", "salesforce:operationName:topicName", "salesforce:topic?options", "salesforce:operationName?options", "// in your Camel route set the header before Salesforce endpoint // .setHeader(\"Sforce-Limit-Info\", constant(\"api-usage\")) .to(\"salesforce:getGlobalObjects\") .to(myProcessor); // myProcessor will receive `Sforce-Limit-Info` header on the outbound // message class MyProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); String apiLimits = in.getHeader(\"Sforce-Limit-Info\", String.class); } }", "...to(\"salesforce:upsertSObject?sObjectIdName=Name\")", "...to(\"salesforce:createBatch\")..", "from(\"salesforce:CamelTestTopic?notifyForFields=ALL&notifyForOperations=ALL&sObjectName=Merchandise__c&updateTopic=true&sObjectQuery=SELECT Id, Name FROM Merchandise__c\")", "from(\"salesforce:CamelTestTopic&sObjectName=Merchandise__c\")", "class Order_Event__e extends AbstractDTOBase { @JsonProperty(\"OrderNumber\") private String orderNumber; // ... other properties and getters/setters } from(\"timer:tick\") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = \"ORD\" + exchange.getProperty(Exchange.TIMER_COUNTER); Order_Event__e event = new Order_Event__e(); event.setOrderNumber(orderNumber); in.setBody(event); }) .to(\"salesforce:createSObject\");", "from(\"timer:tick\") .process(exchange -> { final Message in = exchange.getIn(); String orderNumber = \"ORD\" + exchange.getProperty(Exchange.TIMER_COUNTER); in.setBody(\"{\\\"OrderNumber\\\":\\\"\" + orderNumber + \"\\\"}\"); }) .to(\"salesforce:createSObject?sObjectName=Order_Event__e\");", "PlatformEvent event = consumer.receiveBody(\"salesforce:event/Order_Event__e\", PlatformEvent.class);", "from(\"salesforce:data/ChangeEvents?replayId=-1\").log(\"being notified of all change events\") from(\"salesforce:data/AccountChangeEvent?replayId=-1\").log(\"being notified of change events for Account records\") from(\"salesforce:data/Employee__ChangeEvent?replayId=-1\").log(\"being notified of change events for Employee__c custom object\")", "public class ContentProcessor implements Processor { public void process(Exchange exchange) throws Exception { Message message = exchange.getIn(); ContentVersion cv = new ContentVersion(); ContentWorkspace cw = getWorkspace(exchange); cv.setFirstPublishLocationId(cw.getId()); cv.setTitle(\"test document\"); cv.setPathOnClient(\"test_doc.html\"); byte[] document = message.getBody(byte[].class); ObjectMapper mapper = new ObjectMapper(); String enc = mapper.convertValue(document, String.class); cv.setVersionDataUrl(enc); message.setBody(cv); } protected ContentWorkspace getWorkSpace(Exchange exchange) { // Look up the content workspace somehow, maybe use enrich() to add it to a // header that can be extracted here ---- } }", "from(\"file:///home/camel/library\") .to(new ContentProcessor()) // convert bytes from the file into a ContentVersion SObject // for the salesforce component .to(\"salesforce:createSObject\");", "from(\"direct:querySalesforce\") .to(\"salesforce:limits\") .choice() .when(spel(\"#{1.0 * body.dailyApiRequests.remaining / body.dailyApiRequests.max < 0.1}\")) .to(\"salesforce:query?...\") .otherwise() .setBody(constant(\"Used up Salesforce API limits, leaving 10% for critical routes\")) .endChoice()", 
"from(\"direct:example1\")// .setHeader(\"approval.ContextId\", simple(\"USD{body['contextId']}\")) .setHeader(\"approval.NextApproverIds\", simple(\"USD{body['nextApproverIds']}\")) .to(\"salesforce:approval?\"// + \"approval.actionType=Submit\"// + \"&approval.comments=this is a test\"// + \"&approval.processDefinitionNameOrId=Test_Account_Process\"// + \"&approval.skipEntryCriteria=true\");", "final Map<String, String> body = new HashMap<>(); body.put(\"contextId\", accountIds.iterator().next()); body.put(\"nextApproverIds\", userId); final ApprovalResult result = template.requestBody(\"direct:example1\", body, ApprovalResult.class);", "from(\"direct:fetchRecentItems\") to(\"salesforce:recent\") .split().body() .log(\"USD{body.name} at USD{body.attributes.url}\");", "Account account = Contact president = Contact marketing = Account anotherAccount = Contact sales = Asset someAsset = // build the tree SObjectTree request = new SObjectTree(); request.addObject(account).addChildren(president, marketing); request.addObject(anotherAccount).addChild(sales).addChild(someAsset); final SObjectTree response = template.requestBody(\"salesforce:composite-tree\", tree, SObjectTree.class); final Map<Boolean, List<SObjectNode>> result = response.allNodes() .collect(Collectors.groupingBy(SObjectNode::hasErrors)); final List<SObjectNode> withErrors = result.get(true); final List<SObjectNode> succeeded = result.get(false); final String firstId = succeeded.get(0).getId();", "final String acountId = final SObjectBatch batch = new SObjectBatch(\"38.0\"); final Account updates = new Account(); updates.setName(\"NewName\"); batch.addUpdate(\"Account\", accountId, updates); final Account newAccount = new Account(); newAccount.setName(\"Account created from Composite batch API\"); batch.addCreate(newAccount); batch.addGet(\"Account\", accountId, \"Name\", \"BillingPostalCode\"); batch.addDelete(\"Account\", accountId); final SObjectBatchResponse response = template.requestBody(\"salesforce:composite-batch\", batch, SObjectBatchResponse.class); boolean hasErrors = response.hasErrors(); // if any of the requests has resulted in either 4xx or 5xx HTTP status final List<SObjectBatchResult> results = response.getResults(); // results of three operations sent in batch final SObjectBatchResult updateResult = results.get(0); // update result final int updateStatus = updateResult.getStatusCode(); // probably 204 final Object updateResultData = updateResult.getResult(); // probably null final SObjectBatchResult createResult = results.get(1); // create result @SuppressWarnings(\"unchecked\") final Map<String, Object> createData = (Map<String, Object>) createResult.getResult(); final String newAccountId = createData.get(\"id\"); // id of the new account, this is for JSON, for XML it would be createData.get(\"Result\").get(\"id\") final SObjectBatchResult retrieveResult = results.get(2); // retrieve result @SuppressWarnings(\"unchecked\") final Map<String, Object> retrieveData = (Map<String, Object>) retrieveResult.getResult(); final String accountName = retrieveData.get(\"Name\"); // Name of the retrieved account, this is for JSON, for XML it would be createData.get(\"Account\").get(\"Name\") final String accountBillingPostalCode = retrieveData.get(\"BillingPostalCode\"); // Name of the retrieved account, this is for JSON, for XML it would be createData.get(\"Account\").get(\"BillingPostalCode\") final SObjectBatchResult deleteResult = results.get(3); // delete result final int updateStatus = deleteResult.getStatusCode(); 
// probably 204 final Object updateResultData = deleteResult.getResult(); // probably null", "SObjectComposite composite = new SObjectComposite(\"38.0\", true); // first insert operation via an external id final Account updateAccount = new TestAccount(); updateAccount.setName(\"Salesforce\"); updateAccount.setBillingStreet(\"Landmark @ 1 Market Street\"); updateAccount.setBillingCity(\"San Francisco\"); updateAccount.setBillingState(\"California\"); updateAccount.setIndustry(Account_IndustryEnum.TECHNOLOGY); composite.addUpdate(\"Account\", \"001xx000003DIpcAAG\", updateAccount, \"UpdatedAccount\"); final Contact newContact = new TestContact(); newContact.setLastName(\"John Doe\"); newContact.setPhone(\"1234567890\"); composite.addCreate(newContact, \"NewContact\"); final AccountContactJunction__c junction = new AccountContactJunction__c(); junction.setAccount__c(\"001xx000003DIpcAAG\"); junction.setContactId__c(\"@{NewContact.id}\"); composite.addCreate(junction, \"JunctionRecord\"); final SObjectCompositeResponse response = template.requestBody(\"salesforce:composite\", composite, SObjectCompositeResponse.class); final List<SObjectCompositeResult> results = response.getCompositeResponse(); final SObjectCompositeResult accountUpdateResult = results.stream().filter(r -> \"UpdatedAccount\".equals(r.getReferenceId())).findFirst().get() final int statusCode = accountUpdateResult.getHttpStatusCode(); // should be 200 final Map<String, ?> accountUpdateBody = accountUpdateResult.getBody(); final SObjectCompositeResult contactCreationResult = results.stream().filter(r -> \"JunctionRecord\".equals(r.getReferenceId())).findFirst().get()", "from(\"timer:fire?period=2000\").setBody(constant(\"{\\n\" + \" \\\"allOrNone\\\" : true,\\n\" + \" \\\"records\\\" : [ { \\n\" + \" \\\"attributes\\\" : {\\\"type\\\" : \\\"FOO\\\"},\\n\" + \" \\\"Name\\\" : \\\"123456789\\\",\\n\" + \" \\\"FOO\\\" : \\\"XXXX\\\",\\n\" + \" \\\"ACCOUNT\\\" : 2100.0\\n\" + \" \\\"ExternalID\\\" : \\\"EXTERNAL\\\"\\n\" \" }]\\n\" + \"}\") .to(\"salesforce:composite?rawPayload=true\") .log(\"USD{body}\");", "from(\"direct:queryExample\") .setHeader(\"q\", \"SELECT Id, LastName FROM Contact\") .to(\"salesforce:raw?format=JSON&rawMethod=GET&rawQueryParameters=q&rawPath=/services/data/v51.0/query\") // deserialize JSON results or handle in some other way", "from(\"direct:createAContact\") .setBody(constant(\"<Contact><LastName>TestLast</LastName></Contact>\")) .to(\"salesforce:raw?format=XML&rawMethod=POST&rawPath=/services/data/v51.0/sobjects/Contact\")", "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <Result> <id>0034x00000RnV6zAAF</id> <success>true</success> </Result>", "accountSObject.getFieldsToNull().add(\"Site\");", "String allCustomFieldsQuery = QueryHelper.queryToFetchFilteredFieldsOf(new Account(), SObjectField::isCustom);", "mvn camel-salesforce:generate -DcamelSalesforce.clientId=<clientid> -DcamelSalesforce.clientSecret=<clientsecret> -DcamelSalesforce.userName=<username> -DcamelSalesforce.password=<password>", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-salesforce-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-salesforce-component-starter
19.4.2. Procmail Recipes
19.4.2. Procmail Recipes New users often find the construction of recipes the most difficult part of learning to use Procmail. This difficulty is often attributed to recipes matching messages by using regular expressions which are used to specify qualifications for string matching. However, regular expressions are not very difficult to construct and even less difficult to understand when read. Additionally, the consistency of the way Procmail recipes are written, regardless of regular expressions, makes it easy to learn by example. To see example Procmail recipes, see Section 19.4.2.5, "Recipe Examples" . Procmail recipes take the following form: The first two characters in a Procmail recipe are a colon and a zero. Various flags can be placed after the zero to control how Procmail processes the recipe. A colon after the flags section specifies that a lockfile is created for this message. If a lockfile is created, the name can be specified by replacing lockfile-name . A recipe can contain several conditions to match against the message. If it has no conditions, every message matches the recipe. Regular expressions are placed in some conditions to facilitate message matching. If multiple conditions are used, they must all match for the action to be performed. Conditions are checked based on the flags set in the recipe's first line. Optional special characters placed after the asterisk character ( * ) can further control the condition. The action-to-perform argument specifies the action taken when the message matches one of the conditions. There can only be one action per recipe. In many cases, the name of a mailbox is used here to direct matching messages into that file, effectively sorting the email. Special action characters may also be used before the action is specified. See Section 19.4.2.4, "Special Conditions and Actions" for more information. 19.4.2.1. Delivering vs. Non-Delivering Recipes The action used if the recipe matches a particular message determines whether it is considered a delivering or non-delivering recipe. A delivering recipe contains an action that writes the message to a file, sends the message to another program, or forwards the message to another email address. A non-delivering recipe covers any other actions, such as a nesting block . A nesting block is a set of actions, contained in braces { } , that are performed on messages which match the recipe's conditions. Nesting blocks can be nested inside one another, providing greater control for identifying and performing actions on messages. When messages match a delivering recipe, Procmail performs the specified action and stops comparing the message against any other recipes. Messages that match non-delivering recipes continue to be compared against other recipes. 19.4.2.2. Flags Flags are essential to determine how or if a recipe's conditions are compared to a message. The egrep utility is used internally for matching of the conditions. The following flags are commonly used: A - Specifies that this recipe is only used if the recipe without an A or a flag also matched this message. a - Specifies that this recipe is only used if the recipe with an A or a flag also matched this message and was successfully completed. B - Parses the body of the message and looks for matching conditions. b - Uses the body in any resulting action, such as writing the message to a file or forwarding it. This is the default behavior. c - Generates a carbon copy of the email. 
This is useful with delivering recipes, since the required action can be performed on the message and a copy of the message can continue being processed in the rc files. D - Makes the egrep comparison case-sensitive. By default, the comparison process is not case-sensitive. E - While similar to the A flag, the conditions in the recipe are only compared to the message if the immediately preceding recipe without an E flag did not match. This is comparable to an else action. e - The recipe is compared to the message only if the action specified in the immediately preceding recipe fails. f - Uses the pipe as a filter. H - Parses the header of the message and looks for matching conditions. This is the default behavior. h - Uses the header in a resulting action. This is the default behavior. w - Tells Procmail to wait for the specified filter or program to finish, and reports whether or not it was successful before considering the message filtered. W - Is identical to w except that "Program failure" messages are suppressed. For a detailed list of additional flags, see the procmailrc man page. 19.4.2.3. Specifying a Local Lockfile Lockfiles are very useful with Procmail to ensure that more than one process does not try to alter a message simultaneously. Specify a local lockfile by placing a colon ( : ) after any flags on a recipe's first line. This creates a local lockfile based on the destination file name plus whatever has been set in the LOCKEXT global environment variable. Alternatively, specify the name of the local lockfile to be used with this recipe after the colon. 19.4.2.4. Special Conditions and Actions Special characters used before Procmail recipe conditions and actions change the way they are interpreted. The following characters may be used after the asterisk character ( * ) at the beginning of a recipe's condition line: ! - In the condition line, this character inverts the condition, causing a match to occur only if the condition does not match the message. < - Checks if the message is under a specified number of bytes. > - Checks if the message is over a specified number of bytes. The following characters are used to perform special actions: ! - In the action line, this character tells Procmail to forward the message to the specified email addresses. USD - Refers to a variable set earlier in the rc file. This is often used to set a common mailbox that is referred to by various recipes. | - Starts a specified program to process the message. { and } - Constructs a nesting block, used to contain additional recipes to apply to matching messages. If no special character is used at the beginning of the action line, Procmail assumes that the action line is specifying the mailbox in which to write the message. 19.4.2.5. Recipe Examples Procmail is an extremely flexible program, but as a result of this flexibility, composing Procmail recipes from scratch can be difficult for new users. The best way to develop the skills to build Procmail recipe conditions stems from a strong understanding of regular expressions combined with looking at many examples built by others. A thorough explanation of regular expressions is beyond the scope of this section. The structure of Procmail recipes and useful sample Procmail recipes can be found at various places on the Internet. The proper use and adaptation of regular expressions can be derived by viewing these recipe examples. In addition, introductory information about basic regular expression rules can be found in the grep(1) man page. 
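Before the basic examples that follow, here is one hedged sketch that exercises the special condition and action characters described above; the sender address, destination address, and size threshold are invented for illustration and are not part of the documented examples.

# Forward messages larger than 1 MB from a particular sender to another address
# (addresses and threshold are placeholders)
:0
* ^From:.*backup-reports@example\.com
* > 1048576
! archive-admin@example.com

Because the action forwards the message, this is a delivering recipe, so processing stops for any message that matches it.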
The following simple examples demonstrate the basic structure of Procmail recipes and can provide the foundation for more intricate constructions. A basic recipe may not even contain conditions, as is illustrated in the following example: The first line specifies that a local lockfile is to be created but does not specify a name, so Procmail uses the destination file name and appends the value specified in the LOCKEXT environment variable. No condition is specified, so every message matches this recipe and is placed in the single spool file called new-mail.spool , located within the directory specified by the MAILDIR environment variable. An MUA can then view messages in this file. A basic recipe, such as this, can be placed at the end of all rc files to direct messages to a default location. The following example matches messages from a specific email address and throws them away. With this example, any messages sent by [email protected] are sent to the /dev/null device, deleting them. Warning Be certain that rules are working as intended before sending messages to /dev/null for permanent deletion. If a recipe inadvertently catches unintended messages, and those messages disappear, it becomes difficult to troubleshoot the rule. A better solution is to point the recipe's action to a special mailbox, which can be checked from time to time to look for false positives. Once satisfied that no messages are accidentally being matched, delete the mailbox and direct the action to send the messages to /dev/null . The following recipe grabs email sent from a particular mailing list and places it in a specified folder. Any messages sent from the [email protected] mailing list are placed in the tuxlug mailbox automatically for the MUA. Note that the condition in this example matches the message if it has the mailing list's email address on the From , Cc , or To lines. Consult the many Procmail online resources available in Section 19.6, "Additional Resources" for more detailed and powerful recipes. 19.4.2.6. Spam Filters Because it is called by Sendmail, Postfix, and Fetchmail upon receiving new emails, Procmail can be used as a powerful tool for combating spam. This is particularly true when Procmail is used in conjunction with SpamAssassin. When used together, these two applications can quickly identify spam emails, and sort or destroy them. SpamAssassin uses header analysis, text analysis, blacklists, a spam-tracking database, and self-learning Bayesian spam analysis to quickly and accurately identify and tag spam. Note In order to use SpamAssassin , first ensure the spamassassin package is installed on your system by running, as root : For more information on installing packages with Yum, see Section 8.2.4, "Installing Packages" . The easiest way for a local user to use SpamAssassin is to place the following line near the top of the ~/.procmailrc file: The /etc/mail/spamassassin/spamassassin-default.rc file contains a simple Procmail rule that activates SpamAssassin for all incoming email. If an email is determined to be spam, it is tagged in the header as such and the title is prepended with the following pattern: The message body of the email is also prepended with a running tally of what elements caused it to be diagnosed as spam. To file email tagged as spam, a rule similar to the following can be used: This rule files all email tagged in the header as spam into a mailbox called spam .
Since SpamAssassin is a Perl script, it may be necessary on busy servers to use the binary SpamAssassin daemon ( spamd ) and the client application ( spamc ). Configuring SpamAssassin this way, however, requires root access to the host. To start the spamd daemon, type the following command: To start the SpamAssassin daemon when the system is booted, use an initscript utility, such as the Services Configuration Tool ( system-config-services ), to turn on the spamassassin service. See Chapter 12, Services and Daemons for more information about starting and stopping services. To configure Procmail to use the SpamAssassin client application instead of the Perl script, place the following line near the top of the ~/.procmailrc file. For a system-wide configuration, place it in /etc/procmailrc :
[ ":0 [ flags ] [ : lockfile-name ] * [ condition_1_special-condition-character condition_1_regular_expression ] * [ condition_2_special-condition-character condition-2_regular_expression ] * [ condition_N_special-condition-character condition-N_regular_expression ] special-action-character action-to-perform", ":0: new-mail.spool", ":0 * ^From: [email protected] /dev/null", ":0: * ^(From|Cc|To).*tux-lug tuxlug", "~]# yum install spamassassin", "INCLUDERC=/etc/mail/spamassassin/spamassassin-default.rc", "*****SPAM*****", ":0 Hw * ^X-Spam-Status: Yes spam", "~]# service spamassassin start", "INCLUDERC=/etc/mail/spamassassin/spamassassin-spamc.rc" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-email-procmail-recipes
Chapter 7. Red Hat Directory Server 11.4
Chapter 7. Red Hat Directory Server 11.4 7.1. Highlighted updates and new features This section documents new features and important updates in Directory Server 11.4. Directory Server rebased to version 1.4.3.27 The 389-ds-base packages have been upgraded to upstream version 1.4.3.27, which provides a number of bug fixes and enhancements over the previous version. For a complete list of notable changes, read the upstream release notes before updating: https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-24.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-23.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-22.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-21.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-20.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-19.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-18.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-17.html Highlighted updates and new features in the 389-ds-base packages Features in Red Hat Directory Server that are included in the 389-ds-base packages are documented in the Red Hat Enterprise Linux 8.5 Release Notes: Directory Server now supports temporary passwords Directory Server supports the entryUUID attribute The dnaInterval configuration attribute is now supported Directory Server can exclude attributes and suffixes from the retro changelog database Directory Server provides monitoring settings that can prevent database corruption caused by lock exhaustion Added a new message to help set up nsSSLPersonalitySSL 7.2. Bug fixes This section describes bugs fixed in Directory Server 11.4 that have a significant impact on users. The dsconf utility no longer fails when using LDAPS URLs Previously, the dsconf utility did not correctly resolve TLS settings for remote connections. As a consequence, even if the certificate configuration was correct, using dsconf with a remote LDAPS URL failed with a certificate verify failed error. The dsconf connection code has been fixed. As a result, using remote LDAPS URLs with dsconf now works as expected. Bug fixes in the 389-ds-base packages Bug fixes in Red Hat Directory Server that are included in the 389-ds-base packages are documented in the Red Hat Enterprise Linux 8.5 Release Notes: The database indexes created by plug-ins are now enabled The replication session update speed is now enhanced 7.3. Known issues This section documents known problems and, if applicable, workarounds in Directory Server 11.4. Directory Server settings that are changed outside the web console's window are not automatically visible Because of the design of the Directory Server module in the Red Hat Enterprise Linux 8 web console, the web console does not automatically display the latest settings if a user changes the configuration outside of the console's window. For example, if you change the configuration using the command line while the web console is open, the new settings are not automatically updated in the web console. This also applies if you change the configuration using the web console on a different computer. To work around the problem, manually refresh the web console in the browser if the configuration has been changed outside the console's window. The Directory Server Web Console does not provide an LDAP browser The web console enables administrators to manage and configure Directory Server 11 instances.
However, it does not provide an integrated LDAP browser. To manage users and groups in Directory Server, use the dsidm utility. To display and modify directory entries, use a third-party LDAP browser or the OpenLDAP client utilities provided by the openldap-clients package. Known issues in the 389-ds-base packages Known issues in Red Hat Directory Server that are included in the 389-ds-base packages are documented in the Red Hat Enterprise Linux 8.5 Release Notes: The default keyword for enabled ciphers in the NSS does not work in conjunction with other ciphers
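As a sketch of the LDAP-browser workaround described above, user entries can be listed with the dsidm utility and individual entries displayed with the OpenLDAP client utilities. The instance name, host name, suffix, bind DN, and user ID below are placeholders, not values taken from this release note.

# List users with the dsidm utility (instance name and suffix are placeholders)
dsidm instance_name -b "dc=example,dc=com" user list

# Display a single entry with the OpenLDAP client utilities
ldapsearch -H ldap://server.example.com -x -D "cn=Directory Manager" -W \
    -b "dc=example,dc=com" "(uid=demo_user)"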
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/release_notes/directory-server-11.4
Chapter 51. Case management Showcase application
Chapter 51. Case management Showcase application The Showcase application is included in the Red Hat Process Automation Manager distribution to demonstrate the capabilities of case management in an application environment. Showcase is intended as a proof of concept that shows the interaction between business process management (BPM) and case management. You can use the application to start, close, monitor, and interact with cases. Showcase must be installed in addition to the Business Central application and KIE Server. The Showcase application is required to start new case instances; however, the case work is still performed in Business Central. After a case instance is created and is being worked on, you can monitor the case in the Showcase application by clicking the case in the Case List to open the case Overview page. Showcase Support The Showcase application is not an integral part of Red Hat Process Automation Manager and is intended for case management demonstration purposes. Showcase is provided to encourage customers to adopt and modify it to work for their specific needs. The content of the application itself does not carry product-specific Service Level Agreements (SLAs). We encourage you to report issues, request enhancements, and provide any other feedback for consideration in Showcase updates. Red Hat Support will provide guidance on the use of this template on a commercially reasonable basis for its intended use, excluding the example UI code provided within. Note Production support is limited to the Red Hat Process Automation Manager distribution.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/case-management-showcase-application-con-case-management-showcase
Managing content in automation hub
Managing content in automation hub Red Hat Ansible Automation Platform 2.4 Create and manage collections, content and repositories in automation hub Red Hat Customer Content Services
[ "collections: # Install a collection from Ansible Galaxy. - name: community.aws version: 5.2.0 source: https://galaxy.ansible.com", "{\"file\": \"filename\", \"signature\": \"filename.asc\"}", "#!/usr/bin/env bash FILE_PATH=USD1 SIGNATURE_PATH=\"USD1.asc\" ADMIN_ID=\"USDPULP_SIGNING_KEY_FINGERPRINT\" PASSWORD=\"password\" Create a detached signature gpg --quiet --batch --pinentry-mode loopback --yes --passphrase USDPASSWORD --homedir ~/.gnupg/ --detach-sign --default-key USDADMIN_ID --armor --output USDSIGNATURE_PATH USDFILE_PATH Check the exit status STATUS=USD? if [ USDSTATUS -eq 0 ]; then echo {\\\"file\\\": \\\"USDFILE_PATH\\\", \\\"signature\\\": \\\"USDSIGNATURE_PATH\\\"} else exit USDSTATUS fi", "[all:vars] . . . automationhub_create_default_collection_signing_service = True automationhub_auto_sign_collections = True automationhub_require_content_approval = True automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh", "gpg --import --no-default-keyring --keyring ~/.ansible/pubring.kbx my-public-key.asc", "ansible-galaxy collection install namespace.collection --signature https://examplehost.com/detached_signature.asc --signature file:///path/to/local/detached_signature.asc --keyring ~/.ansible/pubring.kbx", "requirements.yml collections: - name: ns.coll version: 1.0.0 signatures: - https://examplehost.com/detached_signature.asc - file:///path/to/local/detached_signature.asc ansible-galaxy collection verify -r requirements.yml --keyring ~/.ansible/pubring.kbx", "ansible-playbook collection_seed.yml -e automationhub_api_token=<api_token> -e automationhub_main_url=https://automationhub.example.com -e automationhub_require_content_approval=true", "Collections: - name: community.kubernetes - name: community.aws version:\">=5.0.0\"", "podman login registry.redhat.io", "podman pull registry.redhat.io/ <container_image_name> : <tag>", "podman images", "podman tag registry.redhat.io/ <container_image_name> : <tag> <automation_hub_hostname> / <container_image_name>", "podman images", "podman login -u= <username> -p= <password> <automation_hub_url>", "podman push <automation_hub_url> / <container_image_name>", "#!/usr/bin/env bash pulp_container SigningService will pass the next 4 variables to the script. MANIFEST_PATH=USD1 FINGERPRINT=\"USDPULP_SIGNING_KEY_FINGERPRINT\" IMAGE_REFERENCE=\"USDREFERENCE\" SIGNATURE_PATH=\"USDSIG_PATH\" Create container signature using skopeo skopeo standalone-sign USDMANIFEST_PATH USDIMAGE_REFERENCE USDFINGERPRINT --output USDSIGNATURE_PATH Optionally pass the passphrase to the key if password protected. --passphrase-file /path/to/key_password.txt Check the exit status STATUS=USD? if [ USDSTATUS -eq 0 ]; then echo {\\\"signature_path\\\": \\\"USDSIGNATURE_PATH\\\"} else exit USDSTATUS fi", "[all:vars] . . . 
automationhub_create_default_container_signing_service = True automationhub_container_signing_service_key = /absolute/path/to/key/to/sign automationhub_container_signing_service_script = /absolute/path/to/script/that/signs", "> podman pull <container-name>", "> podman tag <container-name> <server-address>/<container-name>:<tag name>", "> podman push <server-address>/<container-name>:<tag name> --tls-verify=false --sign-by <reference to the gpg key on your local>", "> podman push <server-address>/<container-name>:<tag name> --tls-verify=false", "> sudo <name of editor> /etc/containers/policy.json", "{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"quay.io\": [{\"type\": \"insecureAcceptAnything\"}], \"docker.io\": [{\"type\": \"insecureAcceptAnything\"}], \"<server-address>\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/tmp/containersig.txt\" }] } } }", "{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"quay.io\": [{\"type\": \"insecureAcceptAnything\"}], \"docker.io\": [{\"type\": \"insecureAcceptAnything\"}], \"<server-address>\": [{ \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/tmp/<key file name>\", \"signedIdentity\": { \"type\": \"matchExact\" } }] } } }", "> podman pull <server-address>/<container-name>:<tag name> --tls-verify=false" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html-single/managing_content_in_automation_hub/index
Chapter 2. Configuring and deploying the overcloud for autoscaling
Chapter 2. Configuring and deploying the overcloud for autoscaling You must configure the templates for the services on your overcloud that enable autoscaling. Procedure Create environment templates and a resource registry for autoscaling services before you deploy the overcloud for autoscaling. For more information, see Section 2.1, "Configuring the overcloud for autoscaling" Deploy the overcloud. For more information, see Section 2.2, "Deploying the overcloud for autoscaling" 2.1. Configuring the overcloud for autoscaling Create the environment templates and resource registry that you need to deploy the services that provide autoscaling. Procedure Log in to the undercloud host as the stack user. Create a directory for the autoscaling configuration files: USD mkdir -p USDHOME/templates/autoscaling/ Create the resource registry file for the definitions that the services require for autoscaling: USD cat <<EOF > USDHOME/templates/autoscaling/resources-autoscaling.yaml resource_registry: OS::TripleO::Services::AodhApi: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-api-container-puppet.yaml OS::TripleO::Services::AodhEvaluator: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-evaluator-container-puppet.yaml OS::TripleO::Services::AodhListener: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-listener-container-puppet.yaml OS::TripleO::Services::AodhNotifier: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-notifier-container-puppet.yaml OS::TripleO::Services::CeilometerAgentCentral: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-central-container-puppet.yaml OS::TripleO::Services::CeilometerAgentNotification: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-notification-container-puppet.yaml OS::TripleO::Services::ComputeCeilometerAgent: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-compute-container-puppet.yaml OS::TripleO::Services::GnocchiApi: /usr/share/openstack-tripleo-heat-templates/deployment/gnocchi/gnocchi-api-container-puppet.yaml OS::TripleO::Services::GnocchiMetricd: /usr/share/openstack-tripleo-heat-templates/deployment/gnocchi/gnocchi-metricd-container-puppet.yaml OS::TripleO::Services::GnocchiStatsd: /usr/share/openstack-tripleo-heat-templates/deployment/gnocchi/gnocchi-statsd-container-puppet.yaml OS::TripleO::Services::HeatApi: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-container-puppet.yaml OS::TripleO::Services::HeatApiCfn: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cfn-container-puppet.yaml OS::TripleO::Services::HeatApiCloudwatch: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cloudwatch-disabled-puppet.yaml OS::TripleO::Services::HeatEngine: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-engine-container-puppet.yaml OS::TripleO::Services::Redis: /usr/share/openstack-tripleo-heat-templates/deployment/database/redis-container-puppet.yaml EOF Create an environment template to configure the services required for autoscaling: cat <<EOF > USDHOME/templates/autoscaling/parameters-autoscaling.yaml parameter_defaults: NotificationDriver: 'messagingv2' GnocchiDebug: false CeilometerEnableGnocchi: true ManagePipeline: true ManageEventPipeline: true EventPipelinePublishers: - gnocchi://?archive_policy=generic PipelinePublishers: - gnocchi://?archive_policy=generic ManagePolling: true ExtraConfig: 
ceilometer::agent::polling::polling_interval: 60 EOF If you use Red Hat Ceph Storage as the data storage back end for the time-series database service, add the following parameters to your parameters-autoscaling.yaml file: parameter_defaults: GnocchiRbdPoolName: 'metrics' GnocchiBackend: 'rbd' You must create the defined archive policy generic before you can store metrics. You define this archive policy after the deployment. For more information, see Section 3.1, "Creating the generic archive policy for autoscaling" . Set the polling_interval parameter, for example, 60 seconds. The value of the polling_interval parameter must match the gnocchi granularity value that you defined when you created the archive policy. For more information, see Section 3.1, "Creating the generic archive policy for autoscaling" . Deploy the overcloud. For more information, see Section 2.2, "Deploying the overcloud for autoscaling" 2.2. Deploying the overcloud for autoscaling You can deploy the overcloud for autoscaling by using director or by using a standalone environment. Prerequisites You have created the environment templates for deploying the services that provide autoscaling capabilities. For more information, see Section 2.1, "Configuring the overcloud for autoscaling" . Procedure Section 2.2.1, "Deploying the overcloud for autoscaling by using director" Section 2.2.2, "Deploying the overcloud for autoscaling in a standalone environment" 2.2.1. Deploying the overcloud for autoscaling by using director Use director to deploy the overcloud. If you are using a standalone environment, see Section 2.2.2, "Deploying the overcloud for autoscaling in a standalone environment" . Prerequisites A deployed undercloud. For more information, see Installing director on the undercloud . Procedure Log in to the undercloud as the stack user. Source the stackrc undercloud credentials file: [stack@director ~]USD source ~/stackrc Add the autoscaling environment files to the stack with your other environment files and deploy the overcloud: (undercloud)USD openstack overcloud deploy --templates \ -e [your environment files] \ -e USDHOME/templates/autoscaling/parameters-autoscaling.yaml \ -e USDHOME/templates/autoscaling/resources-autoscaling.yaml 2.2.2. Deploying the overcloud for autoscaling in a standalone environment To test the environment files in a pre-production environment, you can deploy the overcloud with the services required for autoscaling by using a standalone deployment. Note This procedure uses example values and commands that you must change to suit a production environment. If you want to use director to deploy the overcloud for autoscaling, see Section 2.2.1, "Deploying the overcloud for autoscaling by using director" . Prerequisites An all-in-one RHOSP environment has been staged with the python3-tripleoclient. For more information, see Installing the all-in-one Red Hat OpenStack Platform environment . An all-in-one RHOSP environment has been staged with the base configuration. For more information, see Configuring the all-in-one Red Hat OpenStack Platform environment . 
Procedure Change to the user that manages your overcloud deployments, for example, the stack user: Replace or set the environment variables USDIP , USDNETMASK and USDVIP for the overcloud deployment: USD export IP=192.168.25.2 USD export VIP=192.168.25.3 USD export NETMASK=24 Deploy the overcloud to test and verify the resource and parameter files: USD sudo openstack tripleo deploy \ --templates \ --local-ip=USDIP/USDNETMASK \ --control-virtual-ip=USDVIP \ -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml \ -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml \ -e USDHOME/containers-prepare-parameters.yaml \ -e USDHOME/standalone_parameters.yaml \ -e USDHOME/templates/autoscaling/resources-autoscaling.yaml \ -e USDHOME/templates/autoscaling/parameters-autoscaling.yaml \ --output-dir USDHOME \ --standalone Export the OS_CLOUD environment variable: USD export OS_CLOUD=standalone Additional resources Director Installation and Usage guide. Standalone Deployment Guide . 2.3. Verifying the overcloud deployment for autoscaling Verify that the autoscaling services are deployed and enabled. Verification output is from a standalone environment, but director-based environments provide similar output. Prerequisites You have deployed the autoscaling services in an existing overcloud using standalone or director. For more information, see Section 2.2, "Deploying the overcloud for autoscaling" . Procedure Log in to your environment as the stack user. For standalone environments set the OS_CLOUD environment variable: [stack@standalone ~]USD export OS_CLOUD=standalone For director environments, source the stackrc undercloud credentials file: [stack@undercloud ~]USD source ~/stackrc Verification Verify that the deployment was successful and ensure that the service API endpoints for autoscaling are available: USD openstack endpoint list --service metric +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | 2956a12327b744b29abd4577837b2e6f | regionOne | gnocchi | metric | True | internal | http://192.168.25.3:8041 | | 583453c58b064f69af3de3479675051a | regionOne | gnocchi | metric | True | admin | http://192.168.25.3:8041 | | fa029da0e2c047fc9d9c50eb6b4876c6 | regionOne | gnocchi | metric | True | public | http://192.168.25.3:8041 | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ USD openstack endpoint list --service alarming +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | 08c70ec137b44ed68590f4d5c31162bb | regionOne | aodh | alarming | True | internal | http://192.168.25.3:8042 | | 194042887f3d4eb4b638192a0fe60996 | regionOne | aodh | alarming | True | admin | http://192.168.25.3:8042 | | 2604b693740245ed8960b31dfea1f963 | regionOne | aodh | alarming | True | public | http://192.168.25.3:8042 | 
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ USD openstack endpoint list --service orchestration +----------------------------------+-----------+--------------+---------------+---------+-----------+-------------------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+---------------+---------+-----------+-------------------------------------------+ | 00966a24dd4141349e12680307c11848 | regionOne | heat | orchestration | True | admin | http://192.168.25.3:8004/v1/%(tenant_id)s | | 831e411bb6d44f6db9f5103d659f901e | regionOne | heat | orchestration | True | public | http://192.168.25.3:8004/v1/%(tenant_id)s | | d5be22349add43ae95be4284a42a4a60 | regionOne | heat | orchestration | True | internal | http://192.168.25.3:8004/v1/%(tenant_id)s | +----------------------------------+-----------+--------------+---------------+---------+-----------+-------------------------------------------+ Verify that the services are running on the overcloud: USD sudo podman ps --filter=name='heat|gnocchi|ceilometer|aodh' CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 31e75d62367f registry.redhat.io/rhosp-rhel9/openstack-aodh-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_api 77acf3487736 registry.redhat.io/rhosp-rhel9/openstack-aodh-listener:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_listener 29ec47b69799 registry.redhat.io/rhosp-rhel9/openstack-aodh-evaluator:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_evaluator 43efaa86c769 registry.redhat.io/rhosp-rhel9/openstack-aodh-notifier:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_notifier 0ac8cb2c7470 registry.redhat.io/rhosp-rhel9/openstack-aodh-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_api_cron 31b55e091f57 registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) ceilometer_agent_central 5f61331a17d8 registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) ceilometer_agent_compute 7c5ef75d8f1b registry.redhat.io/rhosp-rhel9/openstack-ceilometer-notification:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) ceilometer_agent_notification 88fa57cc1235 registry.redhat.io/rhosp-rhel9/openstack-gnocchi-api:17.0 kolla_start 23 minutes ago Up 23 minutes ago (healthy) gnocchi_api 0f05a58197d5 registry.redhat.io/rhosp-rhel9/openstack-gnocchi-metricd:17.0 kolla_start 23 minutes ago Up 23 minutes ago (healthy) gnocchi_metricd 6d806c285500 registry.redhat.io/rhosp-rhel9/openstack-gnocchi-statsd:17.0 kolla_start 23 minutes ago Up 23 minutes ago (healthy) gnocchi_statsd 7c02cac34c69 registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_api_cron d3903df545ce registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_api db1d33506e3d registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_api_cfn 051446294c70 registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_engine Verify that the time-series database service is available: USD openstack metric status --fit-width 
+-----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+ | metricd/processors | ['standalone-80.general.local.0.a94fbf77-1ac0-49ed-bfe2-a89f014fde01', | | | 'standalone-80.general.local.3.28ca78d7-a80e-4515-8060-233360b410eb', | | | 'standalone-80.general.local.1.7e8b5a5b-2ca1-49be-bc22-25f51d67c00a', | | | 'standalone-80.general.local.2.3c4fe59e-23cd-4742-833d-42ff0a4cb692'] | | storage/number of metric having measures to process | 0 | | storage/total number of measures to process | 0 | +-----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
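The generic archive policy referenced in Section 2.1 is created only after the deployment, as described in Section 3.1. As a hedged sketch, such a policy can be created with the OpenStack client, keeping the one-minute granularity aligned with the 60-second polling_interval configured earlier; the exact definition and aggregation methods below are assumptions, so consult Section 3.1 for the authoritative values.

# Sketch: create the "generic" archive policy after the overcloud is deployed
openstack metric archive-policy create generic \
    --back-window 0 \
    --definition 'timespan:4:00:00,granularity:0:01:00,points:240' \
    --aggregation-method rate:mean \
    --aggregation-method mean

# Confirm that the policy exists
openstack metric archive-policy show generic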
[ "mkdir -p USDHOME/templates/autoscaling/", "cat <<EOF > USDHOME/templates/autoscaling/resources-autoscaling.yaml resource_registry: OS::TripleO::Services::AodhApi: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-api-container-puppet.yaml OS::TripleO::Services::AodhEvaluator: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-evaluator-container-puppet.yaml OS::TripleO::Services::AodhListener: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-listener-container-puppet.yaml OS::TripleO::Services::AodhNotifier: /usr/share/openstack-tripleo-heat-templates/deployment/aodh/aodh-notifier-container-puppet.yaml OS::TripleO::Services::CeilometerAgentCentral: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-central-container-puppet.yaml OS::TripleO::Services::CeilometerAgentNotification: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-notification-container-puppet.yaml OS::TripleO::Services::ComputeCeilometerAgent: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-compute-container-puppet.yaml OS::TripleO::Services::GnocchiApi: /usr/share/openstack-tripleo-heat-templates/deployment/gnocchi/gnocchi-api-container-puppet.yaml OS::TripleO::Services::GnocchiMetricd: /usr/share/openstack-tripleo-heat-templates/deployment/gnocchi/gnocchi-metricd-container-puppet.yaml OS::TripleO::Services::GnocchiStatsd: /usr/share/openstack-tripleo-heat-templates/deployment/gnocchi/gnocchi-statsd-container-puppet.yaml OS::TripleO::Services::HeatApi: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-container-puppet.yaml OS::TripleO::Services::HeatApiCfn: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cfn-container-puppet.yaml OS::TripleO::Services::HeatApiCloudwatch: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cloudwatch-disabled-puppet.yaml OS::TripleO::Services::HeatEngine: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-engine-container-puppet.yaml OS::TripleO::Services::Redis: /usr/share/openstack-tripleo-heat-templates/deployment/database/redis-container-puppet.yaml EOF", "cat <<EOF > USDHOME/templates/autoscaling/parameters-autoscaling.yaml parameter_defaults: NotificationDriver: 'messagingv2' GnocchiDebug: false CeilometerEnableGnocchi: true ManagePipeline: true ManageEventPipeline: true EventPipelinePublishers: - gnocchi://?archive_policy=generic PipelinePublishers: - gnocchi://?archive_policy=generic ManagePolling: true ExtraConfig: ceilometer::agent::polling::polling_interval: 60 EOF", "parameter_defaults: GnocchiRbdPoolName: 'metrics' GnocchiBackend: 'rbd'", "[stack@director ~]USD source ~/stackrc", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e USDHOME/templates/autoscaling/parameters-autoscaling.yaml -e USDHOME/templates/autoscaling/resources-autoscaling.yaml", "su - stack", "export IP=192.168.25.2 export VIP=192.168.25.3 export NETMASK=24", "sudo openstack tripleo deploy --templates --local-ip=USDIP/USDNETMASK --control-virtual-ip=USDVIP -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml -e USDHOME/containers-prepare-parameters.yaml -e USDHOME/standalone_parameters.yaml -e USDHOME/templates/autoscaling/resources-autoscaling.yaml -e USDHOME/templates/autoscaling/parameters-autoscaling.yaml --output-dir USDHOME --standalone", 
"export OS_CLOUD=standalone", "[stack@standalone ~]USD export OS_CLOUD=standalone", "[stack@undercloud ~]USD source ~/stackrc", "openstack endpoint list --service metric +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | 2956a12327b744b29abd4577837b2e6f | regionOne | gnocchi | metric | True | internal | http://192.168.25.3:8041 | | 583453c58b064f69af3de3479675051a | regionOne | gnocchi | metric | True | admin | http://192.168.25.3:8041 | | fa029da0e2c047fc9d9c50eb6b4876c6 | regionOne | gnocchi | metric | True | public | http://192.168.25.3:8041 | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+", "openstack endpoint list --service alarming +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+ | 08c70ec137b44ed68590f4d5c31162bb | regionOne | aodh | alarming | True | internal | http://192.168.25.3:8042 | | 194042887f3d4eb4b638192a0fe60996 | regionOne | aodh | alarming | True | admin | http://192.168.25.3:8042 | | 2604b693740245ed8960b31dfea1f963 | regionOne | aodh | alarming | True | public | http://192.168.25.3:8042 | +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+", "openstack endpoint list --service orchestration +----------------------------------+-----------+--------------+---------------+---------+-----------+-------------------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+---------------+---------+-----------+-------------------------------------------+ | 00966a24dd4141349e12680307c11848 | regionOne | heat | orchestration | True | admin | http://192.168.25.3:8004/v1/%(tenant_id)s | | 831e411bb6d44f6db9f5103d659f901e | regionOne | heat | orchestration | True | public | http://192.168.25.3:8004/v1/%(tenant_id)s | | d5be22349add43ae95be4284a42a4a60 | regionOne | heat | orchestration | True | internal | http://192.168.25.3:8004/v1/%(tenant_id)s | +----------------------------------+-----------+--------------+---------------+---------+-----------+-------------------------------------------+", "sudo podman ps --filter=name='heat|gnocchi|ceilometer|aodh' CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 31e75d62367f registry.redhat.io/rhosp-rhel9/openstack-aodh-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_api 77acf3487736 registry.redhat.io/rhosp-rhel9/openstack-aodh-listener:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_listener 29ec47b69799 registry.redhat.io/rhosp-rhel9/openstack-aodh-evaluator:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_evaluator 43efaa86c769 registry.redhat.io/rhosp-rhel9/openstack-aodh-notifier:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) aodh_notifier 0ac8cb2c7470 registry.redhat.io/rhosp-rhel9/openstack-aodh-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago 
(healthy) aodh_api_cron 31b55e091f57 registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) ceilometer_agent_central 5f61331a17d8 registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) ceilometer_agent_compute 7c5ef75d8f1b registry.redhat.io/rhosp-rhel9/openstack-ceilometer-notification:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) ceilometer_agent_notification 88fa57cc1235 registry.redhat.io/rhosp-rhel9/openstack-gnocchi-api:17.0 kolla_start 23 minutes ago Up 23 minutes ago (healthy) gnocchi_api 0f05a58197d5 registry.redhat.io/rhosp-rhel9/openstack-gnocchi-metricd:17.0 kolla_start 23 minutes ago Up 23 minutes ago (healthy) gnocchi_metricd 6d806c285500 registry.redhat.io/rhosp-rhel9/openstack-gnocchi-statsd:17.0 kolla_start 23 minutes ago Up 23 minutes ago (healthy) gnocchi_statsd 7c02cac34c69 registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_api_cron d3903df545ce registry.redhat.io/rhosp-rhel9/openstack-heat-api:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_api db1d33506e3d registry.redhat.io/rhosp-rhel9/openstack-heat-api-cfn:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_api_cfn 051446294c70 registry.redhat.io/rhosp-rhel9/openstack-heat-engine:17.0 kolla_start 27 minutes ago Up 27 minutes ago (healthy) heat_engine", "openstack metric status --fit-width +-----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+ | metricd/processors | ['standalone-80.general.local.0.a94fbf77-1ac0-49ed-bfe2-a89f014fde01', | | | 'standalone-80.general.local.3.28ca78d7-a80e-4515-8060-233360b410eb', | | | 'standalone-80.general.local.1.7e8b5a5b-2ca1-49be-bc22-25f51d67c00a', | | | 'standalone-80.general.local.2.3c4fe59e-23cd-4742-833d-42ff0a4cb692'] | | storage/number of metric having measures to process | 0 | | storage/total number of measures to process | 0 | +-----------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/autoscaling_for_instances/assembly-configuring-and-deploying-the-overcloud-for-autoscaling_assembly-configuring-and-deploying-the-overcloud-for-autoscaling
Chapter 4. ComponentStatus [v1]
Chapter 4. ComponentStatus [v1] Description ComponentStatus (and ComponentStatusList) holds the cluster validation info. Deprecated: This API is deprecated in v1.19+ Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array List of component conditions observed conditions[] object Information about the condition of a component. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.1. .conditions Description List of component conditions observed Type array 4.1.2. .conditions[] Description Information about the condition of a component. Type object Required type status Property Type Description error string Condition error code for a component. For example, a health check error code. message string Message about the condition for a component. For example, information about a health check. status string Status of the condition for a component. Valid values for "Healthy": "True", "False", or "Unknown". type string Type of condition for a component. Valid value: "Healthy" 4.2. API endpoints The following API endpoints are available: /api/v1/componentstatuses GET : list objects of kind ComponentStatus /api/v1/componentstatuses/{name} GET : read the specified ComponentStatus 4.2.1. /api/v1/componentstatuses Table 4.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind ComponentStatus Table 4.2. HTTP responses HTTP code Reponse body 200 - OK ComponentStatusList schema 401 - Unauthorized Empty 4.2.2. /api/v1/componentstatuses/{name} Table 4.3. Global path parameters Parameter Type Description name string name of the ComponentStatus Table 4.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read the specified ComponentStatus Table 4.5. HTTP responses HTTP code Reponse body 200 - OK ComponentStatus schema 401 - Unauthorized Empty
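As a quick sketch of the two read-only endpoints listed above, both can be exercised with the oc client; the object name etcd-0 is only an example of what a cluster might report, not a guaranteed value.

# List objects of kind ComponentStatus (deprecated in v1.19+, but still served)
oc get componentstatuses

# Read a single ComponentStatus through the raw API path
oc get --raw /api/v1/componentstatuses/etcd-0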
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/metadata_apis/componentstatus-v1
probe::signal.sys_tgkill
probe::signal.sys_tgkill Name probe::signal.sys_tgkill - Sending kill signal to a thread group Synopsis Values name Name of the probe point sig_name A string representation of the signal sig The specific kill signal sent to the process tgid The thread group ID of the thread receiving the kill signal pid_name The name of the signal recipient sig_pid The PID of the thread receiving the kill signal Description The tgkill call is similar to tkill, except that it also allows the caller to specify the thread group ID of the thread to be signalled. This protects against TID reuse.
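As a minimal sketch of how this probe point can be used, the following shell one-liner prints each observed tgkill call using only the values listed above; it assumes the systemtap package and matching kernel debuginfo are installed:
# Trace tgkill signals until interrupted with Ctrl+C
stap -e 'probe signal.sys_tgkill { printf("%s: %s (%d) sent to tgid %d, pid %d (%s)\n", name, sig_name, sig, tgid, sig_pid, pid_name) }'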
[ "signal.sys_tgkill" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-signal-sys-tgkill
Chapter 2. Getting started with Red Hat JBoss Web Server for OpenShift
Chapter 2. Getting started with Red Hat JBoss Web Server for OpenShift You can import the latest Red Hat JBoss Web Server for OpenShift image streams and templates from the Red Hat container registry. You can subsequently use the JWS for OpenShift Source-to-Image (S2I) process to create JBoss Web Server for OpenShift applications by using existing maven binaries or from source code. Before you follow the instructions in this document, you must ensure that an OpenShift cluster is already installed and configured as a prerequisite. For more information about installing and configuring OpenShift clusters, see the OpenShift Container Platform Installing guide. Note The JWS for OpenShift application templates are distributed for Tomcat 10. 2.1. Configuring an authentication token for the Red Hat Container Registry Before you can import and use a Red Hat JBoss Web Server for OpenShift image, you must first ensure that you have configured an authentication token to access the Red Hat Container Registry. You can create an authentication token by using a registry service account. This means that you do not have to use or store your Red Hat account username and password in your OpenShift configuration. Procedure Follow the instructions on the Red Hat Customer Portal to create an authentication token using a registry service account . On the Token Information page for your token, click the OpenShift Secret tab and download the YAML file that contains the OpenShift secret for the token. Use the YAML file that you have downloaded to create the authentication token secret for your OpenShift project. For example: To configure the secret for your OpenShift project, enter the following commands: Note In the preceding examples, replace 1234567-myserviceaccount with the name of the secret that you created in the step. Additional resources Red Hat Container Registry Authentication web page Allowing pods to reference images from other secured registries 2.2. Importing JBoss Web Server image streams and templates You can import Red Hat JBoss Web Server for OpenShift image streams and templates from the Red Hat Container Registry. You must import the latest JBoss Web Server image streams and templates for your JDK into the namespace of your OpenShift project. Prerequisites You have configured an authentication token for the Red Hat Container Registry . Procedure Log in to the Red Hat Container Registry by using your Customer Portal credentials. For more information, see Red Hat Container Registry Authentication . To import the image stream for OpenJDK 17, enter the following command: The preceding command imports the UBI8 JDK 17 image stream, jboss-webserver60-openjdk17-tomcat10-openshift-ubi8 , and all templates specified in the command. 2.3. Importing the latest JWS for OpenShift image You can import the latest available JWS for OpenShift image by using the import-image command. Red Hat provides a JWS for OpenShift image for OpenJDK 17 with the JBoss Web Server 6.0 release. Prerequisites You are logged in to the Red Hat Container Registry . You have imported image streams and templates . Procedure To update the core JBoss Web Server 6.0 tomcat 10 with OpenJDK 17 OpenShift image, enter the following command: Note The 6.0.0 tag at the end of each image you import refers to the stream version that is set in the image stream . 2.4. 
JWS for OpenShift S2I process You can run and configure the JWS for OpenShift images by using the OpenShift source-to-image (S2I) process with the application template parameters and environment variables. The S2I process for the JWS for OpenShift images works as follows: If the configuration source directory contains a Maven settings.xml file, the settings.xml file is moved to the $HOME/.m2/ directory of the new image. If the source repository contains a pom.xml file, a Maven build is triggered using the contents of the $MAVEN_ARGS environment variable. By default, the package goal is used with the openshift profile, which includes the -DskipTests argument to skip tests, and the -Dcom.redhat.xpaas.repo.redhatga argument to enable the Red Hat GA repository. The results of a successful Maven build are copied to the /opt/jws-6.0/tomcat/webapps directory. This includes all WAR files from the source directory that is specified by the $ARTIFACT_DIR environment variable. The default value of $ARTIFACT_DIR is the target/ directory. You can use the $MAVEN_ARGS_APPEND environment variable to modify the Maven arguments. All WAR files from the deployments source directory are copied to the /opt/jws-6.0/tomcat/webapps directory. All files in the configuration source directory are copied to the /opt/jws-6.0/tomcat/conf/ directory, excluding the Maven settings.xml file. All files in the lib source directory are copied to the /opt/jws-6.0/tomcat/lib/ directory. Note If you want to use custom Tomcat configuration files, use the same file names that are used for a normal Tomcat installation such as context.xml and server.xml . For more information about configuring the S2I process to use a custom Maven artifacts repository mirror, see Maven artifact repository mirrors and JWS for OpenShift . Additional resources Apache Maven Project website 2.5. Creating a JWS for OpenShift application by using existing Maven binaries You can create a JWS for OpenShift application by using existing Maven binaries. You can use the oc start-build command to deploy existing applications on OpenShift. Note This procedure shows how to create an example application that is based on the tomcat-websocket-chat quickstart example. Prerequisites You have an existing .war , .ear , or .jar file for the application that you want to deploy on JWS for OpenShift or you have built the application locally. For example, to build the tomcat-websocket-chat application locally, perform the following steps: To clone the source code, enter the following command: Configure the Red Hat JBoss Middleware Maven repository, as described in Configure the Red Hat JBoss Middleware Maven Repository . For more information about the Maven repository, see the Red Hat JBoss Enterprise Maven Repository web page. To build the application, enter the following commands: The preceding command produces the following output: Procedure On your local file system, create a source directory for the binary build and a deployments subdirectory. For example, to create a /ocp source directory and a /deployments subdirectory for the tomcat-websocket-chat application, enter the following commands: Note The source directory can contain any content required by your application that is not included in the Maven binary. For more information, see JWS for OpenShift S2I process . Copy the .war , .ear , or .jar binary files to the deployments subdirectory.
For example, to copy the .war file for the example tomcat-websocket-chat application, enter the following command: Note In the preceding example, target/websocket-chat.war is the path to the binary file you want to copy. Application archives in the deployments subdirectory of the source directory are copied to the USDJWS_HOME/tomcat/webapps/ directory of the image that is being built on OpenShift. To allow the application to be deployed successfully, you must ensure that the directory hierarchy that contains the web application data is structured correctly. For more information, see JWS for OpenShift S2I process . Log in to the OpenShift instance: Create a new project if required. For example: Note In the preceding example, jws-bin-demo is the name of the project you want to create. Identify the JWS for OpenShift image stream to use for your application: The preceding command produces the following type of output: Note The -n openshift option specifies the project to use. The oc get is -n openshift command gets the image stream resources from the openshift project. Create the new build configuration, and ensure that you specify the image stream and application name. For example, to create the new build configuration for the example tomcat-websocket-chat application: Note In the preceding example, jws-wsch-app is the name of the JWS for OpenShift application. The preceding command produces the following type of output: Start the binary build. For example: Note In the preceding example, jws-wsch-app is the name of the JWS for OpenShift application, and ocp is the name of the source directory. The preceding command instructs OpenShift to use the source directory that you have created for binary input of the OpenShift image build. The preceding command produces the following type of output: Uploading directory "ocp" as binary input for the build ... build "jws-wsch-app-1" started Receiving source from STDIN as archive ... Copying all deployments war artifacts from /home/jboss/source/deployments directory into `/opt/jws-6.0/tomcat/webapps` for later deployment... '/home/jboss/source/deployments/websocket-chat.war' -> '/opt/jws-6.0/tomcat/webapps/websocket-chat.war' Pushing image 172.30.202.111:5000/jws-bin-demo/jws-wsch-app:latest ... Pushed 0/7 layers, 7% complete Pushed 1/7 layers, 14% complete Pushed 2/7 layers, 29% complete Pushed 3/7 layers, 49% complete Pushed 4/7 layers, 62% complete Pushed 5/7 layers, 92% complete Pushed 6/7 layers, 100% complete Pushed 7/7 layers, 100% complete Push successful Create a new OpenShift application based on the image: For example: Note In the preceding example, jws-wsch-app is the name of the JWS for OpenShift application. The preceding command produces the following type of output: Expose the service to make the application accessible to users: For example, to make the example jws-wsch-app application accessible, perform the following steps: Check the name of the service to expose: The preceding command produces the following type of output: Expose the service: The preceding command produces the following type of output: Retrieve the address of the exposed route: Open a web browser and enter the URL to access the application. For example, to access the example jws-wsch-app application, enter the following URL: http:// <address_of_exposed_route> /websocket-chat Note In the preceding example, replace <address_of_exposed_route> with the appropriate value for your deployment. Additional resources oc start-build command 2.6. 
Creating a JWS for OpenShift application from source code You can create a JWS for OpenShift application from source code. For detailed information about creating new OpenShift applications from source code, see OpenShift.com - Creating an application from source code . Prerequisites The application data is structured correctly. For more information, see JWS for OpenShift S2I process . Procedure Log in to the OpenShift instance: Create a new project if required: Note In the preceding example, replace <project-name> with the name of the project you want to create. Identify the JWS for OpenShift image stream to use for your application: The preceding command produces the following type of output: Note The -n openshift option specifies the project to use. The oc get is -n openshift command gets the image stream resources from the openshift project. Create the new OpenShift application from source code by using Red Hat JBoss Web Server for OpenShift images: For example: The preceding command adds the source code to the image and compiles the source code. The preceding command also creates the build configuration and services. To expose the application, perform the following steps: To check the name of the service to expose: The preceding command produces the following type of output: To expose the service: The preceding command produces the following type of output: To retrieve the address of the exposed route: Open a web browser and enter the following URL to access the application: http:// <address_of_exposed_route> / <java_application_name> Note In the preceding example, replace <address_of_exposed_route> and <java_application_name> with appropriate values for your deployment. 2.7. Adding additional JAR files in the tomcat/lib directory You can use Docker to add additional Java Archive (JAR) files in the tomcat/lib directory. Procedure Start the image in Docker: Find the CONTAINER ID : Copy the library to the tomcat/lib/ directory: Commit the changes to a new image: Create a new image tag: Push the image to a registry:
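Stepping back from the individual procedures, the following is a condensed, hedged sketch of the source directory layout that the binary-build workflow in this chapter expects, based on the copy rules in the JWS for OpenShift S2I process section; the configuration and library file names are hypothetical, and the target paths in the comments are the ones listed in that section.
# Hypothetical binary-build source layout for the example jws-wsch-app application
mkdir -p ocp/deployments ocp/configuration ocp/lib
cp target/websocket-chat.war ocp/deployments/   # copied to /opt/jws-6.0/tomcat/webapps/
cp context.xml server.xml ocp/configuration/    # copied to /opt/jws-6.0/tomcat/conf/
cp extra-library.jar ocp/lib/                   # copied to /opt/jws-6.0/tomcat/lib/
oc start-build jws-wsch-app --from-dir=./ocp --follow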
[ "create -f 1234567_myserviceaccount-secret.yaml", "secrets link default 1234567-myserviceaccount-pull-secret --for=pull secrets link builder 1234567-myserviceaccount-pull-secret --for=pull", "for resource in jws60-openjdk17-tomcat10-ubi8-basic-s2i.json jws60-openjdk17-tomcat10-ubi8-https-s2i.json jws60-openjdk17-tomcat10-ubi8-image-stream.json do replace -n openshift --force -f https://raw.githubusercontent.com/jboss-container-images/jboss-webserver-6-openshift-image/jws60el8-v6.0.0/templates/USD{resource} done", "oc -n openshift import-image jboss-webserver60-openjdk17-tomcat10-openshift-ubi8:6.0.0", "git clone https://github.com/web-servers/tomcat-websocket-chat-quickstart.git", "cd tomcat-websocket-chat-quickstart/tomcat-websocket-chat/ mvn clean package", "[INFO] Scanning for projects [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building Tomcat websocket example 1.2.0.Final [INFO] ------------------------------------------------------------------------ [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 01:28 min [INFO] Finished at: 2018-01-16T15:59:16+10:00 [INFO] Final Memory: 19M/271M [INFO] ------------------------------------------------------------------------", "cd tomcat-websocket-chat-quickstart/tomcat-websocket-chat/ mkdir -p ocp/deployments", "cp target/websocket-chat.war ocp/deployments/", "oc login <url>", "oc new-project jws-bin-demo", "oc get is -n openshift | grep ^jboss-webserver | cut -f1 -d ' '", "jboss-webserver60-openjdk17-tomcat10-openshift-ubi8", "oc new-build --binary=true --image-stream=jboss-webserver60-openjdk17-tomcat10-openshift-ubi8:latest --name=jws-wsch-app", "--> Found image 8c3b85b (4 weeks old) in image stream \"openshift/jboss-webserver60-tomcat10-openshift\" under tag \"latest\" for \"jboss-webserver60\" JBoss Web Server 6.0 -------------------- Platform for building and running web applications on JBoss Web Server 6.0 - Tomcat v10 Tags: builder, java, tomcat10 * A source build using binary input will be created * The resulting image will be pushed to image stream \"jws-wsch-app:latest\" * A binary build was created, use 'start-build --from-dir' to trigger a new build --> Creating resources with label build=jws-wsch-app imagestream \"jws-wsch-app\" created buildconfig \"jws-wsch-app\" created --> Success", "oc start-build jws-wsch-app --from-dir=./ocp --follow", "Uploading directory \"ocp\" as binary input for the build build \"jws-wsch-app-1\" started Receiving source from STDIN as archive Copying all deployments war artifacts from /home/jboss/source/deployments directory into `/opt/jws-6.0/tomcat/webapps` for later deployment '/home/jboss/source/deployments/websocket-chat.war' -> '/opt/jws-6.0/tomcat/webapps/websocket-chat.war' Pushing image 172.30.202.111:5000/jws-bin-demo/jws-wsch-app:latest Pushed 0/7 layers, 7% complete Pushed 1/7 layers, 14% complete Pushed 2/7 layers, 29% complete Pushed 3/7 layers, 49% complete Pushed 4/7 layers, 62% complete Pushed 5/7 layers, 92% complete Pushed 6/7 layers, 100% complete Pushed 7/7 layers, 100% complete Push successful", "oc new-app jws-wsch-app", "--> Found image e5f3a6b (About a minute old) in image stream \"jws-bin-demo/jws-wsch-app\" under tag \"latest\" for \"jws-wsch-app\" JBoss Web Server 6.0 -------------------- Platform for building and running web applications on JBoss Web Server 6.0 - Tomcat v10 
Tags: builder, java, tomcat10 * This image will be deployed in deployment config \"jws-wsch-app\" * Ports 8080/tcp, 8443/tcp, 8778/tcp will be load balanced by service \"jws-wsch-app\" * Other containers can access this service through the hostname \"jws-wsch-app\" --> Creating resources deploymentconfig \"jws-wsch-app\" created service \"jws-wsch-app\" created --> Success Application is not exposed. You can expose services to the outside world by executing one or more of the commands below: 'oc expose svc/jws-wsch-app' Run 'oc status' to view your app.", "oc get svc -o name", "service/jws-wsch-app", "oc expose svc/jws-wsch-app", "route \"jws-wsch-app\" exposed", "get routes --no-headers -o custom-columns='host:spec.host' jws-wsch-app", "oc login <url>", "oc new-project <project-name>", "oc get is -n openshift | grep ^jboss-webserver | cut -f1 -d ' '", "jboss-webserver60-openjdk17-tomcat10-openshift-ubi8", "oc new-app <source_code_location> --image-stream=jboss-webserver60-openjdk17-tomcat10-openshift-ubi8:latest --name= <openshift_application_name>", "oc new-app https://github.com/web-servers/tomcat-websocket-chat-quickstart.git#main --image-stream=jboss-webserver60-openjdk17-tomcat10-openshift-ubi8:latest --context-dir='tomcat-websocket-chat' --name=jws-wsch-app", "oc get svc -o name", "service/ <openshift_application_name>", "oc expose svc/ <openshift_application_name>", "route \" <openshift_application_name> \" exposed", "get routes --no-headers -o custom-columns='host:spec.host' <openshift_application_name>", "docker run --network host -i -t -p 8080:8080 ImageURL", "docker ps | grep <ImageName>", "docker cp <yourLibrary> <CONTAINER ID> :/opt/jws-6.0/tomcat/lib/", "docker commit <CONTAINER ID> <NEW IMAGE NAME>", "docker tag <NEW IMAGE NAME> :latest <NEW IMAGE REGISTRY URL> : <TAG>", "docker push <NEW IMAGE REGISTRY URL>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_for_openshift/jws_on_openshift_get_started
Chapter 2. Kafka Bridge quickstart
Chapter 2. Kafka Bridge quickstart Use this quickstart to try out the Kafka Bridge in your local development environment. You will learn how to do the following: Produce messages to topics and partitions in your Kafka cluster Create a Kafka Bridge consumer Perform basic consumer operations, such as subscribing the consumer to topics and retrieving the messages that you produced In this quickstart, HTTP requests are formatted as curl commands that you can copy and paste to your terminal. Ensure you have the prerequisites and then follow the tasks in the order provided in this chapter. In this quickstart, you will produce and consume messages in JSON format. Prerequisites for the quickstart A Kafka cluster is running on the host machine. 2.1. Downloading a Kafka Bridge archive A zipped distribution of the Kafka Bridge is available for download. Procedure Download the latest version of the Kafka Bridge archive from the Customer Portal . 2.2. Installing the Kafka Bridge Use the script provided with the Kafka Bridge archive to install the Kafka Bridge. The application.properties file provided with the installation archive provides default configuration settings. The following default property values configure the Kafka Bridge to listen for requests on port 8080. Default configuration properties http.host=0.0.0.0 http.port=8080 Prerequisites The Kafka Bridge installation archive is downloaded Procedure If you have not already done so, unzip the Kafka Bridge installation archive to any directory. Run the Kafka Bridge script using the configuration properties as a parameter: For example: ./bin/kafka_bridge_run.sh --config-file= <path> /application.properties Check to see that the installation was successful in the log. HTTP-Kafka Bridge started and listening on port 8080 HTTP-Kafka Bridge bootstrap servers localhost:9092 What to do Produce messages to topics and partitions . 2.3. Producing messages to topics and partitions Use the Kafka Bridge to produce messages to a Kafka topic in JSON format by using the topics endpoint. You can produce messages to topics in JSON format by using the topics endpoint. You can specify destination partitions for messages in the request body. The partitions endpoint provides an alternative method for specifying a single destination partition for all messages as a path parameter. In this procedure, messages are produced to a topic called bridge-quickstart-topic . Prerequisites The Kafka cluster has a topic with three partitions. You can use the kafka-topics.sh utility to create topics. Example topic creation with three partitions bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic bridge-quickstart-topic --partitions 3 --replication-factor 1 Verifying the topic was created bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic bridge-quickstart-topic Note If you deployed Streams for Apache Kafka on OpenShift, you can create a topic using the KafkaTopic custom resource. Procedure Using the Kafka Bridge, produce three messages to the topic you created: curl -X POST \ http://localhost:8080/topics/bridge-quickstart-topic \ -H 'content-type: application/vnd.kafka.json.v2+json' \ -d '{ "records": [ { "key": "my-key", "value": "sales-lead-0001" }, { "value": "sales-lead-0002", "partition": 2 }, { "value": "sales-lead-0003" } ] }' sales-lead-0001 is sent to a partition based on the hash of the key. sales-lead-0002 is sent directly to partition 2. 
sales-lead-0003 is sent to a partition in the bridge-quickstart-topic topic using a round-robin method. If the request is successful, the Kafka Bridge returns an offsets array, along with a 200 code and a content-type header of application/vnd.kafka.v2+json . For each message, the offsets array describes: The partition that the message was sent to The current message offset of the partition Example response #... { "offsets":[ { "partition":0, "offset":0 }, { "partition":2, "offset":0 }, { "partition":0, "offset":1 } ] } Additional topic requests Make other curl requests to find information on topics and partitions. List topics curl -X GET \ http://localhost:8080/topics Example response [ "__strimzi_store_topic", "__strimzi-topic-operator-kstreams-topic-store-changelog", "bridge-quickstart-topic", "my-topic" ] Get topic configuration and partition details curl -X GET \ http://localhost:8080/topics/bridge-quickstart-topic Example response { "name": "bridge-quickstart-topic", "configs": { "compression.type": "producer", "leader.replication.throttled.replicas": "", "min.insync.replicas": "1", "message.downconversion.enable": "true", "segment.jitter.ms": "0", "cleanup.policy": "delete", "flush.ms": "9223372036854775807", "follower.replication.throttled.replicas": "", "segment.bytes": "1073741824", "retention.ms": "604800000", "flush.messages": "9223372036854775807", "message.format.version": "2.8-IV1", "max.compaction.lag.ms": "9223372036854775807", "file.delete.delay.ms": "60000", "max.message.bytes": "1048588", "min.compaction.lag.ms": "0", "message.timestamp.type": "CreateTime", "preallocate": "false", "index.interval.bytes": "4096", "min.cleanable.dirty.ratio": "0.5", "unclean.leader.election.enable": "false", "retention.bytes": "-1", "delete.retention.ms": "86400000", "segment.ms": "604800000", "message.timestamp.difference.max.ms": "9223372036854775807", "segment.index.bytes": "10485760" }, "partitions": [ { "partition": 0, "leader": 0, "replicas": [ { "broker": 0, "leader": true, "in_sync": true }, { "broker": 1, "leader": false, "in_sync": true }, { "broker": 2, "leader": false, "in_sync": true } ] }, { "partition": 1, "leader": 2, "replicas": [ { "broker": 2, "leader": true, "in_sync": true }, { "broker": 0, "leader": false, "in_sync": true }, { "broker": 1, "leader": false, "in_sync": true } ] }, { "partition": 2, "leader": 1, "replicas": [ { "broker": 1, "leader": true, "in_sync": true }, { "broker": 2, "leader": false, "in_sync": true }, { "broker": 0, "leader": false, "in_sync": true } ] } ] } List the partitions of a specific topic curl -X GET \ http://localhost:8080/topics/bridge-quickstart-topic/partitions Example response [ { "partition": 0, "leader": 0, "replicas": [ { "broker": 0, "leader": true, "in_sync": true }, { "broker": 1, "leader": false, "in_sync": true }, { "broker": 2, "leader": false, "in_sync": true } ] }, { "partition": 1, "leader": 2, "replicas": [ { "broker": 2, "leader": true, "in_sync": true }, { "broker": 0, "leader": false, "in_sync": true }, { "broker": 1, "leader": false, "in_sync": true } ] }, { "partition": 2, "leader": 1, "replicas": [ { "broker": 1, "leader": true, "in_sync": true }, { "broker": 2, "leader": false, "in_sync": true }, { "broker": 0, "leader": false, "in_sync": true } ] } ] List the details of a specific topic partition curl -X GET \ http://localhost:8080/topics/bridge-quickstart-topic/partitions/0 Example response { "partition": 0, "leader": 0, "replicas": [ { "broker": 0, "leader": true, "in_sync": true }, { "broker": 1, "leader": 
false, "in_sync": true }, { "broker": 2, "leader": false, "in_sync": true } ] } List the offsets of a specific topic partition curl -X GET \ http://localhost:8080/topics/bridge-quickstart-topic/partitions/0/offsets Example response { "beginning_offset": 0, "end_offset": 1 } What to do After producing messages to topics and partitions, create a Kafka Bridge consumer . Additional resources POST /topics/{topicname} POST /topics/{topicname}/partitions/{partitionid} 2.4. Creating a Kafka Bridge consumer Before you can perform any consumer operations in the Kafka cluster, you must first create a consumer by using the consumers endpoint. The consumer is referred to as a Kafka Bridge consumer . Procedure Create a Kafka Bridge consumer in a new consumer group named bridge-quickstart-consumer-group : curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group \ -H 'content-type: application/vnd.kafka.v2+json' \ -d '{ "name": "bridge-quickstart-consumer", "auto.offset.reset": "earliest", "format": "json", "enable.auto.commit": false, "fetch.min.bytes": 512, "consumer.request.timeout.ms": 30000 }' The consumer is named bridge-quickstart-consumer and the embedded data format is set as json . Some basic configuration settings are defined. The consumer will not commit offsets to the log automatically because the enable.auto.commit setting is false . You will commit the offsets manually later in this quickstart. If the request is successful, the Kafka Bridge returns the consumer ID ( instance_id ) and base URL ( base_uri ) in the response body, along with a 200 code. Example response #... { "instance_id": "bridge-quickstart-consumer", "base_uri":"http:// <bridge_id> -bridge-service:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer" } Copy the base URL ( base_uri ) to use in the other consumer operations in this quickstart. What to do Now that you have created a Kafka Bridge consumer, you can subscribe it to topics . Additional resources POST /consumers/{groupid} 2.5. Subscribing a Kafka Bridge consumer to topics After you have created a Kafka Bridge consumer, subscribe it to one or more topics by using the subscription endpoint. When subscribed, the consumer starts receiving all messages that are produced to the topic. Procedure Subscribe the consumer to the bridge-quickstart-topic topic that you created earlier, in Producing messages to topics and partitions : curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/subscription \ -H 'content-type: application/vnd.kafka.v2+json' \ -d '{ "topics": [ "bridge-quickstart-topic" ] }' The topics array can contain a single topic (as shown here) or multiple topics. If you want to subscribe the consumer to multiple topics that match a regular expression, you can use the topic_pattern string instead of the topics array. If the request is successful, the Kafka Bridge returns a 204 (No Content) code only. When using an Apache Kafka client, the HTTP subscribe operation adds topics to the local consumer's subscriptions. Joining a consumer group and obtaining partition assignments occur after running multiple HTTP poll operations, starting the partition rebalance and join-group process. It's important to note that the initial HTTP poll operations may not return any records. What to do After subscribing a Kafka Bridge consumer to topics, you can retrieve messages from the consumer . 
Additional resources POST /consumers/{groupid}/instances/{name}/subscription 2.6. Retrieving the latest messages from a Kafka Bridge consumer Retrieve the latest messages from the Kafka Bridge consumer by requesting data from the records endpoint. In production, HTTP clients can call this endpoint repeatedly (in a loop). Procedure Produce additional messages to the Kafka Bridge consumer, as described in Producing messages to topics and partitions . Submit a GET request to the records endpoint: curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records \ -H 'accept: application/vnd.kafka.json.v2+json' After creating and subscribing to a Kafka Bridge consumer, a first GET request will return an empty response because the poll operation starts a rebalancing process to assign partitions. Repeat step two to retrieve messages from the Kafka Bridge consumer. The Kafka Bridge returns an array of messages - describing the topic name, key, value, partition, and offset - in the response body, along with a 200 code. Messages are retrieved from the latest offset by default. HTTP/1.1 200 OK content-type: application/vnd.kafka.json.v2+json #... [ { "topic":"bridge-quickstart-topic", "key":"my-key", "value":"sales-lead-0001", "partition":0, "offset":0 }, { "topic":"bridge-quickstart-topic", "key":null, "value":"sales-lead-0003", "partition":0, "offset":1 }, #... Note If an empty response is returned, produce more records to the consumer as described in Producing messages to topics and partitions , and then try retrieving messages again. What to do After retrieving messages from a Kafka Bridge consumer, try committing offsets to the log . Additional resources GET /consumers/{groupid}/instances/{name}/records 2.7. Committing offsets to the log Use the offsets endpoint to manually commit offsets to the log for all messages received by the Kafka Bridge consumer. This is required because the Kafka Bridge consumer that you created earlier, in Creating a Kafka Bridge consumer , was configured with the enable.auto.commit setting as false . Procedure Commit offsets to the log for the bridge-quickstart-consumer : curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/offsets Because no request body is submitted, offsets are committed for all the records that have been received by the consumer. Alternatively, the request body can contain an array ( OffsetCommitSeek ) that specifies the topics and partitions that you want to commit offsets for. If the request is successful, the Kafka Bridge returns a 204 code only. What to do After committing offsets to the log, try out the endpoints for seeking to offsets . Additional resources POST /consumers/{groupid}/instances/{name}/offsets 2.8. Seeking to offsets for a partition Use the positions endpoints to configure the Kafka Bridge consumer to retrieve messages for a partition from a specific offset, and then from the latest offset. This is referred to in Apache Kafka as a seek operation. Procedure Seek to a specific offset for partition 0 of the bridge-quickstart-topic topic: curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions \ -H 'content-type: application/vnd.kafka.v2+json' \ -d '{ "offsets": [ { "topic": "bridge-quickstart-topic", "partition": 0, "offset": 2 } ] }' If the request is successful, the Kafka Bridge returns a 204 code only.
Submit a GET request to the records endpoint: curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records \ -H 'accept: application/vnd.kafka.json.v2+json' The Kafka Bridge returns messages from the offset that you seeked to. Restore the default message retrieval behavior by seeking to the last offset for the same partition. This time, use the positions/end endpoint. curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions/end \ -H 'content-type: application/vnd.kafka.v2+json' \ -d '{ "partitions": [ { "topic": "bridge-quickstart-topic", "partition": 0 } ] }' If the request is successful, the Kafka Bridge returns another 204 code. Note You can also use the positions/beginning endpoint to seek to the first offset for one or more partitions. What to do In this quickstart, you have used the Kafka Bridge to perform several common operations on a Kafka cluster. You can now delete the Kafka Bridge consumer that you created earlier. Additional resources POST /consumers/{groupid}/instances/{name}/positions POST /consumers/{groupid}/instances/{name}/positions/beginning POST /consumers/{groupid}/instances/{name}/positions/end 2.9. Deleting a Kafka Bridge consumer Delete the Kafka Bridge consumer that you used throughout this quickstart. Procedure Delete the Kafka Bridge consumer by sending a DELETE request to the instances endpoint. curl -X DELETE http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer If the request is successful, the Kafka Bridge returns a 204 code. Additional resources DELETE /consumers/{groupid}/instances/{name}
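Pulling the pieces of this quickstart together, the following is a hedged sketch of the repeated polling that the records endpoint section describes for production HTTP clients; the loop structure and the sleep interval are assumptions, while the URLs and headers are the ones used above.
# Poll the quickstart consumer in a loop and commit offsets after each fetch
BASE=http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer
while true; do
  curl -s -X GET "${BASE}/records" -H 'accept: application/vnd.kafka.json.v2+json'
  curl -s -X POST "${BASE}/offsets"
  sleep 5
done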
[ "http.host=0.0.0.0 http.port=8080", "./bin/kafka_bridge_run.sh --config-file= <path> /application.properties", "HTTP-Kafka Bridge started and listening on port 8080 HTTP-Kafka Bridge bootstrap servers localhost:9092", "bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic bridge-quickstart-topic --partitions 3 --replication-factor 1", "bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic bridge-quickstart-topic", "curl -X POST http://localhost:8080/topics/bridge-quickstart-topic -H 'content-type: application/vnd.kafka.json.v2+json' -d '{ \"records\": [ { \"key\": \"my-key\", \"value\": \"sales-lead-0001\" }, { \"value\": \"sales-lead-0002\", \"partition\": 2 }, { \"value\": \"sales-lead-0003\" } ] }'", "# { \"offsets\":[ { \"partition\":0, \"offset\":0 }, { \"partition\":2, \"offset\":0 }, { \"partition\":0, \"offset\":1 } ] }", "curl -X GET http://localhost:8080/topics", "[ \"__strimzi_store_topic\", \"__strimzi-topic-operator-kstreams-topic-store-changelog\", \"bridge-quickstart-topic\", \"my-topic\" ]", "curl -X GET http://localhost:8080/topics/bridge-quickstart-topic", "{ \"name\": \"bridge-quickstart-topic\", \"configs\": { \"compression.type\": \"producer\", \"leader.replication.throttled.replicas\": \"\", \"min.insync.replicas\": \"1\", \"message.downconversion.enable\": \"true\", \"segment.jitter.ms\": \"0\", \"cleanup.policy\": \"delete\", \"flush.ms\": \"9223372036854775807\", \"follower.replication.throttled.replicas\": \"\", \"segment.bytes\": \"1073741824\", \"retention.ms\": \"604800000\", \"flush.messages\": \"9223372036854775807\", \"message.format.version\": \"2.8-IV1\", \"max.compaction.lag.ms\": \"9223372036854775807\", \"file.delete.delay.ms\": \"60000\", \"max.message.bytes\": \"1048588\", \"min.compaction.lag.ms\": \"0\", \"message.timestamp.type\": \"CreateTime\", \"preallocate\": \"false\", \"index.interval.bytes\": \"4096\", \"min.cleanable.dirty.ratio\": \"0.5\", \"unclean.leader.election.enable\": \"false\", \"retention.bytes\": \"-1\", \"delete.retention.ms\": \"86400000\", \"segment.ms\": \"604800000\", \"message.timestamp.difference.max.ms\": \"9223372036854775807\", \"segment.index.bytes\": \"10485760\" }, \"partitions\": [ { \"partition\": 0, \"leader\": 0, \"replicas\": [ { \"broker\": 0, \"leader\": true, \"in_sync\": true }, { \"broker\": 1, \"leader\": false, \"in_sync\": true }, { \"broker\": 2, \"leader\": false, \"in_sync\": true } ] }, { \"partition\": 1, \"leader\": 2, \"replicas\": [ { \"broker\": 2, \"leader\": true, \"in_sync\": true }, { \"broker\": 0, \"leader\": false, \"in_sync\": true }, { \"broker\": 1, \"leader\": false, \"in_sync\": true } ] }, { \"partition\": 2, \"leader\": 1, \"replicas\": [ { \"broker\": 1, \"leader\": true, \"in_sync\": true }, { \"broker\": 2, \"leader\": false, \"in_sync\": true }, { \"broker\": 0, \"leader\": false, \"in_sync\": true } ] } ] }", "curl -X GET http://localhost:8080/topics/bridge-quickstart-topic/partitions", "[ { \"partition\": 0, \"leader\": 0, \"replicas\": [ { \"broker\": 0, \"leader\": true, \"in_sync\": true }, { \"broker\": 1, \"leader\": false, \"in_sync\": true }, { \"broker\": 2, \"leader\": false, \"in_sync\": true } ] }, { \"partition\": 1, \"leader\": 2, \"replicas\": [ { \"broker\": 2, \"leader\": true, \"in_sync\": true }, { \"broker\": 0, \"leader\": false, \"in_sync\": true }, { \"broker\": 1, \"leader\": false, \"in_sync\": true } ] }, { \"partition\": 2, \"leader\": 1, \"replicas\": [ { \"broker\": 1, \"leader\": true, \"in_sync\": true }, { 
\"broker\": 2, \"leader\": false, \"in_sync\": true }, { \"broker\": 0, \"leader\": false, \"in_sync\": true } ] } ]", "curl -X GET http://localhost:8080/topics/bridge-quickstart-topic/partitions/0", "{ \"partition\": 0, \"leader\": 0, \"replicas\": [ { \"broker\": 0, \"leader\": true, \"in_sync\": true }, { \"broker\": 1, \"leader\": false, \"in_sync\": true }, { \"broker\": 2, \"leader\": false, \"in_sync\": true } ] }", "curl -X GET http://localhost:8080/topics/bridge-quickstart-topic/partitions/0/offsets", "{ \"beginning_offset\": 0, \"end_offset\": 1 }", "curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"name\": \"bridge-quickstart-consumer\", \"auto.offset.reset\": \"earliest\", \"format\": \"json\", \"enable.auto.commit\": false, \"fetch.min.bytes\": 512, \"consumer.request.timeout.ms\": 30000 }'", "# { \"instance_id\": \"bridge-quickstart-consumer\", \"base_uri\":\"http:// <bridge_id> -bridge-service:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer\" }", "curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/subscription -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"topics\": [ \"bridge-quickstart-topic\" ] }'", "curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records -H 'accept: application/vnd.kafka.json.v2+json'", "HTTP/1.1 200 OK content-type: application/vnd.kafka.json.v2+json # [ { \"topic\":\"bridge-quickstart-topic\", \"key\":\"my-key\", \"value\":\"sales-lead-0001\", \"partition\":0, \"offset\":0 }, { \"topic\":\"bridge-quickstart-topic\", \"key\":null, \"value\":\"sales-lead-0003\", \"partition\":0, \"offset\":1 }, #", "curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/offsets", "curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"offsets\": [ { \"topic\": \"bridge-quickstart-topic\", \"partition\": 0, \"offset\": 2 } ] }'", "curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/records -H 'accept: application/vnd.kafka.json.v2+json'", "curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer/positions/end -H 'content-type: application/vnd.kafka.v2+json' -d '{ \"partitions\": [ { \"topic\": \"bridge-quickstart-topic\", \"partition\": 0 } ] }'", "curl -X DELETE http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_bridge/assembly-kafka-bridge-quickstart-bridge
Chapter 29. Configuring ethtool settings in NetworkManager connection profiles
Chapter 29. Configuring ethtool settings in NetworkManager connection profiles NetworkManager can configure certain network driver and hardware settings persistently. Compared to using the ethtool utility to manage these settings, this has the benefit of not losing the settings after a reboot. You can set the following ethtool settings in NetworkManager connection profiles: Offload features Network interface controllers can use the TCP offload engine (TOE) to offload processing certain operations to the network controller. This improves the network throughput. Interrupt coalesce settings By using interrupt coalescing, the system collects network packets and generates a single interrupt for multiple packets. This increases the amount of data sent to the kernel with one hardware interrupt, which reduces the interrupt load, and maximizes the throughput. Ring buffers These buffers store incoming and outgoing network packets. You can increase the ring buffer sizes to reduce a high packet drop rate. 29.1. Configuring an ethtool offload feature by using nmcli You can use NetworkManager to enable and disable ethtool offload features in a connection profile. Procedure For example, to enable the RX offload feature and disable TX offload in the enp1s0 connection profile, enter: This command explicitly enables RX offload and disables TX offload. To remove the setting of an offload feature that you previously enabled or disabled, set the feature's parameter to a null value. For example, to remove the configuration for TX offload, enter: Reactivate the network profile: Verification Use the ethtool -k command to display the current offload features of a network device: Additional resources nm-settings-nmcli(5) man page on your system 29.2. Configuring an ethtool offload feature by using the network RHEL system role Network interface controllers can use the TCP offload engine (TOE) to offload processing certain operations to the network controller. This improves the network throughput. You configure offload features in the connection profile of the network interface. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. Warning You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and offload features ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: features: gro: no gso: yes tx_sctp_segmentation: no state: up The settings specified in the example playbook include the following: gro: no Disables Generic receive offload (GRO). gso: yes Enables Generic segmentation offload (GSO). tx_sctp_segmentation: no Disables TX stream control transmission protocol (SCTP) segmentation. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Query the Ansible facts of the managed node and verify the offload settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 29.3. Configuring an ethtool coalesce settings by using nmcli You can use NetworkManager to set ethtool coalesce settings in connection profiles. Procedure For example, to set the maximum number of received packets to delay to 128 in the enp1s0 connection profile, enter: To remove a coalesce setting, set it to a null value. For example, to remove the ethtool.coalesce-rx-frames setting, enter: To reactivate the network profile: Verification Use the ethtool -c command to display the current offload features of a network device: Additional resources nm-settings-nmcli(5) man page on your system 29.4. Configuring an ethtool coalesce settings by using the network RHEL system role By using interrupt coalescing, the system collects network packets and generates a single interrupt for multiple packets. This increases the amount of data sent to the kernel with one hardware interrupt, which reduces the interrupt load, and maximizes the throughput. You configure coalesce settings in the connection profile of the network interface. By using Ansible and the network RHEL role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. Warning You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and coalesce settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: coalesce: rx_frames: 128 tx_frames: 128 state: up The settings specified in the example playbook include the following: rx_frames: <value> Sets the number of RX frames. tx_frames: <value> Sets the number of TX frames. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Display the current coalesce settings of the network device: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 29.5. Increasing the ring buffer size to reduce a high packet drop rate by using nmcli Increase the size of an Ethernet device's ring buffers if the packet drop rate causes applications to report a loss of data, timeouts, or other issues. Receive ring buffers are shared between the device driver and network interface controller (NIC). The card assigns a transmit (TX) and receive (RX) ring buffer. As the name implies, the ring buffer is a circular buffer where an overflow overwrites existing data. There are two ways to move data from the NIC to the kernel: hardware interrupts and software interrupts, also called SoftIRQs. The kernel uses the RX ring buffer to store incoming packets until the device driver can process them. The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or skb to begin its journey through the kernel and up to the application that owns the relevant socket. The kernel uses the TX ring buffer to hold outgoing packets which should be sent to the network. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drop can occur, which in turn will adversely affect network performance. Procedure Display the packet drop statistics of the interface: Note that the output of the command depends on the network card and the driver. High values in discard or drop counters indicate that the available buffer fills up faster than the kernel can process the packets. Increasing the ring buffers can help to avoid such loss. Display the maximum ring buffer sizes: If the values in the Pre-set maximums section are higher than in the Current hardware settings section, you can change the settings in the next steps. Identify the NetworkManager connection profile that uses the interface: Update the connection profile, and increase the ring buffers: To increase the RX ring buffer, enter: To increase the TX ring buffer, enter: Reload the NetworkManager connection: Important Depending on the driver your NIC uses, changing the ring buffer settings can briefly interrupt the network connection. Additional resources ifconfig and ip commands report packet drops (Red Hat Knowledgebase) Should I be concerned about a 0.05% packet drop rate? (Red Hat Knowledgebase) ethtool(8) man page on your system 29.6.
Increasing the ring buffer size to reduce a high packet drop rate by using the network RHEL system role Increase the size of an Ethernet device's ring buffers if the packet drop rate causes applications to report a loss of data, timeouts, or other issues. Ring buffers are circular buffers where an overflow overwrites existing data. The network card assigns a transmit (TX) and receive (RX) ring buffer. Receive ring buffers are shared between the device driver and the network interface controller (NIC). Data can move from NIC to the kernel through either hardware interrupts or software interrupts, also called SoftIRQs. The kernel uses the RX ring buffer to store incoming packets until the device driver can process them. The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or skb to begin its journey through the kernel and up to the application that owns the relevant socket. The kernel uses the TX ring buffer to hold outgoing packets which should be sent to the network. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drop can occur, which in turn will adversely affect network performance. You configure ring buffer settings in the NetworkManager connection profiles. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. Warning You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. You know the maximum ring buffer sizes that the device supports. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address setting and increased ring buffer sizes ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: ring: rx: 4096 tx: 4096 state: up The settings specified in the example playbook include the following: rx: <value> Sets the maximum number of received ring buffer entries. tx: <value> Sets the maximum number of transmitted ring buffer entries. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Display the maximum ring buffer sizes: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory
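For convenience, the nmcli-based procedure in Section 29.5 can be condensed into the following hedged shell sketch; enp1s0 and Example-Connection are the example interface and profile names used in that section, and the value 4096 assumes the maximums reported by ethtool -g.
# Inspect drop counters and supported maximums, then raise both ring buffers
ethtool -S enp1s0 | grep -iE 'drop|discard'
ethtool -g enp1s0
nmcli connection show
nmcli connection modify Example-Connection ethtool.ring-rx 4096 ethtool.ring-tx 4096
nmcli connection up Example-Connection
ethtool -g enp1s0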
[ "nmcli con modify enp1s0 ethtool.feature-rx on ethtool.feature-tx off", "nmcli con modify enp1s0 ethtool.feature-tx \"\"", "nmcli connection up enp1s0", "ethtool -k network_device", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and offload features ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: features: gro: no gso: yes tx_sctp_segmentation: no state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_enp1s0\": { \"active\": true, \"device\": \"enp1s0\", \"features\": { \"rx_gro_hw\": \"off, \"tx_gso_list\": \"on, \"tx_sctp_segmentation\": \"off\", }", "nmcli connection modify enp1s0 ethtool.coalesce-rx-frames 128", "nmcli connection modify enp1s0 ethtool.coalesce-rx-frames \"\"", "nmcli connection up enp1s0", "ethtool -c network_device", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and coalesce settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: coalesce: rx_frames: 128 tx_frames: 128 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ethtool -c enp1s0' managed-node-01.example.com | CHANGED | rc=0 >> rx-frames: 128 tx-frames: 128", "ethtool -S enp1s0 rx_queue_0_drops: 97326 rx_queue_1_drops: 63783", "ethtool -g enp1s0 Ring parameters for enp1s0 : Pre-set maximums: RX: 4096 RX Mini: 0 RX Jumbo: 16320 TX: 4096 Current hardware settings: RX: 255 RX Mini: 0 RX Jumbo: 0 TX: 255", "nmcli connection show NAME UUID TYPE DEVICE Example-Connection a5eb6490-cc20-3668-81f8-0314a27f3f75 ethernet enp1s0", "nmcli connection modify Example-Connection ethtool.ring-rx 4096", "nmcli connection modify Example-Connection ethtool.ring-tx 4096", "nmcli connection up Example-Connection", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address setting and increased ring buffer sizes ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: ring: rx: 4096 tx: 4096 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ethtool -g enp1s0' managed-node-01.example.com | CHANGED | rc=0 >> Current hardware settings: RX: 4096 RX Mini: 0 RX Jumbo: 0 TX: 4096" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/configuring-ethtool-settings-in-networkmanager-connection-profiles_configuring-and-managing-networking
Chapter 39. Migrating from an LDAP Directory to IdM
Chapter 39. Migrating from an LDAP Directory to IdM As an administrator, you previously deployed an LDAP server for authentication and identity lookups and now you want to migrate the back end to Identity Management. You want to use the IdM migration tool to transfer user accounts, including passwords, and groups without losing data. Additionally, you want to avoid expensive configuration updates on the clients. The migration process described here assumes a simple deployment scenario with one name space in LDAP and one in IdM. For more complex environments, such as multiple name spaces or custom schema, contact the Red Hat support services. 39.1. An Overview of an LDAP to IdM Migration The actual migration part of moving from an LDAP server to Identity Management - the process of moving the data from one server to the other - is fairly straightforward. The process is simple: move data, move passwords, and move clients. The most expensive part of the migration is deciding how clients are going to be configured to use Identity Management. For each client in the infrastructure, you need to decide what services (such as Kerberos and SSSD) are being used and what services can be used in the final IdM deployment. A secondary, but significant, consideration is planning how to migrate passwords. Identity Management requires Kerberos hashes for every user account in addition to passwords. Some of the considerations and migration paths for passwords are covered in Section 39.1.2, "Planning Password Migration" . 39.1.1. Planning the Client Configuration Identity Management can support a number of different client configurations, with varying degrees of functionality, flexibility, and security. Decide which configuration is best for each individual client based on its operating system, functional area (such as development machines, production servers, or user laptops), and your IT maintenance priorities. Important The different client configurations are not mutually exclusive . Most environments will have a mix of different ways that clients connect to the IdM domain. Administrators must decide which scenario is best for each individual client. 39.1.1.1. Initial Client Configuration (Pre-Migration) Before deciding where you want to go with the client configuration in Identity Management, first establish where you are before the migration. The initial state for almost all LDAP deployments that will be migrated is that there is an LDAP service providing identity and authentication services. Figure 39.1. Basic LDAP Directory and Client Configuration Linux and Unix clients use PAM_LDAP and NSS_LDAP libraries to connect directly to the LDAP services. These libraries allow clients to retrieve user information from the LDAP directory as if the data were stored in /etc/passwd or /etc/shadow . (In real life, the infrastructure may be more complex if a client uses LDAP for identity lookups and Kerberos for authentication or other configurations.) There are structural differences between an LDAP directory and an IdM server, particularly in schema support and the structure of the directory tree. (For more background on those differences, see Section 1.1.2, "Contrasting Identity Management with a Standard LDAP Directory" .) While those differences may impact data (especially with the directory tree, which affects entry names), they have little impact on the client configuration , and therefore little impact on migrating clients to Identity Management.
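To establish this pre-migration state on a given client, you can check which PAM and NSS modules it currently uses. A quick check, assuming a RHEL-style layout (the file paths may differ on other distributions):
# pam_ldap indicates the generic LDAP setup; pam_sss indicates SSSD
grep -E 'pam_ldap|pam_sss' /etc/pam.d/system-auth
# See which sources supply users, groups, and shadow data
grep -E '^(passwd|group|shadow):' /etc/nsswitch.conf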
39.1.1.2. Recommended Configuration for Red Hat Enterprise Linux Clients Red Hat Enterprise Linux has a service called the System Security Services Daemon (SSSD). SSSD uses special PAM and NSS libraries ( pam_sss and nss_sss , respectively) which allow SSSD to be integrated very closely with Identity Management and leverage the full authentication and identity features in Identity Management. SSSD has a number of useful features, like caching identity information so that users can log in even if the connection to the central server is lost; these are described in the System-Level Authentication Guide . Unlike generic LDAP directory services (using pam_ldap and nss_ldap ), SSSD establishes relationships between identity and authentication information by defining domains . A domain in SSSD defines four back end functions: authentication, identity lookups, access, and password changes. The SSSD domain is then configured to use a provider to supply the information for any one (or all) of those four functions. An identity provider is always required in the domain configuration. The other three providers are optional; if an authentication, access, or password provider is not defined, then the identity provider is used for that function. SSSD can use Identity Management for all of its back end functions. This is the ideal configuration because it provides the full range of Identity Management functionality, unlike generic LDAP identity providers or Kerberos authentication. For example, during daily operation, SSSD enforces host-based access control rules and security features in Identity Management. Note During the migration process from an LDAP directory to Identity Management, SSSD can seamlessly migrate user passwords without additional user interaction. Figure 39.2. Clients and SSSD with an IdM Back End The ipa-client-install script automatically configures SSSD to use IdM for all four of its back end services, so Red Hat Enterprise Linux clients are set up with the recommended configuration by default. Note This client configuration is only supported for Red Hat Enterprise Linux 6.1 and later and Red Hat Enterprise Linux 5.7 and later, which support the latest versions of SSSD and ipa-client . Older versions of Red Hat Enterprise Linux can be configured as described in Section 39.1.1.3, "Alternative Supported Configuration" . 39.1.1.3. Alternative Supported Configuration Unix and Linux systems such as Mac, Solaris, HP-UX, AIX, and Scientific Linux support all of the services that IdM manages but do not use SSSD. Likewise, older Red Hat Enterprise Linux versions (6.0 and 5.6) support SSSD but have an older version, which does not support IdM as an identity provider. When it is not possible to use a modern version of SSSD on a system, clients can be configured to connect to the IdM server as if it were an LDAP directory service for identity lookups (using nss_ldap ) and to IdM as if it were a regular Kerberos KDC (using pam_krb5 ). Figure 39.3. Clients and IdM with LDAP and Kerberos If a Red Hat Enterprise Linux client is using an older version of SSSD, SSSD can still be configured to use the IdM server as its identity provider and its Kerberos authentication domain; this is described in the SSSD configuration section of the System-Level Authentication Guide . Any IdM domain client can be configured to use nss_ldap and pam_krb5 to connect to the IdM server.
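As an illustration of this alternative configuration, a non-SSSD client can be pointed at the IdM server with authconfig. This is only a sketch; the realm, base DN, and host names are assumptions for your environment:
# Use IdM as an LDAP identity source and as a Kerberos KDC (RHEL 6-style authconfig)
authconfig --enableldap --ldapserver=ldap://ipaserver.example.com \
    --ldapbasedn=dc=example,dc=com \
    --enablekrb5 --krb5realm=EXAMPLE.COM --krb5kdc=ipaserver.example.com \
    --update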
For some maintenance situations and IT structures, a scenario that fits the lowest common denominator may be required, using LDAP for both identity and authentication ( nss_ldap and pam_ldap ). However, it is generally best practice to use the most secure configuration possible for a client. This means SSSD or LDAP for identities and Kerberos for authentication. 39.1.2. Planning Password Migration Probably the most visible issue that can impact an LDAP-to-Identity Management migration is migrating user passwords. Identity Management (by default) uses Kerberos for authentication and requires that each user has Kerberos hashes stored in the Identity Management Directory Server in addition to the standard user passwords. To generate these hashes, the user password needs to be available to the IdM server in clear text. When you create a user, the password is available in clear text before it is hashed and stored in Identity Management. However, when the user is migrated from an LDAP directory, the associated user password is already hashed, so the corresponding Kerberos key cannot be generated. Important Users cannot authenticate to the IdM domain or access IdM resources until they have Kerberos hashes. If a user does not have a Kerberos hash [6] , that user cannot log into the IdM domain even if they have a user account. There are three options for migrating passwords: forcing a password change, using a web page, and using SSSD. Preserving and migrating user passwords provides a smoother transition, but it also requires parallel management of the LDAP directory and IdM during the migration and transition process. If you do not preserve passwords, the migration can be performed more quickly, but it requires more manual work by administrators and users. 39.1.2.1. Method 1: Using Temporary Passwords and Requiring a Change When passwords are changed in Identity Management, they will be created with the appropriate Kerberos hashes. So one alternative for administrators is to force users to change their passwords by resetting all user passwords when user accounts are migrated. The new users are assigned a temporary password which they change at the first login. No passwords are migrated. For details, see Section 22.1.1, "Changing and Resetting User Passwords" . 39.1.2.2. Method 2: Using the Migration Web Page When it is running in migration mode, Identity Management has a special web page in its web UI that captures a cleartext password and creates the appropriate Kerberos hash. Administrators can tell users to authenticate once to this web page, which properly updates their user accounts with their password and corresponding Kerberos hash, without requiring password changes. 39.1.2.3. Method 3: Using SSSD (Recommended) SSSD can work with IdM to reduce the impact of the migration on users by generating the required user keys. For deployments with a lot of users or where users should not be burdened with password changes, this is the best scenario. A user tries to log into a machine with SSSD. SSSD attempts to perform Kerberos authentication against the IdM server. Even though the user exists in the system, the authentication fails with the error key type is not supported because the Kerberos hashes do not yet exist. SSSD then performs a plain text LDAP bind over a secure connection. IdM intercepts this bind request. If the user has a Kerberos principal but no Kerberos hashes, then the IdM identity provider generates the hashes and stores them in the user entry.
If authentication is successful, SSSD disconnects from IdM and tries Kerberos authentication again. This time, the request succeeds because the hash exists in the entry. That entire process is entirely transparent to the user; as far as users know, they simply log into a client service and it works as normal. 39.1.2.4. Migrating Cleartext LDAP Passwords Although in most deployments LDAP passwords are stored encrypted, there may be some users or some environments that use cleartext passwords for user entries. When users are migrated from the LDAP server to the IdM server, their cleartext passwords are not migrated over. Identity Management does not allow cleartext passwords. Instead, a Kerberos principal is created for the user, the keytab is set to true, and the password is set as expired. This means that Identity Management requires the user to reset the password at the next login. Note If passwords are hashed, the password is successfully migrated through SSSD and the migration web page, as in Section 39.1.2.2, "Method 2: Using the Migration Web Page" and Section 39.1.2.3, "Method 3: Using SSSD (Recommended)" . 39.1.2.5. Automatically Resetting Passwords That Do Not Meet Requirements If user passwords in the original directory do not meet the password policies defined in Identity Management, then the passwords must be reset after migration. Password resets are done automatically the first time the user attempts to kinit into the IdM domain. 39.1.3. Migration Considerations and Requirements As you are planning a migration from an LDAP server to Identity Management, make sure that your LDAP environment is able to work with the Identity Management migration script. 39.1.3.1. LDAP Servers Supported for Migration The migration process from an LDAP server to Identity Management uses a special script, ipa migrate-ds , to perform the migration. This script has certain expectations about the structure of the LDAP directory and LDAP entries in order to work. Migration is supported only for LDAPv3-compliant directory services, which include several common directories: Sun ONE Directory Server Apache Directory Server OpenLDAP Migration from an LDAP server to Identity Management has been tested with Red Hat Directory Server and OpenLDAP. Note Migration using the migration script is not supported for Microsoft Active Directory because it is not an LDAPv3-compliant directory. For assistance with migrating from Active Directory, contact Red Hat Professional Services. 39.1.3.2. Migration Environment Requirements There are many different possible configuration scenarios for both Red Hat Directory Server and Identity Management, and any of those scenarios may affect the migration process. For the example migration procedures in this chapter, these are the assumptions about the environment: A single LDAP directory domain is being migrated to one IdM realm. No consolidation is involved. User passwords are stored as a hash in the LDAP directory. For a list of supported hashes, see the passwordStorageScheme attribute in Table 19.2, "Password Policy-related Attributes", in the Red Hat Directory Server 10 Administration Guide (a quick way to check the scheme in use is shown after this list). The LDAP directory instance is both the identity store and the authentication method. Client machines are configured to use pam_ldap or nss_ldap to connect to the LDAP server. Entries use only the standard LDAP schema. Entries that contain custom object classes or attributes are not migrated to Identity Management.
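A quick way to check the hashing scheme on a Red Hat Directory Server instance is to read it from the server configuration. The bind DN and host name below are assumptions for your environment:
# Read the global password storage scheme from the server configuration
ldapsearch -x -D "cn=Directory Manager" -W -H ldap://ldap.example.com \
    -b "cn=config" -s base passwordStorageScheme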
39.1.3.3. Migration - IdM System Requirements With a moderately-sized directory (around 10,000 users and 10 groups), it is necessary to have a powerful enough target system (the IdM system) to allow the migration to proceed. The minimum requirements for a migration are: 4 cores 4GB of RAM 30GB of disk space A SASL buffer size of 2MB (default for an IdM server) In case of migration errors, increase the buffer size: Set the nsslapd-sasl-max-buffer-size value in bytes. 39.1.3.4. Considerations about Sudo Rules If you are using sudo with LDAP already, you must manually migrate the sudo rules stored in LDAP. Red Hat recommends re-creating netgroups in IdM as hostgroups. IdM presents hostgroups automatically as traditional netgroups for sudo configurations which do not use the SSSD sudo provider. 39.1.3.5. Migration Tools Identity Management uses a specific command, ipa migrate-ds , to drive the migration process so that LDAP directory data are properly formatted and imported cleanly into the IdM server. When using ipa migrate-ds , the remote system user, specified by the --bind-dn option, needs to have read access to the userPassword attribute; otherwise, passwords will not be migrated. The Identity Management server must be configured to run in migration mode, and then the migration script can be used. For details, see Section 39.3, "Migrating an LDAP Server to Identity Management" . 39.1.3.6. Improving Migration Performance An LDAP migration is essentially a specialized import operation for the 389 Directory Server instance within the IdM server. Tuning the 389 Directory Server instance for better import operation performance can help improve the overall migration performance. There are two parameters that directly affect import performance: The nsslapd-cachememsize attribute, which defines the size allowed for the entry cache. This is a buffer that is automatically set to 80% of the total cache memory size. For large import operations, this parameter (and possibly the memory cache itself) can be increased to more efficiently handle a large number of entries or entries with larger attributes. For details on how to modify the attribute using ldapmodify , see Setting the Entry Cache Size in the Red Hat Directory Server 10 Performance Tuning Guide . The system ulimit configuration option sets the maximum number of allowed processes for a system user. Processing a large database can exceed the limit. If this happens, increase the value: For further information, see Red Hat Directory Server Performance Tuning Guide at https://access.redhat.com/documentation/en-us/red_hat_directory_server/11/html-single/performance_tuning_guide/index . 39.1.3.7. Migration Sequence There are four major steps when migrating to Identity Management, but the order varies slightly depending on whether you want to migrate the server first or the clients first. With a client-based migration, SSSD is used to change the client configuration while an IdM server is configured: Deploy SSSD. Reconfigure clients to connect to the current LDAP server and then fail over to IdM. Install the IdM server. Migrate the user data using the IdM ipa migrate-ds script. This exports the data from the LDAP directory, formats it for the IdM schema, and then imports it into IdM. Take the LDAP server offline and allow clients to fail over to Identity Management transparently. With a server migration, the LDAP to Identity Management migration comes first: Install the IdM server. Migrate the user data using the IdM ipa migrate-ds script.
This exports the data from the LDAP directory, formats it for the IdM schema, and then imports it into IdM (a minimal ipa migrate-ds invocation is sketched at the end of this section). Optional. Deploy SSSD. Reconfigure clients to connect to IdM. It is not possible to simply replace the LDAP server. The IdM directory tree - and therefore user entry DNs - is different from the original LDAP directory tree. While it is required that clients be reconfigured, clients do not need to be reconfigured immediately. Updated clients can point to the IdM server while other clients point to the old LDAP directory, allowing a reasonable testing and transition phase after the data are migrated. Note Do not run both an LDAP directory service and the IdM server for very long in parallel. This introduces the risk of user data becoming inconsistent between the two services. Both processes provide a general migration procedure, but that procedure may not work in every environment. Set up a test LDAP environment and test the migration process before attempting to migrate the real LDAP environment. [6] It is possible to use LDAP authentication in Identity Management instead of Kerberos authentication, which means that Kerberos hashes are not required for users. However, this limits the capabilities of Identity Management and is not recommended.
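To tie the sequence together, the server-side data migration itself usually looks like the following sketch. The LDAP URL is an example, and a real run typically also needs options such as --bind-dn and the user and group container settings for your directory layout:
# Allow migrated users to exist without Kerberos hashes yet
ipa config-mod --enable-migration=TRUE
# Pull users and groups from the old directory into IdM
ipa migrate-ds ldap://ldap.example.com:389
# Turn migration mode off again once password migration is complete
ipa config-mod --enable-migration=FALSE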
[ "https://ipaserver.example.com/ipa/migration", "[jsmith@server ~]USD kinit Password for [email protected]: Password expired. You must change it now. Enter new password: Enter it again:", "ldapmodify -x -D 'cn=directory manager' -w password -h ipaserver.example.com -p 389 dn: cn=config changetype: modify replace: nsslapd-sasl-max-buffer-size nsslapd-sasl-max-buffer-size: 4194304 modifying entry \"cn=config\"", "ulimit -u 4096" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/Migrating_from_a_Directory_Server_to_IPA
2.3. External Materialization Options
2.3. External Materialization Options If you are trying to load the materialized table "Portfolio.UpdateProduct", for which the materialization table is defined as "mv_view.mv_UpdateProduct", use any JDBC query tool, such as SQuirreL SQL, to make a JDBC connection to the VDB you created and issue the following SQL command: INSERT INTO mv_view.mv_UpdateProduct SELECT * FROM Portfolio.UpdateProduct OPTION NOCACHE Here is how you would create an AdminShell script to automatically load the materialized table: Use this command to execute the script: adminshell.sh . load.groovy Note If you want to set up a job to run this script frequently at regular intervals, then on Red Hat Enterprise Linux use cron or on Microsoft Windows use Windows Scheduler to refresh the rows in the materialized table. Every time the script runs it will refresh the contents. This job needs to be run only when user access is restricted. Important There are some situations in which this process of loading the cache will not work: It updates all the rows in the materialized table, so you cannot refresh only a few rows to avoid a long refresh time. If it takes an hour to reload your materialized table, queries executed during that time will fail to provide correct results. Also ensure that you create indexes on your materialization table after the data is loaded, as having indexes during the load process slows down the loading of data, especially when you are dealing with a large number of rows.
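For example, on Red Hat Enterprise Linux the refresh can be scheduled with a crontab entry similar to the following sketch; the installation and script paths are assumptions and must match your environment:
# Reload the materialized table every night at 02:00 using the AdminShell script
0 2 * * * /opt/redhat/dv/adminshell/adminshell.sh . /opt/scripts/load.groovy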
[ "sql=connect(USD{url}, USD{user}, USD{password}); sql.execute(\"DELETE FROM mv_view.mv_UpdateProduct\"); sql.execute(\"INSERT INTO mv_view.mv_UpdateProduct SELECT * FROM Portfolio.UpdateProduct OPTION NOCACHE\"); sql.close();" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/external_materialization_options
Chapter 1. Fencing Pre-Configuration
Chapter 1. Fencing Pre-Configuration This chapter describes tasks to perform and considerations to make before deploying fencing on clusters using Red Hat High Availability Add-On, and consists of the following sections. Section 1.1, "Configuring ACPI For Use with Integrated Fence Devices" 1.1. Configuring ACPI For Use with Integrated Fence Devices If your cluster uses integrated fence devices, you must configure ACPI (Advanced Configuration and Power Interface) to ensure immediate and complete fencing. If a cluster node is configured to be fenced by an integrated fence device, disable ACPI Soft-Off for that node. Disabling ACPI Soft-Off allows an integrated fence device to turn off a node immediately and completely rather than attempting a clean shutdown (for example, shutdown -h now ). Otherwise, if ACPI Soft-Off is enabled, an integrated fence device can take four or more seconds to turn off a node (refer to note that follows). In addition, if ACPI Soft-Off is enabled and a node panics or freezes during shutdown, an integrated fence device may not be able to turn off the node. Under those circumstances, fencing is delayed or unsuccessful. Consequently, when a node is fenced with an integrated fence device and ACPI Soft-Off is enabled, a cluster recovers slowly or requires administrative intervention to recover. Note The amount of time required to fence a node depends on the integrated fence device used. Some integrated fence devices perform the equivalent of pressing and holding the power button; therefore, the fence device turns off the node in four to five seconds. Other integrated fence devices perform the equivalent of pressing the power button momentarily, relying on the operating system to turn off the node; therefore, the fence device turns off the node in a time span much longer than four to five seconds. To disable ACPI Soft-Off, use chkconfig management and verify that the node turns off immediately when fenced. The preferred way to disable ACPI Soft-Off is with chkconfig management: however, if that method is not satisfactory for your cluster, you can disable ACPI Soft-Off with one of the following alternate methods: Changing the BIOS setting to "instant-off" or an equivalent setting that turns off the node without delay Note Disabling ACPI Soft-Off with the BIOS may not be possible with some computers. Appending acpi=off to the kernel boot command line of the /boot/grub/grub.conf file Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster. The following sections provide procedures for the preferred method and alternate methods of disabling ACPI Soft-Off: Section 1.1.1, "Disabling ACPI Soft-Off with chkconfig Management" - Preferred method Section 1.1.2, "Disabling ACPI Soft-Off with the BIOS" - First alternate method Section 1.1.3, "Disabling ACPI Completely in the grub.conf File" - Second alternate method 1.1.1. Disabling ACPI Soft-Off with chkconfig Management You can use chkconfig management to disable ACPI Soft-Off either by removing the ACPI daemon ( acpid ) from chkconfig management or by turning off acpid . Note This is the preferred method of disabling ACPI Soft-Off. Disable ACPI Soft-Off with chkconfig management at each cluster node as follows: Run either of the following commands: chkconfig --del acpid - This command removes acpid from chkconfig management. 
- OR - chkconfig --level 2345 acpid off - This command turns off acpid . Reboot the node. When the cluster is configured and running, verify that the node turns off immediately when fenced. Note You can fence the node with the fence_node command or Conga . 1.1.2. Disabling ACPI Soft-Off with the BIOS The preferred method of disabling ACPI Soft-Off is with chkconfig management ( Section 1.1.1, "Disabling ACPI Soft-Off with chkconfig Management" ). However, if the preferred method is not effective for your cluster, follow the procedure in this section. Note Disabling ACPI Soft-Off with the BIOS may not be possible with some computers. You can disable ACPI Soft-Off by configuring the BIOS of each cluster node as follows: Reboot the node and start the BIOS CMOS Setup Utility program. Navigate to the Power menu (or equivalent power management menu). At the Power menu, set the Soft-Off by PWR-BTTN function (or equivalent) to Instant-Off (or the equivalent setting that turns off the node by means of the power button without delay). Example 1.1, " BIOS CMOS Setup Utility : Soft-Off by PWR-BTTN set to Instant-Off " shows a Power menu with ACPI Function set to Enabled and Soft-Off by PWR-BTTN set to Instant-Off . Note The equivalents to ACPI Function , Soft-Off by PWR-BTTN , and Instant-Off may vary among computers. However, the objective of this procedure is to configure the BIOS so that the computer is turned off by means of the power button without delay. Exit the BIOS CMOS Setup Utility program, saving the BIOS configuration. When the cluster is configured and running, verify that the node turns off immediately when fenced. Note You can fence the node with the fence_node command or Conga . Example 1.1. BIOS CMOS Setup Utility : Soft-Off by PWR-BTTN set to Instant-Off This example shows ACPI Function set to Enabled , and Soft-Off by PWR-BTTN set to Instant-Off . 1.1.3. Disabling ACPI Completely in the grub.conf File The preferred method of disabling ACPI Soft-Off is with chkconfig management ( Section 1.1.1, "Disabling ACPI Soft-Off with chkconfig Management" ). If the preferred method is not effective for your cluster, you can disable ACPI Soft-Off with the BIOS power management ( Section 1.1.2, "Disabling ACPI Soft-Off with the BIOS" ). If neither of those methods is effective for your cluster, you can disable ACPI completely by appending acpi=off to the kernel boot command line in the grub.conf file. Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster. You can disable ACPI completely by editing the grub.conf file of each cluster node as follows: Open /boot/grub/grub.conf with a text editor. Append acpi=off to the kernel boot command line in /boot/grub/grub.conf (see Example 1.2, "Kernel Boot Command Line with acpi=off Appended to It" ). Reboot the node. When the cluster is configured and running, verify that the node turns off immediately when fenced. Note You can fence the node with the fence_node command or Conga . Example 1.2. Kernel Boot Command Line with acpi=off Appended to It In this example, acpi=off has been appended to the kernel boot command line - the line starting with "kernel /vmlinuz-2.6.32-193.el6.x86_64.img".
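After applying one of these methods, a quick verification on each node might look like the following; the node name is an example:
# Confirm acpid is turned off (or no longer referenced) in chkconfig management
chkconfig --list acpid
# Fence the node from another cluster member and confirm it powers off immediately
fence_node node1.example.com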
[ "+------------------------------------------|-----------------+ | ACPI Function [Enabled] | Item Help | | ACPI Suspend Type [S1(POS)] |-----------------| | x Run VGABIOS if S3 Resume [Auto] | Menu Level * | | Suspend Mode [Disabled] | | | HDD Power Down [Disabled] | | | Soft-Off by PWR-BTTN [Instant-Off]| | | CPU THRM-Throttling [50.0%] | | | Wake-Up by PCI card [Enabled] | | | Power On by Ring [Enabled] | | | Wake Up On LAN [Enabled] | | | x USB KB Wake-Up From S3 [Disabled] | | | Resume by Alarm [Disabled] | | | x Date(of Month) Alarm 0 | | | x Time(hh:mm:ss) Alarm 0 : 0 : | | | POWER ON Function [BUTTON ONLY]| | | x KB Power ON Password Enter | | | x Hot Key Power ON Ctrl-F1 | | +------------------------------------------|-----------------+", "grub.conf generated by anaconda # Note that you do not have to rerun grub after making changes to this file NOTICE: You have a /boot partition. This means that all kernel and initrd paths are relative to /boot/, eg. root (hd0,0) kernel /vmlinuz-version ro root=/dev/mapper/vg_doc01-lv_root initrd /initrd-[generic-]version.img #boot=/dev/hda default=0 timeout=5 serial --unit=0 --speed=115200 terminal --timeout=5 serial console title Red Hat Enterprise Linux Server (2.6.32-193.el6.x86_64) root (hd0,0) kernel /vmlinuz-2.6.32-193.el6.x86_64 ro root=/dev/mapper/vg_doc01-lv_root console=ttyS0,115200n8 acpi=off initrd /initramrs-2.6.32-131.0.15.el6.x86_64.img" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/ch-before-config-ca
probe::signal.systkill.return
probe::signal.systkill.return Name probe::signal.systkill.return - Sending kill signal to a thread completed Synopsis signal.systkill.return Values retstr The return value to either __group_send_sig_info, name Name of the probe point
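A minimal usage sketch that prints both values whenever the probe fires (press Ctrl+C to stop):
# Trace completed tkill() calls system-wide
stap -e 'probe signal.systkill.return { printf("%s returned %s\n", name, retstr) }'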
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-signal-systkill-return
D.4. Promoting a Replica to a Master CA Server
D.4. Promoting a Replica to a Master CA Server If your IdM deployment uses an embedded certificate authority (CA), one of the IdM CA servers acts as the master CA: it manages the renewal of CA subsystem certificates and generates certificate revocation lists (CRLs). By default, the master CA is the first server on which the system administrator installed the CA role using the ipa-server-install or ipa-ca-install command. If you plan to take the master CA server offline or decommission it, promote a replica to take its place as the master CA: Make sure the replica is configured to handle CA subsystem certificate renewal. See Section D.4.1, "Changing Which Server Handles Certificate Renewal" . Configure the replica to generate CRLs. See Section 6.5.2.2, "Changing Which Server Generates CRLs" . D.4.1. Changing Which Server Handles Certificate Renewal To change which server handles certificate renewal, use the following procedure on an IdM server: Determine which server is the current renewal master: On Red Hat Enterprise Linux 7.3 and later: On Red Hat Enterprise Linux 7.2 and earlier: In both examples, server.example.com is the current renewal master. To set a different server to handle certificate renewal: On Red Hat Enterprise Linux 7.4 and later: On Red Hat Enterprise Linux 7.3 and earlier: Note This command sets the server on which you run the command as the new renewal master. These commands also automatically reconfigure the CA from renewal master to clone.
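After running either command, you can confirm the change on Red Hat Enterprise Linux 7.3 and later by querying the configuration again:
# The output should now show the newly promoted server
ipa config-show | grep "CA renewal master"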
[ "ipa config-show | grep \"CA renewal master\" IPA CA renewal master: server.example.com", "ldapsearch -H ldap://USDHOSTNAME -D 'cn=Directory Manager' -W -b 'cn=masters,cn=ipa,cn=etc,dc=example,dc=com' '(&(cn=CA)(ipaConfigString=caRenewalMaster))' dn CA, server.example.com, masters, ipa, etc, example.com dn: cn=CA,cn= server.example.com ,cn=masters,cn=ipa,cn=etc,dc=example,dc=com", "ipa config-mod --ca-renewal-master-server new_server.example.com", "ipa-csreplica-manage set-renewal-master" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/moving-crl-gen-old
Chapter 12. Storage Pools
Chapter 12. Storage Pools This chapter includes instructions on creating storage pools of assorted types. A storage pool is a quantity of storage set aside by an administrator, often a dedicated storage administrator, for use by virtual machines. Storage pools are often divided into storage volumes either by the storage administrator or the system administrator, and the volumes are assigned to guest virtual machines as block devices. Example 12.1. NFS storage pool Suppose a storage administrator responsible for an NFS server creates a share to store guest virtual machines' data. The system administrator defines a pool on the host physical machine with the details of the share (nfs.example.com: /path/to/share should be mounted on /vm_data ). When the pool is started, libvirt mounts the share on the specified directory, just as if the system administrator logged in and executed mount nfs.example.com:/path/to/share /vm_data . If the pool is configured to autostart, libvirt ensures that the NFS share is mounted on the directory specified when libvirt is started. Once the pool starts, the files in the NFS share are reported as volumes, and the storage volumes' paths can then be queried using the libvirt APIs. The volumes' paths can then be copied into the section of a guest virtual machine's XML definition file describing the source storage for the guest virtual machine's block devices. With NFS, applications using the libvirt APIs can create and delete volumes in the pool (files within the NFS share) up to the limit of the size of the pool (the maximum storage capacity of the share). Not all pool types support creating and deleting volumes. Stopping the pool negates the start operation, in this case unmounting the NFS share. The data on the share is not modified by the destroy operation, despite the name. See man virsh for more details. Note Storage pools and volumes are not required for the proper operation of guest virtual machines. Pools and volumes provide a way for libvirt to ensure that a particular piece of storage will be available for a guest virtual machine, but some administrators will prefer to manage their own storage and guest virtual machines will operate properly without any pools or volumes defined. On systems that do not use pools, system administrators must ensure the availability of the guest virtual machines' storage using whatever tools they prefer, for example, adding the NFS share to the host physical machine's fstab so that the share is mounted at boot time. Warning When creating storage pools on a guest, make sure to follow security considerations. This information is discussed in more detail in the Red Hat Enterprise Linux Virtualization Security Guide which can be found at https://access.redhat.com/site/documentation/ . 12.1. Disk-based Storage Pools This section covers creating disk based storage devices for guest virtual machines. Warning Guests should not be given write access to whole disks or block devices (for example, /dev/sdb ). Use partitions (for example, /dev/sdb1 ) or LVM volumes. If you pass an entire block device to the guest, the guest will likely partition it or create its own LVM groups on it. This can cause the host physical machine to detect these partitions or LVM groups and cause errors. 12.1.1. Creating a Disk-based Storage Pool Using virsh This procedure creates a new storage pool using a disk device with the virsh command. Warning Dedicating a disk to a storage pool will reformat and erase all data presently stored on the disk device.
It is strongly recommended to back up the storage device before commencing with the following procedure. Create a GPT disk label on the disk The disk must be relabeled with a GUID Partition Table (GPT) disk label. GPT disk labels allow for creating a large number of partitions, up to 128 partitions, on each device. GPT partition tables can store partition data for far more partitions than the MS-DOS partition table. Create the storage pool configuration file Create a temporary XML text file containing the storage pool information required for the new device. The file must be in the format shown below, and contain the following fields: <name>guest_images_disk</name> The name parameter determines the name of the storage pool. This example uses the name guest_images_disk . <device path=' /dev/sdb '/> The device parameter with the path attribute specifies the device path of the storage device. This example uses the device /dev/sdb . <target> <path> /dev </path></target> The file system target parameter with the path sub-parameter determines the location on the host physical machine file system to attach volumes created with this storage pool. For example, sdb1, sdb2, sdb3. Using /dev/ , as in the example below, means volumes created from this storage pool can be accessed as /dev/sdb1, /dev/sdb2, /dev/sdb3. <format type=' gpt '/> The format parameter specifies the partition table type. This example uses gpt to match the GPT disk label type created in the previous step. Create the XML file for the storage pool device with a text editor. Example 12.2. Disk based storage device storage pool Attach the device Add the storage pool definition using the virsh pool-define command with the XML configuration file created in the previous step. Start the storage pool Start the storage pool with the virsh pool-start command. Verify the pool is started with the virsh pool-list --all command. Turn on autostart Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts. Verify the storage pool configuration Verify the storage pool was created correctly, the sizes reported correctly, and the state reports as running . Optional: Remove the temporary configuration file Remove the temporary storage pool XML configuration file if it is not needed. A disk-based storage pool is now available. 12.1.2. Deleting a Storage Pool Using virsh The following demonstrates how to delete a storage pool using virsh: To avoid any issues with other guest virtual machines using the same pool, it is best to stop the storage pool and release any resources in use by it. Remove the storage pool's definition
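Before removing a pool, note that while it was active you could also have created volumes in it with virsh. A sketch, where the volume name and size are assumptions:
# Create an 8 GiB volume (a partition on /dev/sdb for a disk-based pool) and list it
virsh vol-create-as guest_images_disk volume1 8G
virsh vol-list guest_images_disk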
[ "parted /dev/sdb GNU Parted 2.1 Using /dev/sdb Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) mklabel New disk label type? gpt (parted) quit Information: You may need to update /etc/fstab. #", "<pool type='disk'> <name> guest_images_disk </name> <source> <device path=' /dev/sdb '/> <format type=' gpt '/> </source> <target> <path> /dev </path> </target> </pool>", "virsh pool-define ~/guest_images_disk.xml Pool guest_images_disk defined from /root/guest_images_disk.xml virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_disk inactive no", "virsh pool-start guest_images_disk Pool guest_images_disk started virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_disk active no", "virsh pool-autostart guest_images_disk Pool guest_images_disk marked as autostarted virsh pool-list --all Name State Autostart ----------------------------------------- default active yes guest_images_disk active yes", "virsh pool-info guest_images_disk Name: guest_images_disk UUID: 551a67c8-5f2a-012c-3844-df29b167431c State: running Capacity: 465.76 GB Allocation: 0.00 Available: 465.76 GB ls -la /dev/sdb brw-rw----. 1 root disk 8, 16 May 30 14:08 /dev/sdb virsh vol-list guest_images_disk Name Path -----------------------------------------", "rm ~/ guest_images_disk .xml", "virsh pool-destroy guest_images_disk", "virsh pool-undefine guest_images_disk" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-Virtualization_Administration_Guide-Storage_Pools-Storage_Pools
Chapter 36. Jira Source
Chapter 36. Jira Source Receive notifications about new issues from Jira. 36.1. Configuration Options The following table summarizes the configuration options available for the jira-source Kamelet: Property Name Description Type Default Example jiraUrl * Jira URL The URL of your instance of Jira string "http://my_jira.com:8081" password * Password The password to access Jira string username * Username The username to access Jira string jql JQL A query to filter issues string "project=MyProject" Note Fields marked with an asterisk (*) are mandatory. 36.2. Dependencies At runtime, the jira-source Kamelet relies upon the presence of the following dependencies: camel:jackson camel:kamelet camel:jira 36.3. Usage This section describes how you can use the jira-source . 36.3.1. Knative Source You can use the jira-source Kamelet as a Knative source by binding it to a Knative object. jira-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-source properties: jiraUrl: "http://my_jira.com:8081" password: "The Password" username: "The Username" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 36.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 36.3.1.2. Procedure for using the cluster CLI Save the jira-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f jira-source-binding.yaml 36.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind jira-source -p "source.jiraUrl=http://my_jira.com:8081" -p "source.password=The Password" -p "source.username=The Username" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 36.3.2. Kafka Source You can use the jira-source Kamelet as a Kafka source by binding it to a Kafka topic. jira-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-source properties: jiraUrl: "http://my_jira.com:8081" password: "The Password" username: "The Username" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 36.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 36.3.2.2. Procedure for using the cluster CLI Save the jira-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f jira-source-binding.yaml 36.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind jira-source -p "source.jiraUrl=http://my_jira.com:8081" -p "source.password=The Password" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 36.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jira-source.kamelet.yaml
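In either case, you can check that the binding was created and is ready. The resource name matches the examples above; the integration that Camel K builds from it is listed alongside:
# Inspect the KameletBinding and the integrations built from it
oc get kameletbinding jira-source-binding -o yaml
oc get integrations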
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-source properties: jiraUrl: \"http://my_jira.com:8081\" password: \"The Password\" username: \"The Username\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f jira-source-binding.yaml", "kamel bind jira-source -p \"source.jiraUrl=http://my_jira.com:8081\" -p \"source.password=The Password\" -p \"source.username=The Username\" channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-source properties: jiraUrl: \"http://my_jira.com:8081\" password: \"The Password\" username: \"The Username\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f jira-source-binding.yaml", "kamel bind jira-source -p \"source.jiraUrl=http://my_jira.com:8081\" -p \"source.password=The Password\" -p \"source.username=The Username\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/jira-source
Chapter 3. Configuring authentication
Chapter 3. Configuring authentication Important Basic Authentication has been deprecated. If you are using Basic Authentication, you must change to one of the currently supported authenticated methods. For more information about changing from Basic Authentication to certificate-based authentication for user access, refer to How to switch from Basic Auth to Certificate Authentication for Red Hat Insights . 3.1. Authentication methods Depending on how you use Red Hat Insights for Red Hat Enterprise Linux, you must use one of the following authentication methods: Certificate-based authentication (CERT) Certificate-based authentication is the default authentication method. Certificates are generated when you register a system with Red Hat Subscription Manager (RHSM), or when your system is managed by Red Hat Satellite system management. The client configuration file includes authmethod=CERT by default. No additional configuration changes are required. Activation keys The preferred authentication method uses activation keys, along with the Organization ID, to register a system with Red Hat hosted services such as RHSM or remote host configuration (RHC). The activation keys for your organization are listed on the Activation Keys page in the Red Hat Hybrid Cloud Console. You can use an activation key as an authentication token to register a system with Red Hat hosted services, such as Red Hat Subscription Manager (RHSM) or remote host configuration (RHC). Administrators can create, edit, and delete activation keys for your organization. Service accounts Service accounts authenticate applications and services, whereas user accounts authenticate human users. Use service account authentication when: An application or service needs access to specific resources. The application or service needs to access resources without the need for human intervention. The application or service needs to access resources from multiple locations. Service accounts employ a token-based authentication model for API access to cloud services. CERT and activation keys use certificate-based authentication. For more information about the transition from basic authentication to service accounts and instructions for updating accounts that use Basic Authentication for API access, refer to Transition of Red Hat Hybrid Cloud Console APIs from basic authentication to token-based authentication via service accounts . For more information about how to use service accounts, refer to Creating and Managing Service Accounts . Additional resources Creating and managing activation keys in the Red Hat Hybrid Cloud Console . Getting started with activation keys on the Red Hat Hybrid Cloud Console How to switch from Basic Auth to Certificate Authentication for Red Hat Insights Transition of Red Hat Hybrid Cloud Console APIs from basic authentication to token-based authentication via service accounts Creating and Managing Service Accounts 3.2. Using activation keys for authentication An activation key is a preshared authentication token that enables authorized users to register and configure systems. It eliminates the need to store, use, and share a personal username and password combination, which increases security and facilitates automation. You can use an activation key and a numeric organization identifier (organization ID) to register a system with Red Hat hosted services, such as Red Hat Subscription Manager (RHSM) or remote host configuration (rhc). 
Your organization's activation keys and organization ID are displayed on the Activation Keys page in the Hybrid Cloud Console. For more information about how to create and manage activation keys for your systems, see Creating and managing activation keys in the Red Hat Hybrid Cloud Console . 3.3. Registering systems with Red Hat Hosted Services After you install the Insights client, you need to register your system. This requires two steps: Registering with Red Hat hosted services, such as Red Hat Subscription Manager (RHSM) or remote host configuration (rhc). Registering the system with the Insights client. For more information about registering the system with Insights client, refer to: Getting Started with Insights Prerequisites Admin login access to each system Activation key Organization ID Procedure RHEL 7 and 8 To register a system running Red Hat Enterprise Linux version 7 or 8, use an activation key and your Organization ID to register with RHSM. RHEL 9 To register a system running RHEL 9 or later, use an activation key to register with the rhc client. If you do not want to run rhc management services on your system, use the same commands for RHEL 9 systems as you would for RHEL 7 or RHEL 8. Additional resources Getting Started with Insights For more information about the rhc client, refer to Remote Host Configuration and Management Getting started with activation keys on the Red Hat Hybrid Cloud Console Creating and managing activation keys in the Red Hat Hybrid Cloud Console . Getting Started with RHEL System Registration Client Configuration guide for Insights
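After registering, you can verify the result on the system; run both commands as root:
# Confirm the RHSM registration and the Insights client connection
subscription-manager identity
insights-client --status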
[ "subscription-manager register --activationkey=_activation_key_name_ --org=_organization_ID_", "rhc connect --activation-key example_key --organization your_org_ID" ]
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/client_configuration_guide_for_red_hat_insights/assembly-client-data-cg-authentication
Chapter 2. Updating Satellite Server
Chapter 2. Updating Satellite Server Update your connected Satellite Server to the minor version. For information to update a disconnected Satellite setup, see Chapter 3, Updating a disconnected Satellite Server . Prerequisites Back up your Satellite Server. For more information, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . Warning If you customize configuration files, manually or using a tool such as Hiera, these changes are overwritten when the maintenance script runs during upgrading or updating. You can use the --noop option with the satellite-installer to test for changes. For more information, see the Red Hat Knowledgebase solution How to use the noop option to check for changes in Satellite config files during an upgrade. Procedure Ensure the Satellite Maintenance repository is enabled: Check the available versions to confirm the minor version is listed: Use the health check option to determine if the system is ready for upgrade. On first use of this command, satellite-maintain prompts you to enter the hammer admin user credentials and saves them in the /etc/foreman-maintain/foreman-maintain-hammer.yml file. Review the results and address any highlighted error conditions before performing the upgrade. Because of the lengthy update time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running, you can see the logged messages in the /var/log/foreman-installer/satellite.log file to check if the process completed successfully. Perform the upgrade: Determine if the system needs a reboot: If the command told you to reboot, then reboot the system: Additional resources To restore the backup of the Satellite Server or Capsule Server, see Restoring Satellite Server or Capsule Server from a Backup
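For example, you can wrap the upgrade in a tmux session so that a dropped SSH connection does not interrupt it; the session name is arbitrary:
# Start a named session, run the upgrade inside it, and reattach later if needed
tmux new-session -s satellite-upgrade
satellite-maintain upgrade run --target-version 6.15.z
# From a new shell: tmux attach -t satellite-upgrade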
[ "subscription-manager repos --enable satellite-maintenance-6.15-for-rhel-8-x86_64-rpms", "satellite-maintain upgrade list-versions", "satellite-maintain upgrade check --target-version 6.15.z", "satellite-maintain upgrade run --target-version 6.15.z", "dnf needs-restarting --reboothint", "reboot" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/updating_red_hat_satellite/updating_server_updating
2.2. Migrating the Data Warehouse Service to a Separate Machine
2.2. Migrating the Data Warehouse Service to a Separate Machine You can migrate the Data Warehouse service installed and configured on the Red Hat Virtualization Manager to a separate machine. Hosting the Data Warehouse service on a separate machine helps to reduce the load on the Manager machine. Notice that this procedure migrates the Data Warehouse service only. To migrate the Data Warehouse database ( ovirt_engine_history ) prior to migrating the Data Warehouse service, see Migrating the Data Warehouse Database to a Separate Machine . Note Red Hat only supports installing the Data Warehouse database, the Data Warehouse service and Grafana all on the same machine as each other, even though you can install each of these components on separate machines from each other. Prerequisites You must have installed and configured the Manager and Data Warehouse on the same machine. To set up the new Data Warehouse machine, you must have the following: The password from the Manager's /etc/ovirt-engine/engine.conf.d/10-setup-database.conf file. Allowed access from the Data Warehouse machine to the Manager database machine's TCP port 5432. The username and password for the Data Warehouse database from the Manager's /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf file. If you migrated the ovirt_engine_history database using the procedures described in Migrating the Data Warehouse Database to a Separate Machine , the backup includes these credentials, which you defined during the database setup on that machine. Installing this scenario requires four steps: Setting up the New Data Warehouse Machine Stopping the Data Warehouse service on the Manager machine Configuring the new Data Warehouse machine Disabling the Data Warehouse package on the Manager machine 2.2.1. Setting up the New Data Warehouse Machine Enable the Red Hat Virtualization repositories and install the Data Warehouse setup package on a Red Hat Enterprise Linux 8 machine: Enable the required repositories: Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms # subscription-manager release --set=8.6 Enable the pki-deps module. # dnf module -y enable pki-deps Ensure that all packages currently installed are up to date: # dnf upgrade --nobest Install the ovirt-engine-dwh-setup package: # dnf install ovirt-engine-dwh-setup 2.2.2. Stopping the Data Warehouse Service on the Manager Machine Procedure Stop the Data Warehouse service: # systemctl stop ovirt-engine-dwhd.service If the database is hosted on a remote machine, you must manually grant access by editing the postgres.conf file. Edit the /var/lib/pgsql/data/postgresql.conf file and modify the listen_addresses line so that it matches the following: listen_addresses = '*' If the line does not exist or has been commented out, add it manually. 
If the database is hosted on the Manager machine and was configured during a clean setup of the Red Hat Virtualization Manager, access is granted by default. Restart the postgresql service: # systemctl restart postgresql 2.2.3. Configuring the New Data Warehouse Machine The order of the options or settings shown in this section may differ depending on your environment. If you are migrating both the ovirt_engine_history database and the Data Warehouse service to the same machine, run the following commands; otherwise, proceed to the next step. # sed -i '/^ENGINE_DB_/d' \ /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf # sed -i \ -e 's;^\(OVESETUP_ENGINE_CORE/enable=bool\):True;\1:False;' \ -e '/^OVESETUP_CONFIG\/fqdn/d' \ /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf Remove the apache/grafana PKI files, so that they are regenerated by engine-setup with correct values: Run the engine-setup command to begin configuration of Data Warehouse on the machine: # engine-setup Press Enter to accept the automatically detected host name, or enter an alternative host name and press Enter : Host fully qualified DNS name of this server [ autodetected host name ]: Press Enter to automatically configure the firewall, or type No and press Enter to maintain existing settings: Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? (Yes, No) [Yes]: If you choose to automatically configure the firewall, and no firewall managers are active, you are prompted to select your chosen firewall manager from a list of supported options. Type the name of the firewall manager and press Enter . This applies even in cases where only one option is listed. Enter the fully qualified domain name and password for the Manager. Press Enter to accept the default values in each of the other fields: Host fully qualified DNS name of the engine server []: engine-fqdn Setup needs to do some actions on the remote engine server. Either automatically, using ssh as root to access it, or you will be prompted to manually perform each such action. Please choose one of the following: 1 - Access remote engine server using ssh as root 2 - Perform each action manually, use files to copy content around (1, 2) [1]: ssh port on remote engine server [22]: root password on remote engine server engine-fqdn : password Enter the FQDN and password for the Manager database machine. Press Enter to accept the default values in each of the other fields: Engine database host []: manager-db-fqdn Engine database port [5432]: Engine database secured connection (Yes, No) [No]: Engine database name [engine]: Engine database user [engine]: Engine database password: password Confirm your installation settings: Please confirm installation settings (OK, Cancel) [OK]: The Data Warehouse service is now configured on the remote machine. Proceed to disable the Data Warehouse service on the Manager machine. Note If you want to change the Data Warehouse sampling scale to the recommended scale on a remote server, see Changing the Data Warehouse Sampling Scale . 2.2.4.
2.2.4. Disabling the Data Warehouse Service on the Manager Machine

Prerequisites

The Grafana service on the Manager machine is disabled:
# systemctl disable --now grafana-server.service

Procedure

1. On the Manager machine, restart the Manager:
   # service ovirt-engine restart
2. Run the following commands to modify the file /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf and set the options to False:
   # sed -i \
       -e 's;^\(OVESETUP_DWH_CORE/enable=bool\):True;\1:False;' \
       -e 's;^\(OVESETUP_DWH_CONFIG/remoteEngineConfigured=bool\):True;\1:False;' \
       /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
   # sed -i \
       -e 's;^\(OVESETUP_GRAFANA_CORE/enable=bool\):True;\1:False;' \
       /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
3. Disable the Data Warehouse service:
   # systemctl disable ovirt-engine-dwhd.service
4. Remove the Data Warehouse files:
   # rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/*.conf /var/lib/ovirt-engine-dwh/backups/*

The Data Warehouse service is now hosted on a separate machine from the Manager.
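As an optional check (not part of the original procedure), you can confirm on the Manager machine that the Data Warehouse service is neither enabled nor running; the commands should report disabled and inactive, respectively:

# systemctl is-enabled ovirt-engine-dwhd.service
# systemctl is-active ovirt-engine-dwhd.service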
[ "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= pool_id", "subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms subscription-manager release --set=8.6", "dnf module -y enable pki-deps", "dnf upgrade --nobest", "dnf install ovirt-engine-dwh-setup", "systemctl stop ovirt-engine-dwhd.service", "listen_addresses = '*'", "systemctl restart postgresql", "sed -i '/^ENGINE_DB_/d' /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf sed -i -e 's;^\\(OVESETUP_ENGINE_CORE/enable=bool\\):True;\\1:False;' -e '/^OVESETUP_CONFIG\\/fqdn/d' /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf", "rm -f /etc/pki/ovirt-engine/certs/apache.cer /etc/pki/ovirt-engine/certs/apache-grafana.cer /etc/pki/ovirt-engine/keys/apache.key.nopass /etc/pki/ovirt-engine/keys/apache-grafana.key.nopass /etc/pki/ovirt-engine/apache-ca.pem /etc/pki/ovirt-engine/apache-grafana-ca.pem", "engine-setup", "Host fully qualified DNS name of this server [ autodetected host name ]:", "Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? (Yes, No) [Yes]:", "Host fully qualified DNS name of the engine server []: engine-fqdn Setup needs to do some actions on the remote engine server. Either automatically, using ssh as root to access it, or you will be prompted to manually perform each such action. Please choose one of the following: 1 - Access remote engine server using ssh as root 2 - Perform each action manually, use files to copy content around (1, 2) [1]: ssh port on remote engine server [22]: root password on remote engine server engine-fqdn : password", "Engine database host []: manager-db-fqdn Engine database port [5432]: Engine database secured connection (Yes, No) [No]: Engine database name [engine]: Engine database user [engine]: Engine database password: password", "Please confirm installation settings (OK, Cancel) [OK]:", "systemctl disable --now grafana-server.service", "service ovirt-engine restart", "sed -i -e 's;^\\(OVESETUP_DWH_CORE/enable=bool\\):True;\\1:False;' -e 's;^\\(OVESETUP_DWH_CONFIG/remoteEngineConfigured=bool\\):True;\\1:False;' /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf sed -i -e 's;^\\(OVESETUP_GRAFANA_CORE/enable=bool\\):True;\\1:False;' /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf", "systemctl disable ovirt-engine-dwhd.service", "rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/*.conf /var/lib/ovirt-engine-dwh/backups/*" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/data_warehouse_guide/migrating_the_data_warehouse_service_to_a_separate_machine
Chapter 2. The value of registering your RHEL system to Red Hat
Chapter 2. The value of registering your RHEL system to Red Hat

Registration establishes an authorized connection between your system and Red Hat. Red Hat issues the registered system, whether a physical or virtual machine, a certificate that identifies and authenticates the system so that it can receive protected content, software updates, security patches, support, and managed services from Red Hat.

With a valid subscription, you can register a Red Hat Enterprise Linux (RHEL) system in the following ways:

- During the installation process, using an installer graphical user interface (GUI) or text user interface (TUI)
- After installation, using the command-line interface (CLI)
- Automatically, during or after installation, using a kickstart script or an activation key

The specific steps to register your system depend on the version of RHEL that you are using and the registration method that you choose.

Registering your system to Red Hat enables features and capabilities that you can use to manage your system and report data. For example, a registered system is authorized to access protected content repositories for subscribed products through the Red Hat Content Delivery Network (CDN) or a Red Hat Satellite Server. These content repositories contain Red Hat software packages and updates that are available only to customers with an active subscription. These packages and updates include security patches, bug fixes, and new features for RHEL and other Red Hat products.

Important
The entitlement-based subscription model is deprecated and will be retired in the future. Simple content access is now the default subscription model. It provides an improved subscription experience that eliminates the need to attach a subscription to a system before you can access Red Hat subscription content on that system. If your Red Hat account uses the entitlement-based subscription model, contact your Red Hat account team, for example, a technical account manager (TAM) or solution architect (SA), to prepare for migration to simple content access. For more information, see Transition of subscription services to the hybrid cloud .
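The chapter refers to post-installation registration from the CLI and automatic registration with an activation key. As a minimal sketch, both can be done with subscription-manager; the organization ID and activation key name below are placeholders for illustration, not values from this document:

# subscription-manager register
# subscription-manager register --org=organization_id --activationkey=activation_key_name
# subscription-manager status

The first form prompts for your Customer Portal user name and password, the second registers non-interactively using an activation key defined in your organization, and the status command confirms that the registration succeeded.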
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/the-value-of-registering-your-rhel-system-to-red-hat_rhel-installer